Thursday, September 21, 2017

What we didn't get

I recently wrote a fairly well-received Twitter thread about how the cyberpunk sci-fi of the 1980s and early 1990s accurately predicted a lot about our current world. Our modern society is totally wired and connected, but also totally unequal - "the future is here, it's just not evenly distributed", as Gibson was fond of saying. Hackers, cyberwarfare, and online psyops are a regular part of our political and economic life. Billionaires build spaceships and collaborate with the government to spy on the populace, while working-class people live out of shipping crates and drink poison water. Hobbyists are into body modifications and genetic engineering, while labs are researching artificial body parts and brain-computer interfaces. The jetpack is real, but there's only one of it, and it's owned by a rich guy. Artificial intelligences trade stocks and can beat humans at Go, deaf people can hear, libertarians and criminals funnel billions of dollars around the world with untraceable private crypto-money. A meme virus almost as crazy as the one in Snow Crash swept an insane man to the presidency of the United States, and in Texas you can carry a sword on the street like a street samurai in Neuromancer. There are even artificial pop stars and murderous cyborg super-athletes.

We are, roughly, living in the world the cyberpunks envisioned.

This isn't the first time a generation of science fiction writers has managed to envision the future with disturbing accuracy. The early industrial age saw sci-fi writers predict many inventions that would eventually become reality, from air and space travel to submarines, tanks, television, helicopters, videoconferencing, X-rays, radar, robots, and even the atom bomb. There were quite a few misses, as well - no one is going back in time or journeying to the center of the Earth. But overall, early industrial sci-fi writers got the later Industrial Revolution pretty right. And their social predictions were pretty accurate, too - they anticipated consumer societies and high-tech large-scale warfare.

But there have also been eras of sci-fi that mostly got it wrong. Most famously, the mid-20th century was full of visions of starships, interplanetary exploration and colonization, android servitors and flying cars, planet-busting laser cannons, energy too cheap to meter. So far we don't have any of that. As Peter Thiel - one of our modern cyberpunk arch-villains - so memorably put it, "We wanted flying cars, instead we got 140 characters."

What happened? Why did mid-20th-century sci-fi whiff so badly? Why didn't we get the Star Trek future, or the Jetsons future, or the Asimov future?

Two things happened. First, we ran out of theoretical physics. Second, we ran out of energy.

If you watch Star Trek or Star Wars, or read any of the innumerable space operas of the mid-20th century, they all depend on a bunch of fancy physics. Faster-than-light travel, artificial gravity, force fields of various kinds. In 1960, that sort of prediction might have made sense. Humanity had just experienced one of the most amazing sequences of physics advancements ever. In the space of a few short decades, humankind discovered relativity and quantum mechanics, invented the nuclear bomb and nuclear power, and developed X-rays, lasers, superconductors, radar, and the space program. The early 20th century was really a physics bonanza, driven in large part by advances in fundamental theory. And in the 1950s and 1960s, those advances still seemed to be going strong, with the development of quantum field theories.

Then it all came to a halt. After the Standard Model was completed in the 1970s, there were no big breakthroughs in fundamental physics. There was a brief period of excitement in the 80s and 90s, when it seemed like string theory was going to unify quantum mechanics and gravity, and propel us into a new era to match the time of Einstein and Bohr and Dirac. But by the 2000s, people were writing pop books about how string theory had failed. Meanwhile, the largest, most expensive particle collider ever built has merely confirmed the theories of the 1970s, leaving little direction for where to go next. Physicists have certainly invented some more cool stuff (quantum teleportation! quantum computers!), but there have been no theoretical breakthroughs that would allow us to cruise from star to star or harness the force of gravity.

The second thing that happened was that we stopped getting better sources of energy. Here is a brief, roughly chronological list of energy sources harnessed by humankind, with their specific energies (usable potential energy per unit mass) listed in units of MJ/kg. Remember that more specific energy (or, alternatively, more energy density) means more energy that you can carry around in your pocket, your car, or your spaceship.

Protein: 16.8

Sugars: 17.0

Fat: 37

Wood: 16.2

Gunpowder: 3.0

Coal: 24.0 - 35.0

TNT: 4.6

Diesel: 48

Kerosene: 42.8

Gasoline: 46.4

Methane: 55.5

Uranium: 80,620,000

Deuterium: 87,900,000

Lithium-ion battery: 0.36 - 0.875

This doesn't tell the whole story, of course, since availability and recoverability are key - to get the energy of protein, you have to kill a deer and eat it, or grow some soybeans, while deposits of coal, gas, and uranium can be dug up out of the ground. Transportability is also important (natural gas is hard to carry around in a car).

But this sequence does show one basic fact: In the industrial age, we got better at carrying energy around with us. And then, at the dawn of the nuclear age, it looked like we were about to get MUCH better at carrying energy around with us. One kilogram of uranium has almost two million times as much energy in it as a kilogram of gasoline. If you could carry that around in a pocket battery, you really might be able to blow up buildings with a handheld laser gun. If you could put that in a spaceship, you might be able to zip to other planets in a couple of days. If you could put that in a car, you can bet that car would fly. You could probably even use it to make a deflector shield.
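To get a feel for the gap, here's a quick back-of-the-envelope calculation using the specific energies from the list above (the 50-liter fuel tank is my own illustrative assumption, not a figure from the post):

```python
# Specific energies in MJ/kg, taken from the list above
specific_energy = {
    "gasoline": 46.4,
    "uranium": 80_620_000,
    "deuterium": 87_900_000,
    "li_ion_battery": 0.875,  # upper end of the quoted range
}

# Uranium vs. gasoline, per kilogram
ratio = specific_energy["uranium"] / specific_energy["gasoline"]
print(f"Uranium vs. gasoline: {ratio:,.0f}x")  # roughly 1.7 million times

# How much uranium carries the same energy as a full 50 L (~37 kg) gas tank?
tank_energy = 37 * specific_energy["gasoline"]            # MJ
uranium_mass = tank_energy / specific_energy["uranium"]   # kg
print(f"Equivalent uranium mass: {uranium_mass * 1000:.2f} g")
```

In other words, the energy in an entire tank of gasoline fits in a speck of uranium weighing a few hundredths of a gram. That's the scale of the jump that mid-century sci-fi writers were extrapolating from.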

But you can't carry uranium around in your pocket or your car, because it's too dangerous. First of all, if there were enough uranium to go critical, you'd have a nuclear weapon in your garage. Second, uranium is a horrible deadly poison that can wreak havoc on the environment. No one is going to let you have that. (Incidentally, this is also probably why you don't have a flying car yet - it has too much energy. The people who decide whether to allow flying cars realize that some people would choose to crash those high-energy objects into buildings. Regular cars are dangerous enough!)

Now, you can put uranium on your submarine. And you can put it in your spaceship, though actually channeling the power into propulsion is still a problem that needs some work. But overall, the toxicity of uranium, and the ease with which fission turns into a meltdown, have prevented the widespread application of nuclear power to transportation - and have held back nuclear electricity to some degree as well.

As for fusion power, we never managed to invent that, except for bombs.

So the reason we didn't get the 1960s sci-fi future was twofold. A large part of it was apparently impossible (FTL travel, artificial gravity). And a lot of the stuff that was possible, but relied on very high energy density fuels, was too unsafe for general use. We might still get our androids, and someday in the very far future we might have nuclear-powered spaceships whisking us to Mars or Europa or zero-G habitats somewhere. But you can't have your flying car or your pocket laser cannon, because frankly, you're probably just too much of a jerk to use them responsibly.

So that brings us to another question: What about the most recent era of science fiction? Starting in the mid to late 1990s, until maybe around 2010, sci-fi once again embraced some very far-out future stuff. Typical elements (some of which, to be fair, had been occasionally included in the earlier cyberpunk canon) included:

1. Strong (self-improving) AI, artificial general intelligence, and artificial consciousness

2. Personality upload

3. Self-replicating nanotech and general assemblers

4. A technological Singularity

These haven't happened yet, but it's only been a couple of decades since this sort of futurism became popular. Will we eventually get these things?

Unlike faster-than-light travel and artificial gravity, we have no theory telling us that we can't have strong AI or a Singularity or personality upload. (Well, some people have conjectures as to reasons we couldn't, but these aren't solidly proven theories like General Relativity.) But we also don't really have any idea how to start making these things. What we call AI isn't yet a general intelligence, and we have no idea if any general intelligence can be self-improving (or would want to be!). Personality upload requires an understanding of the brain we just don't have. We're inching closer to true nanotech, but it still seems far off.

So there's a possibility that the starry-eyed Singularitan sci-fi of the 00s will simply never come to pass. Like the future of starships and phasers, it might become a sort of pop retrofuture - fodder for fun Hollywood movies, but no longer the kind of thing anyone thinks will really happen. Meanwhile, technological progress might move on in another direction - biotech? - and another savvy generation of Jules Vernes and William Gibsons might emerge to predict where that goes.

Which raises a final question: Is sci-fi least accurate when technological progress is fastest?

Think about it: The biggest sci-fi miss of all time came at the peak of progress, right around World War 2. If the Singularitan sci-fi boom turns out to have also been a whiff, it'll line up pretty nicely with the productivity acceleration of the 1990s and 00s. Maybe when a certain kind of technology - energy-intensive transportation and weapons technology, or information-intensive computing technology - is increasing spectacularly quickly, sci-fi authors get caught up in the rush of that trend, and project it out to infinity and beyond. But maybe it's the authors at the very beginning of a tech boom, before progress in a particular area really kicks into high gear, who are able to see more clearly where the boom will take us. (Of course, demonstrating that empirically would involve controlling for the obvious survivorship bias).

We'll never know. Nor is this important in any way that I can tell, except for sci-fi fans. But it's certainly fun to think about.

The margin of stupid

Every so often, I see a news story or tweet hyping the fact that a modest but non-negligible percent of Americans said some crazy or horrible thing in a survey. Here are two examples:

The most chilling findings, however, involved how students think repugnant speech should be dealt with...It gets even worse. Respondents were also asked if it would be acceptable for a student group to use violence to prevent that same controversial speaker from talking. Here, 19 percent said yes. 

Racial slurs that have cropped up in chants, e-mails and white boards on America's college campuses have some people worried about whether the nation's diverse and fawned-over millennial generation is not as racially tolerant as might be expected. 


So, from these two examples -- both of them in the Washington Post -- I'm supposed to believe that Millennials are a bunch of unreconstructed racists, except for the ones who go to college, who are a pack of intolerant leftists. 

It seems to me like there's something inherently suspicious about judging a group of people based on sentiments expressed by only 15 or 20 percent of those people. But beyond that, there's another problem here - the problem of whether we can really trust these surveys. 

Surveys give a false sense of precision, by reporting a "margin of error" (confidence interval). But that confidence interval comes purely from the fact that the sample is finite. It does not capture systematic error, like selection bias (Are the people who answer this survey representative of the population being sampled?). And it definitely doesn't capture the errors people themselves make when responding to surveys.
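To see how little the reported figure actually covers, here's the standard sampling-only margin of error - the number polls report - sketched under the (rarely true) assumption of simple random sampling:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error from finite sampling alone, assuming simple
    random sampling. This is the only error the reported figure captures;
    selection bias and response error are invisible to it."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll: about +/- 3 percentage points.
print(f"+/- {margin_of_error(1000) * 100:.1f} points")  # +/- 3.1 points
```

Note that nothing in that formula knows anything about who answered the phone or whether they were paying attention.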

When I did happiness survey research with Miles Kimball, there was always the nagging question of whether people are really able to know how happy they are. Of course, the whole question of what "happiness" should mean is a difficult one, but presumably there are some neurochemical tests you could do to determine how good someone feels, at least relative to how they felt in the past. How well do survey responses reflect this "true" emotion? Do people in different countries have cultural pressures that make them respond differently? Do Americans feel the need to say they're happy all the time, while British people would be ashamed to admit happiness? And are people measuring their happiness relative to yesterday, or to their youth, or to how happy they think they ought to be?

These errors were things that we lumped into something we called "response style" (psychologists call it response bias). It's very very hard to observe response style. But I'd say we can make a pretty good guess that Americans - and possibly everyone - do a lot of random responding when it comes to these sorts of surveys.

For example, a 2014 survey reported that 26 percent of Americans said that the sun goes around the Earth. 

Now, maybe there are a bunch of pre-Copernican geocentrists out there in America (there certainly are flat-earthers!). Or maybe people just don't think very hard about how they answer these questions. Maybe some people are confused by the questions. Maybe some are trolling. 

Whatever the cause, it seems like you can get 20 to 25 percent of Americans to say any ridiculous thing imaginable. "Do you think eating raccoon poop reduces the risk of brain cancer?" "23 percent of Americans say yes!" "Would you be willing to cut your toes off with a rotary saw if it meant your neighbor had to do the same?" "17 percent of Americans say they would!" Etc.

You can also see this just from looking at some of the crosstabs in the first survey above. 20 percent of Democrats and 22 percent of Republicans say it's OK to use violence to shut down speakers you don't like. This sounds kind of nuts, given the panic on the right over lefty violence against campus speakers. Why would Republicans be even more likely than Democrats to condone this sort of violence? It makes no sense at all...unless you can get ~20 percent of Americans to say pretty much any ridiculous thing on a survey. 

I call this the margin of stupid. Unlike the margin of error, it's not even a roughly symmetric error -- because you can't have less than 0% of people give a certain answer on a survey, the margin of stupid always biases surveys toward showing some non-negligible amount of support for any crazy or stupid or horrible position. 
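A toy simulation makes the asymmetry concrete. The 40 percent noise rate below is a deliberately exaggerated assumption of my own, not an estimate of real survey behavior:

```python
import random

random.seed(0)

def poll(n, true_support, noise_rate):
    """Simulate n respondents. A noise_rate fraction answer with a coin
    flip (confused, trolling, not paying attention); the rest answer
    according to their actual beliefs."""
    yes = 0
    for _ in range(n):
        if random.random() < noise_rate:
            yes += random.random() < 0.5   # random responder
        else:
            yes += random.random() < true_support
    return yes / n

# A position that literally nobody holds, polled with 40% careless
# responders, still registers about 20% "support."
print(poll(100_000, true_support=0.0, noise_rate=0.40))
```

Noise can only push apparent support for a fringe position up, never down - which is exactly the margin of stupid.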

Whenever you read a survey like this, you must take the margin of stupid into account. Yes, there are Americans who believe crazy, stupid, and horrible things. But dammit, there aren't that many. Next time you see some poll breathlessly claiming that 21 percent of Americans support executing anyone whose name starts with "G", or that 18 percent of Millennials believe themselves to be the reincarnation of Kublai Khan, take it with a grain of salt. It's a lot easier to give a stupid answer on a survey than to truly hold a nutty belief.

Sadly, the margin of stupid also probably applies to voting.

Sunday, September 10, 2017

a16z podcast on trade

I recently had the pleasure of appearing on the a16z podcast (a16z stands for Andreessen Horowitz, the venture capital firm). The topic was free trade, and the other guest was Russ Roberts of EconTalk.

Russ is known for making the orthodox case for free trade, and I've expressed some skepticism and reservations, so it seemed to me that my role in this podcast was to be the trade skeptic. So I thought of three reasons why pure, simple free trade might not be the optimal approach.

Reason 1: Cheap labor as a substitute for automation

Getting companies and inventors to innovate is really, really hard. Basically, no one ever captures the full monetary benefit of their innovations, so society relies on a series of kludges and awkward second-best solutions to incentivize innovative activity.

One of the ideas that has always fascinated me is the notion that cheap labor reduces the incentive for labor-saving innovation. This is the Robert Allen theory of the Industrial Revolution - high wages and cheap capital forced British businesspeople to start using machines, which then opened up a bonanza of innovation. It also pops up in a few econ models from time to time.

I've written about this idea in the context of minimum wage policy, but you can also apply it to trade. In the 00s, U.S. manufacturing employment suddenly fell off a cliff, and after about 2003 or so manufacturing productivity growth slowed down (despite the fact that you might expect it to accelerate as less productive workers were laid off first). That might mean that the huge dump of cheap Chinese labor onto the world market caused rich-world businesses to slack off on automation.

That could be an argument for limiting the pace at which rich countries open up trade with poor ones. Of course, even if true, this would be a pretty roundabout way of getting innovation, and totally ignores the well-being of the people in the poor country.

Also, this argument is more about the past than the future. China's unit labor costs have risen to the point where the global cheap labor boom is effectively over (since no other country or region is emerging to take China's place as a high-productivity cheap manufacturing base).

Reason 2: Adjustment friction

This is the trade-skeptic case that everyone is waking up to now, thanks to Autor, Dorn and Hanson. The economy seems to have trouble adjusting to really big rapid trade shocks, and lots of workers can end up permanently hurt.

Again, though, this is an argument about the past, not the future. The China Shock is over and done, and probably won't be replicated within our lifetime. So this consideration shouldn't affect our trade policy much going forward.

Reason 3: Exports and productivity

This is another productivity-based argument. It's essentially the Dani Rodrik argument for industrial policy for developing countries, adapted to rich countries. There is some evidence that when companies start exporting, their productivity goes up, implying that the well-known correlation between exports and productivity isn't just a selection effect.

So basically, there's a case to be made that export promotion - which represents a deviation from classic free trade - nudges companies to enter international markets where they then have to compete harder than before, incentivizing them to raise their productivity levels over time. That could mean innovating more, or it could just mean boosting operational efficiency to meet international standards.

This is the only real argument against free trade that's about the future rather than the past. If export promotion is a good idea, then it's still a good idea even though the China Shock is over. I would like to see more efforts by the U.S. to nudge domestically focused companies to compete in world markets. It might not work, but it's worth a try.

Anyway, that's my side of the story. Russ obviously had a lot to say as well. So if you feel like listening to our mellifluous voices for 38 minutes, head on over to the a16z website and listen to the podcast! And thanks to Sonal Chokshi for interviewing us and doing the editing.

Friday, September 08, 2017

Realism in macroeconomic modeling

Via Tyler Cowen, I see that Ljungqvist and Sargent have a new paper synthesizing much of the work that's been done in labor search-and-matching theory over the past decade or so.

This is pretty cool (and not just because these guys are still doing important research at an advanced age). Basically, Ljungqvist and Sargent are trying to solve the Shimer Puzzle - the fact that in classic labor search models of the business cycle, productivity shocks aren't big enough to generate the kind of employment fluctuations we see in actual business cycles. A number of theorists have proposed resolutions to this puzzle - i.e., ways to get realistic-sized productivity shocks to generate realistic-sized unemployment cycles. Ljungqvist and Sargent look at these and realize that they're basically all doing the same thing - reducing the value of a job match to the employer, so that small productivity shocks are more easily able to stop the matches from happening:
The next time you see unemployment respond sensitively to small changes in productivity in a model that contains a matching function, we hope that you will look for forces that suppress the fundamental surplus, i.e., deductions from productivity before the ‘invisible hand’ can allocate resources to vacancy creation. 
The fundamental surplus fraction is the single intermediate channel through which economic forces generating a high elasticity of market tightness with respect to productivity must operate...The role of the fundamental surplus in generating that response sensitivity transcends diverse matching models... 
For any model with a matching function, to arrive at the fundamental surplus take the output of a job, then deduct the sum of the value of leisure, the annuitized values of layoff costs and training costs and a worker’s ability to exploit a firm’s cost of delay under alternating-offer wage bargaining, and any other items that must be set aside. The fundamental surplus is an upper bound on what the “invisible hand” could allocate to vacancy creation. If that fundamental surplus constitutes a small fraction of a job’s output, it means that a given change in productivity translates into a much larger percentage change in the fundamental surplus. Because such large movements in the amount of resources that could potentially be used for vacancy creation cannot be offset by the invisible hand, significant variations in market tightness ensue, causing large movements in unemployment.
That's a useful thing to know.
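The quoted mechanism reduces to simple arithmetic. Here's a sketch with illustrative numbers of my own choosing (not Ljungqvist and Sargent's calibration):

```python
def surplus_elasticity(productivity, deductions, shock=0.01):
    """Elasticity of the fundamental surplus with respect to productivity:
    the percent change in the surplus per 1% change in productivity."""
    s0 = productivity - deductions
    s1 = productivity * (1 - shock) - deductions
    return (s0 - s1) / s0 / shock

# Output normalized to 1.0. If the deductions (value of leisure, layoff
# and training costs, bargaining frictions, etc.) eat 95% of output, the
# surplus is 0.05 -- and a 1% productivity drop wipes out 20% of it.
print(round(surplus_elasticity(1.0, 0.95), 6))  # 20.0
```

In the limit of small shocks the elasticity is just output divided by surplus, so the smaller the fundamental surplus fraction, the more violently vacancy creation - and hence unemployment - responds to a given productivity shock.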

Of course, I suspect that recessions are mostly not caused by productivity shocks, and that these business cycle models will ultimately be improved by instead considering shocks to the various things that get subtracted from productivity in the "fundamental surplus". That should affect unemployment in much the same way as productivity shocks, but will probably have advantages in explaining other business cycle facts like prices. Insisting that the shock that drives unemployment be a productivity shock seems like a tic - a holdover from a previous age. But that's just my intuition - hopefully some macroeconomist will do that exercise.

But anyway, I think the whole field of labor search-and-matching models is interesting, because it shows how macroeconomists are gradually edging away from the Pool Player Analogy. Milton Friedman's Pool Player Analogy, if you'll recall, is the idea that a model doesn't have to have realistic elements in order to be a good model. Or more precisely, a good macro model doesn't have to fit micro data, only macro data. I personally think this is silly, because it ends up throwing away most of the available data that could be used to choose between models. Also, it seems unlikely that non-realistic models could generate realistic results.

Labor search-and-matching models still have plenty of unrealistic elements, but they're fundamentally a step in the direction of realism. For one thing, they were made by economists imagining the actual process of workers looking for jobs and companies looking for employees. That's a kind of realism. Even more importantly, they were based on real micro data about the job search process - help-wanted ads in newspapers or on websites, for example. In Milton Friedman's analogy, that's like looking at how the pool player actually moves his arm, instead of imagining how he should move his arm in order to sink the ball.

It's good to see macroeconomists moving away from this counterproductive philosophy of science. Figuring out how things actually work is a much more promising route than making up an imaginary way for them to work and hoping the macro data is too fuzzy to reject your overall results. Of course, people and companies might not search and bargain in the ways that macroeconomists have so far assumed they do. But because labor search modelers tend to take micro data seriously, bad assumptions will probably eventually be identified, questioned, and corrected.

This is good. Chalk labor search theory up as a win for realism. Now let's see macroeconomists make some realistic models of business investment!


For some reason, a few people read this post as claiming that labor search theory is something new. It's not! I was learning this stuff in macro class back in 2008, and people have been thinking about the idea since the 70s. In fact, if anything, there seems to be a mild dampening of enthusiasm for labor search models recently, though this is hard to gauge. One exception is that labor search models have been incorporated into New Keynesian theory, which seems like a good development.

Sadly, though, I haven't seen any similar theoretical trend dealing with business investment. This post was supposed to be a plug for that.

Thursday, September 07, 2017

An American Whitopia would be a dystopia

In a recent essay about the racial politics of the Trump movement, Ta-Nehisi Coates concluded with a warning:
It has long been an axiom among certain black writers and thinkers that while whiteness endangers the bodies of black people in the immediate sense, the larger threat is to white people themselves, the shared country, and even the whole world. There is an impulse to blanch at this sort of grandiosity. When W. E. B. Du Bois claims that slavery was “singularly disastrous for modern civilization” or James Baldwin claims that whites “have brought humanity to the edge of oblivion: because they think they are white,” the instinct is to cry exaggeration. But there really is no other way to read the presidency of Donald Trump.
Yes, at first glance, the notion that Trumpian white racial nationalism is a threat to the whole world, or the downfall of civilization, etc. seems a bit of an exaggeration. Barring global thermonuclear war, Trump and his successors aren't going to bring down human civilization - the U.S. is powerful and important, but it isn't nearly that powerful or important.

But there's an important truth here. An America defined by white racial nationalism - an American Whitopia - would be an economic and cultural disaster movie. It would be a dysfunctional, crappy civilization, sinking into the fetid morass of its own decay. Some people think that an American Whitopia would be bad for people of color but ultimately good for whites, but this is dead wrong. Although nonwhite Americans would certainly suffer greatly, white American suffering under the dystopia of a Trumpist society would be dire and unending. 

Here is a glimpse of that dark future, and an explanation of why it would fail so badly.

Don't think Japan. Think Ukraine.

First, a simple observation: Racial homogeneity is no guarantee of wealth. Don't believe me? Just look at a night photo of North Korea and South Korea:

The red arrow and white outline point to North Korea. It's completely pitch dark at night because it's poor as hell. People starve there. But it's every bit as ethnically pure and homogeneous as its neighbor South Korea - in fact, it's the same race of people. North Korea, in fact, puts a ton of cultural emphasis on racial homogeneity. But that doesn't save their society from being a dysfunctional hellhole.

OK, so North and South Korea are an experiment. They prove that institutions matter - that a homogeneous society can either be rich and happy or poor and hellish, depending on how well it's run.

It's not just East Asia we're talking about, either. It's incredibly easy to find deeply dysfunctional white homogeneous countries. Ukraine, for instance. Ukraine's per capita GDP is around $8,300 at purchasing power parity. That's less than 1/6 of America's. It's also a deeply dysfunctional society, with lots of drug use and suicide and all of that stuff, and has been so since long before the Donbass War started. 

It's worth noting that Ukraine also has an economy largely based on heavy industry and agriculture - just the kind of economy Trump wants to go back to. So being a homogeneous all-white country with plenty of heavy industry and lots of rich farmland hasn't saved Ukraine from being a dysfunctional, decaying civilization. 

Alt-righters explicitly call for America to be a white racial nation-state. Some cite Japan as an example of a successful ethnostate. Japan is great, there's no denying it. But I know Japan, and let me assure you, an American Whitopia would not be able to be Japan. It definitely wouldn't be Sweden or Denmark or Finland. It couldn't even be Hungary or Czechia or Poland. It would probably end up more like Ukraine. 

Here's why.

Where are your smart people?

Modern economies have always depended on smart people, but the modern American economy depends on them even more than others and even more than in the past. The shift of industrial production chains to China has made America more dependent on knowledge-based industries - software, pharmaceuticals, advanced manufacturing, research and design, business services, etc. Even the energy industry is a high-tech, knowledge-based industry these days. Take away those industries, and America will be left trying to compete with China in steel part manufacturing. How's that working out for Ukraine?

If you want to understand how important knowledge-based industries are, just read Enrico Moretti's book, "The New Geography of Jobs". Cities and towns with lots of human capital - read, smart folks - are flourishing, while old-line manufacturing towns are decaying and dying. Trump has sold people a fantasy that his own blustering bullshit can reverse that trend, but if you really believe that, I've got a bridge to sell you.

So here's the thing: Smart Americans have no desire to live in a Whitopia. First, let's just look at smart white people. Among white Americans with a postgraduate degree, Clinton beat Trump in 2016 by a 13-point margin, even though Trump won whites overall by a 22-point margin. Overall, education was the strongest predictor of which white people voted for Trump and which went for Clinton. Also note that close to 2/3 of the U.S.' GDP is produced in counties that voted for Clinton. 

Richard Florida has been following smart Americans around for a long time, and he has repeatedly noted how they like to live in diverse places. Turn America into an ethnostate, and the smart white people will bolt for Canada, Australia, Japan, or wherever else isn't a racist hellhole.

Now look beyond white people. A huge amount of the talent that sustains America's key industries comes from Asia. An increasing amount also comes from Africa and the Middle East, though Asia is still key. Our best science students are mostly immigrants. Our grad students are mostly immigrants. Our best tech entrepreneurs are about half immigrants. You make America into Whitopia, and those people are gone gone gone.

I'm not saying every single smart American would leave an American white ethnostate. But most would, and many of those who remain wouldn't be happy. 

There's a clear precedent for this: Nazi Germany. Hitler's persecution of Jews made Jewish scientists leave. But it also prompted an exodus of scientists who weren't Jewish themselves but who didn't like seeing their Jewish colleagues, friends, and spouses get persecuted - Erwin Schroedinger, for example, and Enrico Fermi. This resulted in a bonanza of talent for America, and it starved Nazi Germany of critical expertise in World War 2. Guess who built the atom bomb? 

How you get there matters

There are just about 197 million non-Hispanic white people in the United States. But the total population of the country is 323 million. That means that around 126 million Americans are nonwhite. Among young Americans, nonwhites make up an even larger percentage. 

To turn America into a white racial nation-state - into Whitopia - would require some combination of four things:

1. Genocide

2. Ethnic cleansing (expulsion of nonwhites)

3. Denial of legal rights to nonwhites

4. Partition of the country

To see how these would go, look to historical examples. 

Genocide is usually done against a group that's a small minority, like Armenians or Jews. Larger-scale genocides are occasionally attempted - for example, Hitler's plan to wipe out the bulk of the Slavs, or the general mass murder of 25% of the population in Pol Pot's Cambodia. These latter attempts at mega-genocide killed a lot of people (Hitler slaughtered 25 million Slavs or so), but eventually they failed, with disastrous consequences for both the people who engineered them and the countries that acquiesced to the policies.

Denial of legal rights to minorities also has a poor record of effectiveness. The Southern slavery regime in the U.S., the apartheid regime in South Africa, and the Jim Crow system in the U.S. all ended up collapsing under the weight of moral condemnation, economic inefficiency, and war. 

Ethnic cleansing and partition have somewhat less disastrous records - see India/Pakistan, or Israel/Palestine, or maybe the Iraqi Civil War that largely separated Sunni and Shia. But "less disastrous" doesn't mean "fine". Yes, India and Pakistan and Israel survived intact. But those bloody campaigns of separation and expulsion left scars that still haven't healed. The cost of Israeli partition was an endless conflict and a garrison state. The cost of Indian partition was a series of wars and an ongoing nuclear standoff, not to mention terrorism in both India and Pakistan. 

In America, a partition would lead to a long bloody war. Remember, 39% of whites voted for Hillary Clinton. And the 29% of Asians and Hispanics who voted for Trump are unlikely to express similar support for a policy that boots them out of their country or town. Furthermore, nonwhite Americans are not confined to a single region that could be spun off into a new country, but concentrated in cities all over the nation. Thus, any partition would involve a rearrangement of population on a scale unprecedented in modern history. That rearrangement would inevitably be violent - a civil war on a titanic scale. 

That war would leave lots of bitterness and social division in its wake. It would leave bad institutions in place for many decades. It would elevate the worst people in the country - the people willing to do the dirty deeds of ethnic cleansing. In an earlier post about homogeneity vs. diversity, I wrote about how a white ethnostate created by an exodus of whites from America or Europe would probably be populated by the most fractious, violent, division-prone subset of white people. A white ethnostate created by a titanic civil war and mass ethnic cleansing would be run by an even worse subset.

This is why a partition or ethnic cleansing of America would lead to lower social trust, bad institutions, a violent society, and a kakistocracy. In other words, a recipe for a country that looks more like Ukraine (or even North Korea) than it does like Japan. 

It's already happening

This isn't just theoretical, and it isn't just based on historical analogies either. There are already the first signs of dysfunction and dystopia in the new America that Trump, Bannon, Sessions, Miller, and others are working to create. 

First of all, the places that voted for Trump are not doing so well economically or socially. Not only do Trump counties represent only about a third of the nation's GDP, but they also tend to be suffering disproportionately from the opiate epidemic. States that shifted most strongly toward Trump from 2012 to 2016, like Ohio, tend to be Rust Belt states with low levels of education, low immigration, and low percentages of Asians and Hispanics. Imagine all the things that make Ohio slightly worse off than Texas or California or New York or Illinois, then multiply those things by 1000 - and take away all the good economic stuff in Ohio, like the diverse urban revival in Columbus - to see what a Trumpian Whitopia would look like. 

Second, Trump is already creating a kakistocracy. His administration, of course, is scandal-ridden and corrupt. His allies are the likes of Joe Arpaio, who is reported to have tortured undocumented immigrants. His regime has emboldened murderous Nazi types to march in the street, and his condemnation of those Nazis has been rather equivocal.

That episode caused business leaders - some of the smartest, most capable Americans - to abandon the Trump administration. If even business leaders - who are mostly rich white men - abandon an administration with even a whiff of white nationalism, imagine who would be in charge in a Whitopia. It would not be the Tim Cooks and Larry Pages and Elon Musks of the world. It would be far less competent people. 

So already we're seeing the first few glimmerings of a dystopian Whitopia. We're still a long way off, of course - things could get a million times worse. But the Trump movement gives us a glimpse of what that path would look like, and it ain't pretty. 

Whitopia: a self-inflicted disaster of epic proportions

Refashioning America as a white ethnostate would be a self-inflicted catastrophe of epic, unprecedented proportions. It would drive America from the top rank of nations to the middle ranks. It would involve lots of pain and death and violence for everyone, but the white Americans stuck in Whitopia would suffer the longest. Nonwhite Americans would move away and become refugees, or die in the civil wars. But the ones who survived would escape the madness and begin new lives elsewhere, in saner, more functional countries. 

Meanwhile, white Americans and their descendants would be trapped in the decaying corpse of a once-great civilization. A manufacturing-based economy making stuff no one else wanted to buy, bereft of the knowledge industries and vibrant diverse cities that had made it rich. A violent society suffering long-lasting PTSD from a terrible time of war and atrocity. A divided society, with simmering resentment underneath the surface, like Spain under Franco. A corrupt, thuggish leadership, with institutions that keep corrupt, thuggish leaders in power. 

This is what it would take to turn America from a diverse, polyracial nation into a white ethnostate. That is the price that white Americans, and their children, and their children's children would pay. 

It's not worth it.

Thursday, August 24, 2017

The Market Power Story

So, there's this story going around the econosphere, which says that the economy is being throttled by market power. I've sort of bought into this story. It certainly seems to be getting a lot of attention from top economists. Autor, Dorn, Katz, Patterson and van Reenen have blamed industrial concentration for the fall in labor's share of income. Now there's a new paper out by De Loecker and Eeckhout blaming monopoly power for much more than that - lower wages, lower labor force participation, slower migration, and slow GDP growth. The paper is getting plenty of attention.

That's a big set of allegations. Everyone knows that the U.S. economy has been looking anemic since the turn of the century, and now a growing chorus of papers by well-respected people is claiming that we've found the culprit. Monopoly power could potentially become Public Enemy #1 for economists, the way taxes and unions were in the 70s, and antitrust could become the new silver bullet policy.

With those kinds of stakes, it was inevitable that pushback and skepticism would rev up - after all, you don't just let a big theory like that go unchallenged. My Bloomberg View colleague Tyler Cowen is one of the first to step up to the plate, with a blog post criticizing the De Loecker and Eeckhout paper (BTW I just spelled those both correctly from memory. I want some kind of prize.)

Tyler's post really made me think. It raises some important issues and caveats. But ultimately I don't think it does that much to derail the Market Power Story. Here are some of my thoughts on Tyler's points.

1. Monopolistic Competition

There are two ways these mark-ups could go up: first, there may be more outright monopoly; second, there may be more monopolistic competition, with high mark-ups but also high fixed costs, and firms earning close to zero profits... Consider my local Chinese restaurant.  Maybe the fixed cost of a restaurant has gone up, due to rising rents and the need to invest in information technology.  That can mean higher fixed costs, but still a positive mark-up at the margin.
First of all, and most importantly, monopolistic competition is perfectly consistent with the Market Power Story. Monopolistic competition in general does not produce an efficient outcome. Though monopolistic competition doesn't generate long-term profits like monopoly does, it does generate deadweight losses. This is true even when market power comes from product differentiation, as in the typical Dixit-Stiglitz formulation. Monopolistic competition does involve market power, so could also explain the drop in labor share, wages, etc.
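The zero-profits-but-still-inefficient point is easy to see in the textbook linear-demand case. Here's a minimal sketch (all parameter values are my own toy numbers, not from any of the papers discussed): a firm with market power prices above marginal cost, free entry lets fixed costs eat all the operating profit, and yet a deadweight loss remains.

```python
# Toy monopolistic competition: linear inverse demand P = a - b*Q,
# constant marginal cost c, fixed cost F. Numbers are illustrative.
a, b, c = 10.0, 1.0, 2.0

# Competitive benchmark: price = marginal cost.
Q_comp = (a - c) / b                # 8.0

# A firm with market power sets MR = MC: a - 2bQ = c.
Q_firm = (a - c) / (2 * b)          # 4.0
P_firm = a - b * Q_firm             # 6.0, a markup of 3x over cost

# Free entry drives economic profit to zero: the fixed cost absorbs
# the entire operating margin.
F = (P_firm - c) * Q_firm           # 16.0
profit = (P_firm - c) * Q_firm - F  # exactly 0.0

# But pricing above marginal cost still restricts output, so the
# deadweight loss triangle is still there.
dwl = 0.5 * (P_firm - c) * (Q_comp - Q_firm)  # 8.0
```

So "markups without profits" is not an alibi: the inefficiency comes from price exceeding marginal cost, not from the profits themselves.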

So this objection of Tyler's doesn't really go against the Market Power Story, which was always about monopolistic competition rather than outright monopoly.

What about markups vs. profits? In general, Tyler is right - higher markups could indicate higher fixed costs rather than higher profit margins.

But what would these fixed costs be? Tyler suggests rent, but that is a variable cost, not a fixed cost. He also suggests information technology costs -- buying computers for your office, software for the computers, point-of-sale tech, etc. But advances in IT seem just as likely to reduce fixed costs as to raise them. Typewriters cost as much in the 60s as computers do now, but computers can do infinitely more. So much business can be done on the internet, using freely available tools like Google Sheets and Google Docs and free chat apps for workplace communications. Internet outsourcing also dramatically lowers fixed costs by turning them into variable costs.

I'm open to the idea that fixed costs have increased, but I can't easily think of what those fixed costs would be. Maybe modern business organizations are more complex, and therefore require more up-front investment in firm-specific human capital? I'm just hand-waving here.

2. Profits

The authors consider whether fixed costs have risen in section 3.5.  They note that measured corporate profits have increased significantly, but do not consider these revisions to the data.  Profits haven’t risen by nearly as much as the unmodified TED series might suggest.
Tyler is referring to the fact that foreign sales aren't counted when calculating official profit margins, leading these margins to be overstated. Here is Jesse Livermore's corrected series, which uses gross value added in the denominator:

Profit margins are at an all-time high, but not that much higher than in the 50s and 60s.

A more accurate measure of true economic profits (i.e., what you'd expect market power to produce) would include opportunity costs (cost of capital) in the numerator. Simcha Barkai does this in a recent paper, also using gross value added in the denominator. Here's his graph for the last 30 years:

His series tells basically the same story as Livermore's - profits have gone up up up. But he doesn't extend back to the 50s, so it's not clear whether higher capital costs back then would reduce the high profit margins seen on Livermore's graph. Interest rates were similar in the 50s and 60s to what they are now, so it seems likely that Barkai's method would also produce a large-ish profit share back then as well.
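To make the accounting concrete - and to see why the level of capital costs matters so much for the 50s-60s comparison - here is a stylized Barkai-style calculation. All the numbers are invented for illustration; the point is only the structure: pure profit is what's left of gross value added after paying labor and the required return on capital.

```python
# Stylized Barkai-style profit-share accounting (numbers invented).
gva = 100.0           # gross value added
labor_comp = 60.0     # compensation of employees
capital_stock = 150.0 # measured capital stock

def profit_share(required_return):
    """Share of value added left after labor costs and the cost of capital."""
    capital_costs = required_return * capital_stock
    return (gva - labor_comp - capital_costs) / gva

# A higher required return on capital (e.g., a higher-interest-rate era)
# mechanically shrinks the measured pure-profit share.
low_rate_share = profit_share(0.05)   # 0.325
high_rate_share = profit_share(0.15)  # 0.175
```

This is why extending the series back in time matters: whether the 50s and 60s also show large pure profits depends entirely on what required return you assign to that era's capital.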

So it does seem clear that profit has gone way up in recent decades. But a full account should say why profit was also high in the 50s and 60s, and whether this too was caused by market power.

Also, as an interesting side note, Barkai mentions how corporate investment has fallen. That's interesting, because it definitely doesn't square with the "increasing fixed costs" story. Here's Barkai's graph:

If this is a rise in fixed costs we're looking at, where's the investment spending?

3. Market Concentration

In most areas we have more choice, maybe much more choice, than before...ask yourself a simple question — in how many sectors of the American economy do I, as a consumer, feel that concentration has gone up and real choice has gone down?  Hospitals, yes.  Cable TV?  Sort of, but keep in mind that program quality and choice wasn't available at all not too long ago.  What else?  There are Dollar Stores, Wal-Mart, Amazon, eBay, and used goods on the internet.  Government schools.  Hospitals.  Government.  Did I mention government?
Hmm. Autor et al. show that market concentration has increased in basically all broad industrial categories. On one hand, that doesn't take geography and local market power into account - if there's only one store in town, does it matter if it's an indie store or a Wal-Mart? But I think it gives us reliable information that Tyler's anecdotes don't. 

Also, Tyler is thinking only of consumer sectors. Much of the economy consists of intermediate goods and services - B2B. These could easily be getting more concentrated, even though we don't come into contact with them very often. 

(And one random note: Tyler at one point seems to equate product choice with market concentration, in the case of TV channels. But that's not right. If Netflix is the world's only distribution service, even if it has infinite movies and TV shows, it can jack up the price for watching TV and movies.)

That said, the example of retail is an interesting one. Autor shows that retail concentration has gone up, but I'm sure people now have more choice of retailers than they used to. I think the distinction between national concentration and local concentration probably matters a lot here. And that means maybe it matters for other industries too.

But as for which industries seem more concentrated than before, just off the top of my head...let me think. Banks. Airlines (which is why they aren't now all going bankrupt). Pharma. Energy. Consumer nondurables. Food. Semiconductors. Entertainment. Heavy equipment manufacturing. So anecdotally, it does seem like there's a lot of this going on, and it's not just health care and government. 

4. Output restriction

Similarly, the time series for manufacturing output is a pretty straight upward series, especially once you take out the cyclical component.  If there is some massive increase in monopoly power, where does the resulting output restriction show up in that data?  Once you ask that simple question, the whole story just doesn’t add up.
This is an important point. The basic model of monopoly power is that it restricts output. That's where the deadweight loss comes from (and the same for monopolistic competition too). But overall output is going up in most industries. What gives?

I think the answer is that it's very hard to know a counterfactual. How many more airline tickets would people be buying if the industry had more competition? How much more broadband would we consume? How many more bottles of shampoo would we buy? How many more miles would we drive? It's hard to know these things.

Still, I think this question could and should be addressed with some event studies. Did big mega-mergers change output trends in their industries? That's a research project waiting to be done. 
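A minimal version of that research design is just a difference in trends: compare an industry's output growth before and after a big merger. Here's a sketch on simulated data (the series, the merger date, and the size of the break are all invented for illustration; a real study would need controls and a comparison industry):

```python
# Sketch of a merger "event study" on simulated output data.
import random

random.seed(0)

# Simulated output level: 2% trend growth before the "merger" at t = 10,
# 1% after, plus small noise.
series = []
level = 100.0
for t in range(20):
    growth = 0.02 if t < 10 else 0.01
    level *= (1 + growth + random.gauss(0, 0.002))
    series.append(level)

def avg_growth(xs):
    """Average period-over-period growth rate of a series."""
    return sum(b / a - 1 for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

pre, post = series[:10], series[10:]
trend_break = avg_growth(pre) - avg_growth(post)  # ~0.01: output growth slowed
```

If mega-mergers were restricting output, this kind of pre/post trend break is what we'd hope to detect (or fail to detect) in the actual industry data.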

So overall, I think that while Tyler raises some interesting and important points, and provides lots of food for thought, he doesn't really derail the Market Power Story. Even more importantly, that story relies on more than just the De Loecker and Eeckhout paper (and dammit, I had to look up the spelling this time!). The Autor et al. paper is important too. So is the Barkai paper. So are many other very interesting papers by credible economists. So is the body of work showing how antitrust enforcement has weakened in the U.S. To really take down the story, either some common problem will have to be found with all of these papers, or each one (and others to come) will have to be debunked independently, or some compelling alternate explanation will have to be found.

The Market Power Story is still alive, and still worrying. 


Forgot to mention this in the original post, but basically I see the case of the Market Power Story - or any big economic story like this - as detective work. We're collecting circumstantial evidence, and while no piece of evidence is a smoking gun, each adds to the overall picture. IF the economy were being throttled by increased market power, we'd expect to see:

1. Increased market concentration (Check! See Autor et al.)

2. Increased markups (Check! See De Loecker and Eeckhout)

3. Increased profits (Check! See Barkai)

4. Decreased investment (Check! See Gutierrez and Philippon)

5. Increased prices following mergers (Probably check! See Blonigen and Pierce)

6. Weakened antitrust enforcement (Check! See Kwoka)

7. Decreased output (Not sure yet)

So, as I see it, the evidence is piling up from a number of sides here. Economists need to investigate the question of whether output has been restricted. But those who want to come up with an alternate story for the recent changes in industrial organization need one that's consistent with the various facts found by these various sleuthing detectives.

Update 2

Robin Hanson and Karl Smith both have posts responding to De Loecker and Eeckhout's paper and attacking the Market Power Story. Both give reasons why they think rising markups indicate monopolistic competition, rather than entry barriers. But both seem to forget that monopolistic competition causes deadweight loss. Just because it has the word "competition" in it does NOT mean that monopolistic competition is efficient. It is not.  

Update 3

Tyler has another post challenging the De Loecker and Eeckhout paper and the Market Power Story in general. His new post makes a variety of largely unconnected points. Briefly...

Tyler on general equilibrium:
If every sector of an economy becomes monopolistic, output will contract in each sector, and it might appear that productivity will decline.  But for the most part this output reduction will not be achieved by burning crops in the fields.  Rather, less will be produced and factors of production will be freed up for elsewhere.  New sectors will arise, and offer goods and services too, perhaps with monopolies as well... 
You can cite the deadweight loss of monopoly all you want, but we’re getting more outputs of other stuff.  Value-added could be either higher or lower, productivity too.
This seems like a hand-waving argument that economic distortions in one sector are never bad, because they free up resources to be used elsewhere. That's obviously wrong, though. To see this, suppose the government levied a 10000% tax on food. Yes, the labor and capital freed up from the contraction of the food industry would get used elsewhere. NO, overall this outcome would not be good for the economy. Monopoly acts like a tax, so a similar principle applies. 

No, resource reallocation does not make market distortions efficient. 
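The monopoly-as-tax analogy is exact in the textbook linear-demand case: for any monopoly outcome, there is a per-unit tax on a competitive industry that produces the same output restriction and the same deadweight loss. A quick check with invented parameters:

```python
# Monopoly markup vs. an equivalent per-unit tax.
# Inverse demand P = a - Q, constant marginal cost c (numbers illustrative).
a, c = 10.0, 2.0

Q_mono = (a - c) / 2       # monopoly output from MR = MC: 4.0
P_mono = a - Q_mono        # monopoly price: 6.0

# The tax that replicates the monopoly: the wedge between price and cost.
t = P_mono - c             # 4.0

# Competitive supply under the tax: price = c + t, so Q = a - (c + t).
Q_tax = a - (c + t)        # 4.0 — identical output restriction
```

No one argues that a 200% tax on an industry is harmless because the freed-up resources go elsewhere; by the same logic, neither is the monopoly wedge.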

Tyler on innovation: 
The Schumpeterian tradition, of course, suggested that market power would boost innovation.  There are at least two first-order effects pushing in this direction.  First, the monopoly has more “free cash” for R&D, and second there is a lower chance of the innovation benefiting competing firms too.  I don’t view the “monopoly boosts innovation” hypothesis as confirmed, but it probably has commanded slightly more sympathy from researchers than the opposite point of view.  Bell Labs did pretty well.
This is actually a good and important point, and I don't think we can dismiss it at all. There are economists who argue monopoly reduces innovation, and others who argue it increases it. 

Tyler on product diversity:
[Y]ou must compare [the efficiency loss from monopolistic competition] to the rise in product diversity that follows from monopolistic competition.
Does market power increase product diversity? That was certainly Edward H. Chamberlin's theory back in the 1930s. When you start getting technical, the question becomes less clear.

Tyler on De Loecker and Eeckhout, again:
But under those same conditions, profits are zero and so the mark-up arguments from the DeLoeker and Eeckhout paper do not apply and indeed cannot hold.
That seems incorrect to me. The fact that long-term profits are zero does NOT make monopolistic competition efficient. So the De Loecker and Eeckhout argument can indeed hold, quite easily. This basic fact - the inefficiency of monopolistic competition in standard theory - keeps coming up again and again. It appears to be a key fact the bloggers now rushing to attack the De Loecker and Eeckhout paper have not yet taken into account.

Thursday, August 17, 2017

"Theory vs. Data" in statistics too

Via Brad DeLong -- still my favorite blogger after all these years -- I stumbled on this very interesting essay from 2001, by statistician Leo Breiman. Breiman basically says that statisticians should do less modeling and more machine learning. The essay has several responses from statisticians of a more orthodox persuasion, including the great David Cox (whom every economist should know). Obviously, the world has changed a lot since 2001 -- where random forests were the hot machine learning technique back then, it's now deep learning -- but it seems unlikely that this overall debate has been resolved. And the parallels to the methodology debates in economics are interesting.

In empirical economics, the big debate is between two different types of model-makers. Structural modelers want to use models that come from economic theory (constrained optimization of economic agents, production functions, and all that), while reduced-form modelers just want to use simple stuff like linear regression (and rely on careful research design to make those simple models appropriate).

I'm pretty sure I know who's right in this debate: both. If you have a really solid, reliable theory that has proven itself in lots of cases so you can be confident it's really structural instead of some made-up B.S., then you're golden. Use that. But if economists are still trying to figure out which theory applies in a certain situation (and let's face it, this is usually the case), reduced-form stuff can both A) help identify the right theory and B) help make decently good policy in the meantime.

Statisticians, on the other hand, debate whether you should actually have a model at all! The simplistic reduced-form models that structural econometricians turn up their noses at -- linear regression, logit models, etc. -- are the exact things Breiman criticizes for being too theoretical! 

Here's Breiman:
[I]n the Journal of the American Statistical Association (JASA), virtually every article contains a statement of the form: "Assume that the data are generated by the following model: ..." 
I am deeply troubled by the current and past use of data models in applications, where quantitative conclusions are drawn and perhaps policy decisions made... 
[Data generating process modeling] has at its heart the belief that a statistician, by imagination and by looking at the data, can invent a reasonably good parametric class of models for a complex mechanism devised by nature. Then parameters are estimated and conclusions are drawn. But when a model is fit to data to draw quantitative conclusions... 
[t]he conclusions are about the model's mechanism, and not about nature's mechanism. It follows that...[i]f the model is a poor emulation of nature, the conclusions may be wrong... 
These truisms have often been ignored in the enthusiasm for fitting data models. A few decades ago, the commitment to data models was such that even simple precautions such as residual analysis or goodness-of-fit tests were not used. The belief in the infallibility of data models was almost religious. It is a strange phenomenon—once a model is made, then it becomes truth and the conclusions from it are [considered] infallible.
This sounds very similar to the things reduced-form econometric modelers say when they criticize their structural counterparts. For example, here's Francis Diebold (a fan of structural modeling, but paraphrasing others' criticisms):
A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results.
In both cases, the criticism is that if you have a misspecified theory, results that look careful and solid will actually be wildly wrong. But the kind of simple stuff that (some) structural econometricians think doesn't make enough a priori assumptions is exactly the stuff Breiman says (often) makes way too many.

So if even OLS and logit are too theoretical and restrictive for Breiman's tastes, what does he want to do instead? Breiman wants to toss out the idea of a model entirely. Instead of making any assumption about the DGP, he wants to use an algorithm - a set of procedural steps to make predictions from data. As discussant Brad Efron puts it in his comment, Breiman wants "a black box with lots of knobs to twiddle." 

Breiman has one simple, powerful justification for preferring black boxes to formal DGP modeling: it works. He shows lots of examples where machine learning beat the pants off traditional model-based statistical techniques, in terms of predictive accuracy. Efron is skeptical, accusing Breiman of cherry-picking his examples to make machine learning methods look good. But LOL, that was back in 2001. As of 2017, machine learning - in particular, deep learning - has accomplished such magical feats that no one now questions the notion that these algorithmic techniques really do have some secret sauce. 
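You can rerun a miniature version of Breiman's contest yourself. Below is a sketch using only the standard library: data simulated from a nonlinear process, a linear regression standing in for the "data model," and a nearest-neighbor averager standing in for the "algorithmic model" (the data-generating process, sample sizes, and choice of k are all my own, purely for illustration):

```python
# A toy "data model vs. algorithmic model" contest on simulated data.
import math
import random

random.seed(1)

def sample(n):
    """Draw n points from a nonlinear DGP: y = sin(x) + noise."""
    pts = []
    for _ in range(n):
        x = random.uniform(0, 6)
        pts.append((x, math.sin(x) + random.gauss(0, 0.1)))
    return pts

train, test = sample(200), sample(200)

# The "data model": ordinary least squares for y = b0 + b1*x (closed form).
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
b1 = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
b0 = my - b1 * mx

def ols(x):
    return b0 + b1 * x

# The "algorithmic model": average the k nearest training points —
# a black box with one knob to twiddle.
def knn(x, k=10):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def mse(predict):
    return sum((predict(x) - y) ** 2 for x, y in test) / len(test)

ols_error, knn_error = mse(ols), mse(knn)
```

On this deliberately nonlinear problem the atheoretical predictor wins by a wide margin, which is Breiman's whole point: when you don't actually know the DGP, assuming a linear one can cost you a lot of predictive accuracy.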

Of course, even Breiman admits that algorithms don't beat theory in all situations. In his comment, Cox points out that when the question being asked lies far out of past experience, theory becomes more crucial:
Often the prediction is under quite different conditions from the data; what is the likely progress of the incidence of the epidemic of v-CJD in the United Kingdom, what would be the effect on annual incidence of cancer in the United States of reducing by 10% the medical use of X-rays, etc.? That is, it may be desired to predict the consequences of something only indirectly addressed by the data available for analysis. As we move toward such more ambitious tasks, prediction, always hazardous, without some understanding of underlying process and linking with other sources of information, becomes more and more tentative.
And Breiman agrees:
I readily acknowledge that there are situations where a simple data model may be useful and appropriate; for instance, if the science of the mechanism producing the data is well enough known to determine the model apart from estimating parameters. There are also situations of great complexity posing important issues and questions in which there is not enough data to resolve the questions to the accuracy desired. Simple models can then be useful in giving qualitative understanding, suggesting future research areas and the kind of additional data that needs to be gathered. At times, there is not enough data on which to base predictions; but policy decisions need to be made. In this case, constructing a model using whatever data exists, combined with scientific common sense and subject-matter knowledge, is a reasonable path...I agree [with the examples Cox cites].
In a way, this compromise is similar to my post about structural vs. reduced-form models - when you have solid, reliable structural theory or you need to make predictions about situations far away from the available data, use more theory. When you don't have reliable theory and you're considering only a small change from known situations, use less theory. This seems like a general principle that can be applied in any scientific field, at any level of analysis (though it requires plenty of judgment to put into practice, obviously).

So it's cool to see other fields having the same debate, and (hopefully) coming to similar conclusions.

In fact, it's possible that another form of the "theory vs. data" debate could be happening within machine learning itself. Some types of machine learning are more interpretable, which means it's possible - though very hard - to open them up and figure out why they gave the correct answers, and maybe generalize from that. That allows you to figure out other situations where a technique can be expected to work well, or even to use insights gained from machine learning to allow the creation of good statistical models.

But deep learning, the technique that's blowing everything else away in a huge array of applications, tends to be the least interpretable of all - the blackest of all black boxes. Deep learning is just so damned deep - to use Efron's term, it just has so many knobs on it. Even compared to other machine learning techniques, it looks like a magic spell. I enjoyed this cartoon by Valentin Dalibard and Petar Veličković (tweeted by Dendi Suhubdy):

Deep learning seems like the outer frontier of atheoretical, purely data-based analysis. It might even classify as a new type of scientific revolution - a whole new way for humans to understand and control their world. Deep learning might finally be the realization of the old dream of holistic science or complexity science - a way to step beyond reductionism by abandoning the need to understand what you're predicting and controlling.

But this, as they say, would lead us too far afield...

(P.S. - Obviously I'm doing a ton of hand-waving here, I barely know any machine learning yet, and the paper I'm writing about is 16 years out of date! I'll try to start keeping track of cool stuff that's happening at the intersection of econ and machine learning, and on the general philosophy of the thing. For example, here's a cool workshop on deep learning, recommended by the good folks at r/badeconomics. It's quite possible deep learning is no longer anywhere near as impenetrable and magical as outside observers often claim...)