Monday, August 31, 2009

Youthful exuberance: Age and the housing bubble

The internet does not produce any non-self-referential data; it simply makes data easier to find and faster to obtain. Still, if the cost of finding data drops below some threshold, some people will get into investigation who would not otherwise have bothered -- and that's as good as producing new data, since unexamined data do not really exist. Probably the best result of this is the ability of people with computers to figure out whether a popular story is true or not.

One story that was everywhere in the media not too long ago -- although it has probably since been crowded out by all the news about the collapse of the global economy -- was that young people today refuse to grow up. In particular, they were supposed to be opting more than ever before to live at home and freeload off their parents well into their 30s. This story broke into the mainstream when Time Magazine ran a cover story about twixters, the goofy name for these aging slackers. Around the same time, MTV devoted an episode of True Life (a popular hour-long documentary show with different topics each week) to young people who were moving back with their parents.

These stories were circulating during 2005, at the height of the real estate euphoria, so the inference is that young people were being left out of the housing bubble. The greatest beneficiaries must have been older people starting a family, or perhaps middle-aged speculators looking to "flip that house." But rather than take the tales at face value, let's see how well or how badly the bubble treated young people, using housing data from the Statistical Abstract of the United States, which begin in 1982 and go through 2007.

First, here is the distribution of homeowners by age, using coarser and finer-grained age groups:

It would seem from the first chart that homeowners were becoming grayer, but the youngest category is pretty wide -- "under 35." Looking at the finer-grained chart, we see that those under 30 seemed to be doing pretty well, especially since their low in the early 1990s. This looks especially true for the under-25 group, but it's hard to see in the chart. To get a better view, here is the homeownership rate for the under-25 age group over time:

Indeed, they were treated quite well by the housing bubble that began in the late 1990s. Just eyeballing it, there is a gain of about 10 percentage points -- or about a 67% increase over their early-'90s rate. But maybe they were nothing special, and the other age groups had similar increases. Let's have a look at how their rates changed from the early-'90s low to the peak in the mid-2000s:

The middle-aged age groups and above tend to have homeownership rates of at least 70%, often 80% or higher, so there is only so much room for them to gain before they hit 100%. That's why their increases aren't so great. The increases among the elderly reflect a steady increase since 1982, unlike the ups and downs that every other age group went through, so they're a different story.
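A quick aside on the arithmetic above: a gain in percentage points (absolute) and a percent increase (relative) are different measures, and the difference is exactly why high-rate groups have so little room for relative gains. A toy calculation makes it concrete -- the rates here are eyeballed from the chart, so treat them as illustrative, not exact:

```python
low_rate = 0.15    # under-25 rate at the early-'90s low (eyeballed)
peak_rate = 0.25   # under-25 rate at the mid-2000s peak (eyeballed)

points_gained = (peak_rate - low_rate) * 100         # absolute change
percent_increase = (peak_rate / low_rate - 1) * 100  # relative change

print(f"{points_gained:.0f} points, {percent_increase:.0f}% increase")
# 10 points, 67% increase

# The same 10-point gain means much less, relatively, from an 80% base:
print(f"{(0.90 - 0.80) / 0.80 * 100:.1f}% increase")
# 12.5% increase
```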

But for the younger age groups, all of them had lots of room to gain, and yet it was still the very youngest whose rates shot up the most in relative terms. Clearly we were all fed a bunch of bull about 20-somethings refusing to make it on their own, mooching off their parents forever, and so on. A priori, that story should have sounded implausible since the 20-somethings of 2005 had come of age during two frenetic economic booms separated by a quite tepid recession. It sounds instead like the world was their oyster.

(I was a mid-20-something living and working in Barcelona at the time -- something I couldn't dream of doing now, with the Spanish economy in ruins. And even though housing became cheap after their real estate bubble burst, it wouldn't be any fun this time around since the euphoria has dissipated.)

How can we get a good intuitive feel for how spectacular the housing bubble was for young people? Let's compare their percent increase in homeownership rates to other not-so-credit-worthy groups. Many commentators have pointed to the increase of homeownership among Blacks and Hispanics, but their rates increased by just over 10% and 20%, respectively. They started fairly low too -- not like the middle-aged -- so their increase was similar to the 25 - 29 and 30 - 34 age groups. Nothing close to the gains of the under-25 group. The same is true for single mothers, whose rates increased by under 20%.

As far as I can tell, young people were the greatest beneficiaries of the drive to debauch lending standards, especially the move to do away with down payments. After all, how much money could you have saved up by that age to put down? Forget the fact that what little you did earn you probably piddled away on your car, electronics, beer, shoes, handbags, and so on.

These data underscore the importance of paying attention to age as a demographic variable, something that we rarely do, except to whine about how youth-obsessed our culture is (or is becoming). That's not true either, but in any case, we already have lots of analysis based on race or ethnicity, sex, class, and even sexual orientation. Age is typically a far stronger cause of differences than any of those, yet we neglect it too often. This may reflect how homogeneous our social circles are by age, so that differences across age groups don't automatically spring to mind. Still, we should make a conscious effort to pay better attention to age when trying to figure out how people and their societies work.

Saturday, August 29, 2009

Brief: Have we gotten more or less sympathetic since Adam Smith's time?

In his Theory of Moral Sentiments, Adam Smith made the observation that we care less about the disasters that befall others if they are remote and faceless, while we panic at much smaller hardships of our own. But he wrote that before the Industrial Revolution really took off, and so before the peace-and-life-valuing merchant classes genetically replaced the old warring aristocratic class. In the meantime, capital punishment has been widely banned across Europe and its off-shoots, we have laws against cruelty to animals, and we have TV and other media that provide us with vivid daily images of the troubles that beset people in far-off places. So, does his observation still hold up in our more bleeding heart times?

To check this, I searched the NYT for its coverage of a rival first-world country, Japan, and a non-threatening third-world country, Indonesia. If newspapers cover a country out of sympathy for its plight -- and this supply would reflect demand for such coverage -- then there should be declining coverage of Japan during their incredible boom of the 1980s, but increasingly more coverage as they slid into the Lost Decade of the 1990s and even the first half of the 2000s. Similarly, coverage of Indonesia should spike during 2004 - 2005 in the wake of the disastrous tsunami, which hit Indonesia much harder than other countries. (This event was pretty close to Smith's hypothetical earthquake that swallowed all of China.) There should also be a spike during their financial crisis of 1997 - 1998, although this was not a deadly catastrophe that could provide the level of gory detail as a tsunami, so this spike should be smaller than that of the tsunami coverage.

Here is the NYT's coverage of these countries:

Counter to the sympathy hypothesis, coverage of Japan shot up during the 1980s, as Americans began to fear more and more that Japan was going to economically take over our country. When Japan slid into a long recession, those fears evaporated, and the supply of alarming stories declined as a result, all the way to the present. This is not to say that there was no demand for sympathy stories about the Japanese recession, only that fear is a stronger driver of coverage than sympathy.

The Indonesia data give some support to the sympathy hypothesis. There is indeed a two-year increase for 2004 - 2005, reversing a previous steady decline. After that moment of sympathy, though, coverage starts to decline again. Moreover, the upward blip during the tsunami is tiny compared to the skyrocketing coverage of 1997 - 1998, when Indonesia was rocked by a financial crisis. This reflects our panic that the Asian Financial Crisis would infect our own economy. Before this threat to us, we paid relatively little -- and consistently little -- attention to Indonesia.

On the whole, the coverage data support Adam Smith's claim, despite our being more emotionally sensitive than people were during his time. However, there is no real paradox here if we view the demand for sympathy stories -- and the coverage that supplies that demand -- as a function of both some baseline concern for others plus a component that responds to actual disasters. Let's say that sympathy is a simple, linear function of disasters:

S = b + r*D

where S is the level of sympathy, b is our baseline level in the absence of news about disasters, D is the level of disasters that we hear about, and r measures how responsive our sympathy is to those disasters.

All that Smith was saying is that r is greater for disasters that are nearer to us than for disasters that are more remote. He made no claim about our baseline level. The genetic and cultural proliferation of the merchant classes -- and the concomitant doing away with public executions, slavery, etc. -- may have increased the baseline without affecting how r responds to disasters at different social distances. This seems like a useful distinction to draw, as it clears up a lot of the confusion about whether we've become more selfish or sensitive in recent centuries.
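The baseline-versus-responsiveness distinction can be seen in a toy calculation (all numbers here are hypothetical; Smith's claim is only that r is larger for nearby disasters than for remote ones):

```python
def sympathy(b, r, D):
    """The linear model above: baseline b plus responsiveness r
    times the level of disasters D that we hear about."""
    return b + r * D

r_near, r_remote = 2.0, 0.2  # hypothetical: near sufferers move us more
D = 3                        # some fixed level of disaster news

# Raising the baseline b (more bleeding-heart times) lifts sympathy for
# near and remote sufferers alike, but the near/remote gap -- which is
# set by r alone -- is untouched.
for b in (1.0, 5.0):
    gap = sympathy(b, r_near, D) - sympathy(b, r_remote, D)
    print(f"b={b}: gap={gap:.1f}")
# prints gap=5.4 for both values of b
```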

Brief: Relationship anger by political views for men and women

The stereotype is that liberal women are more combative and temperamental in relationships, compared to the more docile and even-headed conservative women. Let's take a look at the GSS and see if it's true. We'll look at the same question for men.

The survey asks about some recent event when the respondent was angry, and it follows up to ask who they were angry at. I pooled all three degrees of liberal, and all three degrees of conservative, to get roughly equal sample sizes for liberal, moderate, and conservative. The pattern is clear even without doing this, but this is to make the presentation easier. Here are the results (the y-axis should read "fraction angry," not "% angry"):

First, note that women are about twice as likely to be angry at their relationship partner than are men. No surprise there. Also note that the pattern by political views isn't so different between the sexes. Among females, moderates and conservatives aren't really different from each other, but both are twice as likely as liberals to get angry at their partner. Among males, moderates and conservatives are about 1.5 times as likely as liberals to get angry at their partner.

There are a million different ad hoc reasons I could give for why the stereotype isn't true, but I'd have no way of verifying them with just these data. Whatever the cause, liberal women are actually a safer bet if you're looking for peace in the relationship.
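The pooling step described above can be sketched in Python. The 7-point coding assumed here (1 - 3 = degrees of liberal, 4 = moderate, 5 - 7 = degrees of conservative) is my reading of the polviews variable, so verify it against the GSS codebook:

```python
from collections import Counter

# Assumed GSS polviews coding: 1-3 = degrees of liberal, 4 = moderate,
# 5-7 = degrees of conservative (check the codebook before relying on it).
def pool_polviews(code):
    if code <= 3:
        return "liberal"
    if code == 4:
        return "moderate"
    return "conservative"

responses = [1, 2, 3, 4, 5, 6, 7, 2, 6, 4]  # hypothetical respondents
print(Counter(pool_polviews(c) for c in responses))
# Counter({'liberal': 4, 'conservative': 4, 'moderate': 2})
```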

GSS variables used: madat1, polviews, sex

Sunday, August 23, 2009

The changing social climate of young people from 1870 to present

[This will end up being quite long, so I will post it in three stages for easier digestion. Expect two updates during the week. First update on generations added.]

OK, so the title is a bit ambitious, but I've finally found a way to measure this. Anyone who's read my personal blog knows that one of my obsessions is documenting what's going on among young people. They typically leave very little in terms of a written record, and as adults most people erase the memories of their adolescence -- to the benefit of their mental health -- and replace them with whatever accords with their grown-up worldview. For example, they may imagine adolescents as relatively innocent creatures, whereas their memories -- if unearthed -- would remind them of what an anarchic jungle secondary school was, in contrast to the much more tranquil social lives of adults.

So, the need for a clearer picture of young people's lives is great, yet clearly unmet. As luck would have it, The Harvard Crimson (the undergraduate newspaper) has placed all of its content online, stretching back to the paper's origin in 1873, and it is fully searchable. This allows me to search for some signal of the zeitgeist year by year and plot the strength of this signal over time. If you read my three-part series at GNXP on the changing intellectual climate, as judged by what appears in academic journals, the approach is the same. (Here are those articles: Death of Marxism, etc., A follow-up, and Popularity of science in studying humans.)

In short, I count the number of articles in a given year that have some keyword -- "Marxism," for example -- and then standardize this by dividing by a very common neutral word. This protects against seeing an imaginary trend that is simply due to the newspaper pumping out many more articles over time (or fewer in hard times). Ideally, I would standardize by using "the" since it appears in every article and therefore gives us the total number of articles, answering the question "What percent of all articles in that year contained this keyword?" Unfortunately, the Crimson search engine won't allow me to use "the," but it did let me search for "one," which is also a highly frequent word and will be a decent substitute. Before getting into the meat of the post, here is the number of articles found by searching for "one" across time:

We were correct to standardize somehow, given the huge increase since the early 1990s (probably reflecting the greater ease of desktop publishing with cheap computers and software).
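The standardization step can be sketched as follows; the counts below are hypothetical stand-ins for Crimson search results, not real numbers:

```python
# Hypothetical yearly hit counts standing in for Crimson search results.
keyword_hits = {1990: 42, 1991: 55, 1992: 61}      # e.g., "racism" or "racist"
one_hits     = {1990: 800, 1991: 950, 1992: 1200}  # the common-word denominator

def standardized_series(hits, denom):
    """Divide keyword counts by counts for a very common word ("one"),
    so a paper that simply prints more articles doesn't look like a trend."""
    return {year: hits[year] / denom[year] for year in sorted(hits)}

series = standardized_series(keyword_hits, one_hits)
print(series)
# 1992 has the most raw hits, but standardized, 1991 is the peak year
```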

A real improvement over the previous series that used JSTOR is that the Crimson archives run right up to the present, whereas JSTOR typically has a 5-year lag between the article's being published and being archived in JSTOR. The only limitation here is that we only see what's going on among upper-middle class young people, rather than all of them, but I'm mostly looking at events or topics that permeate our society, rather than academic fads as before. (Although I will probably look into that as well using the Crimson database.) So these youngsters ought to be fairly representative.

Moving on to the substance, I've put together graphs on three broad topics that are of great importance to young people. Adults also tend to worry about where the next generation stands on these topics. They are identity politics, religion, and generational awareness.

Identity politics

Everyone knows that the 1960s brought a sea change in college student culture, but I don't think anyone has presented a clear picture of how the concern with racism, sexism, etc. began or how it has changed over time to the present. In particular, most people (in my experience) over-estimate how prevalent identity politics were in the '60s, while forgetting how widespread the corresponding hysteria was in the early 1990s.

I began at the earliest date that the word appears or 1960, whichever came first, and end in 2008. The y-axis shows the count of articles with the keyword divided by the count of articles with "one." I chose the following identity politics keywords: "racism" or "racist," "sexism" or "sexist," "homophobia" or "homophobic," "date rape," and "hate speech." I also made a composite which sums all of them up. Here are the results:

"Racism" increases sharply through the '60s, peaks in 1970, and declines moderately through the '70s and '80s. However, starting in the late '80s, there's another surge which peaks in 1992. After the early '90s panic, though, the preoccupation with racism has declined substantially, so much so that it is now back down to its pre-late-'60s level. You'll recall how little widespread disruption there was in the wake of Hurricane Katrina, the Jena Six, and the Duke lacrosse hoax, despite every professional activist -- and even Kanye West on a televised awards show -- struggling to whip up society into another revolution. But 2006 had none of the racial hysteria to fuel it that 1992 did, so it was impossible to spark another round of race riots in L.A.

As for "sexism," it only begins in 1970. Many people who weren't alive back then don't realize how small a role women's liberation played in The Sixties. But I used to be a radical activist in my naive college days, and Z Magazine operator Michael Albert emphasized that second-wave feminism was largely a delayed response to the perceived sexism of the anti-war, civil rights, and anti-capitalist movements that formed the basis of the counter-culture's concerns. Although there is not much change during the '70s and '80s, the term debuted at a fairly high level, so it's not as though there were few adherents -- there just wasn't a massive increase. But once again, starting in 1988 there's a resurgence that peaks in 1992. This is when third-wave feminism was born, and it coincided with the resurgence of racial politics.

However, just as with race, panic about sex plummeted shortly afterward and is currently even lower than its initial level. Again, recall how few massive protests there were -- if any -- about the Duke lacrosse hoax, whose bogus premise was a bunch of jocks raping a stripper. Or for that matter, how pathetic the response was to the Larry Summers brou-ha-ha of early 2005. It surely generated some controversy among academics, but even that didn't last very long, and no one outside of academia gave a shit. In particular, the students didn't care -- influence from their leftist professors notwithstanding. If the zeitgeist lacks an obsession with sex roles, their feminist professors can preach all they want, but it will only fall on deaf ears.

Paying attention to homophobia only begins in 1977 -- 8 full years after the Stonewall Riots. And remember, that was in 1969, a ripe time for liberationist and revolutionary movements. This underscores the importance of using quantitative data in reconstructing history, since most people nowadays would imagine that back in the turbulent '60s, they were surely fighting against homophobia the way that their counterparts do now. Far from it. Back in the '60s, and even for most of the '70s and early '80s, campus radicals and liberals couldn't have cared less. Again, back then it was all about civil rights, stopping the imperialist war machine, and maybe smashing capitalism -- not about gay marriage or getting more women into science careers.

As we saw with "racism" and "sexism," "homophobia" saw a sharp rise in the late '80s and peaked in 1989 or '90, before dropping precipitously afterward. Currently it is more or less where it was before the late '80s surge. Liberals and young people may support gay marriage, but the larger meta-narrative of homophobia, as the activists would say, does not interest them.

Now we come to two more specific topics. "Date rape" basically tracks third-wave feminism, although there is an isolated occurrence in 1979 (which I exclude from the graph to keep the trend clear). Quite simply, there is rape and there is not-rape. "Date rape" was a term that tried to criminalize sex that wasn't rape, but where the girl regretted it or was unsure of what was happening. Probably the guy was stinking drunk too. Importantly, claims about "date rape" drugs later turned out to be bogus, as doctors in the UK pointed out -- most of the women claiming to have been slipped a "date rape drug" had no such drug in their system, although they typically did have lots of alcohol or other hard drugs present. It was essentially a witch-hunt or moral panic. By now, we know what to expect -- a sharp rise during the late '80s that peaks in 1992 and falls off a cliff shortly thereafter. Nowadays no one takes the idea seriously.

"Hate speech" shows roughly the same pattern, although it doesn't get started quite as early. It appears suddenly in 1990, peaks in 1991, and plummets right away. Aside from an anomalous jump in 2002 - '03 (which may reflect some event specific to Harvard), it has remained very low for nearly 15 years. Compared to college students during the heyday of Generation X back in 1991 - '92, young people today are more likely to view the concept of "hate speech" as a thinly veiled attempt to censor unpopular viewpoints.

It's interesting to note the differences in scale among the five topics. The "racism" scale is 3 times as great as those for "sexism," "homophobia," and "date rape," confirming what many others have observed -- that in the struggle for a bigger piece of the identity politics pie, race has trumped sex or sexuality. The scale for "hate speech" is about 3 times smaller than the previously mentioned three topics, perhaps because it is so specific, while "racism" pops up in many more contexts.

The composite identity politics index shows roughly the same pattern that we've noted before. There's a jump during the '60s which peaks in 1970 -- this only reflects a concern with racism. There is little or slightly negative growth during the '70s and '80s, but this was only the calm before the storm. In the late '80s, another wave of panics sweeps through, and there is a sustained peak from 1989 through 1992. I have often pointed out in my personal blog and less often at GNXP that the peak of the social hysteria is 1991 - 1992, and this confirms that claim.

As with other epidemics, eventually it burns out, and obsessing over identity politics is now at or below its pre-late-'60s level. This overall pattern does not change even if I remove the two topics of "date rape" and "hate speech" that might seem to bias things toward producing an early '90s peak. Hard as it may be to believe for anyone younger than the Baby Boomers, identity politics was simply a very small piece of the radical chic calling back in the '60s -- it was about civil rights, anti-war, and anti-capitalism.

Generational awareness

We often speak of young people as an entire generation, but that only works when they have strong generational awareness or solidarity. Otherwise, they are merely a cohort. To give a personal example, I am too young to be part of Generation X, yet I'm a bit older than the Millennials. Almost no one my age (roughly mid-late 20s) is stuck in the culture of their adolescence and college years and never will be, in the way that many Baby Boomers are culturally stuck in 1968 and many Gen X-ers are stuck in 1992. It is possible to be a traitor to your generation if you're a Boomer or X-er, but since my cohort doesn't view itself as a Generation, the idea of defecting fails to make sense.

The same goes for the cohort born between roughly 1958 and 1964 -- their age-mates will not burn them at the stake for saying that they never liked disco music or punk rock, unlike a Boomer who said he didn't like the Beatles' later albums or a Gen X-er who said he always thought alternative rock was lame.

In any case, how do we quantify this sentiment? I simply searched the Crimson for "generation," which will turn up any time a member of a Generation writes about their age-mates as though they were a cohesive group, as well as when older people recognize the young people of today as being different and a cohesive group. I assume that its secondary use as a synonym for creation is roughly constant over time, so that changes reflect its primary use. The counts for the very early years are low, and the number of articles with "one" is also low, so I'm not so sure that the standardized measurement is telling us something real for that time period. Thus, along with the standardized graph, I've included one that shows just the counts. Here are the results:

The trendline is exponential, although other simple trends show roughly the same picture. It seems that during the Gilded Age and Progressive Era, thinking in generational terms was not very common, although it does turn up toward the end of the Progressive Era. I'll get to my conjecture about this after surveying the other periods.

During the Roaring Twenties and through the Great Depression and WWII, the level is much greater than the historical trend and lasts very long. This may strike us as unexpected because we associate inter-generational conflict mostly with the late 1960s -- but if you thought young people were going crazy then, you should have seen them during the 1920s! Women started driving, smoking, swearing, and cutting their hair short like men -- take that, you dinosaurs! Jazz and later swing were just as much of a middle finger to older people as rock and roll was later on -- perhaps more so, given how much jazz broke with other forms of popular and classical music that formed the cultural background. Rock and roll, compared to its background, wasn't so different. One central way by which young people mark themselves off from older people is by inventing their own slang, and they indeed made a bunch of it in the 1920s, much of which is still with us today.

After WWII and through the first half of the 1960s -- the Golden Age of American Capitalism -- the level dips below the historical trend. The name "Silent Generation" was coined in 1951 to refer to young people at the time, and silent they were. If we must use "generation" instead of cohort, I prefer using distinct names for what I've been calling Generations -- Boomers, X-ers, etc. -- and numbered Silent Generations for everyone else (i.e., Silent Generation 1, 2, 3, ...). Of course, after them come the Baby Boomers, although surprisingly the late '60s level is not so far above the historical trend. Still, in absolute terms, it's as high as it was during the 1920s, even if it doesn't last as long.

There is a dip below trend for most of the 1980s, when Silent Generation 2 was in college. Then there's a spike in the early 1990s, reflecting Generation X finding its megaphone, to everyone's annoyance. It's hard to make out the end -- it mostly tracks the historical trend, perhaps with a recent jump. Sometime in the middle of the next decade, we'll be hit by another wave of social hysteria, and that will crystallize the young people then (say, age 16 to 24) into a new generation. We already have a name for them -- the Millennials -- but their generational self-awareness doesn't seem very deep to me right now. Voting for Obama has been the extent of their making us hear the voice of a new generation. But just wait.

I'm not too sure what causes the increasing historical trend -- maybe it just shows that the word is becoming more and more common to describe something we already talked about before, and hence the trend isn't interesting. Or it could mean that with an increasing pace of cultural change, we are able to pay better attention to generational changes, and we can therefore talk about them a lot more than before.

What's really interesting are the oscillations above and below the trend, which correspond pretty well to what we consider the heydays of Generations and Silent Generations, respectively. Trying to piece it all together, I think that generational awareness is low when times are prosperous and young people are small as a fraction of the population. This combination of good economic times and little competition means they can more easily establish themselves during early adulthood on their own. They don't feel gypped or embittered, so they aren't anti-establishment (which in practice means anti-elders). In contrast, when economic times are bad and there are a lot of young people competing to establish themselves, anxiety about the future sets in, many feel cheated -- they went to college and didn't end up making it rich -- and they have to band together to support each other.

This is really just a rough guess, and I don't have good data to present on the percent of the population that's 16 to 24, or what their average wages were, over this entire time period. And there are exceptions, of course. But looking at the whole thing, that's what I see.

Next update: religion, which will cover five topics and include a composite index.

Tuesday, August 18, 2009

The rate of invention from 0 to 2008 A.D.

A quick update: in case you didn't see, there is now a table of contents on the right, with links to each post, as well as an index of categories that I've written about, in order to ease browsing.

Moving on, here is a much more detailed follow-up to a GNXP article I wrote about the slowing of innovation after the government downsized or busted up the two main sources of invention in recent times -- AT&T's Bell Labs and the Department of Defense -- over concerns about monopolistic entities. Try to think of something invented after Bell Labs was broken up in 1984 - '85 that would pass the "telling your grandkids" test. Is it something they will continue to use, or if not, something they would still find really neat? The compilers of "lists of really important inventions" truly struggle to come up with such things invented in the last 25 years, and typically they're highly derivative of other entries -- for example, including not just the cell phone, but the digital cell phone, and even the particular model of the iPhone!

Now let's put this decline in a greater historical perspective. After all, perhaps we could only expect to have one really good run, this run happened in the mid-to-late 20th C., and that's it. Something like the technological Renaissance. Well, we'll see. In contrast to the book that I used before -- Big Ideas: 100 Modern Inventions that have Transformed Our World -- there's a new book out called 1001 Inventions that Changed the World. Instead of starting in 1940, it begins far back in human evolution, including stone tools, clothing, and so on. Plus, it obviously has 10 times as many data-points. There isn't a whole lot before the Common Era, so I've restricted the dates to lie between 0 and 2008 A.D., leaving 855 data-points. (It took a few trips to Barnes & Noble to copy them all down!) It's written by several authors, so there is little chance that the entries reflect an eccentric's view. Flipping through it, just about everything made sense (again, with the exception of some desperate attempts to include very recent inventions).

It's not clear what the time scale should be for technological change, so I've included time series graphs at several time scales. They all show roughly the same picture, although fitting a model to these data would show less error at some scales than at others. The time intervals are century, half-century, decade, year, and a 10-year moving average of the yearly data to smooth them out. The points for century, half-century, and decade are plotted at the mid-point of the interval (e.g., at 1825 for the decade of the 1820s). The y-axis measures count of inventions. Here are the results (click to enlarge):

If you just looked at the century-scale data, things would look pretty good! Starting in the Early Modern period, there's an accelerating increase that really takes off during the Industrial Revolution and apparently continues straight through the 20th C. Sure, the most recent increase shows diminishing returns, but maybe we just had a mediocre century and things will bounce back.

Unfortunately, zooming in to just the half-century-scale data gives us reason to be pessimistic. Here we see an S-shaped curve (such as the logistic) that clearly shows a near-saturation of the rate of invention. Focusing in even closer, on the decade-level data, the picture looks even gloomier and confirms the picture I presented in the previous GNXP post: after a fairly steady increase, there is something of a plateau (or perhaps two peaks) lasting from 1885 to 1975, after which there is a 30 year-long plummet to the present.

This recent decline is visible even at the yearly level: you can see the heavily shaded curve dip steadily downward toward the end. In the smoothed 10-year moving average graph, this stands out even more clearly. The peak is around 1984, and there is a steady decline afterward until the latest value at 1998. Now, there are certainly other periods of decline in this graph, but to find one that lasts so long, you have to go back to the decline from roughly 1790 to 1810. And the sheer magnitude of the drop is right up there with the other declines of the past two centuries. Things are not looking so good.
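The binning and smoothing used in these graphs can be sketched in Python; the invention dates below are hypothetical stand-ins for the book's data, and the moving average here is a simple trailing one:

```python
from collections import Counter

def bin_counts(years, width):
    """Count inventions per interval, keyed by the interval's midpoint
    (e.g., the 1820s plot at 1825 when width=10)."""
    counts = Counter((y // width) * width + width // 2 for y in years)
    return dict(sorted(counts.items()))

def moving_average(series, window=10):
    """Smooth a list of yearly counts with a trailing moving average."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

years = [1820, 1823, 1825, 1831, 1834]  # hypothetical invention dates
print(bin_counts(years, width=10))             # {1825: 3, 1835: 2}
print(moving_average([1, 2, 3, 4], window=2))  # [1.5, 2.5, 3.5]
```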

Ignoring the overall rise-and-fall trend, there is an apparent 20 to 25-year cycle, judging from the distance between peaks or between valleys. That's the length of a human generation, although this may be a coincidence. If not, then this would reflect generational changes in how encouraging the society is of innovation -- if people want plenty of cool new things, they may have to tolerate increasingly monopolistic bodies like AT&T in exchange. Conversely, if the zeitgeist favors breaking up corporations that are "too big," the invention rate may suffer. Perhaps the sharp drop starting around 1905 was due to the Progressives and muckrakers.

Whatever the mechanism, it is clear that there need to be at least two groups interacting dynamically -- otherwise we would only see an increase. (This is from the study of differential equations.) Moreover, they need to interact in a way that includes growth and decay terms for each -- rather than, say, one group being gradually converted to the other. I'll leave the modeling aside for now and just note that any good model of invention needs these features.
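To illustrate the point about needing two interacting groups with growth and decay terms, here is a minimal sketch using the classic Lotka-Volterra equations -- my illustration of the general principle, not a fitted model of the invention data, and the parameters are arbitrary:

```python
# Two interacting groups, each with a growth and a decay term -- the
# classic Lotka-Volterra form, used here only to show that such coupling
# yields rises AND falls, where a single group with pure growth could
# only rise. Parameters are arbitrary.
def simulate(x=1.0, y=0.5, a=1.0, b=1.0, c=1.0, d=1.0, dt=0.01, steps=3000):
    xs = []
    for _ in range(steps):
        dx = a * x - b * x * y         # x grows on its own, decays via interaction
        dy = c * x * y - d * y         # y grows via interaction, decays on its own
        x, y = x + dx * dt, y + dy * dt
        xs.append(x)
    return xs

xs = simulate()
rises = any(later > earlier for earlier, later in zip(xs, xs[1:]))
falls = any(later < earlier for earlier, later in zip(xs, xs[1:]))
print(rises, falls)  # True True -- the trajectory oscillates
```

Drop the interaction terms and x reduces to exponential growth, which can only increase -- hence the need for at least two coupled groups to get the ups and downs we see in the data.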

Even at the level of the centuries-long trend up and down, we need these features. Aside from whatever interactions one generation has with another, which show up on the smaller time scale, it seems that there are two or more groups that people can fall into, and that interact to produce rises and falls over the centuries. For a long time -- namely, before the Early Modern period -- one of those groups appears not to exist at all. I'm reminded of Greg Clark's popularization of "the industrious revolution" in his book A Farewell to Alms, where English people suddenly became more industrious and future time-oriented, as a prelude to the Industrial Revolution. Under this view, there was a surge in the numbers of smart and hard-working people as the merchant classes genetically replaced the old aristocracy, which killed itself off through wars and feuds, the last of them the Wars of the Roses in the late 1400s.

Institutions may matter too. The first patents were granted in Europe during the 15th C., although crediting them requires believing in a century-long lag, given the dearth of inventions during the 15th C. -- it doesn't stand out against the previous 1400 years. At the same time, perhaps the effect took so long because patents -- governmental promises of protection -- mean little unless the state is powerful enough to deliver the tough protection it promised. And states weren't nearly as muscular in the 15th C. as they would become in the 16th.

Maybe it was modern centralized states and the rise of civil servants, bureaucrats, etc., which required new things to keep everything organized and flowing smoothly. Or more likely, it could have been the Military Revolution which began around that time -- the original military-industrial complex. Lots of cool gadgets are the direct fruit of military technology or are barely modified spin-offs (like commercial airplanes).

Whatever the genetic and institutional factors are, it's not hard to tell a plausible story about their decline in the late 20th C. Sure, elite fertility rates had been falling long before then (back to the 1700s in France), but maybe the invention rate is a saturating function of elite population size -- after the elite gets so big, no more inventors will be drawn from it, the rest preferring to go into law, business, medicine, etc., or uncolonized career fields like community organizer. And certainly the decline in militarism, the skepticism of large state sectors of the economy, and a disgust with monopolistic corporations greater than even the muckrakers', all contribute to the decline in institutional support for invention.

Lamentably, the sociological view (even while grounded in individuals' behavior) of interacting groups leaves little room for optimism. Given the right conditions on the parameters, the interactions between infected, uninfected, and recovered or immune people when an epidemic sweeps through may spell doom for the disease -- it will flare up but inevitably burn out, and there is nothing we can do to prevent that (a good thing). Of course, changing the parameters to some new combination may result in a qualitatively different outcome, such as the epidemic never infecting most people in the first place.
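For the epidemic analogy, here's the textbook SIR model in a few lines -- the parameters are arbitrary, chosen only to show the flare-up-and-burn-out behavior just described:

```python
# Bare-bones SIR dynamics (susceptible / infected fractions; recovered
# is implicit). Standard textbook model with arbitrary parameters, shown
# only to illustrate that an epidemic can flare up and then burn out on
# its own.
def sir(s=0.99, i=0.01, beta=3.0, gamma=1.0, dt=0.001, steps=20000):
    peak = i
    for _ in range(steps):
        ds = -beta * s * i             # infections remove susceptibles
        di = beta * s * i - gamma * i  # new infections minus recoveries
        s, i = s + ds * dt, i + di * dt
        peak = max(peak, i)
    return s, i, peak

s_final, i_final, peak = sir()
print(peak > 0.2, i_final < 0.01)  # the epidemic spikes, then dies out on its own
```

With beta/gamma below 1 instead, the same equations give the other qualitative outcome mentioned above: the infection never takes off in the first place.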

But we don't even know what the differential equations are that describe how invention changes over time, let alone have good estimates of the parameters involved. Still, gathering lots of data, seeing the rough patterns, and then making mathematical models, may eventually allow us to manipulate the invention rate just as we can affect the course of an epidemic with enough knowledge. That's really all we can do, and whatever extra power over society we get, we get.

Thursday, August 13, 2009

Brief: Do Asians consume boat loads of carbohydrates?

One thing that confuses many people about the value of low-carbohydrate diets is the presence of rice as a staple in East Asian diets -- if they eat so much rice, why aren't they as obese as we are? The key is that empty (or digestible) carbohydrates all have roughly the same effect -- to be converted into glucose, and thus raise our blood sugar and therefore our insulin levels over the long term.

Insulin is the primary hormone responsible for storing fat in fat cells, while just about every other hormone serves to break fat out of fat cells to be burned as fuel. Just think of when you get an adrenaline rush that prepares you for fight or flight -- you need to get lots of energy now, so get those fatty acids out of the fat cells. Chronically high insulin levels will therefore lead to weight gain, as well as extreme difficulty in losing weight if you go on some kind of diet and exercise program. Thus, it doesn't really matter that Asians consume more rice than we do. What we need is a total count of carbohydrates.

I went to NationMaster and checked per capita grain consumption for various countries; their data come from the USDA. I'm sure there is more detailed information on the USDA website, but this is just a brief post. They include 6 food grains (corn, oats, rice, rye, sorghum, and wheat) and barley, which I take to reflect beer consumption. The lists cover roughly the top 15 countries, and membership varies by grain, although some countries do show up across the board. All units are thousand metric tons per million population.

I've made two tables below, one with whatever data were available, and another where I replaced missing values with the minimum value to be conservative. (If a country didn't make it into the top 15, the greatest its value could be is the minimum of the top 15.) The ranking is essentially the same either way, though the bar chart below reflects the table with imputed values and leaves out barley (again the ranking doesn't really change, as you can see from the tables). I ignored a country if it didn't show up in at least one of the big three categories -- corn, rice, or wheat -- and retained those that showed up in many categories. Here are the results:

               barley    corn    oats    rice     rye  sorghum   wheat     sum   w/o barley
min              3.00   11.66    0.46   13.13    0.00     1.91   47.40
India                    11.66           78.92            7.41   63.87   161.86    161.86
Indonesia                31.82          152.70                           184.52    184.52
South Africa            196.19           15.22                           211.41    211.41
Iran            32.00                    45.58                  194.07   271.64    239.64
Japan           13.00   125.57           67.95    2.35   11.77   47.40   268.05    255.05
China            3.00    98.06    0.46  103.34    0.00    1.91   80.00   286.78    283.78
South Korea             196.75          103.12    1.03                   300.90    300.90
Brazil                  198.80    1.85   43.52            5.91   53.46   303.55    303.55
Mexico           8.00   241.99    1.41                   88.51           339.91    331.91
Russia         122.00            41.84           42.88          247.53   454.24    332.24
Egypt                   141.92           42.58            9.68  163.86   358.04    358.04
Australia      149.00            54.75                   84.62  308.61   596.98    447.98
Hungary                 459.68    9.99                                   469.67    469.67
Canada         293.00   338.36   57.00            5.33          234.72   928.42    635.42
United States   19.00   700.02   11.57   13.13           18.04  112.27   874.03    855.03

(blank cells: the country did not appear in that grain's top-15 list)

               barley    corn    oats    rice     rye  sorghum   wheat     sum   w/o barley
min              3.00   11.66    0.46   13.13    0.00     1.91   47.40
India            3.00   11.66    0.46   78.92    0.00     7.41   63.87   165.32    162.32
Indonesia        3.00   31.82    0.46  152.70    0.00     1.91   47.40   237.30    234.30
Iran            32.00   11.66    0.46   45.58    0.00     1.91  194.07   285.68    253.68
Japan           13.00  125.57    0.46   67.95    2.35    11.77   47.40   268.51    255.51
South Africa     3.00  196.19    0.46   15.22    0.00     1.91   47.40   264.20    261.20
China            3.00   98.06    0.46  103.34    0.00     1.91   80.00   286.78    283.78
Brazil           3.00  198.80    1.85   43.52    0.00     5.91   53.46   306.56    303.56
South Korea      3.00  196.75    0.46  103.12    1.03     1.91   47.40   353.68    350.68
Egypt            3.00  141.92    0.46   42.58    0.00     9.68  163.86   361.50    358.50
Russia         122.00   11.66   41.84   13.13   42.88     1.91  247.53   480.95    358.95
Mexico           8.00  241.99    1.41   13.13    0.00    88.51   47.40   400.45    392.45
Australia      149.00   11.66   54.75   13.13    0.00    84.62  308.61   621.78    472.78
Hungary          3.00  459.68    9.99   13.13    0.00     1.91   47.40   535.12    532.12
Canada         293.00  338.36   57.00   13.13    5.33     1.91  234.72   943.46    650.46
United States   19.00  700.02   11.57   13.13    0.00    18.04  112.27   874.04    855.04
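The conservative imputation used for the second table can be sketched as follows. The values are copied from the tables, but only three countries are shown, and totals can differ from the post's by a cent of rounding:

```python
# Conservative imputation: a country absent from a grain's top-15 list is
# assigned that grain's minimum listed value -- the largest its true value
# could possibly be. Only three countries shown, for illustration.
GRAINS = ["barley", "corn", "oats", "rice", "rye", "sorghum", "wheat"]

# Minimum observed value per grain across the top-15 lists.
COL_MIN = {"barley": 3.00, "corn": 11.66, "oats": 0.46, "rice": 13.13,
           "rye": 0.00, "sorghum": 1.91, "wheat": 47.40}

# Grains missing from a country's dict are the ones it wasn't listed for.
RAW = {
    "India": {"corn": 11.66, "rice": 78.92, "sorghum": 7.41, "wheat": 63.87},
    "China": {"barley": 3.00, "corn": 98.06, "oats": 0.46, "rice": 103.34,
              "rye": 0.00, "sorghum": 1.91, "wheat": 80.00},
    "United States": {"barley": 19.00, "corn": 700.02, "oats": 11.57,
                      "rice": 13.13, "sorghum": 18.04, "wheat": 112.27},
}

def imputed_totals(country):
    filled = {g: RAW[country].get(g, COL_MIN[g]) for g in GRAINS}
    total = sum(filled.values())
    return round(total, 2), round(total - filled["barley"], 2)

for c in RAW:
    print(c, imputed_totals(c))
```

Because every filled-in cell is an upper bound on the true value, the resulting totals are also upper bounds, which is exactly what's wanted when arguing that a country's consumption is low.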

As you can see, despite scoring a bit higher than Western European countries on rice consumption, most East Asian countries don't consume "boat loads" of it. And in any case, looking at their overall grain consumption shows that they don't consume much of any of the other types either. Although China, Japan, and South Korea are developed nations, they blend right in with second- and third-world countries in terms of grain consumption (in the poorer countries, grain is likely all they eat, in contrast to the mounds of pork and fish that Northeast Asians enjoy). Western Europe and its offshoots are clearly unusual in the amount of non-fiber carbohydrates they consume, the US in particular.

It's true that grains are only one source of empty carbohydrates, but including others would only strengthen the pattern here, whether starches like the potato or straight-up sugar bombs like snack cakes, soda, and fruit juice. Even in Chinese restaurants geared toward American tastes, there is rarely any dessert offered, and that's also true for bakeries in Chinatowns. (You've never sampled such bland pastries.) Including sweets might distinguish some of the low-grain countries -- e.g., lots of syrupy sweets available in India compared to Japan -- but the chasm between East Asian and Western European populations would only widen.

This exercise drives home the importance of quantitative data rather than mere rankings. China indeed ranks far above the US in rice consumption, but it is "only" by 90 thousand metric tons per million population, and they don't outrank us in consumption of any other grain. By contrast, we outrank China in corn consumption -- but here it is by 600 thousand metric tons per million population, and we also lead them on the order of 10 thousand metric tons per million population for oats, sorghum, and wheat. Looking at these finer-grained data, maybe the longer lives and overall better health of East Asians, compared to other developed countries, isn't so surprising.

Sunday, August 9, 2009

Intelligence and patronizing the arts in red and blue states

Continuing to explore whether the stereotypes about red vs. blue states are as strong as people made them out to be in recent years, or whether factors like social class matter more, let's take a look at going out to a performing arts event. The story was that blue staters breathed the arts, if only to lord their refinement over everyone else, while red staters were more suspicious of the arts -- what with all that my-kid-could-make-that garbage funded by the National Endowment for the Arts.

However, a more plausible account is that regional status shouldn't matter so much because what allows a person to enjoy the higher arts is greater brainpower. IQ researcher Linda Gottfredson says that one of the most reliable yet quick ways to tell if someone is smart is to simply ask a few questions to see if they like classical music at all. If so, they're smart.

So which matters more -- regional culture or intelligence? Let's turn to the GSS and find out. This time I restricted the respondents to Whites only, so that race is not a confounding factor when we look at red vs. blue states. The survey asked questions about attendance at various performing arts events -- going to an art museum or gallery, a dance performance, a classical music or opera performance, and a non-musical drama. Unfortunately they don't ask these questions every time, so the sample sizes are a bit smaller than for questions about belief in God. Only one group was over 100, but I've kept all groups of at least 40 people. The museum, dance, and classical music questions had IQ data available, but the non-musical drama did not, so I used years of education instead (in two-year bins).

Here are the relationships for both red and blue states (in those colors, respectively):

First, we notice that attendance increases as we move up the intelligence scale, which confirms what most people think. There are apparent differences in exactly how the line increases for red vs. blue states, but because we're dealing with somewhat smaller sample sizes, I wouldn't make too much of it. At the least, there's no consistent difference -- say, if the red - blue gap widened or attenuated as we moved up the intelligence scale.

And in general, the blue line is a bit above the red line, also confirming the stereotype about blue staters being bigger arts aficionados than red staters. Still, as we saw with religious fundamentalist beliefs and hunting and NASCAR preferences, the red - blue gap is minuscule compared to the gap between below-average and above-average IQ people (which ranges from about 25% to 45%, while the red - blue gap is typically 5% to 15%).

So far, we've seen that some measure of social class -- whether income, job prestige, IQ, or education level -- is by far more powerful in causing differences between people than is regional culture. That's true for voting patterns, religious beliefs, going hunting and fishing, watching NASCAR, and now patronizing the arts.

Of course, things don't have to be this way -- it could easily be that the elites would be more like the commoners of their region and very different from the elites of other regions. If red and blue states were literally at war with each other, we might imagine that they would define their identity based on region much more than on class. As things are, though, the different regions of the country cooperate a lot with each other. That only leaves "vertical" or social class distance as the dimension along which most inter-ethnic competition will be waged. Already during the twilight years of the Cold War, Americans didn't care about a potential battle between the USA and the Russkies -- they were more absorbed in the war between white trash and yuppie scum, as they called each other.

Perhaps this is the source of the red state - blue state mythology -- people want to engage in an Us vs. Them ethnic conflict based on "horizontal" distance, not just (or even primarily) based on "vertical" distance. But since there are no more Kaisers, Fuhrers, or Soviets, we had to invent a wide ethnic gulf. Surely we weren't going to pick any old way of splitting up the country, so we chose one with some plausibility -- after all, the data analyses so far do show that there is something to the red - blue stereotypes, even if they're profoundly exaggerated.

The other large source of horizontal ethnic conflict in America is based on allegiance to local sports teams. But this won't do very well because aside from a few teams like the Yankees, there aren't a handful of superpower teams that would provide as much dramatic conflict as the Allied vs. the Axis powers. Plus sports only appeal to a fraction of the population, so it would be hard to whip up everyone into a war mindset. But mention those latte-sipping liberals in blue states, or those low-IQ Biblical literalists in the red states, and suddenly everyone's passions become inflamed.

GSS variables used: race, region, wordsum, educ, visitart, dance, gomusic, drama

Thursday, August 6, 2009

Class and religious fundamentalism in red vs. blue states

In the lead-up to and just after the last presidential election, much of the quantitative blogosphere was talking about Andrew Gelman et al.'s excellent book Red State Blue State Rich State Poor State. In it, they showed that increasing income predicts an increasing likelihood of voting Republican, no matter if you live in a red state or blue state. The only catch was that this trend is more pronounced in red states, and more muted in blue states.

So, despite all of the red state vs. blue state mythology that people began telling each other during the past five or so years, rich people even in liberal blue states still tended to vote fairly Republican, and working-class people in conservative red states still tended to vote fairly Democrat. In fact, the split between red and blue states increased as you went up the income scale -- it seems that the culture wars of the 1990s were mostly an intra-elite competition.

I briefly jumped on the bandwagon and showed how an interest in hunting, fishing, and watching NASCAR fit this pattern. That is, it's mostly the lower-IQ people who do these things, regardless of whether they're in red or blue states, although for a given IQ level, red staters do have a greater interest. Still, a smart red stater is far less likely to care about hunting than a lower-IQ blue stater. IQ differences matter much more than red state vs. blue state differences.

After the election, interest in this topic naturally faded, which is unfortunate because there must be plenty of other examples of red state elites looking mostly like blue state elites, in addition to voting Republican and not caring about hunting or NASCAR. Of course, aside from voting patterns, the thing that's supposed to show the widest gulf between red and blue states is religion. But since we've seen that working-class people tend to vote Democrat, even in red states, we suspect that this other part of the conventional "red vs. blue" mythology is wrong. So let's turn to the General Social Survey to see how class and religious beliefs are related in both red and blue states. *

I'm looking here at fundamentalist religious beliefs, since that was the story -- not just that elite red staters went to church more often than lower-class blue staters, but that fundamentalist beliefs pervaded red state culture, while a staunchly secular mindset characterized blue state culture. The GSS measures such beliefs in two ways -- by asking if you think the Bible is God's word and is to be taken literally, and by asking if you know God exists without a doubt (the latter might not be so fundamentalist, but it's worth looking at too). I simply found the percent of each group who agreed with each of these statements.

I measured class in four ways, just to ensure that it doesn't matter which part of class we focus on -- socioeconomic index (a measure of job prestige), real income, education level, and IQ. The SEI bins are of size 5 and range from 17 - 22 through 92 - 97, real income is measured in 1986 dollars (in $5000 bins, centered on values ending in 5 or 0), education level is simply years of school completed, and IQ is the number of vocabulary words correct on a 10-question test.
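For concreteness, the binning just described might look like this in code -- the bin edges are my reading of the text, not the exact GSS recode:

```python
# My reading of the binning above: SEI into width-5 bins starting at 17
# (17-22, 22-27, ..., 92-97), and real income into $5,000 bins centered
# on multiples of $5,000. These edges are assumptions for illustration,
# not the official GSS recode.
def sei_bin(sei):
    lo = 17 + 5 * ((sei - 17) // 5)
    return (lo, lo + 5)

def income_bin_center(realinc):
    return 5000 * round(realinc / 5000)

print(sei_bin(43))               # (42, 47)
print(income_bin_center(23700))  # 25000
```

Binning like this trades resolution for sample size within each bin, which is what keeps the percentage estimates in the plots from being pure noise.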

Here are the plots for how religious fundamentalist beliefs are related to these measures of class, for both red and blue states (shown with red and blue lines, resp.):

No matter which belief we look at, and no matter which measure of class we choose, the same pattern shows up: the relationship between class and fundamentalism is more or less the same in red and blue states, namely that it declines pretty steadily as we move up the class ladder. The red - blue gap is roughly the same across the class spectrum, unlike voting patterns, where there was a widening of the gap as you moved up the income scale. In all cases, the upper-class red staters are well below the lower-class blue staters in fundamentalism. So, despite all of the rhetoric about godless blue states and Bible-thumping red states, fundamentalism is mostly a class-based phenomenon.

At the same time, the red lines are generally above the blue lines, showing that when we ignore class, red states are more fundamentalist than blue states. However, this overall red - blue gap is usually under 20%, whereas the drop from lower-class to upper-class fundamentalism is usually at least 30%. In other words, class distinctions trump red - blue distinctions.

The two pieces that fit with the reigning mythology are easy to see -- i.e., highly secular elites in blue states and highly religious working-class people in red states. But the problem with popular stories like the red state - blue state narrative is that we only see what agrees with them, while we blind ourselves to what clashes with them. In this case, no one saw that working-class people in blue states were highly religious and that upper-class people in red states were very secular. Rather than take a narrative for granted and look for data that support it (while ignoring those that refute it), we should simply take an empirical approach and look at all the data -- only then should we come up with the story, which by that point is just a common-language phrasing of what the data say, not some grand vision of how the world is presumed to be.

* I counted the following GSS regions as blue states:

New England - Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut
Middle Atlantic - New York, New Jersey, Pennsylvania
East North Central - Ohio, Indiana, Illinois, Michigan, Wisconsin
Pacific - Washington, Oregon, California, Alaska, Hawaii

These regions are the red states:

West North Central - Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas
South Atlantic - Delaware, Maryland, District of Columbia, Virginia, West Virginia, North Carolina, South Carolina, Georgia, Florida
East South Central - Kentucky, Tennessee, Alabama, Mississippi
West South Central - Arkansas, Louisiana, Oklahoma, Texas
Mountain - Montana, Idaho, Wyoming, Colorado, New Mexico, Arizona, Utah, Nevada

GSS variables used: bible, god, sei, realinc, educ, wordsum, region

Monday, August 3, 2009

Brief: science knowledge across the lifespan

For the first of the brief posts (which don't count toward the 20 articles you've paid for), let's take a quick look at how knowledge of basic concepts in math and science changes -- or doesn't -- with age. It seems a priori unlikely that knowledge would continue to increase over time, since most people don't read more and more science or math during their lives. For most, there's what they were taught in school, and that's it. So, it could be like vocabulary size, which stays pretty constant through adulthood (or increases very slightly), or it could be like matching names with faces, where you forget people's names after not having seen them for so long.

The GSS has asked respondents 13 such basic science and math questions, mostly in True / False format, as follows:

- The center of the Earth is very hot.

- Human beings, as we know them today, developed from earlier species of animals.

- It is the father's gene that decides whether the baby is a boy or a girl.

- Does the Earth go around the Sun, or does the Sun go around the Earth?

- Electrons are smaller than atoms.

- Lasers work by focusing sound waves.

- The continents on which we live have been moving their locations for millions of years and will continue to move in the future.

- All radioactivity is man-made.

- Antibiotics kill viruses as well as bacteria.

- The universe began with a huge explosion.

- How long does it take for the Earth to go around the Sun: one day, one month, or one year?

- A doctor tells a couple that their genetic makeup means that they've got one in four chances of having a child with an inherited illness.

- a. Does this mean that if their first child has the illness, the next three will not have the illness?

- b. Does this mean that each of the couple's children will have the same risk of suffering from the illness?

I treated a correct answer as worth 1 point and an incorrect answer as worth 0. So, the average score that an age group gets on some question is simply the fraction who got it right. I weighted all 13 questions equally and summed the scores across questions. Thus, if everyone missed every question, the total score is 0, while if everyone got every question right, the total is 13. This way, the data have a clear and simple meaning: if you picked someone at random from some age group, how many questions out of all 13 would you expect them to get right?
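The scoring scheme is simple enough to state in code (toy data below, not the GSS file):

```python
# Each question's group score is the fraction who answered correctly
# (1 = right, 0 = wrong); the quiz total is the sum over questions, so a
# perfect group scores 13. Toy data, purely illustrative.
def expected_score(answers_by_question):
    return sum(sum(a) / len(a) for a in answers_by_question)

# Two questions, four respondents: 3/4 right on the first, 2/4 on the second.
toy = [[1, 1, 1, 0], [1, 0, 1, 0]]
print(expected_score(toy))  # 1.25
```

Summing fractions this way is the same as computing the mean number correct per respondent, which is why the total reads directly as an expected score out of 13.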

The age groups are in 3-year intervals, starting with 18 - 20 and ending with 63 - 65 (to keep sample sizes around 40 or more). Here are the total scores on this basic science quiz by age group:

Aside from the anomaly of the 42 - 44 year-olds -- or maybe it's a sign of a mid-life crisis -- the data are pretty flat from young adulthood through retirement. There's a clear jump from the late high school / early college years to the late college / post-college years, although it decays somewhat during the 20s -- perhaps people are glad not to have to use this stuff anymore, so they don't keep up on it. From the 30s through the mid-40s, though, there is a pretty evident steady increase. After that, it declines somewhat or is flat.

Like I said, though, there isn't much variance across the lifespan, so it seems that these factoids are like vocabulary words -- parts of your crystallized intelligence that you acquire when you're maturing, and that more or less stay put in your mind for the rest of your life. Given how little most people apply these factoids to their daily lives, there doesn't seem to be a strong "use it or lose it" component to remembering them.

As a final reminder, since you're paying to read this site, I'm much more open to reader suggestions about what to look at, for both the in-depth and these briefer posts. I have plenty of new ideas myself, but I'm sure there are many more that are floating around in readers' heads. I'll try to get to all of them, although how soon the results appear will obviously depend on how easy it is to attack the question.

GSS variables used: age, boyorgrl, evolved, hotcore, earthsun, electron, lasers, condrift, radioact, viruses, bigbang, solarrev, odds1, odds2.

Is there a decline in arts appreciation? Evidence from theater

Whenever I hear about whether or not the arts are in decline, I see far more anecdotes than data. And even when data are presented, they rarely go back very far -- so how do we know if a supposed decline is really a decline or merely the down-swing of a cycle? Even worse, the data typically come in 5 or 10-year intervals, so we have no clue what's going on in between. Who cares if ballet attendance is down from 1987 -- we want to know whether it has gone steadily down during that interval.

To settle this once and for all, I've put together some time series for patronizing the theater. I dug through the arts & leisure chapters across many editions of the Statistical Abstract of the United States, and I've included two ways of measuring patronizing the theater -- attendance (on the demand side) and number of playing weeks (on the supply side). I've controlled for population size by making them per capita rates, since the population doubled from 1955 to the present. The playing weeks data go back to 1955, while the attendance data go back to 1976 for Broadway and 1985 for road shows. There are some years missing here and there at the earlier times, but all are more or less complete.

Here are the plots:

The attendance data show pretty much the same pattern as the playing weeks data, which isn't surprising since they're just two ways of measuring the same thing. So I'll stick to the playing weeks data for the discussion.

The first thing we notice is a downward trend, more so for Broadway than for road shows (down about 33% vs. 16%, respectively, from their 1955 values). Because we associate theater so strongly with Broadway in particular, we may have a somewhat exaggerated view of how bad the fall has been, since Broadway fell from a much greater height than road shows.

Despite this downward trend, there appears to be a cycle imposed on top. The mid-1970s to mid-1980s especially saw a resurgence of the theater's popularity. During this time, there was also a "ballet boom" (google ballet boom 1980s), so the theater boom was part of a broader increase in arts appreciation. We don't normally think of the period from 1975 to 1985 as a cultural renaissance -- well, as someone who likes punk, disco, and new wave, I do, but most don't. For all I know, people who were alive then may actually recall vividly how culturally alive the country felt. But those of us born in 1980 or later have inherited Generation X's telling of the story -- and we all know they were no fun, so it wouldn't surprise me to find out that there's little basis for their accounts of cultural vapidity during a period that saw a theater and ballet boom.

Moving forward, notice how similarly the Broadway and road show data move from 1955 through the 1980s -- the parallel movements are pretty striking. However, from the late 1980s (it looks like 1987) to today, they move in opposite directions. There could be a very simple explanation for this -- if the rhythm of the cycle is slightly different between Broadway and the rest of the country, sometimes they will be in synch and other times out of synch, even though they are not influencing each other at all. This is what produces beats in sound, as when two faucets dripping at slightly different rhythms will sometimes be perfectly in synch and later out of synch.
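The beats analogy can be checked numerically -- this is standard signal arithmetic with arbitrary frequencies, nothing to do with the theater data themselves:

```python
import math

# Two cycles with slightly different periods drift in and out of phase;
# their sum swells and collapses at the difference frequency (here once
# every 1 / 0.05 = 20 time units). Frequencies are arbitrary.
f1, f2 = 1.00, 1.05
t = [i / 100 for i in range(2000)]  # 20 time units, sampled every 0.01
combined = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x)
            for x in t]

early = max(combined[:200])       # near t = 0: in phase, amplitude near 2
middle = max(combined[900:1100])  # near t = 10: out of phase, amplitude near 0
print(early > 1.5, middle < 0.5)
```

The key signature of beats is that the drift in and out of phase is gradual and periodic, which is what the next paragraph uses to argue against this explanation.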

But I doubt this, since Broadway and the rest of the country are in synch for a little over 30 years, and only recently have they gone out of synch, with little transition -- not the gradual drifting in and out of phase you'd expect if this were just an instance of beats. Not knowing much about the recent history of theater, I couldn't tell you what happened in the mid-to-late 1980s that might have set Broadway off along a different path from the rest of the country.

I also doubt that Broadway patrons began to purposefully do the opposite of whatever the masses were doing, as some naive theories assume. First, the idea makes little sense because the masses are completely out of sight and out of mind for Broadway patrons -- their real conflicts and competitions are held among themselves. Second, snobbery goes back much farther than 1987 -- why was Broadway patrons' behavior so in synch with that of the rest of us for the 30 years before then?

In any event, most people today will probably prefer to watch movies in the theaters or at home rather than see live performances, so the future of theater doesn't look so hot. Still, there are large swings around this downward trend, and we could certainly see another boom -- as indeed there was among road shows in the mid-2000s. And it's not as if people will stop going altogether -- the attendance rates have always been on the order of 1%, whatever the fluctuations have been.

In the future, I'll dig up some more data but for a different arts category. The Statistical Abstract also has data on symphonic performances, so that seems like the natural next step in this look at recent cultural history.

Sunday, August 2, 2009

Was there a decline in formality around 1920? Evidence from names

In Stanley Lieberson's excellent book on naming fashions, A Matter of Taste, he has really cool graphs on the decline in various symbols of formality during the 20th C. For example, he shows that the Sears catalog offered fewer and fewer dress hats for men and women, and dress gloves for women. The decline begins at the start of his dataset in 1920, and by about 1970 hardly any are offered at all.

He also shows the decline in people who use initials rather than their first and/or middle names, among some prestigious group (the board of directors of the Boston Symphony Orchestra, if memory serves). He interprets this -- and other anecdotal data, such as college students not wearing shirts and ties anymore -- as evidence for a "general decline in formality" process during the 20th C. That is, whatever is causing this change, it must apply at a very general level -- nothing to do with hats per se (such as the difficulty of getting into cars with them on), or gloves per se (such as having heated cars). It must account for changes in nicknames too.

Steven Pinker has also mentioned Lieberson's book, the data, and the "general decline of formality" interpretation in his own book, The Stuff of Thought, where I first learned of Lieberson's work. And I myself had believed this interpretation until reflecting more on it.

There's something wrong about calling this suite of changes a "decline in formality" -- Lieberson has shown a decline in particular symbols (albeit many such symbols), but why is the popularity of those particular symbols a proxy for "formality"? After all, once upon a time, men didn't wear hats at all -- yet the period was very formal. Sometimes they wore powdered wigs, and sometimes they didn't even bother with those, as you see here in this painting of René Descartes with Queen Christina of Sweden. This also shows that facial hair is not inherently formal or informal -- even when gloriously ungroomed, as most pictures of the Victorian gentleman Charles Darwin prove.

As for using first initials, which supposedly is more formal because it increases social distance -- only people close to you use your given names -- I looked up some lists of famous people known by their first initials (as in R.A. Fisher) or first initial and middle name (as in F. Scott Fitzgerald). See here and here for the lists. I then looked up their birth years to determine when they turned 20 -- that is, roughly the time in your life when you make a decision about how you'll be called as an adult. Here is how these initialled people are distributed by year of turning 20:

There are a few odd cases in the 18th C, and an increasing trend is visible by 1850. There's a peak from 1890 to 1920, and a decline afterward. The apparent resurgence near the end of the century reflects a fad among Generation X pro sports stars to use initials -- these are not the contemporary counterparts of G.K. Chesterton or J.B.S. Haldane. I've listed all of the post-1980 people in an Appendix, so that you can see for yourself. Even with sports stars, it was only a 10-year fad. So, among the cultural and social elite, the use of initials never recovered.
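The tally described above -- take each person's birth year, add 20, and bucket the results by decade -- can be sketched in a few lines. The names and birth years below are just a small illustrative sample, not the full lists linked earlier:

```python
# Sketch of the year-turned-20 tally. Birth years for a handful of
# initialled people; the full dataset would come from the linked lists.
from collections import Counter

birth_years = {
    "R.A. Fisher": 1890,
    "F. Scott Fitzgerald": 1896,
    "G.K. Chesterton": 1874,
    "J.B.S. Haldane": 1892,
    "J.K. Rowling": 1965,
}

# Year of turning 20 -- roughly when people settle on an adult name.
turned_20 = {name: year + 20 for name, year in birth_years.items()}

# Group by decade to see where the peak falls.
by_decade = Counter((year // 10) * 10 for year in turned_20.values())

for decade in sorted(by_decade):
    print(decade, by_decade[decade])
```

With the real data, the printout would show the rise toward the 1890-1920 peak and the Gen-X sports-star blip near the end.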

The trouble with using this symbol as a proxy for formality is obvious: it implies that there was hardly any formality before 1870, which we know isn't true. The same would follow if we looked at the presence of what we call "dress hats" among paintings or photographs of the elite, stretching back into history. Top hats, say, were a symbol of formality only for a limited time -- not before or after. The proper interpretation of Lieberson's data, then, is that the 20th C saw a steady decline in Victorian symbols of formality, not of formality in the abstract. Now, why do we single out Victorian symbols as those that "really count" as formal? My guess is that Baby Boomers are thinking about what their grandparents' lives were like -- "Why, my grandfather used to ____ , but no one does that anymore."

When you think about it, the reason that there can't have been a decline in formality in the abstract is that formality is simply a set of social conventions for how to present oneself in various contexts. It doesn't matter what they are, as long as everyone knows what they are and adheres to them most of the time. In game theory terms, we're playing a coordination game, where we do well as long as we do the same thing, and poorly if we do different things -- e.g., picking one side of the road to drive on. But a more apt example is the sound-meaning pairing that speakers of a language use -- after all, there's nothing palpably pavonine about the sounds in "peacock," but as long as we all adhere to the convention that those sounds refer to that animal, we're fine. Likewise, there's nothing inherently formal about wearing hats, but in some times and places, everyone agreed on hats as a formality symbol.
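The coordination-game point can be made concrete with a toy payoff function -- my own illustration, not anything formal from the argument above. Both players do well if and only if they pick the same convention, and it doesn't matter which convention they coordinate on:

```python
# Toy coordination game: any shared convention pays off equally well;
# mismatched conventions pay nothing. The specific symbols are arbitrary,
# which is the whole point -- hats, ties, or initials could all serve.
def payoff(a, b):
    # (row player's payoff, column player's payoff)
    return (1, 1) if a == b else (0, 0)

print(payoff("top hat", "top hat"))  # coordinated: both do well
print(payoff("top hat", "no tie"))   # mismatched: both do poorly
```

Moving from "everyone wears top hats" to "everyone skips the tie" is just a jump between equally good equilibria, not a loss of the convention itself.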

Seen in this light, what we have is something like a shift in the pronunciation of common words. (In game theory terms, we have moved from one of the multiple equilibria to another.) That is, we still have very rigid rules of formality that everyone understands pretty well -- it's just that they're different from before. Now, it is considered formal to wear a button-down shirt with no tie, and it is considered a violation of formality to wear a top hat and tails -- this would offend your peers' sense of propriety just as much as if you'd shown up in a purple leisure suit.

And go back far enough, and using your first initials was not considered formal either -- it too changed from being a violation of formality to a standard of formality, before falling out of use again. These rise-and-fall patterns are typical of changing tastes that recurrently replace each other. It may sound strange to talk about fashion in conventions -- shouldn't they be stable over long stretches of time? -- but there's a new generation that has to build its own conventions every year. Just as with language, most such changes of conventions occur among the young, who seek to mark themselves off from their elders.

Now, I'm sure that a Structuralist could look at all of the symbols of formality across space and time and discover underlying principles, and the formality symbols we actually see would just be variations on them. If we had a way of measuring those deep principles, then we could say whether the level of formality has changed over time. As it stands, though, all we can conclude from young people today using nicknames, wearing t-shirts rather than button-downs, and believing that Latin phrases are pretentious, is that the symbols of formality are changing. It is just like a generational change in the way a thing is pronounced. The taboo against violating formality is still incredibly strong, as a young person will immediately find out if he wears a jacket and tie in the classroom.

Of course, you may think that college students looked better back then (I do), or that Middle English sounds better than Modern, but we shouldn't confuse the aesthetic evaluation of different time periods with the empirical task of seeing if something deep about formality has changed or not.

Appendix, recent famous people known by initials:

k.d. lang
D.B.C. Pierre
P.J. Hogan
B.D. Wong
A.C. Green
B.J. Surhoff
A.L. Kennedy
J.K. Rowling
J.J. Lehto
B.J. Armstrong
R. Kelly
A.R. Rahman
F.P. Santangelo
C.J. Hunter
J.D. Roth
J.T. Snow
P.J. Brown
P.J. Harvey
P.T. Anderson
M. Night Shyamalan
J.J. Stokes
A.J. Langer
V.V.S. Laxman
O.J. Santiago
J.D. Drew
B.J. Ryan
R.W. McQuarters
J.C. Romero
A.J. Burnett
C.C. Sabathia
J.J. Redick