Monday, December 28, 2009

Estimating black-white racial tension from 1850 to present

Previously I looked at how much attention elite whites have given to blacks since the 1870s by measuring the percent of all Harvard Crimson articles that contained the word "negro." That word dropped out of use in just about any context after 1970, so it can't show us what has happened since then. Also, it is emotionally neutral: while it tells us how much blacks were on the radar screen of whites, it doesn't suggest what emotions colored their conversations about race.

When tensions flare, people start using the more charged words more frequently. The obvious counterpart to "negro" in this context is "nigger." It could be used by white racists, by non-racists quoting or decrying white racists, by blacks trying to "re-claim" the term, by those debating whether the term should be used in any context, and so on. Basically, when racial tension is relatively low, these arguments don't come up as often, so the word won't appear as often.

I've searched the NYT back to 1852 and plotted how prevalent "nigger" was in a given year, smoothing the data with 5-year moving averages (click to enlarge):


We see high values leading up to and throughout the Civil War, a comparatively lower level during Reconstruction, followed by two peaks that mark "the nadir of American race relations." The level doesn't change much going through the 1920s, even though this is the period of the Great Migration of blacks from the South to the West and Northeast. It falls and stays fairly low during the worst part of the Great Depression, WWII, and the first 10 years after the war. This was a period of increasing racial consciousness and integration, and the prevalence of "negro" in the Crimson was increasing during this time as well. That means a greater conversation was taking place, but that it wasn't nasty in tone.
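In case the transformation behind the plot isn't clear, here is a rough Python sketch of the prevalence-and-smoothing step. The article counts below are made up for illustration (the real inputs would be the yearly number of matching NYT articles and the total articles published), and the sketch uses a trailing 5-year window.

    # Rough sketch with hypothetical inputs: yearly counts of matching NYT
    # articles and total articles published that year.

    def prevalence(matching, total):
        """Percent of a year's articles that contain the term."""
        return 100.0 * matching / total

    def moving_average(series, window=5):
        """Trailing moving average; windows are shorter at the start."""
        return [sum(series[max(0, i - window + 1):i + 1]) /
                len(series[max(0, i - window + 1):i + 1])
                for i in range(len(series))]

    years = list(range(1852, 1860))
    matching = [12, 15, 9, 20, 18, 25, 30, 28]                      # hypothetical counts
    totals = [9000, 9500, 9700, 10100, 10300, 10600, 10900, 11200]  # hypothetical totals

    raw = [prevalence(m, t) for m, t in zip(matching, totals)]
    smoothed = moving_average(raw, window=5)
    print(list(zip(years, smoothed)))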

However, starting in the late 1950s the level moves sharply upward, reaching a peak in 1971. This is the period of the Civil Rights movement, which on an objective level was merely continuing the previous trend of greater integration and dialogue. Yet just as we'd guess from what we've studied, the subjective quality of this phase of integration was much more acrimonious. Things start to calm down through the '70s and mid-'80s, which our study of history wouldn't lead us to suspect, but which a casual look at popular culture would support. Not only did pop music by blacks in this period have little of a racial angle -- that was also true of most R&B music of the '60s -- it was explicitly about putting aside differences and moving on. This is most clearly shown in the disco music scene and its re-birth a few years later during the early-'80s dance and pop scene, when Rick James, Prince, and above all Michael Jackson tried to steer the culture onto a post-racial course.

But then the late '80s usher in a resurgence of identity politics based on race, sex, and sexual orientation ("political correctness," colloquially). The peak year here is technically 1995, but that is only because of the unusual weight given to the O.J. Simpson trial and Mark Fuhrman that year. Ignoring that, the real peak of racial tension by this measure was 1993. By the late '90s, the level had started to plummet, and the 2000s have been -- or should I say were -- relatively free of racial tension, a point I've made for a while but that bears repeating since it's not commonly discussed.

Many people mention Obama's election, but that came pretty late in this phase. Think back to Hurricane Katrina and Kanye West trying but failing to foment another round of L.A. riots, or Al Sharpton trying but failing to turn the Jena Six into a civil rights cause celebre, or the mainstream media trying but failing to turn the Duke lacrosse hoax into a fact that would show how evil white people still are. We shouldn't be distracted by minor exceptions like right-thinking people casting out James Watson, because that was an entirely elite and academic affair. It didn't set the entire country on fire. The same is true for the minor exception of Larry Summers being driven out of Harvard, which happened during a remarkably feminism-free time.

Indeed, it's hard to recognize the good times when they're happening -- unless they're fantastically good -- because losses loom larger than gains in our minds. Clearly racial tensions continue to go through cycles, no matter how much objective progress is made in improving the status of blacks relative to whites. Thus, we cannot expect further objective improvements to prevent another wave of racial tension.

Aside from the long mid-20th C. hiatus, there are apparently 25-year gaps between peaks, which is about one human generation. If the near future is like most of the past, we predict another peak around 2018, a prediction I've made before using similar reasoning about the length of time separating the general social hysterias that we've had -- although in those cases only going back to perhaps the 1920s or 1900s, not all the way back to the 1850s. Still, right now we're in a fairly calm phase and we should enjoy it while it lasts. If you feel the urge to keep quiet on any sort of racial issue, you should err on the side of being more vocal for now, since the mob isn't predicted to come out for another 5 years or so, and the peak not until 10 years from now. As a rough guide to which way the racial wind is blowing, simply ask yourself, "Does it feel like it did after Rodney King and the L.A. riots, or after the O.J. verdict?" If not, things aren't that bad.

Looking at absolute levels may be somewhat inaccurate -- maybe all that counts is where the upswings and downswings are. So I've also plotted the year-over-year percent change in how prevalent "nigger" is, this time using 10-year moving averages to smooth the data, because yearly fluctuations up or down are even more volatile than the underlying signal. In this graph, positive values mean the trend was moving upward, negative values mean it was moving downward, and values close to 0 mean it was staying fairly steady:


Again we see sustained positive growth during the Civil War and at the two bookends of the nadir of race relations, and we now also see a small amount of growth during the Harlem Renaissance era. The Civil Rights period jumps out the most. Here, the growth begins in the mid-1940s, but remember that the absolute level was at its lowest point then, so even the modest increases that began then show up as large percent increases. The PC era of the late '80s through the mid-'90s also clearly shows up. There are several periods of relative stasis, but I see three periods of decisive movement away from a nasty and bitter tone in our racial conversations: Reconstruction after the Civil War (admittedly not very long or very deep), the late '30s through WWII, and the "these are the good times" / Prince / Michael Jackson era of the mid-late '70s through the mid-'80s, which is the most pronounced of all.
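The growth-rate version of the plot works the same way. Here is a rough sketch, again with a made-up prevalence series, that takes year-over-year percent changes and then applies the 10-year smoothing; years with zero prevalence would need special handling.

    # Sketch of the growth-rate transformation on a hypothetical prevalence
    # series in chronological order. Positive smoothed values mean the trend
    # was rising, negative values mean it was falling.

    def pct_change(series):
        """Year-over-year percent change (assumes no zero values)."""
        return [100.0 * (curr - prev) / prev
                for prev, curr in zip(series, series[1:])]

    def moving_average(series, window=10):
        """Trailing moving average; windows are shorter at the start."""
        return [sum(series[max(0, i - window + 1):i + 1]) /
                len(series[max(0, i - window + 1):i + 1])
                for i in range(len(series))]

    raw = [0.10, 0.12, 0.11, 0.15, 0.14, 0.18, 0.16, 0.20, 0.19, 0.22, 0.21, 0.25]
    smoothed_growth = moving_average(pct_change(raw), window=10)
    print(smoothed_growth)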

That trend also showed up on television, when black-oriented sitcoms were incredibly popular. During the 1974-'75 season, 3 of the top 10 TV shows were Good Times, Sanford and Son, and The Jeffersons. The last such shows to become national hits, at least as far as I recall, were The Cosby Show, A Different World, Family Matters, The Fresh Prince of Bel-Air, and In Living Color, which were most popular in the late '80s and early '90s. Diff'rent Strokes spans this period perfectly in theme and in time, featuring an integrated cast (and not in the form of a "token black guy") and lasting from 1978 to 1986. The PC movement and its aftermath pretty much killed off the widely appealing black sitcom, although after a quick search I see that Disney had a top-rated show called That's So Raven in the middle of the tension-free 2000s. But it's hard to think of black-focused shows from the mid-'90s through the early 2000s that were as popular as Good Times or The Cosby Show.

But enough about TV. The point is simply that the academic material we're taught in school usually doesn't take into account what's popular on the radio or TV -- the people's culture only counts if it produced songs about walking the picket line, showed that women too can be mechanics, or proclaimed that we shall overcome. Historians, and people generally, are biased to see things as bad and getting worse, so they rarely notice when things were pretty good. But some aspects of popular culture can shed light on what was really going on, because its producers are not academics with an axe to grind but entrepreneurs who need to know their audience and stay in touch with the times.

Saturday, December 19, 2009

Brief: When were the most critically praised albums released?

To follow up on a previous post about when the best songs were released (according to Rolling Stone), here are some data from the website Best Ever Albums. They've taken 500 albums that appear on numerous lists of "best albums ever," which is better than using one source alone. If an album appears on 30 separate such lists, that indicates pretty widespread agreement. Here is how these top-ranking albums are distributed across time:




Music critics clearly prefer the more counter-cultural albums of the late '60s and early '70s, as well as those of the mid-'90s, although they do give credit to the more mainstream hard rock albums of the late '70s. It's not surprising that the 1980s don't do as well (the nadir coincides with New Wave music), since that music's appeal was too popular and upbeat -- and we all know that great art must be angry or cynical or weird. That may be somewhat true for high art, but when it comes to popular art like rock music or movies, I think the critics inappropriately imitate critics of high art. Sgt. Pepper's Lonely Hearts Club Band is not high art -- sorry.

Within the bounds of what pop music can hope to accomplish, I think the late '70s through the early '90s -- and to a lesser degree, the early-mid 1960s -- did the best. The later Beatles, Nirvana, etc., to me seem too self-conscious to count as the greater and deeper art forms that they were aspiring to.

Still, whether or not the critics are on the right path, these data show a remarkable consensus on their part -- otherwise, one person's list would hardly overlap with another's. I would say that appreciation of art forms is not arbitrary, just that -- in this case -- the critics reach agreement in the wrong direction!

Tuesday, December 15, 2009

Brief: Is the "culture of fear" irrational?

We hear a lot about how paranoid Americans are about certain things -- people in the middle of nowhere fearing that they could be the next target of a terrorist attack, consumers suspicious of everything they eat because they heard a news story about it causing cancer, and so on.

Of course, we could be overreacting to the magnitude of the problem, as when we panic about a scenario that has a 1 in a trillion chance of occurring but that would be disastrous if it did happen. It's not clear, though, what the "appropriate" level of concern should be for a disaster of a given magnitude and chance of happening. So the charge of irrationality is harder to level using this argument about a single event.

But we also get comparisons of risk between events wrong, for example when we fear traveling by airplane more than traveling by car, even though planes are safer. Here the case for irrationality is straightforward: for a given level of disaster (say, breaking your arm, dying, or whatever), we should panic more about the more probable ways that it can occur. The plane vs. car example makes us look irrational.

Still, there's another way we could measure how sensible our response is, only instead of comparing two sources of danger at the same point in time, comparing the same source of danger at different points in time. That is, for a given level of disaster, any change in the probability of it happening over time should cause us to adjust our level of concern accordingly. If dying in a plane crash becomes less and less likely over time, people should become less and less afraid of flying. When I looked into this before, I found that the NYT's coverage of murder and rape has become increasingly out of touch with reality: while the crime statistics show the murder and rape rates falling after the early 1990s, the NYT devoted more and more of its articles to these crimes. So at least at the Newspaper of Record, they were responding irrationally to danger.

But what about the average American? Maybe the NYT responds in the opposite way from what we might expect because, when violent crime is high, people see and hear about plenty of awful things outside of the media, so writing tons of articles about murder and rape wouldn't draw in many more readers. In contrast, when society becomes safer and safer, an article about murder or rape is suddenly shocking -- just when you thought things were safe! -- and so draws more readers, who start to doubt their declining concern about violence.

The General Social Survey asks people whether there's any area within a mile of their home where they would be afraid to walk alone at night. Here is a plot of the percent of people who say yes, along with the homicide rate for each year:


Clearly there is a tight fit between people's perception of danger and the reality underlying that fear. The Spearman rank correlation between the two within a given year is +0.74. That assumes people respond very quickly to changes in violence; the true fit might be even higher, because there appears to be something of a lag between a change in the homicide rate and the corresponding change in the level of fear. For example, the homicide rate starts to decline steadily after a peak in 1991, but people's fear doesn't peak until two years later, after which it too steadily declines. That makes sense: even if you read the crime statistics, those don't come out until two years later. To respond right away, you'd have to be involved in the collection and analysis of those data. The delay is more likely due to people hearing through word-of-mouth that things are getting better -- or not getting negative word-of-mouth reports -- and to the time it takes for this information to spread through people's social networks.
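For anyone who wants to replicate that check, here is a rough sketch of the same-year rank correlation and a two-year-lag version. The numbers below are made up for illustration; the real series are the GSS fear percentages and the official homicide rates.

    # Illustrative sketch: same-year Spearman correlation, then the same
    # correlation with fear lagged two years behind the homicide rate.
    from scipy.stats import spearmanr

    homicide = [9.4, 9.8, 9.3, 9.5, 9.0, 8.2, 7.4, 6.8]     # per 100,000 (illustrative)
    pct_afraid = [42, 44, 43, 45, 46, 44, 41, 39]            # % afraid (illustrative)

    rho_same, p_same = spearmanr(homicide, pct_afraid)
    print("same-year rho:", round(rho_same, 2))

    # If fear responds with a delay, this year's fear should track the
    # homicide rate from `lag` years earlier.
    lag = 2
    rho_lag, p_lag = spearmanr(homicide[:-lag], pct_afraid[lag:])
    print("2-year-lag rho:", round(rho_lag, 2))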

Putting all of the data together presents a mixed picture of how rational or irrational our response to the risk of danger is. But here is one solid piece of evidence that average people -- not those with an incentive to misrepresent reality, whether in a more negative or a more positive direction -- do respond rationally to risk.

GSS variables used: fear, year

Monday, December 14, 2009

Programming note

I got an email asking if late-semester work is piling on, but I'm actually just waiting until the year ends in order to round out a lot of posts with time series in them. I want to make sure I get all of the 2009 data in. So I'll probably post briefer items until the beginning of the new year, at which point there will be a glut of meatier posts. As always, feel free to leave requests in the comments.

Brief: Which Western countries care most about preserving the media?

A recent NYT article reviews a German study of how willing people in various countries are to pay for "online content" -- news reporting, videos, songs, etc. In the German-language PDF linked above, a table shows how people in the countries studied describe their preferences for getting online content. The columns read: free with ads, free with no ads, pay with no ads, pay with ads, and none of the above. You can probably figure out what the country names are.

There's a fair amount of variation in how much of the population is willing to pay at all and, conditional on paying or not, whether they'll accept advertising. To see what explains this, I've excluded people who answered "none of the above," leaving only those who expressed an opinion. In the table below, I've lumped the two "free" groups together and the two "pay" groups together. I've also calculated a "delusional" index, which answers the question, "Of those who want free content, what percent expect it to not even have advertising?" Obviously someone has to pay for news articles to get written, and some free-preferers recognize that advertising is the only viable way to fund typical media products if you aren't paying for them yourself. So, those who answer "free, no ads" are expecting media producers to behave like charities.
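To make the arithmetic explicit, here is a tiny sketch with made-up shares for one hypothetical country; the real shares come from the study's table.

    # Hypothetical shares among respondents who expressed an opinion,
    # i.e. after dropping "none of the above."
    free_with_ads = 45.0   # percent preferring free content with ads
    free_no_ads = 30.0     # percent preferring free content with no ads
    pay_no_ads = 20.0      # percent willing to pay, no ads
    pay_with_ads = 5.0     # percent willing to pay, with ads

    pct_pay = pay_no_ads + pay_with_ads   # willingness to pay at all
    # Delusional index: of those wanting free content, the share who also
    # expect it to carry no advertising.
    delusional = 100.0 * free_no_ads / (free_with_ads + free_no_ads)

    print(pct_pay)            # 25.0
    print(round(delusional))  # 40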

The table also shows GDP (PPP) per capita, on the hunch that richer countries will be more willing to pay for news, videos, and so on. In economics jargon, this online content is a "normal good": demand for it rises as a population gets wealthier, shifting the demand curve outward. The table is ordered from most to least willing to pay.

Country, GDP per capita (PPP $), % willing to pay, % delusional (of free-preferers)
Sweden 37,334 25 20
Netherlands 40,558 20 36
Great Britain 36,358 19 43
USA 47,440 18 27
Belgium 36,416 15 33
Italy 30,631 15 46
Greece 30,681 13 27
Bulgaria 12,322 12 41
Czech 25,118 11 56
Germany 35,539 10 56
Turkey 13,139 9 54
Romania 12,600 9 48
France 34,205 8 56
Portugal 22,232 8 52
Hungary 19,553 8 52
Spain 30,589 6 61
Poland 17,537 5 57


It sure looks like wealth plays a key role, so let's look at how GDP relates to both willingness to pay and how delusional the free-preferers are:


Since the willingness to pay and the delusional index are bounded percentages (they cannot go below 0 or above 100), I use the Spearman rank correlation instead of the Pearson correlation. The correlation between GDP and the percent willing to pay is +0.67 (two-tailed p less than 0.01). Between GDP and the delusional index, it is -0.49 (two-tailed p = 0.05). People in richer countries are more willing to pay, and they are less deluded about where the stuff they consume comes from -- that is, typically not from charities.
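Anyone can recompute those two correlations straight from the table above. Here is a quick sketch; the rho values should come out around the figures quoted, though the exact p-values depend on how the software handles ties.

    # Rank correlations between GDP per capita and the two indices, using the
    # values from the table above (countries in the same order as listed).
    from scipy.stats import spearmanr

    gdp = [37334, 40558, 36358, 47440, 36416, 30631, 30681, 12322, 25118,
           35539, 13139, 12600, 34205, 22232, 19553, 30589, 17537]
    pct_pay = [25, 20, 19, 18, 15, 15, 13, 12, 11, 10, 9, 9, 8, 8, 8, 6, 5]
    delusional = [20, 36, 43, 27, 33, 46, 27, 41, 56, 56, 54, 48, 56, 52, 52, 61, 57]

    print(spearmanr(gdp, pct_pay))      # rho should come out around +0.67
    print(spearmanr(gdp, delusional))   # rho should come out around -0.49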

The surprises are worth looking at. At the top we see mostly Anglo and Nordic countries, and at the bottom the more southern and eastern parts of Europe. However, France ranks pretty low, and Germany doesn't do much better, even though we Americans think of those countries as having more sophisticated media tastes and as treasuring the institutions of the media -- Gutenberg in Germany, for example. We also think of ourselves as much more bratty when it comes to the media -- that because our sophistication level is so low, we're only a tiny bit willing to pay, and if asked for any more than that, we'll junk the news in favor of some other cheap form of entertainment.

It looks like the countries with more pro-market views are more supportive of paying for online content. The French and Germans may like the idea of keeping the media alive and thriving, but Americans are more willing to do what it takes to ensure that happens.