I have thought for a long time that momentum drives the support of the population. Whether it is a political candidate, a popular song, a philosophical principle, or a restaurant, once momentum is achieved, people will glom onto a cause merely because others like it. We obviously see this in the political arena (Obama’s 39% approval rating says more about those included in it than about the man himself), but I have thought that this principle applies to In-N-Out Burger and to the singer Macklemore as well. As it turns out, research suggests this may be true:
When Princeton sociology professor Matthew Salganik was a doctoral student at Columbia, he got interested in blockbusters — specifically, he got curious about the role of social influence in determining the success of music, art, and books. He and his coauthors set up an ingenious experiment: they created a website where people could listen to songs by unknown artists, then decide whether they wanted to download particular songs to their private library. Participants were randomly assigned to different virtual rooms. In some rooms, people saw only a list of songs, while in others they could see how many times a song had been downloaded. Altogether the researchers created eight rooms — parallel worlds, really — which allowed them to study not just the role of popularity, but also the role of chance, in the creation of hits.
The upshot? Not surprisingly, people downloaded songs that others had liked — in other words, they responded to social influence. But different songs took off in different rooms. As a song’s popularity snowballed, more and more people downloaded it. Eventually the different virtual worlds had created different mega-hits. One concrete example of this is the song “Lockdown” by the band 52 Metro. In one world it came in first; in another world the exact same song was 40th out of 48.
It would have been impossible to predict which songs would become hits, because so much decision making was based on other people’s earlier decisions. To a surprising degree, Salganik concluded, blockbusters are random.
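The snowball dynamic can be sketched as a simple cumulative-advantage simulation. This is a rough analogy, not the study’s actual model: the appeal values, the `social_weight` bonus, and the number of simulated listeners below are all invented for illustration.

```python
import random

def simulate_world(appeals, n_listeners=2000, social_weight=30, seed=0):
    """Simulate one 'room': each listener picks a song with probability
    proportional to its intrinsic appeal plus a bonus for prior downloads."""
    rng = random.Random(seed)
    downloads = [0] * len(appeals)
    for _ in range(n_listeners):
        weights = [a + social_weight * d for a, d in zip(appeals, downloads)]
        song = rng.choices(range(len(appeals)), weights=weights)[0]
        downloads[song] += 1
    return downloads

# 48 songs of roughly similar intrinsic appeal (invented numbers)
appeals = [1.0 + 0.01 * i for i in range(48)]

# "Rewind the world": run eight parallel rooms that differ only in chance
winners = [max(range(48), key=simulate_world(appeals, seed=s).__getitem__)
           for s in range(8)]
print(winners)  # different rooms typically crown different hits
```

Because early, essentially random downloads get amplified by the social bonus, each room tends to lock in a different winner even though the songs are nearly identical in appeal.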
I talked with Salganik about the implications of his findings for decision-making.
(For more on the experiment, read the New York Times’ coverage. For its implications for marketing, see this HBR article from Duncan Watts, one of Salganik’s co-investigators. You can see all the coverage and the original and subsequent academic papers, and review the data here.)
Screenwriter William Goldman, of The Princess Bride and The Stepford Wives, once said of Hollywood, “Nobody knows anything” — which of course has pretty significant implications for decision making under uncertainty. But that’s not quite true, right?
Right. This is best illustrated by the story of when I met with the head of research of one of the big networks. I showed him a figure that pretty much encapsulated our findings. He looked at it and said, more or less, “I already knew that.” And I said, kind of surprised because I had just spent months on this project, “Great! How?” And he said, “I can predict failure, but I can’t predict success.”
The implication is that people who are experts in these domains can recognize the material that’s really bad, but have almost no chance of knowing which of the remaining good material is going to break out and become a hit. And that’s exactly what we found in the experiment: low-appeal songs almost always do badly, but the high-appeal songs are harder to predict. Different parts of a system have different amounts of predictability.
What does that mean?
Unpredictability is not the same for every song. Higher quality songs, as a group, will outperform the lower quality ones, but which high-quality song is going to break out is impossible to figure out beforehand. In the experiment, we rewound the world and saw the range of possible outcomes that could have happened – and they’re all over the place!
What does this mean for how we make decisions in domains like this?
If you can accept the inherent unpredictability, then you can draw better lessons from your experiences. But many decision makers don’t do this. Instead, they develop rules of thumb that often obscure the reality: never release a romantic comedy in October. Why? Because everyone knows that you don’t release a romantic comedy in October. Why? Because we tried it once and failed. That kind of reasoning leads to the development of silly just-so stories that can be counterproductive.
Instead, you can reframe the decision. Asking, “Is there a possibility that this could take off?” is much easier to answer than trying to predict the hit, and so it’s more useful.
This suggests that judging decisions on outcomes is wrong.
To the extent that the process is predictable, you should be judging on outcome. But if you could imagine lots of disparate outcomes, then there is a fundamental limitation to what a decision maker — even one with perfect information — can do.
As a decision maker I find this depressing.
You shouldn’t! It’s really about accepting your limitations, and trying not to learn too much from any outcome that has a large random component.
Think about it like this: independent decisions give you more information than interdependent decisions. You can look at the success of Gangnam Style, and it seems like people are making a decision to watch the video. But those decisions aren’t independent – they’re interdependent. People are watching because other people are watching, not because it’s necessarily a great song (although it may be).
Within an organization, that means that individuals should be assessing the quality of an idea or product independently — at least initially. After that, a team can come to a consensus. If you all sit in a room together initially, though, you’re going to lose information because of the effects of social influence.
But if the consumers are interdependent, you’re still limited in what you can do to affect them. Should companies try to influence them, to alter the signal?
It’s a challenging problem. In the first set of experiments, we saw that success led to more success, which led to inherent unpredictability. So in a subsequent experiment, we tried to create false success to influence behavior: we inverted the rankings, displaying the popular songs as unpopular and the unpopular songs as popular, and then let the world evolve naturally. We found that, for some songs, this false popularity became self-sustaining. But for other songs it didn’t. The volunteers would listen to the low-quality, highly ranked songs more often but wouldn’t download them. Differences in quality create real limits to self-fulfilling prophecies.
It’s also worth noting that in this world where we distorted the popularity of the songs, the total downloads decreased 25 percent – the market contracted. People left earlier, and they were less likely to download the songs they had listened to. So while it may be in the interest of each producer to manipulate the signal, it has pernicious effects on the whole system.
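The inverted-chart result can be illustrated with a second toy simulation, separating listening (driven by the displayed ranking) from downloading (driven by actual appeal). Again this is a hedged sketch, not the study’s model; the appeal probabilities and listener count are invented.

```python
import random

def simulate_inverted(appeals, n_listeners=2000, seed=0):
    """One room with a falsified chart: listeners tend to sample whichever
    songs are *displayed* as popular, but only download what they actually
    like (appeal = per-listen download probability)."""
    rng = random.Random(seed)
    # Invert the chart: the least appealing songs are shown as most popular.
    display = [max(appeals) + min(appeals) - a for a in appeals]
    listens = [0] * len(appeals)
    downloads = [0] * len(appeals)
    for _ in range(n_listeners):
        i = rng.choices(range(len(appeals)), weights=display)[0]
        listens[i] += 1
        if rng.random() < appeals[i]:
            downloads[i] += 1
    return listens, downloads

# 24 low-appeal songs followed by 24 high-appeal songs (invented numbers)
appeals = [0.05] * 24 + [0.8] * 24
listens, downloads = simulate_inverted(appeals)
```

In this sketch, the falsely promoted low-appeal songs rack up listens but convert to downloads at a much lower rate, mirroring the finding that quality limits how far a self-fulfilling prophecy can run.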