Last week, I was scheduled to give a talk at the University of Wisconsin-Milwaukee entitled “Narrative, Database, and Algorithm in the Hospitable Network.” Unfortunately, the boy got sick, and I had to postpone (I’ll be in Milwaukee sometime in the spring to give that talk).
I wrote a coda for the talk that won't be quite as rhetorically effective a few months from now. It took up some questions about polling and algorithms that emerged in the wake of President Obama's victory, and I had hoped to connect my discussion of robot writers to these questions as a way to tie my talk to recent events. But the kairotic window is quickly closing on that discussion, so I thought I'd post the coda here. I'll find some new material for the coda in the spring. It shouldn't be difficult, given that we are inundated with algorithms, narratives, and databases.
The bulk of my talk deals with the algorithmic journalists developed by Narrative Science, a company that transforms data into narratives by way of software. Many see the emergence of robot writers as a threat to the supposedly “human” realm of writing and narrative. That threat is often quickly dismissed, since robots can’t do what “we” do. However, I see these algorithmic journalists as exposing more than just another iteration of the “robots vs. humans” battle. After all, any piece of writing is the result of an algorithm that transforms data into narrative.
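To make the claim concrete, here is a toy sketch of what "transforming data into narrative" can look like in code. This is my own invented illustration, not Narrative Science's actual system; the team names, thresholds, and verb choices are all hypothetical. The point is that selection (which facts matter) and angle (how they are framed) are decisions encoded in the algorithm.

```python
def recap(game):
    """A toy 'algorithmic journalist': turn a box score (data) into a sentence (narrative)."""
    margin = abs(game["home_score"] - game["away_score"])
    # Selection: the algorithm decides which facts matter (here, the winner and the margin).
    if game["home_score"] > game["away_score"]:
        winner, w, loser, l = game["home"], game["home_score"], game["away"], game["away_score"]
    else:
        winner, w, loser, l = game["away"], game["away_score"], game["home"], game["home_score"]
    # Angle: the same data yields different framings depending on the margin.
    verb = "edged" if margin <= 3 else "routed" if margin >= 20 else "beat"
    return f"{winner} {verb} {loser} {w}-{l}."

print(recap({"home": "Brewers", "away": "Cubs", "home_score": 6, "away_score": 5}))
```

Trivial as it is, the sketch makes the argument's premise visible: the "story" is nowhere in the data; it is produced by an authored procedure.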
The scope of the data, the complexity of the algorithm, and the angle of the story may all change based on who or what is writing. But we are not all that different from robots. And if we imagine all writing as algorithmic, then we can begin to think of algorithmic thinking as a way to toggle between the worldviews of narrative and database. Lev Manovich famously developed this theoretical pair in The Language of New Media to describe how new media stages a kind of battle between narrative and database. The narrative worldview posits a single path through data, making sense of data by way of selection and exclusion. The worldview of database is more inclusive (though it makes some determinations as well) and allows for more pathways through the data to exist simultaneously.
Algorithmic thinking provides a way to sit in the liminal space between narrative and database, and rhetoric presents a long tradition of algorithms that help us sit in that space. With rhetoric, we can toggle between these two approaches to the world. So, this is the talk in a nutshell. Obviously, I’ve moved quickly through this argument, and I’ve skipped some of the hard work of showing how rhetoric is algorithmic. But I’ll save that for my visit to Milwaukee in the spring. Instead, I want to jump to the coda mentioned previously. While I spend much of the talk on the robots of Narrative Science, the coda took up a different robot:
This robot has served, simultaneously, as hero and villain in recent weeks. Nate Silver, baseball stats geek turned political forecasting geek, has been the target of derision (and celebration, depending on your political leanings) during election season. Silver's fivethirtyeight.com (and if you're like me, you have burned Silver's site into your laptop screen during the past months) examines the vast database of available polling data and combines it with numerous variables (the biases of certain polls, jobs numbers, historical voter turnout numbers) to project election winners. Like Narrative Science's "meta-writers" (who write algorithms that generate stories), Silver and his team author algorithms that process data and generate narratives. He tells us stories about all of this data, helping us make sense of it. But if the hospitable network enables this kind of analysis, opening up databases to anyone willing to author algorithms that make sense of it, that same hospitality has been extended to those who believe that Nate Silver might be a witch. In the lead-up to the election, Silver defended himself against those who for various reasons—partisan leanings and television ratings chief among them—insisted that the 2012 presidential election was a "toss-up." He even bet MSNBC's Joe Scarborough, who refused to believe Silver's projections of a relatively easy Obama victory, $1,000 that his projection of an Obama victory was accurate. Scarborough was just one of many pundits who pitted their "gut instincts" against Silver's sophisticated statistical models. This was essentially a replay, on a different stage, of the battle between stats geeks and old, crusty baseball scouts. Numbers vs. "the eye test." As we know, Silver came out on top as his model correctly predicted the electoral vote count.
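The aggregation step described above can be sketched in a few lines. To be clear, this is not FiveThirtyEight's actual model, which is far more elaborate; the weighting scheme and "house effect" adjustment here are invented for illustration. But it shows the basic move: an authored procedure that adjusts, weights, and combines many polls into a single projection.

```python
def project(polls):
    """Toy poll aggregator: weighted average of adjusted margins.

    Each poll is a dict with:
      margin       - candidate's lead in points
      house_effect - the pollster's known average lean (subtracted out)
      sample       - number of respondents
      days_old     - age of the poll in days
    """
    total, weight_sum = 0.0, 0.0
    for p in polls:
        adjusted = p["margin"] - p["house_effect"]   # correct for pollster bias
        weight = p["sample"] / (1 + p["days_old"])   # favor large, recent polls
        total += adjusted * weight
        weight_sum += weight
    return total / weight_sum

polls = [
    {"margin": 2.0, "house_effect": 1.0, "sample": 1000, "days_old": 1},
    {"margin": 4.0, "house_effect": 0.0, "sample": 500,  "days_old": 4},
]
print(round(project(polls), 2))  # a single projected margin
```

Every choice in this sketch—how fast old polls decay, how house effects are estimated—is an argument, which is precisely the sense in which the aggregator is an authored rhetorical artifact rather than raw data speaking for itself.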
Two weeks ago, Silver told Charlie Rose that he believes he knows why he is the target of people like Scarborough: “I think I get a lot of grief because I frustrate narratives that are told by pundits and journalists that don’t have a lot of grounding in objective reality.” This is how many of us have understood this controversy in the wake of Tuesday night’s result: Nate Silver uses data. Joe Scarborough uses narrative. The former always trumps the latter. But this draws too clean a line between database and narrative, splitting the two along a human/nonhuman axis. Thus, Silver might be a “witch” because he uses data, and Scarborough was safe (or safer) from being put on trial because he relies on his human instincts. But both Silver and Scarborough author algorithms that transform data into narrative. Silver’s narratives may have been proven more accurate and may have been grounded in “big data,” but we closely watched election returns on election night because the news media had convinced us that the election was close—a toss-up.
The algorithms of Scarborough, Hannity, and a host of pundits across the political spectrum generated narratives that persuaded many of us to hesitate before proclaiming the election to be “in the bag.” This is not to say that all algorithms are created equal or that the narratives spun by various political noise machines should be treated the same as those generated by logical claims and sound evidence. In fact, this is precisely the problem. We are struggling with ways to sift and sort these narratives, which are sometimes spun from the exact same data.
And there are narratives that are more accurate than others. Silver, after all, was right. But we should also recognize that Silver's success was not an indication that "big data" will always triumph or that narrative is flawed and "all too human." That success was the result of a sound algorithm, an authored artifact that stood as an argument for the best way to move between a database of polling data and narratives about the winner of the election.
The very fact that no explanation of data can claim to be the explanation means that citizens and media consumers are in a difficult position. How are these various, competing, conflicting narratives to be judged and compared? Which narrative should be trusted? What choices is an algorithm making when generating a narrative? How might we reverse engineer that narrative and speculate about what motivates it? How does one oscillate between the competing spheres of narrative and database?
Rhetoricians have spent millennia building a vast library of algorithms that can help us understand the motives at work as data is used to spin narratives. These procedures have not necessarily been put forth as algorithms, but reframing rhetoric as a body of theory that generates reading and writing machines presents us with a particularly useful approach to our contemporary predicament. If our present environment is hospitable to conflicting narratives, then we require ways of sifting through those narratives. Databases grow, meaning that narratives proliferate. Determining how one might judge those narratives is an urgent problem for those hoping to make informed decisions about information. Rhetorical theory, which has always been algorithmic, provides one way of dealing with this struggle.