Diff is a small device that monitors the internal event stream of The New York Times and prints a summary each time an active headline changes. As it runs, it generates a long stream of changes printed on thermal paper: text removed from a headline is rendered inverted (white on black), while text added to a headline is underlined.
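The rendering scheme can be sketched in a few lines. This is a minimal illustration, not our actual implementation: it assumes a word-level diff via Python's `difflib` and the common ESC/POS control codes for underline and reverse printing that most thermal printers accept (check your printer's manual before relying on them).

```python
from difflib import SequenceMatcher

# Common ESC/POS control codes (an assumption; verify against your printer's manual)
UNDERLINE_ON = b"\x1b-\x01"   # ESC - 1
UNDERLINE_OFF = b"\x1b-\x00"  # ESC - 0
INVERT_ON = b"\x1dB\x01"      # GS B 1 (white-on-black reverse printing)
INVERT_OFF = b"\x1dB\x00"     # GS B 0

def render_headline_diff(old: str, new: str) -> bytes:
    """Render a word-level diff of two headlines as printer bytes:
    removed words inverted, added words underlined."""
    a, b = old.split(), new.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=a, b=b).get_opcodes():
        if op in ("delete", "replace"):
            out.append(INVERT_ON + " ".join(a[i1:i2]).encode() + INVERT_OFF)
        if op in ("insert", "replace"):
            out.append(UNDERLINE_ON + " ".join(b[j1:j2]).encode() + UNDERLINE_OFF)
        if op == "equal":
            out.append(" ".join(a[i1:i2]).encode())
    return b" ".join(out) + b"\n"
```

Writing the returned bytes to the printer's serial or USB device would produce one printed line per headline change, with the before and after states visible in a single glance.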
We conceived of Diff as a hardware experiment that transposed rhythms and signals from the network realm onto the physical world, while remaining a background process that wouldn’t irritate everyone in the Lab. Because we had recently developed a framework for handling the synchronous, high-volume event stream from our internal publishing process, we decided to look at headline diffs because they were an easy-to-implement, easy-to-understand example of what kinds of insight this new data source could afford.
Of course, we were aware of and inspired by the excellent NewsDiffs project, which provides a more complete and persistent summary of changes to entire articles across several different websites. Our objective in making Diff was as much rooted in the notion of ‘fixing’ an evanescent resource in a place and time (as NewsDiffs does) as in a reaction to the emerging shape of ‘internet things’ whose purpose is to transpose or transform the properties of network space onto physical space, and vice versa.
Living with Diff on my desk was both interesting and tolerable—no small feat, since I was wary of installing a noisy printer at eye level two feet from my face! As it turns out, headlines changed about every five to seven minutes, unless something interesting was happening, in which case the pace picked up. Most changes were stylistic or grammatical, but every now and then a truly interesting alteration would catch my attention: the change to reflect a jury’s verdict being announced, the mounting of details surrounding an unfolding event, an editor finding just the right hook to introduce an opinion piece.
We’ve only just begun exploring the full potential of this project’s data source, which is exciting in its own right: it’s essentially a near-real-time, highly detailed stream of every event our publishing framework sees, from the first words typed into our CMS, to an article’s publication in its section, to its promotion to the front page.
I unplugged Diff after a week or so of printing, and have saved the 300-odd feet of generated text for some future application. Expect to see more stream-processing tools, internet things, and interaction experiments here soon!