One of the main attractions driving the current crop of wearable devices is their ability to deliver notifications in new, more elegant ways. Recent haptic advances like Apple's "Taptic Engine" and "Force Touch" bring us ever closer to fulfilling the promise of truly "silent" notifications, which today are typically delivered by vibrating motors that, while subtle, are still distinctly audible to others. But notifications, no matter how subtle or invisible to others, are still interruptions to us.
At the Lab, we’re looking at technologies and interaction strategies that afford a different behavior: rather than having notifications “pushed” at you in various forms, interrupting you and taking you out of the moment, we’re interested in improving the experience you have when you consciously decide to “check the stack” of unread messages, unhandled notifications, and so on. The big difference between this gesture and, say, a taptic tug on your wrist is who controls the timing: the tug happens when it happens, but you are in charge of when you check your stack.
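The pull-based model described above can be sketched in a few lines. This is a minimal illustration, not the Lab's implementation; the `NotificationStack` class and its methods are hypothetical names chosen for this example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Notification:
    source: str
    message: str

@dataclass
class NotificationStack:
    """Accumulates notifications silently instead of pushing them at the user."""
    _pending: List[Notification] = field(default_factory=list)

    def deliver(self, note: Notification) -> None:
        # A push-style system would interrupt the user here; we just enqueue.
        self._pending.append(note)

    def check(self) -> List[Notification]:
        # Runs only when the user decides to "check the stack":
        # return everything unread and clear the queue.
        unread, self._pending = self._pending, []
        return unread

stack = NotificationStack()
stack.deliver(Notification("mail", "3 unread messages"))
stack.deliver(Notification("calendar", "meeting at 4pm"))
print(len(stack.check()))  # the user reads the stack on their own schedule
```

The design choice is simply where the initiative lives: `deliver` never signals anyone, so the only interruption is the one the user chooses by calling `check`.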
This piece originally appeared on Source.
Over the last several months, The New York Times R&D Lab has been thinking about the future of online communities, particularly those communities and conversations that form around news organizations and their journalism. When we think about community discussion, we typically think about comments sections below our articles, or outside forums that link to our content (Twitter, Reddit, etc.). But what comes after free-text comments?
To explore this further, we developed Membrane, an experiment in permeable publishing. By permeable publishing, we mean a new form of reading experience in which readers may “push back” through the medium to ask specific, contextual (and constrained) questions of the author. Membrane gives readers two new abilities. First, they can highlight any piece of text within the article, select a question they want to ask (e.g. “Why is this?”, “Who is this?”, “How did this happen?”), and submit that question to the newsroom, asking the reporter to explain further or clarify. Second, they can browse, inline, the questions the reporter has already answered, giving them the benefit of the discussion that has already occurred. When a reader’s question is answered, they are notified, letting them know that the newsroom is paying attention to their feedback. In this way, the article becomes a channel through which questions can be asked, responses given, and relationships developed.
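The flow above, a constrained question attached to a highlight, an answer from the reporter, and inline browsing of what's already been answered, can be sketched as a small data model. Membrane's real implementation isn't described here; the class and method names below are illustrative assumptions, not its actual API.

```python
from dataclasses import dataclass
from typing import List, Optional

# The constrained question forms a reader may choose from.
ALLOWED_PROMPTS = {"Why is this?", "Who is this?", "How did this happen?"}

@dataclass
class Question:
    article_id: str
    highlight: str            # the span of article text the reader selected
    prompt: str               # one of the constrained question forms
    answer: Optional[str] = None

class Membrane:
    def __init__(self) -> None:
        self.questions: List[Question] = []

    def ask(self, article_id: str, highlight: str, prompt: str) -> Question:
        # Readers may only ask contextual, constrained questions.
        if prompt not in ALLOWED_PROMPTS:
            raise ValueError("question form not allowed")
        q = Question(article_id, highlight, prompt)
        self.questions.append(q)
        return q

    def answer(self, q: Question, text: str) -> None:
        # In the real system, answering would also notify the asking reader.
        q.answer = text

    def answered_inline(self, article_id: str) -> List[Question]:
        # Already-answered questions, browsable inline by later readers.
        return [q for q in self.questions
                if q.article_id == article_id and q.answer]

m = Membrane()
q = m.ask("article-1", "a disputed figure", "Why is this?")
m.answer(q, "Here is the reporter's clarification.")
print(len(m.answered_inline("article-1")))
```

Constraining `prompt` to a fixed set is what makes the channel manageable for the newsroom: questions arrive structured and anchored to a specific span of text rather than as free-form comments.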
In May of this year, Facebook announced Facebook Instant Articles, its foray into innovating the Facebook user experience around news reading. A month later, Apple introduced its own take with the Apple News app, which allows “stories to be specially formatted to look and feel like articles taken from publishers’ websites while still living inside Apple’s app”. There has been plenty of discussion about what these moves mean for the future of platforms and their relationship with publishers. But platform discussions aside, let’s examine a fundamental assumption being made here: both Facebook and Apple, who arguably have a huge amount of power to shape what the future of news looks like, have chosen to focus on a future that takes the shape of an article. The form and structure of how news is distributed hasn’t been questioned, even though that form was largely developed in response to the constraints of print (and early web) media.
Rather than look to large tech platforms to propose the future of news, perhaps there is a great opportunity for news organizations themselves to rethink those assumptions. After all, it is publishers who have the most to gain from innovation around their core products. So what might news look like if we start to rethink the way we conceive of articles?
Do you have thoughtful ideas about the future? Are you excited about the intersection of media, technology and design? Have you always wanted to work in a tall building? We’re looking for a Creative Technologist to join our lab and help prototype the future of The New York Times.
The job description is below. To apply, please refer to the job listing and instructions at http://www.nytco.com/careers/Technology/#23944. If you have further questions prior to applying, you can contact us at email@example.com.
In the past, the bots I’ve made have primarily been a tool for exploring systems and culture. I feed the bot data or text and then design constraints around how it can recombine or augment that material to create insight, humor or strange robot poetry. My delight in these automata has been difficult to articulate, though I think what I find most compelling is the way a bot can reflect aspects of ourselves or systems in a way that is slightly distorted, creating moments of cognitive dissonance that help to reveal the edges of a system or structure.
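As a concrete (and deliberately tiny) example of the "feed it material, constrain how it recombines" pattern described above: a bigram Markov chain is one of the simplest such constraints. This is a generic sketch of the technique, not any particular bot I've built; the function names are mine.

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words that follow it in the source text."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Recombine the source material by walking the chain from a start word."""
    random.seed(seed)  # deterministic output for this example
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break  # dead end: the word never appears mid-text
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the bot reflects the system and the system reflects the bot"
chain = build_chain(corpus)
print(babble(chain, "the"))
```

Every output is built entirely from the source's own transitions, which is exactly the slightly-distorted-mirror quality described above: the bot can only say things the material licenses, but in rearrangements the original never contained.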
Lately, however, I’ve become much more interested in bots as social creatures. We’ve done a lot of work in the lab recently on designing systems to be cooperative rather than “smart” — systems that can collaborate better with people, becoming conversational and leaving room for human interpretation. There’s lots more to be said about that, but I’ll leave it for another post.
But a large part of thinking about collaborative systems is thinking about what it means to have a conversation with a computer or a bot. What does that conversation look and feel like and what kind of underlying relationship does it create or reflect?