Yesterday, Microsoft announced Tay, its new chatbot for US audiences. This morning, Tay was taken offline, apparently in response to several racist and offensive tweets that supported genocide, concentration camps, and more. Tay was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”. Tay would converse with you on Twitter, Kik, or GroupMe and was aimed at 18–24 year olds. According to the promotional site, “The more you chat with Tay the smarter she gets”. A mere 24 hours later, I’m not sure “smarter” is the adjective anyone would choose.
Tay serves as a perfect illustration of what can happen when bots engage with humans in conversation. The naive response to these kinds of incidents is, “Well, design your bot not to be offensive”. And indeed, much of what has happened with Tay is the result of a lack of thoughtful design. But this situation brings up much larger issues about the inability of bots to negotiate the complexity of human conversation. I would go a step beyond “make your bot polite” and say that we should stop trying to make bots act like people. We need to develop new models for how we converse with machines that move beyond human verisimilitude towards something that is more in keeping with their abilities. Rather than humanoid bots, we should design mechanomorphs: bots that can create a different set of expectations around how we converse with them. If we make bots more machine-like, our conversations with them can have new boundaries, creating a new space less fraught with pre-existing social norms.
One of the main attractions driving the current crop of wearable devices is their ability to deliver notifications in new, more elegant ways. Recent haptic advances like the “Taptic” and “Force Touch” features get us ever closer to fulfilling the promise of “silent” notifications, normally accomplished by vibrating motors that, while subtle, are still distinctly audible to others. But notifications, no matter how subtle or invisible to others, are still interruptions to us.
At the Lab, we’re looking at technologies and interaction strategies that afford a different behavior: rather than having notifications “pushed” at you in various forms, interrupting you and taking you out of the moment, we’re interested in improving the experience you have when you consciously decide to “check the stack” of unread messages, unhandled notifications, and so on. The big difference between this gesture and, say, a taptic tug on your wrist is who controls the timing: the tug happens when it happens, but you are in charge of when you check your stack.
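As a rough illustration of the pull model described above (a sketch only — the class and method names are assumptions, not anything the Lab has built), a notification “stack” might simply accumulate items silently and surrender them only when the user asks:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NotificationStack:
    """Hypothetical pull-based notification store: incoming items
    accumulate silently instead of interrupting, and are delivered
    only when the user decides to check."""
    _items: List[str] = field(default_factory=list)

    def push(self, message: str) -> None:
        # New notifications are stored quietly -- no buzz, no banner.
        self._items.append(message)

    def check(self) -> List[str]:
        # The user decides when to read; checking drains the stack.
        unread, self._items = self._items, []
        return unread

stack = NotificationStack()
stack.push("New message from Alice")
stack.push("Build finished")
print(stack.check())  # both items, in arrival order
print(stack.check())  # [] -- nothing new since the last check
```

The design choice is simply inverted control: `push` never notifies anyone, and all delivery happens inside `check`, on the reader’s schedule.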
This piece originally appeared on Source.
Over the last several months, The New York Times R&D Lab has been thinking about the future of online communities, particularly those communities and conversations that form around news organizations and their journalism. When we think about community discussion, we typically think about comments sections below our articles, or outside forums that link to our content (Twitter, Reddit, etc.). But what comes after free-text comments?
To explore this further, we developed Membrane, an experiment in permeable publishing. By permeable publishing, we mean a new form of reading experience in which readers may “push back” through the medium to ask specific, contextual (and constrained) questions of the author. Membrane gives readers two new abilities. First, they can highlight any piece of text within the article, select a question they want to ask (e.g. “Why is this?”, “Who is this?”, “How did this happen?”), and submit that question to the newsroom, asking the reporter for further explanation or clarification. Second, they can browse, inline, the questions that have already been answered by the reporter, giving them the benefit of the discussion that has already occurred. When a reader’s question is answered, they are notified, letting them know that the newsroom is paying attention to their feedback. In this way, the article becomes a channel through which questions can be asked, responses can be given, and relationships can be developed.
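A minimal sketch of how a Membrane-style reader question might be modeled — all names, fields, and the fixed question types here are assumptions for illustration, not the Lab’s actual implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical set of constrained question types, mirroring the
# fixed choices described above ("Why is this?", "Who is this?", ...).
QUESTION_TYPES = {"why", "who", "how"}

@dataclass
class Question:
    highlight: str               # the span of article text the reader selected
    qtype: str                   # one of QUESTION_TYPES
    answer: Optional[str] = None # filled in by the reporter

@dataclass
class Article:
    body: str
    questions: List[Question] = field(default_factory=list)

    def ask(self, highlight: str, qtype: str) -> Question:
        # Readers may only ask about text that actually appears in the
        # article, and only via one of the constrained question types.
        if qtype not in QUESTION_TYPES:
            raise ValueError(f"unsupported question type: {qtype}")
        if highlight not in self.body:
            raise ValueError("highlighted text not found in article")
        q = Question(highlight=highlight, qtype=qtype)
        self.questions.append(q)
        return q

    def answered(self) -> List[Question]:
        # Questions the reporter has responded to, browsable inline.
        return [q for q in self.questions if q.answer is not None]

article = Article(body="The council approved the rezoning plan on Tuesday.")
q = article.ask("the rezoning plan", "why")
q.answer = "The plan followed last year's housing report."
print(len(article.answered()))  # 1
```

The constraint on `qtype` is the interesting part: limiting readers to a small menu of question forms is what keeps the channel structured, as opposed to free-text comments.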
In May of this year, Facebook announced Facebook Instant Articles, its foray into innovating the Facebook user experience around news reading. A month later, Apple introduced its own take with the Apple News app, which allows “stories to be specially formatted to look and feel like articles taken from publishers’ websites while still living inside Apple’s app”. There has been plenty of discussion about what these moves mean for the future of platforms and their relationship with publishers. But platform discussions aside, let’s examine a fundamental assumption being made here: both Facebook and Apple, who arguably have a huge amount of power to shape what the future of news looks like, have chosen to focus on a future that takes the shape of an article. The form and structure of how news is distributed hasn’t been questioned, even though that form was largely developed in response to the constraints of print (and early web) media.
Rather than look to large tech platforms to propose the future of news, perhaps there is a great opportunity for news organizations themselves to rethink those assumptions. After all, it is publishers who have the most to gain from innovation around their core products. So what might news look like if we start to rethink the way we conceive of articles?
Do you have thoughtful ideas about the future? Are you excited about the intersection of media, technology and design? Have you always wanted to work in a tall building? We’re looking for a Creative Technologist to join our lab and help prototype the future of The New York Times.
The job description is below. To apply, please refer to the job listing and instructions at http://www.nytco.com/careers/Technology/#23944. If you have further questions prior to applying, you can contact us at firstname.lastname@example.org.