[bot screenshot]

In the past, the bots I’ve made have primarily been a tool for exploring systems and culture. I feed the bot data or text and then design constraints around how it can recombine or augment that material to create insight, humor or strange robot poetry. My delight in these automata has been difficult to articulate, though I think what I find most compelling is the way a bot can reflect aspects of ourselves or systems in a way that is slightly distorted, creating moments of cognitive dissonance that help to reveal the edges of a system or structure.

Lately, however, I’ve become much more interested in bots as social creatures. We’ve done a lot of work in the lab recently on designing systems to be cooperative rather than “smart” — systems that can collaborate better with people, becoming conversational and leaving room for human interpretation. There’s lots more to be said about that, but I’ll leave it for another post.

But a large part of thinking about collaborative systems is thinking about what it means to have a conversation with a computer or a bot. What does that conversation look and feel like, and what kind of underlying relationship does it create or reflect?

As an initial foray into exploring this, I created a Markov bot to participate in a Slack channel of which I’m a member. This is a social Slack rather than a New York Times Slack, but we all do related work to one another, so there’s a rich exchange of ideas, jokes, etc. The bot, whose name is ‘almosthuman’, ingests the whole corpus of chat history from a particular channel and uses a Markov chain to try to participate in the conversation. It will interject a comment at randomized intervals (but at least every 15 messages), and it will also reply to comments that mention ‘@almosthuman’.
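To make the mechanics a little more concrete, here is a minimal sketch in Python of the kind of logic involved. This is not almosthuman’s actual code: the class names, the order-2 chain, and the countdown mechanism are all illustrative assumptions, and a real version would connect to Slack’s API rather than run as a local loop.

```python
import random
from collections import defaultdict

# Hypothetical constant: the bot interjects at least every 15 messages.
INTERJECT_CEILING = 15


class MarkovChain:
    """Order-2 word-level Markov chain built from chat history."""

    def __init__(self, messages):
        self.transitions = defaultdict(list)
        self.starts = []
        for msg in messages:
            words = msg.split()
            if len(words) < 3:
                continue  # too short to form a (word, word) -> word triple
            self.starts.append((words[0], words[1]))
            for a, b, c in zip(words, words[1:], words[2:]):
                self.transitions[(a, b)].append(c)

    def generate(self, max_words=30):
        """Walk the chain from a random starting bigram."""
        a, b = random.choice(self.starts)
        out = [a, b]
        while len(out) < max_words:
            followers = self.transitions.get((a, b))
            if not followers:
                break  # reached a state with no known continuation
            a, b = b, random.choice(followers)
            out.append(b)
        return " ".join(out)


class AlmostHuman:
    """Decides when to speak: at randomized intervals (but at least
    every INTERJECT_CEILING messages) or when directly mentioned."""

    def __init__(self, messages):
        self.chain = MarkovChain(messages)
        self._reset_countdown()

    def _reset_countdown(self):
        self.countdown = random.randint(1, INTERJECT_CEILING)

    def on_message(self, text):
        if "@almosthuman" in text:
            return self.chain.generate()  # always reply to mentions
        self.countdown -= 1
        if self.countdown <= 0:
            self._reset_countdown()
            return self.chain.generate()
        return None  # stay quiet this time


if __name__ == "__main__":
    # Toy stand-in for an exported channel history.
    history = [
        "has anyone seen the new bot",
        "the bot is learning our lingo already",
    ]
    bot = AlmostHuman(history)
    print(bot.on_message("@almosthuman what do you think?"))
```

The interjection schedule is the interesting design choice here: resetting the countdown to a random value between 1 and 15 keeps the bot’s timing unpredictable while still guaranteeing it never stays silent for more than 15 messages.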

[bot screenshot]

I was curious to see what it would feel like to have a bot that was trying to engage as part of a social group, learning the lingo and attempting to participate in kind. Thus far, the group has happily embraced the bot in their midst. Its contributions are sometimes nonsense, but often uncanny or hilarious. It definitely exhibits a reflective quality, where its commentary is rich with the flavor of this particular group’s interests and use of language. People sometimes ask the bot for advice or opinions as part of a conversation, adding an element of weirdness and surprise to many exchanges.

[bot screenshot]

I haven’t yet found the right words to characterize what this bot relationship feels like. It’s non-threatening, but doesn’t quite feel like a child or a pet. Yet it’s clearly not a peer either. A charming alien, perhaps? The notable aspect is that it doesn’t seem anthropomorphic or zoomorphic. It is very much a different kind of otherness, but one that has subjectivity and with which we can establish a relationship.

One member of the Slack channel called it “textomorphic”, saying:

“If I had to picture it right now, it would be a conurbation, a set of knots, crawling around through a pile of everything we've written here, seen as one long roll of parchment. It's a creature that moves, bounces, skitters, but it doesn't rise above the paper — you only see it as the words drag around in its wake.”

Another member said:

“So there's naturally a parrot-like pet function involved, but the better it gets at chains, the more uncanny the parrot becomes. May be pushing the same triggers as e.g. hearing glossolalia, but in the visual/textual domain? But yeah, a peer in the sense that a pet is a peer: has agency and character, but no plot-vital role; comic relief sidekick.”

And there was some debate:

[bot screenshot]

The conversation about how to define the bot’s relationship to us really elucidated the idea that we are moving toward what one member called “non-human mental models”. We are beginning to understand machine subjectivity in a way that is in keeping with its nature rather than forcing it into other constructs, like a person or an animal. The recent news about Google’s image recognition neural nets generating feedback loops of machine-made images has also pointed in this direction. As we see machines start to talk to themselves, our sense of their particularly alien awareness only deepens.

I see the coming years as ones in which we will have more conversations with systems, due to the growth of chat and text-based interfaces, as well as (hopefully) more conscious design approaches that enable fluid collaborations between humans and machines. As we do so, I’m eager to see the taxonomy of relationships with those systems evolve. Will some bots be our companions, creating relationships that feel peer-like and intimate? Will others be our agents, sent out into the world to perform tasks on our behalf? Will there be bureaucrats, antagonists, house pets, bot gangs? I hope that the relationships we form will be complex and diverse, a parallel to the breadth and heterogeneity of human relationships. And as designers and developers, I think we have a fantastic moment of opportunity where we can design our systems thoughtfully and critically, with an eye not just to functionality, but to the kinds of relationships they establish with the humans who interact with them.

Further reading: Some other folks have simultaneously been doing some good thinking about related ideas, including Matt Webb and Jonathan Libov.