Misinformation is getting worse

May 14, 2019 | General
"Machines that speak and write will make misinformation worse," says Amy Stapleton at Harvard Business Review.

Technologists have long dreamed of building machines that converse as nimbly as humans. Early practitioners famously underestimated the magnitude of the challenge. Yet in 2019, we appear to be mere steps from the goal. The world’s dominant technology players are rushing to create software — whether intelligent or not — that can both converse and write as effectively as humans.

So, what’s the problem? We are not adequately prepared to address the hazards that could come with our successful launch of conversational machines. To anticipate the challenges ahead, it helps to take a quick look at the underlying technologies.

Amazon sits at the forefront of the voice assistant industry, and executives on the Amazon Alexa team are candid about their aim to make Alexa more conversational. The Seattle company launched the Alexa Prize three years ago to motivate the best and brightest academic teams. The goal is to make Alexa more effective in “open domain conversations,” meaning discussions that roam freely across topics.

The coming era of proficient talking and writing machines holds many challenges that few seem to be seriously addressing. Here is a small list to ponder:

  • How do we ensure that improved conversational technologies don’t spawn a new generation of pernicious and convincing fake-news bots?
  • How do companies protect themselves from rogue bots speaking on their behalf? The most frequently cited example is Microsoft’s Tay, which notoriously went off the rails by parroting extremely inappropriate statements fed to it by malicious interlocutors.
  • How can we avoid scenarios in which machines try to please us by telling us only the things we want to hear? DeepMind published a study of recommendation engines highlighting the role these systems play in creating echo chambers that wall people off from differing viewpoints.
  • What methods can we employ to prevent verbose machines from dragging all human discourse down to the lowest common denominator? Recall that software used to generate automated responses is trained largely on datasets of the most mundane human conversations.

It should no longer come as a surprise that every technology created by humans comes with pros and cons. As we race headlong into the era of conversational machines, it’s time to start thinking about their downsides and designing tools to combat them.