Published 2019

Science fiction: AI fake news

Read any news article about the latest advancements in AI, and the author will likely cite a statistic showing that while an increasing portion of the population is bringing AI into the home - in the form of smart speakers, IoT-enabled doorbells, and security monitors - these same people (and many more) are fearful of the impact AI will have on society. A recent Pega study measuring consumers’ attitudes toward the development and use of AI revealed that over 70% of consumers surveyed “harbor some sort of fear of Artificial Intelligence.”

What sparks this fear? 

In one chapter of "21 Lessons for the 21st Century," Yuval Harari claims that science fiction plays a significant role in shaping people's perceptions of new technology. Take a minute to consider it: Do you think the average American is more likely to read a research paper or scholarly article about AI and ML advancements, or to watch a feature film about AI (The Matrix, Her, Ex Machina, anyone?)? Probably the latter.

With this in mind, it is no wonder people are afraid of AI. Many of these films follow the same script: Human creates AI, AI becomes smarter than human, AI destroys human. In Ex Machina, for example, an AI expert creates a humanoid robot who becomes deceptive and manipulative beyond the human's control, and ultimately stabs her creator in an attempt to escape into the real world.

Now, we can't blame science fiction filmmakers and authors for society's fear of technological advancements. The stories they tell about AI taking over the world are reflections of our society’s very real concerns about the future of technology and its impact on humans. We also have to realize that science fiction is just that - fiction: a genre that, by its nature, is meant to represent a false, exaggerated, or alternate reality. What great movie doesn't have a good dose of conflict? Who doesn't enjoy watching an epic human vs. robot fight scene?

You might be thinking, "So what? Everyone knows movies are fake." And while you’re probably correct in your assumption that most Americans know the difference between fiction and reality, Harari would argue that science fiction still impacts viewers in a very real, deep way. He claims that "If you think you can press some delete button and wipe all trace of Hollywood from your subconscious and your limbic system, you are deluding yourself." The fictional narratives about AI that are presented to us again and again make an indelible mark on our psyche, no matter how outlandish they may seem.

In a nationally representative survey of UK citizens, 25% of respondents described AI in terms of robots when asked about the technology. A significant majority of respondents also expressed more concern than excitement about the development of AI. University of Cambridge researchers posit that this mischaracterization of AI is largely due to the embodiment of AI in films and media: “Imagining AI as embodied will lend itself to some narratives more than others: it might, for example encourage the public to focus on worries of gun-toting killer robots rather than the real-world challenge of algorithmic bias.” (University of Cambridge)

Because of this reality, Harari claims that "...science fiction needs to be far more responsible in the way it depicts scientific realities; otherwise it might imbue people with the wrong ideas or focus their attention on the wrong problems." Here is where I would disagree. First, there is no way we could ever force authors or filmmakers to portray AI only in an “accurate” or “responsible” way. Fiction, by its nature, is inaccurate – it is not meant to represent the real world. And who has the authority or expertise to determine what the “right ideas” or the “right problems to address” even are? Who has the authority to deem a work of art “irresponsible”?

It is important to recognize that overly optimistic portrayals of AI in media can also have negative implications. University of Cambridge AI researchers point out that “Exaggerated expectations for what AI can achieve, and when, risk undermining future research and investment. Misplaced trust in AI technologies has already exposed people to a range of risks, including manipulation, privacy violation, and loss of autonomy.” 

Science fiction portrayals of AI and society’s perceptions of AI feed into each other. A society that is fearful of AI will inspire science fiction portrayals of robot domination and world destruction, which will only add to the existing societal fear. The only way to change the way AI is portrayed is to change the way humans feel about AI.

How you can tell your AI story in a truthful, impactful way

With our clients on the edge of innovation – such as Microsoft, Optum, and Cray – we focus on stories that matter to their customers. Much of the discussion surrounding AI has centered on celebrating technological advancements alone, rather than highlighting the real-life problems that AI has the potential to solve.

Below are a few things you should consider when creating your AI story:

1. Identify your protagonist (Hint: Your customers are your protagonists!)

  • What are they trying to achieve? What tools do they need to succeed?

2. Identify the conflict

  • What is keeping your customers from achieving their goals?

3. Identify your solution

  • How does your AI (or any other product/service) help your audience overcome their real challenges and achieve their goals? It’s tempting to fall back on what your product does - make sure you’re actually addressing a pressing challenge for your audience. How can you effectively communicate the purpose behind your offering?

By focusing the narrative on how AI can help solve real people’s problems, we can ground the discussion surrounding AI in fact, reality, and meaning rather than fiction. And this approach to creating an AI narrative goes beyond storytelling – it sets a brand up to innovate with purpose into the future.

Many studies, including this one from Pega, reveal that a large majority of consumers do not have an accurate understanding of the basic functionality and capabilities of AI technology. As Sir Francis Bacon so aptly stated: knowledge is power. The better we can help people understand what AI can do for them – both in its current and future state – the better equipped we will be to move this technology forward in a way that serves all humans.

At Northbound, we don’t make science fiction feature films. But we do help companies tell their stories in a truthful, impactful, customer-centric way. If you’d like some help in writing your story, shoot us an email. We can’t wait to get started.

 


Piper Donaghu | Strategist
