In a world of deep fakes and Google Duplex, how far should brands take AI? | VentureBeat (venturebeat.com)
Have you noticed that AI has finally passed into everyday reality? No one needs to reference science fiction apologetically to talk about it. It’s plainly here. It’s plainly real. And now that it’s real, our questions about it need to get real in response.

No, I don’t mean, “Will the robots rise up and take over?” Rather, now that AI is growing up and gaining power, our questions have to grow up with it.

The effect on brand experiences


Here’s one question to start: Do we, as an industry, have an obligation to think about how using AI will impact not only the brands we work for, but also our brands’ audiences and the talent we sometimes use to engage them?

For example, when Google debuted its Duplex platform and made an actual phone call with it, the voice (complete with “ums” and “mmm-hmms”) sounded utterly human. Since then, Google has begun to talk about how, when placing calls, Duplex will have to disclose that it is not, in fact, human.

That’s important. Failing to disclose would be ethically questionable (and in some US states, likely illegal).
As brand marketers follow suit and present equally lifelike experiences for consumers, what obligations will they have to remove the human mask and reveal the machine at work?

It’s a question we need hard, honest answers to. Because while academics, scientists, and PhDs are ushering in advances in AI itself, we’re the ones putting it in people’s hands, devices, homes, and newsfeeds.

When AI crosses the line, and the political aisle


Case in point: In a recent YouTube video, former US President Barack Obama called current US President Donald Trump an “unqualified dipshit.” How about that? You may agree, you may disagree. But we can all agree on one thing: The video was fake. We know that because the people who made it told us it was. But our eyes and ears didn’t quite know the difference.

The video was produced using an AI-driven technology called “deep fake,” which allows amateurs to use open-source software to create convincingly real audio and video with very little time and effort. The “President Obama” video was actually voiced by actor Jordan Peele, as revealed at the end of the clip.

A new kind of identity theft


Deep fakes first came to prominence on Reddit. There, users were doing something more pernicious than political mudslinging; they were superimposing actors’ faces onto pornographic video clips. With much fanfare, Reddit banned the posting of such videos because of the deep and self-evident ethical boundaries they violate.

Even pornographic sites followed suit and banned deep fakes, drawing a hard line in the sand in an industry that most people wouldn’t quickly associate with ethical or moral standards.

Using a different, but related, technology for “Rogue One: A Star Wars Story,” Disney resurrected the characters of Grand Moff Tarkin and a young Princess Leia. The latter (Carrie Fisher) had been able to consent to the resurrection; the former (Peter Cushing) had been deceased for many years.

So far, here’s the running tally on AI-driven artifice: 1) a multi-industry rejection of deep fakes as soon as they became pervasive; and 2) complex, theatrical manipulations that, so far, are not quite perfect and have been the sole preserve of powerful entertainment companies with nearly limitless financial and technical resources.

So what happens when the technology further improves (which it will) and becomes accessible to marketers and brands (which it always does)? Imagine a casting call where a dozen actors are digitally and convincingly superimposed on a stand-in model prior to engaging the actors in real life.

Or imagine trying out ad copy with perfectly synthesized voiceover from actors whose voices are being digitally reproduced and who don’t even know they’re saying what they seem to be saying. Imagine promoting a product you’d never use or a cause you abhor.

Without consent, are these practices ethical?

(Note: I’ve only been talking about one small category of AI and ethics. There are other, bigger topics I can’t cover properly in the space of a short article, such as AI’s ability to propagate insidious societal biases.)

Such questions and debates are urgent. There are now companies that claim to use AI to shape how we think, exploiting the human mind’s weakness for instant gratification. Armed with large data sets, these companies want to exploit how we mediate motivation and desire.

While some companies claim to be acting on the side of good, selling their wares only as fitness and education tools, the question remains:

Should we use AI to optimize and exploit physiological responses in order to impact a consumer’s behavior?

Because that power will increasingly come into our grasp. We’ve already seen fallout from this in rudimentary AI, like fake news bots. We’ll soon be able to multiply their impact by many orders of magnitude. As companies acquire ever more data (whether abiding by platform terms of service or not), the ethics of how we use that technology become far less certain.

As Reddit and even pornographic sites have shown, bad ethical behavior can be combated.

As an industry, are we really okay with exploiting people — their likenesses, their digital environments, their mental sovereignty — in order to squeeze out every last ounce of profit, regardless of the ethics?

Perhaps these questions and conversations, though made urgent by AI, have been with us for longer than we’d like to think.

Ricky Bacon is Group Technology Director for digital experience design agency Critical Mass in NYC.
    Francisco Gimeno - BC Analyst: In a world where we are going to witness not just virtual but augmented reality, and where the real and the virtual are one and the same, many ethical issues come to light. We are just witnessing the beginning, with bots spreading fake news to influence elections and campaigns, but very soon AI technology will have to choose whether to tackle these ethical problems (as biological technology did by not allowing the creation of chimeras) or to press on and create a new series of problems. Interesting times indeed.
    Dean Louis: I think there's a fine line between what is right and wrong, and companies can easily take it too far. For the longest time, competitive advertising was illegal: you couldn't mention your competitor in your campaign, whether directly or indirectly, because it was considered offensive and created "bad press" by insinuating that the other company wasn't as good. But as moral standing and thinking evolved, people decided that competition was good, even healthy, so now we can politely slag off our competition all we want, as long as we don't go so far as to say they basically suck! So too, there's going to be a fine line over whether AI should be allowed to influence our thinking. I really don't like the fact that Google uses targeted advertising based on something I may have looked at or accidentally opened on my phone or computer, because most of the time it's stuff I don't really care about, and predetermining what I would be interested in on any given day is just ridiculous. I realize that the vast majority are very sheep-like in their thinking, but not everyone wants to be led on a leash; some people like to think for themselves and form their own opinions, and fake news and fabricated realities are not what I'd be interested in. In all honesty, all of that is just there to distract from what is really going on around us and to keep people ignorant!