Important Reading: What Google's AI ambitions mean for humanity (fastcompany.com)
Google has more computing power, data, and talent to pursue artificial intelligence than any other company on Earth, and it's not slowing down. That's why humans can't, either.
By Katrina Brooker

The human brain is a funny thing. Certain memories can stick with us forever: the birth of a child, a car crash, an election day. But we only store some details—the color of the hospital delivery room or the smell of the polling station—while others fade, such as the face of the nurse when that child was born, or what we were wearing during that accident.

For Google CEO Sundar Pichai, the day he watched AI rise out of a lab is one he’ll remember forever.

“This was 2012, in a room with a small team, and there were just a few of us,” he tells me. An engineer named Jeff Dean, a legendary programmer at Google who helped build its search engine, had been working on a new project and wanted Pichai to have a look.

“Anytime Jeff wants to update you on something, you just get excited by it,” he says.

Pichai doesn’t recall exactly which building he was in when Dean presented his work, though odd details of that day have stuck with him. He remembers standing, rather than sitting, and someone joking about an HR snafu that had designated the newly hired Geoffrey Hinton—the “Father of Deep Learning,” an AI researcher for four decades, and, later, a Turing Award winner—as an intern.

The future CEO of Google was an SVP at the time, running Chrome and Apps, and he hadn’t been thinking about AI. No one at Google was, really, not in a significant way.

Yes, Google cofounders Larry Page and Sergey Brin had stated publicly 12 years prior that artificial intelligence would transform the company: "The ideal search engine is smart," Page told Online magazine in May 2000. "It has to understand your query, and it has to understand all the documents, and that's clearly AI." But at Google and elsewhere, machine learning had been delivering meager results for decades, despite grand promises.


[Illustration: Gabriel Silveira]

Now, though, powerful forces were stirring inside Google’s servers. For a little more than a year, Dean, Andrew Ng, and their colleagues had been building a massive network of interconnected computers, linked together in ways modeled on the human brain.

The team had engineered 16,000 processors in 1,000 computers, which, combined, were capable of making 1 billion connections. This was unprecedented for a computer system, though still far from a human brain's capacity of more than 100 trillion connections.

To test how this massive neural net processed data, the engineers had run a deceptively simple experiment. For three days straight, they had fed the machine a diet of millions of random images from videos on YouTube, which Google had acquired in 2006. They gave it no other instructions, waiting to see what it would do if left on its own.

What they learned was that a computer brain bingeing on YouTube is not so different from a human's. In a remote part of the computer's memory, Dean and his peers discovered that it had spontaneously generated a blurry, overpixelated image of one thing it had seen repeatedly over the course of 72 hours: a cat. This was a machine teaching itself to think.
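Google's actual system was an autoencoder-style network spread across those 16,000 processors, far beyond anything reproducible on a laptop. As a minimal sketch of the underlying idea, unsupervised learning, here is a toy autoencoder in modern TensorFlow; the random arrays standing in for YouTube frames, the layer sizes, and the training settings are illustrative assumptions, not Google's actual setup:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in for unlabeled video frames: random 64x64 grayscale images.
# (The real experiment used millions of YouTube thumbnails; this is a toy placeholder.)
frames = np.random.rand(256, 64, 64, 1).astype("float32")

# A tiny autoencoder: it receives no labels, only the task of reconstructing
# its input, so any structure it captures is discovered on its own.
encoder = tf.keras.Sequential([
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),
])
decoder = tf.keras.Sequential([
    layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the frames themselves are the only target.
autoencoder.fit(frames, frames, epochs=3, batch_size=32, verbose=0)

# After training, units in the encoder respond to recurring patterns in the
# data; in Google's experiment, one unit ended up responding to cat faces.
features = encoder(frames[:8])
print(features.shape)  # (8, 16, 16, 8)
```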

The day he watched this kind of intelligence emerge from Google's servers for the first time, Pichai remembers feeling a shift in his thinking, a sense of premonition. "This thing was going to scale up and maybe reveal the way the universe works," he says. "This will be the most important thing we work on as humanity."

The rise of AI inside Google resembles a journey billions of us are on collectively, hurtling into a digital future that few of us fully understand and that we can't opt out of, one dominated in large part by Google. Few other companies (let alone governments) on the planet have the ability or ambition to advance computerized thought.

Google operates more products with at least 1 billion users each than any other tech company on earth: Android, Chrome, Drive, Gmail, Google Play Store, Maps, Photos, Search, and YouTube. Unless you live in China, if you have an internet connection, you almost certainly rely on Google to augment some parts of your brain.

Shortly after Pichai took over as CEO, in 2015, he set out to remake Google as an "AI first" company.

It already had several research-oriented AI divisions, including Google Brain and DeepMind (which it acquired in 2014), and Pichai focused on turning all that intelligence about intelligence into new and better Google products. Gmail’s Smart Compose, introduced in May 2018, is already suggesting more than 2 billion characters in email drafts each week.

Google Translate can re-create your own voice in a language you don't speak. And Duplex, Google's AI-powered personal assistant, can book appointments or reservations for you by phone using a voice that sounds so human that many recipients of the calls didn't realize they were talking to a robot, prompting ethical questions and public complaints.

The company says it has always disclosed to consumers that the calls are coming from Google.

[Illustration: Gabriel Silveira]

The full reach of Google's AI influence stretches far beyond the company's own offerings. Outside developers, at startups and big corporations alike, now use Google's AI tools for everything from training smart satellites to monitor changes to the earth's surface to rooting out abusive language on Twitter (well, it's trying).

There are now millions of devices using Google AI, and this is just the beginning. Google is also on the verge of achieving what's known as quantum supremacy, the debut of a new breed of computer that will be able to crack complex equations a million or more times faster than conventional ones.

We are about to enter the rocket age of computing.

Used for good, artificial intelligence has the potential to help society. It may find cures to deadly diseases (Google execs say that its intelligent machines have demonstrated the ability to detect lung cancer a full year earlier than human doctors), feed the hungry, and even heal the climate.

A paper posted in June to arXiv, the research repository hosted by Cornell University, by several leading AI researchers (including ones affiliated with Google) identified ways machine learning can address climate change, from accelerating the development of solar fuels to radically optimizing energy usage.

Used for ill, AI has the potential to empower tyrants, crush human rights, and destroy democracy, freedom, and privacy.

The American Civil Liberties Union issued a report in June titled “The Dawn of Robot Surveillance” that warned how millions of surveillance cameras (such as those sold by Google) already installed across the United States could employ AI to enable government monitoring and control of citizens. This is already happening in parts of China.

A lawsuit filed that same month accuses Google of using AI in hospitals to violate patients’ privacy.

Every powerful advance in human history has been used for both good and evil. The printing press enabled the spread of Thomas Paine's "Common Sense" but also Adolf Hitler's fascist manifesto "Mein Kampf."

With AI, however, there's an extra dimension to this predicament: The printing press doesn't choose the type it sets. AI, when it achieves its full potential, will be able to do just that.

Now is the time to ask questions. “Think about the kinds of thoughts you wish people had inventing fire, starting the industrial revolution, or [developing] atomic power,” says Greg Brockman, cofounder of OpenAI, a startup focused on building artificial general intelligence that received a $1 billion investment from Microsoft in July.

Parties on both the political left and right argue that Google is too big and needs to be broken up. Would a fragmented Google democratize AI? Or, as leaders at the company warn, would it hand AI supremacy to the Chinese government, which has stated its intention to take the lead? President Xi Jinping has committed more than $150 billion toward the goal of becoming the world’s AI leader by 2030.

Inside Google, dueling factions are competing over the future of AI. Thousands of employees are in revolt against their leaders, trying to stop the tech they’re building from being used to help governments spy or wage war.

How Google decides to develop and deploy its AI may very well determine whether the technology will ultimately help or harm humanity. “Once you build these [AI] systems, they can be deployed across the whole world,” explains Reid Hoffman, the LinkedIn cofounder and VC who’s on the board of the Institute for Human-Centered Artificial Intelligence at Stanford University.

"That means anything [their creators] get right or wrong will have a correspondingly massive-scale impact."

"In the beginning, the neural network is untrained," says Jeff Dean one glorious spring evening in Mountain View, California.

He is standing under a palm tree just outside the Shoreline Amphitheatre, where Google is hosting a party to celebrate the opening day of I/O, its annual technology showcase.

This event is where Google reveals to developers, and the rest of the world, where it is heading next. Dean, in a mauve-gray polo, jeans, sneakers, and a backpack double-strapped to his shoulders, is one of the headliners. "It's like meeting Bono," gushes one Korean software programmer who rushed over to take a selfie with Dean after he spoke at an event earlier in the day.

"Jeff is God," another tells me solemnly, almost surprised that I don't already know this. Around Google, Dean is often compared to Chuck Norris, the action star known for his kung fu moves and taking on multiple assailants at once.

"Oh, that looks good! I'll have one of those," Dean says with a grin as a waiter stops by with a tray of vegan tapioca pudding cups.

Leaning against a tree, he speaks about neural networks the way Laird Hamilton might describe surfing the Teahupo’o break. His eyes light up and his hands move in sweeping gestures.

“Okay, so here are the layers of the network,” he says, grabbing the tree and using the grizzled trunk to explain how the neurons of a computer brain interconnect.

He looks intently at the tree, as though he sees something hidden inside it.

Last year, Pichai named Dean head of Google AI, meaning that he's responsible for what the company will invest in and build, a role he earned in part by scaling the YouTube neural net experiment into a new framework for training Google's machines to think on a massive scale.

That system started as an internal project called DistBelief, which many teams, including Android, Maps, and YouTube, began using to make their products smarter.

But by the summer of 2014, as DistBelief grew inside Google, Dean started to see that it had flaws.

It had not been designed to adapt to technological shifts such as the rise of GPUs (the computer chips that process graphics) or the emergence of speech as a highly complex data set.

Also, DistBelief was not initially designed to be open source, which limited its growth. So he made a bold decision: Build a new version that would be open to all. In November 2015, Pichai introduced TensorFlow, DistBelief's successor, one of his first big announcements as CEO.

It’s impossible to overstate the significance of opening TensorFlow to developers outside of Google. “People couldn’t wait to get their hands on it,” says Ian Bratt, director of machine learning at Arm, one of the world’s largest designers of computer chips. Today, Twitter is using it to build bots to monitor conversations, rank tweets, and entice people to spend more time in their feed.

Airbus is training satellites to be able to examine nearly any part of the earth's surface to within a few feet. Students in New Delhi have transformed mobile devices into air-quality monitors. This past spring, Google released early versions of TensorFlow 2.0, which makes its AI even more accessible to inexperienced developers.

The ultimate goal is to make creating AI apps as easy as building a website. TensorFlow has now been downloaded approximately 41 million times. Millions of devices—cars, drones, satellites, laptops, phones—use it to learn, think, reason, and create.
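For a sense of what that accessibility looks like in practice, here is roughly the canonical TensorFlow 2.0 getting-started example: a complete image classifier, trained and evaluated, in about a dozen lines of Keras code. The dataset (MNIST handwritten digits) and the layer sizes are the standard tutorial choices, not anything tied to the products above:

```python
import tensorflow as tf

# Load a small built-in dataset of handwritten digits and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, train, and evaluate a complete classifier in a few lines of Keras.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=1)
print(model.evaluate(x_test, y_test, verbose=0))  # [loss, accuracy]
```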

An internal company document shows a chart tracking the usage of TensorFlow inside Google (which, by extension, tracks machine learning projects): It's up by 5,000% since 2015.

Tech insiders, though, point out that if TensorFlow is a gift to developers, it may also be a Trojan horse.

“I am worried that they are trying to be the gatekeepers of AI,” says an ex-Google engineer, who asked not to be named because his current work is dependent on access to Google’s platform.

At present, TensorFlow has just one main competitor, Facebook’s PyTorch, which is popular among academics. That gives Google a lot of control over the foundational layer of AI, and could tie its availability to other Google imperatives. “Look at what [Google’s] done with Android,” this person continues.

Last year, European Union regulators levied a $5 billion fine on the company for requiring electronics manufacturers to pre-install Google apps on devices running its mobile operating system. Google is appealing, but it faces further investigations for its competitive practices in both Europe and India.

By helping AI proliferate, Google has created demand for new tools and products that it can sell. One example is Tensor Processing Units (TPUs), which are integrated circuits designed to accelerate applications using TensorFlow.

If developers need more power for their TensorFlow apps (and they usually do), they can pay Google for time and space on these chips running in Google data centers.
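From the developer's side, renting that extra power amounts to a few additional lines at the top of an otherwise ordinary TensorFlow program. The sketch below assumes a Cloud TPU has already been provisioned; the TPU name "my-tpu" is a placeholder, and the tiny model is purely illustrative:

```python
import tensorflow as tf

# Locate and initialize a Cloud TPU. "my-tpu" is a placeholder for the name
# of a TPU you have already provisioned (and are paying for) in Google Cloud.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# A distribution strategy replicates training across the TPU's cores.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # The model itself is ordinary Keras code; only where it runs changes.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) would now run on the TPU instead of a local CPU or GPU.
```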

TensorFlow's success has won over the skeptics within Google's leadership. "Everybody knew that AI didn't work," Sergey Brin recalled to an interviewer at the World Economic Forum in 2017. "People tried it, they tried neural nets, and none of it worked." Even when Dean and his team started making progress, Brin was dismissive.

"Jeff Dean would periodically come up to me and say, 'Look, the computer made a picture of a cat,' and I said, 'Okay, that's very nice, Jeff,'" he said. But he had to admit that AI was "the most significant development in computing in my lifetime."

Stage 4 of the Shoreline Amphitheatre fits 526 people, and every seat is taken. It's the second day of I/O, and Jen Gennai, Google's head of responsible innovation, is hosting a session on "Writing the Playbook for Fair and Ethical Artificial Intelligence and Machine Learning."

She tells the crowd: "We've identified four areas that are our red lines, technologies that we will not pursue. We will not build or deploy weapons. We will also not deploy technologies that we feel violate international human rights."

(The company also pledges to eschew technologies that cause "overall harm" and that "gather or use information for surveillance, violating internationally accepted norms.") She and two other Google executives go on to explain how the company now incorporates its AI principles into everything it builds, and that Google has a comprehensive plan for tackling everything from rooting out biases in its algorithms to forecasting the unintended consequences of AI.

After the talk, a small group of developers from different companies mingles, dissatisfied. “I don’t feel like we got enough,” observes one, an employee of a large international corporation that uses TensorFlow and frequently partners with Google.

"They are telling us, 'Don't worry about it. We got this.' We all know they don't 'got this.'"

These developers have every right to be skeptical. Google's rhetoric has often contrasted with its actions, and the stakes are higher with artificial intelligence. Gizmodo was first to report, in March 2018, that the company had a Pentagon contract for AI drone-strike technology, dubbed Project Maven.

After Google employees protested for three months, Pichai announced that the contract would not be renewed. Shortly thereafter, another project came to light: Dragonfly, a search engine for Chinese users designed to be as powerful and ubiquitous as the one reportedly used for 94% of U.S. searches, except that it would also comply with China’s censorship rules, which ban content on some topics related to human rights, democracy, freedom of speech, and civil disobedience.

Dragonfly would also link users' phone numbers to their searches. Employees protested for another four months, and activists attempted to enlist Amnesty International and Google shareholders in the fight. Last December, Pichai told Congress that Google has no plans to launch the search engine in China.

[Illustration: Gabriel Silveira]

During that turmoil, a Google engineer confronted Dean directly about whether the company would continue working with oppressive regimes. “We need to know: What are the red lines?” the engineer tells me, echoing Google’s own verbiage. “I was pushing for: What are things you would never do? I never got clarification.” The employee quit in protest.

When asked today about the dark side of AI, the amiable Dean turns serious. “People in my organization were very outspoken about what we should be doing with the Department of Defense,” he says, referring to their work on Maven. Dean invokes Google’s list of AI applications that it won’t pursue.

“One of them is work on autonomous weapons. That, to me, is something I don’t want to work on or have anything to do with,” he says, looking me straight in the eyes.

Amid the initial Project Maven controversy, The Intercept and The New York Times published emails that revealed Google’s internal concerns about how the extent of its AI ambitions might be received.

“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons,” Fei-Fei Li, then Google Cloud’s chief AI scientist (and one of the authors of Google’s AI principles), told colleagues in one of them.

"Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI—if not THE most. This is red meat to the media to find all the ways to damage Google."

She also suggested that the company plant some positive PR stories about Google's democratization of AI and something described as humanistic AI. "I'd be super careful to protect these very positive images," she wrote. (Li declined to be interviewed for this story. She has since left the company to co-lead Stanford's Human-Centered AI Institute.)

These AI protests have created an ongoing PR crisis. In March, the company announced an Advanced Technology External Advisory Council, colloquially known as its AI ethics board, but it fell apart just over a week later when thousands of Google employees protested its makeup.

The board had included a drone-company CEO and the president of the right-wing Heritage Foundation, who had made public statements that were transphobic and denied climate change.

Pichai himself has stepped in several times.

Last November, he wrote to employees, acknowledging Google's missteps. "We recognize that we have not always gotten everything right in the past and we are sincerely sorry for that," he said. "It's clear we need to make some changes." But controversy continues to dog Google over how it deploys technology.

In August, an employee organization called Googlers for Human Rights released a public petition with more than 800 signatures asking the company not to offer any tech to Customs and Border Protection, Immigration and Customs Enforcement, or the Office of Refugee Resettlement. (A representative for Google responds that the company supports employee activism.)

When I ask Pichai about how Google’s AI principles influence his own work, he connects it to another corporate priority: assuaging concerns about what Google does with all the user data it possesses.

"What I am pushing the teams on is around AI and privacy," he says. "It's a bit counterintuitive, but I think AI gives us a chance to enhance privacy." Last spring he discussed efforts within Google to use machine learning to protect data on a smartphone from being accessed by anyone other than its owner.

He says fears about the dangers of AI are overblown.

“It’s important for people to understand what not to worry about, too, which is, it’s really early, and we do have time,” he explains.

Pichai hopes that Google can quell any disquiet over AI's dangers by showcasing its virtue. Under an initiative dubbed AI for Social Good, Google is deploying its machine learning to solve what it describes as "the world's greatest social, humanitarian, and environmental problems."

There are teams harnessing AI to forecast floods, track whales, diagnose cancer, and detect illegal mining and logging. At I/O, one young entrepreneur from Uganda, invited by Google, spoke of using TensorFlow to track armyworms across Africa, a cause of famine throughout the continent. Google's AI Impact Challenge, launched in 2018, offers $25 million in grants to charities and startups applying AI to causes such as saving rain forests and fighting fires.

The company has also pulled back on two controversial initiatives amid the AI debate. Last December, Google shelved its facial-recognition software, even as rival Amazon moved forward with its own version despite its own employee protests and charges that it enables law enforcement to racially profile citizens.

One insider estimates that the move could cost Google billions in revenue. The company also withdrew from bidding on a $10 billion project to provide cloud computing to the Pentagon, citing ethical concerns. Amazon and Microsoft are still in the running.

When asked how Google determines whether a project is good or bad for society, Pichai cites something called “the lip-reading project.” A team of engineers had an idea to use AI in cameras to read lips. The intention was to enable communication for nonverbal people. However, some raised concerns about unintended consequences.

Could bad actors use it for surveillance through, say, street cameras? The engineers tested it on street cams, CCTV, and other public cameras, and determined that the AI needs to be close-up to work. Google published a paper detailing the effort, confident that, for now, it can be used safely.

It's a sunny afternoon in Santa Barbara, California, but the thermometer inside Google's lab reads 10 millikelvin, about 1/100th of a kelvin above absolute zero. "This is one of the coldest places in the universe," Erik Lucero, a research scientist working in the lab, tells me. "Inside of this," he says, pointing to a shiny metal container, "is colder than space."

The vessel is the size and shape of an oil drum, made of copper and plated with real gold. Thick wires made of niobium-titanium emerge from the top, octopus-like, carrying control and measurement signals to and from its processor.

This barrel encases one of the most fragile and potentially most powerful machines on earth: a quantum computer.

If all goes as planned, it will turbocharge the capabilities of artificial intelligence in ways that may well reshape how we think about the universe—and humanity’s place in it.

The dream of quantum computing has been around since the '80s, when Richard Feynman, an original member of the Manhattan Project, which built the atomic bomb, began theorizing about ways to unlock computing power by harnessing the same quantum mechanics that underpins nuclear science.

Today, our computers run on bits of information that equal either zero or one in value; they have to calculate outcomes, probabilities, and equations step-by-step, serially exhausting every option before arriving at an answer. Quantum computers, by contrast, use qubits, in which zero and one can exist simultaneously.

This allows qubits to process certain kinds of information far faster. How much faster? One widely cited example is that a 300-qubit computer could perform as many simultaneous calculations as there are atoms in the universe.
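The arithmetic behind that comparison is simple to check: n qubits can hold a superposition over 2^n basis states, and 2^300 comfortably exceeds the commonly cited rough estimate of about 10^80 atoms in the observable universe. A few lines of Python make the scale concrete (the atom count is an order-of-magnitude figure, not a precise one):

```python
# n qubits span 2**n basis states; compare 300 qubits with a rough,
# commonly cited estimate of the atoms in the observable universe.
n_qubits = 300
states = 2 ** n_qubits
atoms_in_universe = 10 ** 80  # order-of-magnitude estimate, not a precise count

print(f"{states:.3e} simultaneous basis states")  # ~2.037e+90
print(f"{atoms_in_universe:.3e} atoms (rough)")   # 1.000e+80
print(states > atoms_in_universe)                 # True
```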

“Those are actually qubits,” Lucero says, directing me to look under a microscope, where I see some fuzzy black Xs. There are 22 of them.

This is the smaller batch. Elsewhere in the lab, Google has created 72 qubits. For now, they can only survive for 20 microseconds, and conditions have to be colder than outer space.

In order to create a commercially viable quantum computer, Google will need to produce enough qubits and keep them stable and error-free long enough to be able to make any major computing breakthroughs.

Other labs are competing here, too, but Google has assembled some of the world’s foremost experts to find ways to create an environment in which qubits can survive and thrive. It’s moving faster toward this goal than anyone expected: Last December, Google tested its best quantum processor against a regular laptop, and the laptop won.

A few weeks later, after some adjustments to the processor, it beat the laptop, but still lagged behind a desktop computer. In February, the quantum computer outmatched every other computer in the lab.

Hartmut Neven, who leads Google's quantum team, presented the lab's advances during Google's Quantum Spring Symposium in May, describing the increases in processing power as double exponential, a mind-bending rate of growth that looks like this:

2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4)

Within computer science circles, this growth rate for quantum computing has been dubbed Neven’s law, a nod to Moore’s law, which posits that “classical” computing advances by doubling the number of transistors that can fit on a chip every 18 months.
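A toy calculation shows why double-exponential growth is so startling compared with the ordinary doubling of Moore's law. The step counts and multipliers below are purely illustrative, not Google's benchmark data:

```python
# Illustrative comparison (not actual benchmark data): single-exponential
# growth in the spirit of Moore's law versus the double-exponential rate
# Neven describes, where the exponent itself doubles at every step.
for step in range(1, 7):
    moore = 2 ** step          # capability doubles each step
    neven = 2 ** (2 ** step)   # capability squares the previous value
    print(f"step {step}: Moore x{moore:<3} double exponential x{neven}")
# By step 6, ordinary doubling yields a 64x gain, while the double-exponential
# curve yields 2**64, roughly 1.8e19.
```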

Now Google's team is homing in on the major milestone known as quantum supremacy. It will still be years before Google's quantum computer reaches its full potential. But in the lab, the anticipation of this moment is palpable.

"There are currently problems that humanity [will] not be able to solve without a quantum computer," Lucero says, standing next to the machine poised to achieve this feat. "The whole idea that you are jumping into a new potential for humankind, that's exciting."

The room hums rhythmically, the sound of qubits hatching. What will it mean for humanity when computers can think and calculate at exponentially faster speeds, and on parallel planes? This emerging science may be able to explain the deepest mysteries of the universe: dark matter, black holes, the human brain.

“It’s the ‘Hello, World!’ moment,” Lucero says, referring to the 1984 introduction of Macintosh, the computer that launched a new era for a generation of coders. As Google opens the door to this new cosmos, we all need to get ready for what’s on the other side.

A version of this article appeared in the October 2019 issue of Fast Company magazine.
Francisco Gimeno - BC Analyst: Any SF enthusiast reading this article will remember Asimov's Multivac, the global computer that is sometimes a servant of humanity, sometimes the reason for its end, and sometimes a universal dictator, always for the good of humanity, of course. Google's work up to now is already grappling not just with how to create a working AI, but with how to deal with the consequences of having a nonhuman intelligence that could become more powerful than ours.