How tech giants and their autonomous machines may subvert humanity and plunge the world into a dystopia
Central Ideas:
1 – Treating AI as a public good does not prevent the G-Mafia (the US titans) from profiting and growing. But if we fail to treat it as a public good, we will soon lose the luxury of debating and analyzing automation in the context of human rights and geopolitics.
2 – Optimistic scenario: new leaders agree on enabling and cooperating on shared AI initiatives and policies. Inspired by the Greek mythology of Mother Earth, they envision GAIA: Global Alliance on Intelligence Augmentation.
3 – Pragmatic scenario: AI models and frameworks need lots of data to learn, improve, and be implemented. Therefore, the most troubling part of a new AI system is not the algorithms and models, but collecting the right data for the machine to train and learn from.
4 – Doomsday scenario: China has developed an ASI (artificial superintelligence) with a single purpose: to exterminate the populations of the United States and its allies. It is the end of the United States and democracy, and the rise of the Réngong Zhinéng (AI) Dynasty.
5 – The best way to architect systemic change is to materialize the creation of GAIA as soon as possible, and it should be located near an existing AI hub. The best location for GAIA is Montreal, Canada.
About the author:
Amy Webb is one of America’s leading futurists, the author of the award-winning bestseller The Signals Are Talking: Why today’s fringe is tomorrow’s mainstream, in which she explains her method for predicting the future. She is a professor of strategic vision at NYU Stern School of Business.
Introduction
Those who are not immersed in the day-to-day life of AI research and development cannot see the signs clearly, which is why public discussion of AI gravitates toward the tyrannical robots of recent movies, or else reflects a kind of manic, unbridled optimism. This lack of nuance is the first cause of the AI problem: some radically overestimate the applicability of AI, while others argue that it will become an invincible weapon.
I know this because I have spent most of the last decade researching artificial intelligence and meeting people and organizations inside and outside the AI ecosystem. I have advised a wide range of companies at the epicenter of artificial intelligence, such as Microsoft and IBM. I have met and advised outside stakeholders: venture capitalists and private investment managers, leaders in the Department of Defense and State Department, and many legislators who feel that regulation is the only way forward. I have also attended several meetings with academic researchers and technologists who work directly on the front lines. It is rare for those working directly with AI to share the extreme apocalyptic or utopian visions of the future that we usually hear in the news.
In the United States, we suffer from a tragic lack of foresight. We have a “nowist” mentality, planning for the next few years of our lives more than for any other time horizon. The nowist mentality rewards short-term technological achievement but exempts us from taking responsibility for how technology may evolve and for the repercussions of our actions. We easily forget that what we do in the present can have serious consequences in the future. Small wonder, then, that the government has outsourced the future development of AI to six publicly traded companies whose achievements are remarkable, but whose financial interests do not always align with what is best for our individual freedoms, our communities, our interests, and our democratic ideals.
Meanwhile, in China, the AI development path is caught up in the grand ambitions of the government, which is rapidly building the foundation to become the world’s undisputed AI superpower. In July 2017, the Chinese government unveiled its next-generation AI development plan aimed at becoming the global leader in AI by 2030, with a domestic industry worth at least $150 billion – a plan that involves channeling state funds into new labs and startups, as well as launching new schools specifically to train China’s next generation of AI talent.
We humans are rapidly losing our awareness just as machines are awakening. We have begun to pass key milestones in the technical and geopolitical development of AI, yet with each new advance, AI becomes more invisible to us. The means by which our data is extracted and refined become less evident, while our ability to understand how autonomous systems make decisions becomes less transparent. We therefore face a chasm between how AI impacts everyday life in the present and how it will as it grows exponentially in the years and decades ahead. Narrowing that chasm as much as possible, through a critique of the routes AI is currently taking, is the mission of this book.
Part I – Haunted Machines
Chapter 1 – Mind and Machine: A Brief History of AI
In 1955, faculty members Marvin Minsky (mathematics and neurology) and John McCarthy (mathematics), along with Claude Shannon (mathematician and cryptographer at Bell Labs) and Nathaniel Rochester (computer scientist at IBM), proposed a two-month seminar in order to explore Turing’s work and the promise of machine learning. Their theory: if it were possible to describe every feature of human intelligence, a machine could soon be taught to simulate it. But this would require a large and diverse group of experts in many different fields. They believed that a substantial advance could be made if they put together an interdisciplinary group of researchers who worked hard, without interruption, during the summer. The organization of the group was of paramount importance.
It was Geoffrey Hinton, a professor at the University of Toronto, who envisioned a new kind of neural network: one composed of multiple layers, each extracting different information, until the network recognized what it was looking for. The only way to build this kind of knowledge into an artificial intelligence system, he thought, was to develop learning algorithms that allowed computers to learn on their own. Instead of being taught to perform a single constrained task well, the networks would be built to train themselves.
These new deep neural networks (DNNs) would require a more advanced type of machine learning – “deep learning” – to teach computers to perform human-like tasks with less (or even no) human supervision. One immediate advantage is scalability: in a neural network, a few neurons make a few choices, but the number of possible choices can increase exponentially with more layers. In other words: humans learn individually, but humanity learns collectively. Imagine a large deep neural network learning as a unified whole – with the possibility of increasing speed, efficiency, and cost savings over time.
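The layered extraction idea can be sketched in a few lines of code. The following toy network is an illustrative assumption, not anything from the book: the layer sizes, random weights, and names are invented purely to show how stacking layers composes simple features into more abstract ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity that lets each layer extract new features
    return np.maximum(0.0, x)

# A toy 3-layer "deep" network: each weight matrix is one layer of
# feature extraction (the sizes 8 -> 16 -> 16 -> 3 are arbitrary).
layers = [rng.standard_normal((8, 16)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((16, 3))]

def forward(x):
    # Pass the input through every layer in turn; deeper layers
    # combine the simpler features found by earlier ones.
    for w in layers[:-1]:
        x = relu(x @ w)
    return x @ layers[-1]        # raw scores for 3 hypothetical classes

scores = forward(rng.standard_normal(8))
print(scores.shape)  # (3,)
```

In a real system, a learning algorithm would adjust the weight matrices from data; here they stay random, since the point is only the layered structure.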
In January 2014, Google, which had begun investing massively in AI, paid more than $500 million to acquire a deep learning startup called DeepMind and its three founders: former chess prodigy and neuroscientist Demis Hassabis, machine learning researcher Shane Legg, and entrepreneur Mustafa Suleyman. Part of the appeal of the team: they would go on to develop the AlphaGo program.
AlphaGo – an AI program – had beaten Fan Hui, a professional Go player, 5-0. And it won while analyzing several orders of magnitude fewer positions than IBM’s Deep Blue. When AlphaGo beat a human being, it did not know it was playing, what a game means, or why humans like to play.
The first version of AlphaGo required human participation and a data set of 100,000 games to learn how to play. The next generation of the system was built to learn from square one. Just like a human player new to the game, this version – called AlphaGo Zero – would have to learn everything from scratch, entirely on its own, without a library of moves or even a definition of what the pieces did. Not only would the system make decisions – the results of calculations that could be explicitly programmed – but it would make choices that required judgment. This meant that the architects of DeepMind were harnessing an enormous amount of processing, even if they did not realize it. From this processing, Zero would learn the conditions, values, and motivations behind its decisions and choices during the game.
Zero competed against itself, improving and adjusting its decision layers on its own. Each game started with a few random moves, and after each win, Zero would update its systems and play again, optimizing what it had learned. It took only 70 hours of play for Zero to reach the same level of play that AlphaGo had when it defeated the best players in the world.
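The self-play loop described above can be caricatured in miniature. The sketch below is an illustrative assumption, far simpler than DeepMind's actual system: two copies of one agent play the toy game Nim (take 1 or 2 stones; whoever takes the last stone wins) and update a shared value table after every game, so the agent improves purely by playing itself.

```python
import random

# A minimal caricature of self-play, inspired by (but far simpler
# than) AlphaGo Zero. All numbers and names here are illustrative.
random.seed(1)
value = {}   # stones left -> estimated win chance for the player to move

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)      # occasional random exploration
    # Greedy: leave the opponent in the worst-valued state
    return min(moves, key=lambda m: value.get(stones - m, 0.5))

def self_play(stones=7, lr=0.2):
    history, player = [], 0
    while stones > 0:
        history.append((player, stones))
        stones -= choose(stones)
        player ^= 1
    winner = player ^ 1                  # the player who took the last stone
    for p, s in history:                 # nudge visited states toward the result
        old = value.get(s, 0.5)
        target = 1.0 if p == winner else 0.0
        value[s] = old + lr * (target - old)

for _ in range(5000):
    self_play()

# In this game, 7 stones is a winning position for the player to move,
# while 6 stones is a losing one; self-play discovers this on its own.
print(round(value[7], 2), round(value[6], 2))
```

AlphaGo Zero replaces the little value table with a deep neural network and the greedy rule with tree search, but the loop – play yourself, score the outcome, update, repeat – is the same shape.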
Chapter 2 – The Isolated World of AI Tribes
What are the AI tribes doing? They are developing narrow artificial intelligence (ANI) systems capable of performing a specific task at the same level or better than us humans. The commercial applications of ANI – and hence the tribe – are already making decisions on our behalf from our email inboxes, when we search for things on the Internet, take pictures with our phones, drive our cars, and apply for credit cards or loans. Tribes are also building what is to come: Artificial General Intelligence (AGI) systems that will perform generalized cognitive tasks because they are machines designed to think like us. But who, exactly, is the “we” from which these AI systems feed? What values, ideals, and worldviews are being taught to these systems?
In North America, the emphasis within universities has been on technical skills – mastery of the programming languages R and Python, know-how in natural language processing and applied statistics, and grounding in computer vision, computational biology, and game theory. Taking classes outside the tribe – a course on the philosophy of mind, Muslim women in literature, or colonialism – is not well regarded. Yet if we are trying to build thinking machines capable of thinking like humans, it is nonsense to exclude learning about the human condition. Right now, courses like these are deliberately left out of the syllabus, and they struggle to find space even as electives.
The nine titans of AI are partners with these universities, which in turn depend on their resources and funding. It strikes me, however, as a good time to ask “who is in charge?” – a question that should be discussed in the safe confines of a classroom before students become members of teams that are constantly squeezed by product deadlines and revenue targets.
Baidu, Alibaba, and Tencent, known collectively as BAT, are the Chinese part of the great AI titans. The AI tribe operating under the aegis of the People’s Republic of China abides by different rules and rituals, among them substantial government funding, oversight, and industry policies designed to boost BAT.
The US slice of the nine titans – Google, Microsoft, Amazon, Facebook, IBM, and Apple – is creative, innovative, and responsible for the greatest advances in AI. These companies function like a mafia in the purest sense of the word (and not pejoratively): a closed super-network of people with similar interests and backgrounds working in a field that has a controlling influence over our futures. These North American companies will be referred to as the G-Mafia.
The AI model of North American consumerism is not at all evil. Neither is China’s government-centralized model. AI itself is not necessarily harmful to society. However, the G-Mafia is composed of publicly traded, for-profit companies that must answer to Wall Street, regardless of the altruistic intentions of their leaders and employees. In China, the BAT tribe is beholden to the Chinese government, which has already decided what is best for the Chinese. What I want to know – and what you should demand an answer to – is: what is best for all of humanity? As AI matures, how will the decisions we make today be reflected in the decisions that machines will make on our behalf in the future?
Chapter 3 – Paper cuts: the unintended consequences of AI
Have you ever wondered why AI systems are not more transparent? Have you ever thought about which data sets – including your own personal data – are being used to help AI learn? Under what circumstances is AI being taught to make exceptions? How are developers balancing the commercialization of AI with basic human needs, such as privacy, security, belonging, self-esteem, and self-actualization? What are the AI tribes’ moral imperatives? What is their notion of right and wrong? Are they teaching AI empathy? (And, by the way, would trying to teach AI human empathy be a useful and noble aim?)
Consider the case of Dr. David Dao. An automated system selected four people to give up their seats on an overbooked United Airlines flight, among them Dao and his wife, also a doctor. Dao explained to the flight attendant that he had patients to see the next day. While the other passengers complied, Dao refused to disembark, and Chicago Department of Aviation officers threatened him with arrest if he did not leave. No doubt you know what happened next, because video of the incident went viral on Facebook, YouTube, and Twitter and ran for days in newspapers around the world: the officers grabbed Dao’s arms, forcibly pulled him from his seat, and slammed him against an armrest, breaking his glasses and cutting his mouth. The incident was traumatic. How do you explain it? Boarding on most of the world’s airlines, including United, is automated: algorithms sort passengers by group and number. United’s system decided that there were not enough seats and calculated compensation for the passengers selected for removal. If a passenger did not comply, the system recommended calling airport security. In other words, the algorithm does not see the context of a decision. And it has no notion of absurdity.
Knowing that we cannot formulate a set of strict commandments to follow, should we instead focus our attention on the humans who develop these systems? These people – the AI tribes – should ask themselves uncomfortable questions:
What is our motivation for AI? Is it aligned with the long-term interests of humanity?
What fundamental rights should we establish to interrogate the data sets, algorithms, and processes that are used to make decisions on our behalf?
Should we continue to compare AI to human thought or is it better to categorize it as something different?
Is there a problem with architecting an AI that recognizes and responds to human emotion?
PART II – Our Futures
Chapter 4 – From the Present Day to Artificial Superintelligence: The Signs of the Times
Whereas many intelligent people advocate AI for the public good, we are not yet discussing artificial intelligence as a public good. This is a mistake. We are in the prelude to the modern evolution of AI and cannot continue to think of it as a platform built by the nine AI titans for e-commerce, communications, and cool apps. Failing to treat AI as a public good – as we do the air we breathe – will lead to serious and insurmountable problems. Treating AI as a public good does not prevent the G-Mafia from profiting and growing; it simply means changing our thinking and expectations. Sooner or later, we will no longer have the luxury of debating and analyzing automation in the context of human rights and geopolitics, because AI will have become too complex for us to rein in and mold into something we would prefer.
With AI, anyone can develop a new product or service, but cannot easily deploy it without the help of the G-Mafia. People must use Google’s TensorFlow, Amazon’s recognition algorithms, Microsoft’s Azure for hosting, IBM’s chip technology, or any of the other AI frameworks, tools, and services that keep the ecosystem closed. In practice, the future of AI in the United States is not dictated by the terms of a truly “free” market. Since the future has not yet happened, we cannot know with certainty all the possible outcomes of our present actions. The scenarios presented in the next chapters therefore describe the next 50 years under different framings. The first is an optimistic scenario, asking what would happen if the nine AI titans decided to promote radical changes to ensure that AI benefits all of us. There is an important distinction to note: “optimistic” scenarios are not necessarily prosperous or positive, and they do not always lead to utopia. In an optimistic scenario, we assume that the best possible decisions are made and that any obstacles to success are overcome.
The second is a pragmatic scenario describing what the future would look like if the AI titans made only negligible improvements in the short term. We assume that, while all major stakeholders recognize that AI is on a problematic course, no one collaborates to create meaningful and lasting change. Some universities introduce mandatory ethics classes; the G-Mafia enters industry partnerships to combat risk, yet makes no progress within its own company cultures; our elected officials focus on their upcoming election cycles and are dismissive of China’s grand plans. A pragmatic scenario does not assume big changes – it recognizes the stubborn reality of our human reluctance to improve. It also recognizes that, in business and government, leaders tend to neglect the future in favor of immediate short-term gains.
Finally, the doomsday scenario explains what happens if the signs of the times are ignored: we fail to actively plan for the future, and the nine AI titans continue to compete with one another. If we choose to double down on the status quo, where would that lead us? What happens if AI continues to tread the existing paths of the United States and China? The systemic change required to avert the catastrophic scenario is painstaking, time-consuming work that has no finish line. This makes the catastrophic scenario extremely terrifying, and its specifics are disturbing. Because, it seems, at this point, the doomsday scenario is the one on track to come to pass.
Chapter 5 – Prospering in the Third Computing Age: The Optimistic Scenario
Neither AI nor its funding is politicized; everyone agrees that regulating the G-Mafia and AI is not the appropriate measure. Cumbersome, irrevocable regulations would be outdated the moment they came into effect; they would prevent innovation from flourishing and be difficult to put into practice. With bipartisan support, Americans unite in favor of increased federal AI investment, using China’s public program as inspiration. Funding flows into research and development, economic and workforce impact studies, social impact studies, diversity programs, medical and public health initiatives, and infrastructure, with the intent that U.S. public education will regain its former glory, with competitive teacher salaries and curricula that prepare people for an automated future. We stop assuming that the G-Mafia can serve its Washington and Wall Street masters equally, and that free markets and our entrepreneurial spirit will produce the best possible results for AI and humanity.
Unlike the homogeneous group of men from similar fields who were part of the first Dartmouth seminar, this time the leaders and experts integrate a wide range of people and worldviews. By standing on the same sacred ground where modern artificial intelligence was born, these leaders agree on enabling and cooperating on shared AI initiatives and policies. Drawing inspiration from Greek mythology and the ancient figure of Mother Earth, they envision GAIA: Global Alliance on Intelligence Augmentation.
2029: Satisfactory nudging process
With the collaboration between the G-Mafia and GAIA resulting in many new trade agreements, citizens around the globe have better and cheaper access to products and services with ANI technology. The GAIA alliance meets periodically, prizing transparency in its work, while its multinational working groups keep up satisfactorily with the pace of technological advancement.
2049: The Rolling Stones are dead (but still writing new music)
In the mid-2030s, researchers working inside the G-Mafia published an interesting paper, notable both for what it revealed about AI and for how the work had been completed. Working from a shared set of standards and supported by generous funding (and patience) from the federal government, the researchers collaborated to advance AI. As a result, the first artificial general intelligence (AGI) system was developed.
2069: Guardians of the Galaxy Powered by AI
Soon, GAIA will implement a series of guardian AIs that will act as an early warning system for any AGI that has acquired too much cognitive power. While guardians will not necessarily prevent a rogue person from attempting to create AGIs on their own, GAIA is developing scenarios in order to prepare for this eventuality. We place our unshakable esteem and trust in GAIA and the nine AI titans.
Chapter 6 – Learning to live with paper cuts: the pragmatic scenario
AI models and frameworks, big or small, need lots of data to learn, improve, and be deployed. Data is like the oceans of our world: it surrounds us, is an inexhaustible resource, and is of no use to us unless it is desalinated, treated, and processed for consumption. At the moment, only a few companies can effectively desalinate it at scale. Because of this, the most troubling part of building a new AI system is not the algorithms or models, but collecting the right data and cataloging it properly so that a machine can start training and learning from it. For many of the products and services that the nine AI titans are working exhaustively to build, there are few data sets ready to go.
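A tiny sketch can show why this "desalination" step dominates the work. The records and label names below are invented for illustration: before any model can train, raw data must be cleaned, deduplicated, and catalogued under a consistent labeling scheme.

```python
# Hypothetical raw product reviews, as they might arrive: inconsistent
# casing, stray whitespace, duplicates, and missing labels.
raw = [
    {"text": "Great battery life ", "label": "Positive"},
    {"text": "great battery life",  "label": "positive"},  # duplicate
    {"text": "Screen cracked fast", "label": "negative"},
    {"text": "",                    "label": "negative"},  # unusable
    {"text": "Too heavy",           "label": None},        # unlabeled
    {"text": "Works fine",          "label": "positive"},
]

def catalogue(records):
    seen, clean, labels = set(), [], {}
    for r in records:
        text = r["text"].strip().lower()
        if not text or r["label"] is None:
            continue                            # drop unusable rows
        if text in seen:
            continue                            # drop duplicates
        seen.add(text)
        label = r["label"].lower()
        labels.setdefault(label, len(labels))   # stable label -> index map
        clean.append((text, labels[label]))
    return clean, labels

data, label_index = catalogue(raw)
print(len(data), label_index)   # 3 usable examples across 2 classes
```

Half of the six raw records survive. At real scale, this cleaning and cataloguing is exactly the labor-intensive step the chapter argues only a few companies can do well.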
2029: Learned helplessness
The two operating systems have caused fierce competition between members of the AI tribes, who did not plan ahead for gigantic interoperability problems. It turns out that, hardware aside, people are not interoperable across the two operating systems. The transience that was once characteristic of Silicon Valley – seasoned engineers, operations managers, and designers jumping from company to company without any real sense of commitment – has long since disappeared. Instead of bringing us together, AI has effectively and efficiently separated us. It is a painful issue for the United States as well, which has been forced to choose a framework (like most other governments, the United States adopted Applezon – a merger of Apple and Amazon – over Google, because Applezon offered more affordable prices and included discounted office supplies).
Around the globe, people are talking about “learned helplessness” in the age of AI in the US. We can’t do anything without our automated systems, which constantly nudge us with positive or negative feedback. We try to blame the nine AI titans, but really, we have only ourselves to blame.
2049: And Now There Were Five
Americans are learning to live with low but constant levels of anxiety. In the United States, the national feeling of unease is repeatedly compared to the threats of nuclear war in the 1960s and 1980s. Only this time, Americans are not sure what exactly they are afraid of. They don’t know whether their PDRs (Personal Data Records) are protected or what personal data China may have access to. They are not sure how deeply Chinese government hackers are infiltrating US infrastructure systems. People often wake up late at night wondering what China knows about them, the route they take to work, the gas lines that feed their homes, and what they are planning with all this information.
2069: The digitally occupied United States
We realize that China has indeed developed a generation of AGIs with capabilities never seen before. Without guardian AGIs to watch over the rogue ones, China has been able to develop and deploy a terrifying system to control the majority of the population on Earth. If we do not comply with Chinese demands, our communication systems are cut off. If we do not give the Chinese Communist Party access to our open data channel, it paralyzes our entire infrastructure – power plants, air traffic control, and more.
Chapter 7 – The Réngong Zhinéng Dynasty: The Doomsday Scenario
U.S. government leaders do not spend enough time educating themselves about what AI is, what it is not, and why it is important. Aside from the usual talk about how AI hurts productivity and jobs, people in Washington make absolutely no attempt to engage the G-Mafia in serious discussions about other pressing AI-related issues, such as national security, geopolitical balance, risks and opportunities for general-purpose artificial intelligence, or the intersection of AI in other fields (such as genomics, agriculture, and education).
2029: External and internal digital lock-in
Since interoperability is still a weak point of the artificial intelligence ecosystem in the West, by 2035 we have effectively established a system of segregation. Our devices are connected to Google, Apple, or Amazon, and so we usually buy only the products and services offered by one of these three companies. Since the data in our hereditary PDRs is owned and managed by one of these companies – which have also sold us all the AI-enabled things in our homes – we are Google, Apple, or Amazon families. It is a designation accompanied by unintended bias.
2049: Biometric boundaries and nanobot abortions
Now the G-Mafia has been reduced to the GAA: Google, Apple, and Amazon. Facebook was the first to declare bankruptcy, and the remnants of Microsoft and IBM were acquired by Google. It is the centennial of the Chinese Communist Revolution and of Mao Zedong’s proclamation of the People’s Republic of China (PRC). Celebrations are planned to honor the late Xi Jinping and the rise of what is being called the Réngong Zhinéng (AI) dynasty.
The laws of the GAA countries were rendered useless once the AGIs improved themselves and developed the kind of capability that decides who among us lives or dies. Banning nanobots is no use either: it would mean a return to the conventional practice of medicine, and we no longer have large pharmaceutical companies manufacturing all the medicines we need. Even the most optimistic projections show that getting our old health care systems up and running again would take a decade or more – and in the meantime millions of people would suffer greatly from a wide variety of diseases.
2069: Digital extinction
China has developed an ASI (artificial superintelligence) with a single purpose: to exterminate the populations of the United States and its allies. China needs what is left of the Earth’s resources, and Beijing has calculated that the only way to survive is to take those resources away from the United States.
You have witnessed something far worse than any bomb ever created. Bombs are instantaneous and fast. Extermination by AI is slow and uncontrollable. You feel helpless as the bodies of your children lose their life force in your arms. You watch your co-workers collapse at their desks. You feel a sharp pain. You are dizzy. You try to take your last breath.
It is the end of the United States.
It is the end of America’s allies.
It is the end of democracy.
It is the beginning of the rise of the Réngong Zhinéng dynasty. It is inhuman, irrevocable, and absolute.
PART III – Solving the Problems
Chapter 8 – Rocks and Boulders: How to Solve the Future of AI
AI can empower us to unravel and answer humanity’s greatest mysteries – for example, where and how life originated. And in the process, it can fascinate and entertain us, creating virtual worlds never before imagined, composing music that inspires us, and enabling new experiences that are fun and rewarding. But none of this will happen without planning and a commitment to hard work and courageous leadership within all AI stakeholder groups.
The best way to architect systemic change is to materialize the creation of GAIA as soon as possible, and it should be physically located in a neutral area near an existing AI hub. The best location for GAIA is Montreal, Canada. First, Montreal is home to a dense concentration of deep learning researchers and laboratories. If we assume that the transition from ANI to AGI will run through deep learning and deep neural networks, then GAIA should be based where much of the next-generation work is happening. Second, under the administration of Prime Minister Justin Trudeau, the Canadian government has already committed people and funds to exploring the future of AI. Third, Canada is geopolitically neutral territory for AI – far from both Silicon Valley and Beijing.
GAIA must consider a rights system that balances individual freedoms with the greater, global good:
Humanity must always be at the center of AI development.
AI systems should be reliable and safe. We should be able to analyze their safety and security independently.
The nine AI titans – including their investors, employees, and the governments they work with – should prioritize safety over speed. Any team working on an AI system – even those not part of the nine titans – must not cut corners for the sake of speed. Safety needs to be easily demonstrable and visible to outsiders.
If an AI system causes damage, it must be able to report what went wrong, and there must be a governance process to analyze and mitigate the damage.
To the extent possible, PDRs should be protected from being used to enable totalitarian regimes.
The nine titans of AI should develop a process to assess the ethical implications of research, workflows, projects, partnerships, and products, and this process should be interwoven into most job functions in companies. As a sign of trust, the nine titans should publicize this process so that we can all better understand how decisions are made regarding our data.
Collaboratively or individually, the nine titans should draft a code of conduct specifically for their AI employees. It should reflect the fundamental human rights outlined by the GAIA alliance, but also weigh each company’s unique culture and corporate values. And, in the event that someone violates this code, a clear and secure reporting channel should be available to team members.
Factsheet:
Title: The Big Nine
Author: Amy Webb
Review: Rogério H. Jönck
Images: reproduction and Unsplash