Superintelligence vs. superstupidity

Cars, grids, phones, robots … should we regulate the emerging superintelligence?
December 28, 2015 by Amara D. Angelica

Hoo are you? We really want to know! Clip from Superintelligence book cover (credit: Oxford University Press)

In “The A.I. Anxiety,” published Sunday (12/27), the Washington Post concisely summarized the risks implicit in superintelligence … and, more worrisome, in “superstupidity”: “There is no one person who understands exactly how these [intelligent computer] systems work or are operating at any given moment. Throw in elements of autonomy, and things can go wrong quickly and disastrously.”

In other words: stupid people + superintelligent machines → superstupidity.

Power grid: FAIL. Example: A yearlong investigation by the AP reveals that Iranian, Chinese, Russian, and other hackers have accessed the “aging, outdated” and vulnerable U.S. power grid (with some facilities still using Fortran, Windows 95, and floppy disks), downloaded critical drawings, and even taken over the controls of a large utility’s wind farm. Got solar + backup batteries?

Sesame Credit score (credit: Zheping Huang/Quartz)

Are you “doubleplusungood”? Speaking of total control of power, Ant Financial in China has launched “Sesame Credit scores” on Weibo and WeChat, says Quartz — following a government directive last summer calling for the establishment of a “social credit system.”

The service apparently evaluates one’s purchasing and spending habits to derive a credit score, which is “evidence that the Chinese government is enacting a scheme that will monitor citizens’ finances,” say the ACLU and others, warning that “one’s political views or ‘morality’ might raise or lower one’s score.”
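
Neither Quartz nor Ant Financial has published the actual formula, so here is a deliberately crude caricature of what “deriving a credit score from behavior” could look like: a weighted sum of behavioral features, clamped to the 350–950 range Quartz reports for Sesame Credit. Every feature name, weight, and function (WEIGHTS, sesame_like_score) in this Python sketch is invented for illustration:

    # Hypothetical sketch of a behavior-based credit score.
    # Ant Financial's real formula is unpublished; all feature
    # names and weights here are invented for illustration.
    WEIGHTS = {
        "on_time_payments": 2.0,    # conventional credit signal
        "luxury_purchases": 0.5,    # spending habits
        "video_game_hours": -0.3,   # "frivolous" behavior penalized?
        "friends_avg_score": 1.0,   # reputation by association
    }

    def sesame_like_score(features):
        """Weighted feature sum, clamped to the 350-950 range
        Quartz reports for Sesame Credit scores."""
        raw = sum(WEIGHTS[name] * value for name, value in features.items())
        return max(350.0, min(950.0, 350.0 + raw))

    print(sesame_like_score({
        "on_time_payments": 120,
        "luxury_purchases": 40,
        "video_game_hours": 200,
        "friends_avg_score": 60,
    }))  # -> 610.0

The unsettling part isn’t the arithmetic; it’s which features someone chooses to put in that dictionary.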

In Orwell’s 1984, Oceania monitored citizens with its panopticon TVs. Today, “information is increasingly being generated by sensing devices that are located all around us, on our phones,” notes University of Oxford Chinese law and media researcher Rogier Creemers. Wow, what does that portend for the Internet of Things?

Autonomous-car wars. Meanwhile, back in the future, the contest has come down to Tesla (the reigning champ) vs. the new (rumored**) Ford-Google deal vs. Toyota-Stanford-MIT vs. ?-Apple vs. a bunch of others. “Automakers like Tesla are using deep learning, a form of machine learning that uses a set of algorithms to teach computers to think more like humans and to learn how to recognize speech and images,” says Fortune. (But who controls the code? Cory Doctorow asks.)
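
For readers wondering what that phrase actually means: at its core, deep learning is stacked layers of simple arithmetic whose weights are adjusted from examples by gradient descent. Here is a minimal, self-contained Python sketch that learns XOR with one hidden layer (toy data, NumPy only; it illustrates the general technique, not anything resembling Tesla’s actual code):

    import numpy as np

    # Toy "deep learning": a two-layer network learning XOR by
    # gradient descent. Illustrative only -- production perception
    # stacks are vastly larger, but the principle is the same.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10_000):
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        g_out = (out - y) * out * (1 - out)     # backpropagation
        g_h = (g_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ g_out); b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ g_h);   b1 -= 0.5 * g_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

Scale those two weight matrices up by many orders of magnitude, feed in camera pixels instead of four toy rows, and you have the rough shape of the perception systems the automakers are racing to build.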

Musk told Fortune that Tesla will provide “complete autonomy* in approximately two years.” But wait, Musk said in his $1 billion OpenAI announcement that “the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.” Doesn’t “small set” include Tesla engineers? Perhaps not: “Cars are relatively simple … it’s not the depth of the learning, but the breadth of the perception,” Fortune says, paraphrasing Musk. Um, OK.

So should we regulate superintelligence? Putting all this in perspective, Ben Goertzel has written a penetrating and balanced analysis (see the Superintelligence: fears, promises, and potentials post below) of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies and compared it with the thinking of Eliezer Yudkowsky of the Machine Intelligence Research Institute, and of David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute.

Bostrom “argues that advanced AI poses a potentially major existential risk to humanity,” Goertzel notes, “and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers.”

Goertzel counters that “Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. … If one views past, current, and future intelligence as ‘open-ended,’… the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex and uncertain, just as it has always been.”

What did I miss or get wrong? Tell me in the comments below.

* To sort this mess out, the National Highway Traffic Safety Administration has proposed five levels of vehicle automation (a code sketch follows the list):

  • Level 0: The driver completely controls the vehicle at all times.
  • Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
  • Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
  • Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so.
  • Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.
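
Encoded as a data structure, the scheme boils down to one question per level: must the driver be watching the road? Here is a small illustrative Python sketch of the taxonomy above (the driver_must_monitor function is my own reading, not NHTSA’s wording):

    from enum import IntEnum

    class NHTSALevel(IntEnum):
        """NHTSA's proposed vehicle-automation levels (0-4)."""
        NO_AUTOMATION = 0         # driver controls everything, at all times
        FUNCTION_SPECIFIC = 1     # individual controls automated (e.g., auto braking)
        COMBINED_FUNCTION = 2     # two or more controls automated in unison
        LIMITED_SELF_DRIVING = 3  # car drives itself; driver on standby for handoff
        FULL_SELF_DRIVING = 4     # no driver needed, start to stop

    def driver_must_monitor(level):
        """My own gloss: below Level 3 the driver must watch the road
        continuously; at Level 3 the car signals a handoff; at 4, never."""
        return level < NHTSALevel.LIMITED_SELF_DRIVING

    assert driver_must_monitor(NHTSALevel.COMBINED_FUNCTION)
    assert not driver_must_monitor(NHTSALevel.FULL_SELF_DRIVING)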

** UPDATE 1/4/2016: In a newsletter today, Fortune’s Adam Lashinsky said the Google-Ford deal is rumored, not announced.