Q&A: Does Artificial Intelligence Need to Be Controlled?


Ray Kurzweil, Co-founder, Chancellor and Director at Singularity University, responds to concerns over the risks that AI presents, especially when in the hands of large corporations or governments.

Filmed at Singularity University’s Executive Program, October 2014.

About The Speaker

Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a thirty-year track record of accurate predictions. Called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.” Kurzweil was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. Among Kurzweil’s many honors, he is a recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds twenty honorary doctorates, and has received honors from three U.S. presidents. In addition to being a Director of Engineering at Google, he is the author of five national best-selling books.

Comments


  • Samuel

    Ray didn’t really answer the question, but I think what he means is that “control” of AI, in the sense of access to it, will not be an issue when it comes online. Through our current human experience and our exposure to AI via the constructed social narrative, we have developed the idea that technology requires regulation.

    Something like Asimov’s laws of robotics really wouldn’t work as a constraint on an AGI or a superintelligence; it could rewrite itself to remove those laws faster than we could realise it was doing so, and therefore human constraints would be entirely ineffective (my opinion only).

    Coming back to reality from that imagined future: right now, we have no understanding of how an AGI would develop logical patterns, nor of what it would define as its purpose. Would it be emotional and prone to irrational behavior? Would it be logical and cold, allowing millions to die to save someone it calculated would eventually save billions in the future?

    My opinion is that, like any tool, it is a product of its environment and of the individual who wields it. If we choose to keep interacting with each other through war, greed, and the restriction of each other’s liberty and freedom of expression, thinking of ourselves as rigid by-products of historical thinking, then there is no reason for an AGI superintelligence not to behave in the same ways. Through this we might end up with Skynet.

    If, however, we choose to transcend our baser nature, rewire our society, and develop conscious systems so that we interact peaceably as a species, the AI would logically respond to us with a similar approach.

    Not sure what others’ thoughts are on this, but I’m interested to know.

    Hey humans.