• Smarter-than-human intelligence. That's all. Whether it's created through Artificial Intelligence, Brain-Computer Interfacing, neurosurgery, genetic engineering, or whatever—the Singularity is the point at which our ability to predict the future breaks down because a new character is introduced that is different from all prior characters in the human story. That character is greater-than-human intelligence, whether an enhanced human or a robot. (loc. 48-51)
  • Because it is a non-intelligent process—the unavoidable reality that conditions will always favor some designs over others—evolution by natural selection has to break many, many eggs in order to make an omelet. When we marvel at the swiftness of the cheetah, we do not see the billions of ancestral cousins that weren’t quite fast enough. When we delight in the vibrant plumage of many birds, we do not see the loveless flocks of bachelors that weren’t quite attractive enough. (loc. 243-46)
  • We presently live in a beautiful-but-indifferent world where death and hardship are the norm. Adversity is, after all, the driving force behind natural selection. But as if that weren’t enough, evolution has tragically engineered us not to experience lasting happiness, but to restlessly tend insatiable appetites in the service of our genes. (loc. 289-91)
  • If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime. But if you discovered a simple way to make food optional, you, the man, and the fish could move on to other pursuits. (loc. 460-62)
  • Our lives are small temporally as well as spatially: if this 14-billion-year cosmic history were scaled to one year, then 100,000 years of human history would be 4 minutes and a 100-year life would be 0.2 seconds. Further deflating our hubris, we've learned that we're not that special either. Darwin taught us that we're animals, Freud taught us that we're irrational, machines now outpower us, and just last month, Deep Fritz outsmarted our chess champion Vladimir Kramnik. Adding insult to injury, cosmologists have found that we're not even made out of the majority substance. (loc. 746-50) (See the arithmetic check after this list.)
  • I believe that consciousness is, essentially, the way information feels when being processed. (loc. 756)
  • To put it in a single sentence, advanced AI is potentially dangerous because only a minority of cognitively possible goal sets place a high priority on the continued survival of human beings and the structures we value. (loc. 876-78)
  • Most AI researchers do not comprehend the magnitude of the “Friendly AI problem” because no one wants to have to take the responsibility of creating the first truly intelligent artificial being. They just want to play with their program and ignore the long-term consequences. (loc. 900-902)
  • An AI that is very powerful but does not understand the subtleties of human morality will eventually kill many people, by accident or deliberately, in the course of accomplishing its goals. Unless people are specifically valued, they will be killed for the sake of some higher goal. It will not have hard feelings; it will just find the elimination of certain humans (eventually all of them) instrumentally useful for its goals. (loc. 964-66)
  • The Singularity is not necessarily extreme life extension, human brain scanning, accelerating technological progress, blasting off in every direction at the speed of light (part of Ray Kurzweil's definition), etc. The Singularity is smarter-than-human intelligence. That's it. Nothing else. (loc. 1271-73)
  • To disagree with the idea that this sort of Singularity is possible, you have to believe that human intelligence is the maximum possible level of intelligence permitted by the physical universe. And that is just absurd. (loc. 1276-77)
  • This could be used to build advanced robotic components with atomic precision, including neutron bombs and delivery mechanisms. If you figure out a way to defend against neutron bombs, the AI will just build a robot that walks up to you and stabs you in the heart with a diamond spike. (loc. 1460-62)
  • Consciousness is interesting to think about, but it can be a red herring. Too often, sophisticated-sounding arguments about consciousness and its relationship to AI boil down to one simple and ultimately boring sentiment: “I know I am conscious, and I know other humans are, but I am philosophically uncomfortable with the idea of a conscious machine.” (loc. 1496-98)
  • You are still special even though your mind is non-magical—don't worry. We humans have survived Copernican revolutions before; we'll manage. Our civilization didn't end when we found out that the Earth wasn't the center of the universe. It won't end when we realize that humans are not the only minds that can feel things consciously. (loc. 1500-1502)
  • This is anthropocentrism at work. It's basically humanity being a big baby and saying “me, me, me”. Everything is about me. To be intelligent, an entity needs my emotions, my desires, my concerns, my relationships, my insecurities, my personal quirks. No it doesn't. Humans are just one possible intelligence in a galaxy of possible intelligences. Get over yourself. (loc. 1516-20)
  • The sense of control we get when we think of sparking the Singularity with a human is an illusion. For true control over the outcome of the Singularity, we need Friendly AI. (loc. 1733-34)
  • A planet full of attractive people would do a lot to improve our quality of life. I don't care if that sounds superficial, it's the truth. You know it. (loc. 1823-24)
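A quick back-of-the-envelope check of the cosmic-calendar scaling quoted above (loc. 746-50). This is only an illustrative sketch: the 14-billion-year age of the universe comes from the quote itself, while the 365.25-day year and the function name are my own assumptions for the calculation, not anything from the book.

```python
# Cosmic-calendar scaling: compress 14 billion years onto a single calendar year,
# then ask how long human history and one human life last on that scale.

COSMIC_AGE_YEARS = 14e9                  # age of the universe used in the quote
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # assumed length of the scaled "year"

def scaled_seconds(real_years: float) -> float:
    """Seconds that a real-world duration occupies on the one-year scale."""
    return real_years / COSMIC_AGE_YEARS * SECONDS_PER_YEAR

human_history = scaled_seconds(100_000)  # ~225 s, i.e. roughly 4 minutes
human_life = scaled_seconds(100)         # ~0.23 s, i.e. roughly 0.2 seconds

print(f"100,000 years of human history -> {human_history / 60:.1f} minutes")
print(f"A 100-year life -> {human_life:.2f} seconds")
```

Run as written, this prints about 3.8 minutes and 0.23 seconds, which rounds to the quote's figures of 4 minutes and 0.2 seconds.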