  • From this perspective, reverse-engineering the human brain may be regarded as the most important project in the universe. (p. 5)
  • If understanding language and other phenomena through statistical analysis does not count as true understanding, then humans have no understanding either. (p. 7)
  • The operating principle of the neocortex is arguably the most important idea in the world, as it is capable of representing all knowledge and skills as well as creating new knowledge. (p. 8)
  • We often misrecognize people, things, and words because our threshold for confirming an expected pattern is too low. (p. 52)
  • Natural selection does nothing even close to striving for intelligence. The process is driven by differences in the survival and reproduction rates of replicating organisms in a particular environment. Over time, the organisms acquire designs that adapt them for survival and reproduction in that environment, period; nothing pulls them in any direction other than success there and then. (p. 76)
  • When scientists have thought about the pathways of the brain for the last hundred years or so, the typical image or model that comes to mind is that these pathways might resemble a bowl of spaghetti—separate pathways that have little particular spatial pattern in relation to one another. Using magnetic resonance imaging, we were able to investigate this question experimentally. And what we found was that rather than being haphazardly arranged or independent pathways, we find that all of the pathways of the brain taken together fit together in a single exceedingly simple structure. They basically look like a cube. They basically run in three perpendicular directions, and in each one of those three directions the pathways are highly parallel to each other and arranged in arrays. So, instead of independent spaghettis, we see that the connectivity of the brain is, in a sense, a single coherent structure. (p. 82)
  • Although we experience the illusion of receiving high-resolution images from our eyes, what the optic nerve actually sends to the brain is just a series of outlines and clues about points of interest in our visual field. We then essentially hallucinate the world from cortical memories that interpret a series of movies with very low data rates that arrive in parallel channels. (p. 94)
  • As we have seen, it is not just a metaphor to state that there is information contained in our neocortex, and it is frightening to contemplate that none of this information is backed up today. There is, of course, one way in which we do back up some of the information in our brains—by writing it down. The ability to transfer at least some of our thinking to a medium that can outlast our biological bodies was a huge step forward, but a great deal of data in our brains continues to remain vulnerable. (p. 123)
  • I would not expect such an “uploading” technology to be available until around the 2040s. (p. 127)
  • In our digital brain we would also back up old memories before discarding them from the active neocortex, a precaution we can’t take in our biological brains. (p. 174)
  • I would also provide a critical thinking module, which would perform a continual background scan of all of the existing patterns, reviewing their compatibility with the other patterns (ideas) in this software neocortex. We have no such facility in our biological brains, which is why people can hold completely inconsistent thoughts with equanimity. (p. 176)
  • This critical thinking module would run as a continual background task. It would be very beneficial if human brains did the same thing. (p. 176)
  • The human brain appears to be able to handle only four simultaneous lists at a time (without the aid of tools such as computers), but there is no reason for an artificial neocortex to have such a limitation. (p. 177)
  • Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. (p. 177)
  • As nonbiological brains become as capable as biological ones of effecting changes in the world—indeed, ultimately far more capable than unenhanced biological ones—we will need to consider their moral education. A good place to start would be with one old idea from our religious traditions: the golden rule. (p. 178)
  • In mathematics you don’t understand things. You just get used to them. —John von Neumann (p. 179)
  • There is considerable plasticity in the brain, which enables us to learn. But there is far greater plasticity in a computer, which can completely restructure its methods by changing its software. (p. 193)
  • Thus, in that respect, a computer will be able to emulate the brain, but the converse is not the case. (p. 193)
  • Von Neumann was deeply aware of the increasing pace of progress and its profound implications for humanity’s future. A year after his death in 1957, fellow mathematician Stan Ulam quoted him as having said in the early 1950s that “the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” This is the first known use of the word “singularity” in the context of human technological history. (p. 194)
  • British philosopher Colin McGinn (born in 1950) writes that discussing “consciousness can reduce even the most fastidious thinker to blabbering incoherence.” (p. 200)
  • If you were at a cocktail party and there were both “normal” humans and zombies, how would you tell the difference? Perhaps this sounds like a cocktail party you have attended. (p. 202)
  • English physicist and mathematician Roger Penrose (born in 1931) took a different leap of faith in proposing the source of consciousness, though his also concerned the microtubules—specifically, their purported quantum computing abilities. His reasoning, although not explicitly stated, seemed to be that consciousness is mysterious, and a quantum event is also mysterious, so they must be linked in some way. (p. 207)
  • If you do accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies: namely that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on. (p. 211)
  • The question as to whether or not an entity is conscious is therefore not a scientific one. (p. 211)
  • It is difficult to maintain that a few-days-old embryo is conscious unless one takes a panprotopsychist position, but even in these terms it would rank below the simplest animal in terms of consciousness. (p. 213)
  • Before brains there was no color or sound in the universe, nor was there any flavor or aroma and probably little sense and no feeling or emotion. —Roger W. Sperry (p. 218)
  • Evolution also moves toward greater complexity, greater knowledge, greater intelligence, greater beauty, greater creativity, and the ability to express more transcendent emotions, such as love. (p. 223)
  • While these observations certainly support the idea of plasticity in the neocortex, their more interesting implication is that we each appear to have two brains, not one, and we can do pretty well with either. (p. 225)
  • In each of these cases, one of the hemispheres believes that it has made a decision that it in fact never made. To what extent is that true for the decisions we make every day? (p. 229)
  • Philosopher Arthur Schopenhauer (1788–1860) wrote that “everyone believes himself a priori to be perfectly free, even in his individual actions, and thinks that at every moment he can commence another manner of life…. But a posteriori, through experience, he finds to his astonishment that he is not free, but subjected to necessity, that in spite of all his resolutions and reflections he does not change his conduct, and that from the beginning of his life to the end of it, he must carry out the very character which he himself condemns.” (p. 235)
  • Thus even though our decisions are determined (because our bodies and brains are part of a deterministic universe), they are nonetheless inherently unpredictable because we live in (and are part of) a class IV automaton. We cannot predict the future of a class IV automaton except to let the future unfold. (p. 239) A short code sketch of such an automaton appears after these notes.
  • Nonetheless I will continue to act as if I have free will and to believe in it, so long as I don’t have to explain why. (p. 240)
  • But when one paradigm runs out of steam (for example, when engineers were no longer able to reduce the size and cost of vacuum tubes in the 1950s), it creates research pressure to create the next paradigm, and so another S-curve of progress begins. (p. 255)
  • So it is with the law of accelerating returns: Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price/performance and capacity, nonetheless follows a remarkably predictable path. (p. 267)
  • Intelligence evolved because it was useful for survival—a fact that may seem obvious, but one with which not everyone agrees. (p. 277)
  • The last invention that biological evolution needed to make—the neocortex—is inevitably leading to the last invention that humanity needs to make—truly intelligent machines—and the design of one is inspiring the other. Biological evolution is continuing, but technological evolution is moving a million times faster than the former. (p. 281)
  • In either scenario, waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny. (p. 282)
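
The note from p. 239 rests on the idea that a class IV automaton is computationally irreducible: in general, the only way to know its state at some future step is to simulate every step in between. The sketch below runs Rule 110, a one-dimensional cellular automaton commonly cited as class IV; the width, step count, and random seed are illustrative assumptions, not values from the book.

```python
# Minimal Rule 110 simulation. Each cell's next value depends only on its
# three-cell neighborhood, yet the global pattern is neither periodic nor
# random, which is why long-range prediction amounts to running the automaton.
import random

RULE = 110   # the rule number encodes the next state for all 8 neighborhoods
WIDTH = 64   # illustrative choices, not values from the book
STEPS = 32

def step(cells):
    """Apply Rule 110 once, with wrap-around boundaries."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((RULE >> neighborhood) & 1)  # look up the corresponding rule bit
    return nxt

random.seed(0)
cells = [random.randint(0, 1) for _ in range(WIDTH)]
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Simple local rules, global unpredictability: to see row 1,000 you have to compute rows 1 through 999, which is the sense in which the future of such a system can only be "let unfold."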