• We don’t yet know whether we humans are the only stargazers in our cosmos, or even the first, but we’ve already learned enough about our Universe to know that it has the potential to wake up much more fully than it has thus far. (loc. 450-452)
  • Let’s instead define life very broadly, simply as a process that can retain its complexity and replicate. (loc. 484-484)
  • In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware. (loc. 486-488)
  • Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. (loc. 535-536)
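A rough sanity check on the 100-terabyte figure, assuming about 10^14 synapses each holding on the order of one byte of state (both inputs are order-of-magnitude assumptions, not figures from the highlight):

```latex
% Order-of-magnitude estimate; both inputs are assumptions.
\[
  \underbrace{\sim 10^{14}\ \text{synapses}}_{\text{assumed}}
  \;\times\;
  \underbrace{\sim 1\ \text{byte per synapse}}_{\text{assumed}}
  \;\approx\; 10^{14}\ \text{bytes} \;=\; 100\ \text{TB}.
\]
```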
  • Even though the information in our human DNA hasn’t evolved dramatically over the past fifty thousand years, the information collectively stored in our brains, books and computers has exploded. (loc. 543-545)
  • Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. (loc. 553-554)
  • All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles. (loc. 556-558)
  • The gist of the letter was that the goal of AI should be redefined: the goal should be to create not undirected intelligence, but beneficial intelligence. (loc. 675-676)
  • We’ll follow its subsequent progress later in the book. (loc. 677-678)
  • The real worry isn’t malevolence, but competence. (loc. 818-818)
  • One of the most spectacular developments during the 13.8 billion years since our Big Bang is that dumb and lifeless matter has turned intelligent. (loc. 914-915)
  • Intelligence = ability to accomplish complex goals. (loc. 928-929)
  • Comparing the intelligence of humans and machines today, we humans win hands-down on breadth, while machines outperform us in a small but growing number of narrow domains. (loc. 958-959)
  • So far, the smallest memory device known to be evolved and used in the wild is the genome of the bacterium Candidatus Carsonella ruddii, storing about 40 kilobytes, whereas our human DNA stores about 1.6 gigabytes, comparable to a downloaded movie. (loc. 1092-1094)
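Both numbers follow from counting two bits per DNA base pair (four possible bases), given genome lengths of roughly 160,000 base pairs for Carsonella ruddii and about 6.4 billion base pairs for the two copies of the human genome in each cell (the genome lengths are assumed round figures, not from the highlight):

```latex
% Two bits per base pair: {A, C, G, T} = 4 symbols = 2 bits.
% Genome lengths are assumed round figures.
\[
  1.6\times 10^{5}\ \text{bp}\times 2\ \tfrac{\text{bit}}{\text{bp}}
  \approx 3.2\times 10^{5}\ \text{bit} \approx 40\ \text{kB}
  \quad (\textit{Carsonella ruddii})
\]
\[
  6.4\times 10^{9}\ \text{bp}\times 2\ \tfrac{\text{bit}}{\text{bp}}
  \approx 1.28\times 10^{10}\ \text{bit} \approx 1.6\ \text{GB}
  \quad (\text{human, both chromosome copies})
\]
```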
  • Such memory systems are called auto-associative, since they recall by association rather than by address. (loc. 1105-1106)
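A minimal sketch of what recall-by-association means in practice, using a Hopfield-style network (the patterns, sizes and update rule are illustrative assumptions, not taken from the book): a few binary patterns are stored in the connection weights, and a corrupted cue settles back onto the nearest stored memory without anything ever being looked up by address.

```python
import numpy as np

# Auto-associative (Hopfield-style) memory: recall by content, not by address.
# All patterns and parameters below are illustrative assumptions.

def train(patterns):
    """Hebbian outer-product rule: the weights store the patterns implicitly."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)      # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Repeatedly update the state until it settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

# Two stored memories, encoded as +/-1 vectors.
memories = np.array([
    [1,  1,  1,  1, -1, -1, -1, -1],
    [1, -1,  1, -1,  1, -1,  1, -1],
])
w = train(memories)

# A noisy cue: the first memory with two entries flipped.
cue = np.array([1, 1, -1, 1, -1, -1, 1, -1])
print(recall(w, cue))           # settles back onto the first stored memory
```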
  • In summary, not only is it possible for matter to implement any well-defined computation, but it’s possible in a plethora of different ways. (loc. 1182-1183)
  • In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter. (loc. 1225-1226)
  • In other words, the hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms. (loc. 1226-1228)
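A toy illustration of that substrate independence (a hypothetical example, not from the book): the same computation, here a one-bit half-adder built entirely out of NAND operations, runs unchanged on two very different substrates, Python booleans and voltage-like floats. What makes them the same computation is the pattern of NANDs, not what the bits are made of.

```python
# One abstract computation (a half-adder wired from NAND gates), realized on
# two different "substrates". Hypothetical toy example.

def half_adder(nand, a, b):
    """A half-adder expressed purely as a pattern of NAND operations."""
    x = nand(a, b)
    sum_bit = nand(nand(a, x), nand(b, x))   # a XOR b
    carry = nand(x, x)                       # a AND b
    return sum_bit, carry

# Substrate 1: Python booleans.
def nand_bool(a, b):
    return not (a and b)

# Substrate 2: "voltages" as floats, where anything above 2.5 V counts as 1.
def nand_volts(a, b, high=5.0, low=0.0, threshold=2.5):
    return low if (a > threshold and b > threshold) else high

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s1, c1 = half_adder(nand_bool, bool(a), bool(b))
    s2, c2 = half_adder(nand_volts, 5.0 * a, 5.0 * b)
    # The two substrates implement the same pattern, so the answers agree.
    assert (int(s1), int(c1)) == (int(s2 > 2.5), int(c2 > 2.5))
    print(a, b, "->", int(s1), int(c1))
```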
  • The ability to learn is arguably the most fascinating aspect of general intelligence. (loc. 1297-1297)
  • This helps explain not only why neural networks are now all the rage among AI researchers, but also why we evolved neural networks in our brains: if we evolved brains to predict the future, then it makes sense that we’d evolve a computational architecture that’s good at precisely those computational problems that matter in the physical world. (loc. 1401-1404)
  • AI researchers have often been accused of over-promising and under-delivering, but in fairness, some of their critics don’t have the best track record either. (loc. 1471-1473)
  • As technology grows more powerful, we should rely less on the trial-and-error approach to safety engineering. In other words, we should become more proactive than reactive, investing in safety research aimed at preventing accidents from happening even once. (loc. 1737-1739)
  • These energy and transportation accidents teach us that as we put AI in charge of ever more physical systems, we need to put serious research efforts into not only making the machines work well on their own, but also into making machines collaborate effectively with their human controllers. (loc. 1866-1868)
  • According to a U.S. government study, bad hospital care contributes to over 100,000 deaths per year in the United States alone, so the moral imperative for developing better AI for medicine is arguably even stronger than that for self-driving cars. (loc. 1893-1896)
  • The reason that the Athenian citizens of antiquity had lives of leisure where they could enjoy democracy, art and games was mainly that they had slaves to do much of the work. But why not replace the slaves with AI-powered robots, creating a digital utopia that everyone can enjoy? (loc. 2175-2177)
  • Now that everything from books to movies and tax preparation tools has gone digital, additional copies can be sold worldwide at essentially zero cost, without hiring additional employees. This allows most of the revenue to go to investors rather than workers. (loc. 2204-2205)
  • But then he mentioned this to a Japanese roboticist, who protested: “No, robots are very good at those things!” (loc. 2288-2289)
  • Governments can help their citizens not only by giving them money, but also by providing them with free or subsidized services such as roads, bridges, parks, public transportation, childcare, education, healthcare, retirement homes and internet access; indeed, many governments already provide most of these services. (loc. 2331-2333)
  • Providing people with income isn’t enough to guarantee their well-being. (loc. 2354-2355)
  • There’s absolutely no guarantee that we’ll manage to build human-level AGI in our lifetime—or ever. But there’s also no watertight argument that we won’t. (loc. 2421-2422)
  • Career advice for today’s kids: Go into professions that machines are bad at—those involving people, unpredictability and creativity. (loc. 2447-2448)
  • Globalization is merely the latest example of this multi-billion-year trend of hierarchical growth. (loc. 2781-2782)
  • I suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and even if we one day manage to replicate or upload brains, we’ll end up discovering one of those simpler solutions first. (loc. 2848-2850)
  • We certainly can’t last a billion years, after which the gradually warming Sun will have cranked up Earth’s temperature enough to boil off all liquid water. (loc. 3511-3513)
  • With our present level of intelligence and emotional maturity, we humans have a knack for miscalculations, misunderstandings and incompetence, and as a result, our history is full of accidents, wars and other calamities that, in hindsight, essentially nobody wanted. (loc. 3533-3535)
  • Rather, aided by technology, life has the potential to flourish for billions of years, not merely here in our Solar System, but also throughout a cosmos far more grand and inspiring than our ancestors imagined. (loc. 3658-3659)
  • What we can say for sure, however, is that the energy prospects for the future of life are dramatically better than our current technology allows. We haven’t even managed to build a fusion reactor, yet future technology should be able to do ten and perhaps even a hundred times better. (loc. 3885-3887)
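One way to make "ten and perhaps even a hundred times better" concrete is as a fraction of the full E = mc^2 budget in the fuel; the efficiencies below are commonly quoted rough figures, offered as assumptions rather than the book's numbers:

```latex
% Fraction of the rest energy E = mc^2 released per unit of fuel.
% Rough, commonly quoted efficiencies (assumptions, not the book's table).
\[
  \text{hydrogen fusion (H}\to\text{He)}: \qquad \Delta E \approx 0.007\,mc^{2}
\]
\[
  \text{accretion onto a fast-spinning black hole}: \qquad \Delta E \sim 0.4\,mc^{2}
  \;\;(\approx 60\times\ \text{fusion})
\]
\[
  \text{feeding a black hole and collecting its Hawking radiation}: \qquad
  \Delta E \to mc^{2} \;\;(\approx 140\times\ \text{fusion})
\]
```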
  • Nothing can travel faster than the speed of light through space, but space is free to expand as fast as it wants. (loc. 3965-3966)
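A quick worked example of the difference (using an approximate present-day value of the Hubble constant, an assumed figure): Hubble's law makes recession speed grow linearly with distance, so galaxies beyond a certain distance are already receding from us faster than light, even though nothing ever overtakes a light beam through its local space.

```latex
% Hubble's law with H_0 ~ 70 km/s per megaparsec (assumed approximate value).
\[
  v = H_{0}\,d, \qquad
  v > c \;\;\text{once}\;\;
  d > \frac{c}{H_{0}} \approx
  \frac{3\times 10^{5}\ \text{km/s}}{70\ \text{km/s/Mpc}}
  \approx 4{,}300\ \text{Mpc} \approx 14\ \text{billion light-years}.
\]
```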
  • Cosmologist Frank Tipler has built on this idea to speculate that you could also achieve subjective immortality in the final moments before a Big Crunch by speeding up the computations toward infinity as the temperature and density skyrocketed. (loc. 4190-4192)
  • Will there be cooperation, competition or war? (loc. 4305-4306)
  • I think that this assumption that we’re not alone in our Universe is not only dangerous but also probably false. (loc. 4343-4344)
  • I’m therefore crossing my fingers that all searches for extraterrestrial life find nothing: this is consistent with the scenario where evolving intelligent life is rare but we humans got lucky, so that we have the roadblock behind us and have extraordinary future potential. (loc. 4416-4418)
  • Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty, passion and meaning in a near eternity of meaninglessness experienced by nobody. (loc. 4437-4439)
  • My vote is for embracing technology, and proceeding not with blind faith in what we build, but with caution, foresight and careful planning. (loc. 4442-4443)
  • Remarkably, physicists have since discovered that all laws of classical physics can be mathematically reformulated in an analogous way: out of all ways that nature could choose to do something, it prefers the optimal way, which typically boils down to minimizing or maximizing some quantity. (loc. 4527-4529)
  • There are two mathematically equivalent ways of describing each physical law: either as the past causing the future, or as nature optimizing something. (loc. 4529-4530)
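The textbook instance of this equivalence is the principle of stationary action: demanding that a particle's trajectory extremize the action reproduces the ordinary cause-and-effect equation of motion. A standard sketch (standard physics, not specific to the book):

```latex
% Principle of stationary action for one particle (standard textbook form).
\[
  S[x] = \int_{t_1}^{t_2} L(x,\dot x)\,dt, \qquad
  L = \tfrac{1}{2}m\dot x^{2} - V(x)
\]
\[
  \delta S = 0
  \;\Longrightarrow\;
  \frac{d}{dt}\frac{\partial L}{\partial \dot x} - \frac{\partial L}{\partial x} = 0
  \;\Longrightarrow\;
  m\ddot x = -\frac{dV}{dx}
  \quad\text{(Newton's second law)},
\]
% the same law stated either as the past causing the future (F = ma)
% or as nature extremizing a quantity (the action S).
```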
  • In other words, nature appears to have a built-in goal of producing self-organizing systems that are increasingly complex and lifelike, and this goal is hardwired into the very laws of physics. (loc. 4563-4565)
  • If you had been quietly observing Earth around the time when life got started, you would have noticed a dramatic change in goal-oriented behavior. Whereas earlier, the particles seemed as though they were trying to increase average messiness in various ways, these newly ubiquitous self-copying patterns seemed to have a different goal: not dissipation but replication. (loc. 4582-4585)
  • So in a sense, our cosmos invented life to help it approach heat death faster. (loc. 4592-4592)
  • Evolution has implemented replication optimization in precisely this way: rather than ask in every situation which action will maximize an organism’s number of successful offspring, it implements a hodgepodge of heuristic hacks: rules of thumb that usually work well. (loc. 4603-4605)
  • If you’re chased by a heat-seeking missile, you don’t really care whether it has consciousness or feelings! (loc. 4644-4645)
  • People don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants. (loc. 4696-4697)
  • In the inverse reinforcement-learning approach, a core idea is that the AI is trying to maximize not the goal-satisfaction of itself, but that of its human owner. (loc. 4731-4733)
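A minimal sketch of that idea (the toy environment, candidate reward functions and names are hypothetical illustrations, not the book's or any library's method): the machine never gets a goal directly; it watches a human choose among actions, infers which candidate reward function best explains those choices under a noisily rational model of the human, and then acts to maximize the inferred human reward.

```python
import math
import random

# Toy inverse reinforcement learning (IRL) sketch. Everything here is a
# hypothetical illustration: three actions, two candidate reward functions,
# and a "noisily rational" (Boltzmann) model of how the human chooses.

ACTIONS = ["make_tea", "make_coffee", "do_nothing"]

CANDIDATE_REWARDS = {
    "human_likes_tea":    {"make_tea": 1.0, "make_coffee": 0.2, "do_nothing": 0.0},
    "human_likes_coffee": {"make_tea": 0.2, "make_coffee": 1.0, "do_nothing": 0.0},
}

def boltzmann_probs(reward, beta=3.0):
    """P(action) for a noisily rational human who prefers higher reward."""
    weights = {a: math.exp(beta * reward[a]) for a in ACTIONS}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

def simulate_human(true_reward, n=50, seed=0):
    """Demonstrations: the human's observed action choices."""
    rng = random.Random(seed)
    probs = boltzmann_probs(true_reward)
    return rng.choices(ACTIONS, weights=[probs[a] for a in ACTIONS], k=n)

def infer_reward(demonstrations):
    """Pick the candidate reward that makes the demonstrations most likely."""
    best, best_ll = None, -math.inf
    for name, reward in CANDIDATE_REWARDS.items():
        probs = boltzmann_probs(reward)
        ll = sum(math.log(probs[a]) for a in demonstrations)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# The AI never sees the true reward, only the human's behavior.
demos = simulate_human(CANDIDATE_REWARDS["human_likes_tea"])
inferred = infer_reward(demos)
ai_action = max(ACTIONS, key=lambda a: CANDIDATE_REWARDS[inferred][a])
print(inferred, "->", ai_action)   # expected: human_likes_tea -> make_tea
```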
  • With increasing intelligence may come not merely a quantitative improvement in the ability to attain the same old goals, but a qualitatively different understanding of the nature of reality that reveals the old goals to be misguided, meaningless or even undefined. (loc. 4824-4826)
  • Once this friendly AI understands itself well enough, it may find this goal as banal or misguided as we find compulsive reproduction, and it’s not obvious that it will not find a way to subvert it by exploiting loopholes in our programming. (loc. 4837-4838)
  • If we could watch a fast-forward replay of our 13.8-billion-year cosmic history, we’d witness several distinct stages of goal-oriented behavior: 1. Matter seemingly intent on maximizing its dissipation 2. Primitive life seemingly trying to maximize its replication 3. Humans pursuing not replication but goals related to pleasure, curiosity, compassion and other feelings that they’d evolved to help them replicate 4. Machines built to help humans pursue their human goals. (loc. 4962-4968)
  • Although thinkers have pondered the mystery of consciousness for thousands of years, the rise of AI adds a sudden urgency, in particular to the question of predicting which intelligent entities have subjective experiences. (loc. 5095-5096)
  • Just as with "life" and "intelligence," there's no undisputed correct definition of the word "consciousness." (loc. 5111-5112)
  • Consciousness = subjective experience. (loc. 5118-5119)
  • As Yuval Noah Harari puts it in his book Homo Deus: "If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience." (loc. 5126-5129)
  • Some recent NCC research suggests that your consciousness mainly resides in a "hot zone" involving the thalamus (near the middle of your brain) and the rear part of the cortex (the outer brain layer consisting of a crumpled-up six-layer sheet which, if flattened out, would have the area of a large dinner napkin). This same research controversially suggests that the primary visual cortex at the very back of the head is an exception to this, being as unconscious as your eyeballs and your retinas. (loc. 5358-5362)
  • If a mathematical theory of consciousness whose equations fit on a napkin could successfully predict the outcomes of all experiments we perform on brains, then we’d start taking seriously not merely the theory itself, but also its predictions for consciousness beyond brains—for example, in machines. (loc. 5399-5402)
  • Let’s take a physics perspective: What particle arrangements are conscious? (loc. 5409-5410)
  • Solids, liquids and gases are all emergent phenomena: they’re more than the sum of their parts, because they have properties above and beyond the properties of their particles. They have properties that their particles lack. (loc. 5417-5419)
  • Now just like solids, liquids and gases, I think consciousness is an emergent phenomenon, with properties above and beyond those of its particles. (loc. 5419-5420)
  • Just as Galileo had pursued his mathematical theory of motion despite establishment pressure not to challenge geocentrism, Giulio had developed the most mathematically precise consciousness theory to date, integrated information theory (IIT). (loc. 5444-5446)
  • In summary, I think that consciousness is a physical phenomenon that feels non-physical because it’s like waves and computations: it has properties independent of its specific physical substrate. (loc. 5484-5485)
  • If consciousness is the way that information feels when it’s processed in certain ways, then it must be substrate-independent; it’s only the structure of the information processing that matters, not the structure of the matter doing the information processing. In other words, consciousness is substrate-independent twice over! (loc. 5486-5489)
  • Consciousness is the way information feels when being processed in certain ways. (loc. 5501-5502)
  • To be conscious, a system needs to be able to store and process information. (loc. 5502-5503)
  • Another controversial IIT claim is that today’s computer architectures can’t be conscious, because the way their logic gates connect gives very low integration. (loc. 5529-5530)
  • One could even imagine a nested hierarchy of consciousnesses at all levels from microscopic to cosmic. (loc. 5605-5606)
  • Their subjective experience of free will is simply how their computations feel from inside: they don’t know the outcome of a computation until they’ve finished it. (loc. 5653-5654)
  • It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe. (loc. 5663-5663)
  • Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble. (loc. 5670-5672)
  • Elon told me after the Asilomar meeting that he found it amazing how AI safety has gone from a fringe issue to mainstream in only a few years, and I’m just as amazed myself. (loc. 6053-6054)