Nexus


Big Ideas

  • The Sorcerer’s Apprentice: a warning…
  • Newspaper editors vs. printing presses.
  • User engagement is driven by outrage.
  • Creating wiser networks via wise humans.

“We have named our species Homo sapiens—the wise human. But it is debatable how well we have lived up to the name.

Over the last 100,000 years, we Sapiens have certainly accumulated enormous power. Just listing our discoveries, inventions, and conquests would fill volumes. But power isn’t wisdom, and after 100,000 years of discoveries, inventions, and conquests humanity has pushed itself into an existential crisis. We are on the verge of ecological collapse, caused by the misuse of our own power. We are also busy creating new technologies like artificial intelligence (AI) that have the potential to escape our control and enslave or annihilate us. Yet instead of our species uniting to deal with these existential challenges, international tensions are rising, global cooperation is becoming more difficult, countries are stockpiling doomsday weapons, and a new world war does not seem impossible.

If we Sapiens are so wise, why are we so self-destructive?”

~ Yuval Noah Harari from Nexus

Yuval Noah Harari is one of my all-time favorite thinkers.

We have featured each of his previous three great books: Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century.

It’s hard to put into words just how much I admire the logical coherence and clarity of his writing—not to mention the breadth and depth of his thinking. It is breathtaking.

As per the back flap of the book, Harari is a historian, philosopher, and bestselling author. He is considered one of the world’s most influential public intellectuals. Born in Israel, Professor Harari received his Ph.D. from the University of Oxford.

He is currently a lecturer in the Department of History at the Hebrew University of Jerusalem and a Distinguished Research Fellow at the University of Cambridge’s Centre for the Study of Existential Risk. He co-founded the social impact company Sapienship, focused on education and storytelling, with his husband, Itzik Yahav.

As per the subtitle, this book is “A Brief History of Information Networks from the Stone Age to AI.” It’s a compelling, sobering look at the challenges we face as AI becomes more and more integrated into our lives.

The book is packed with Big Ideas and we’ll barely scratch the surface. Let’s get to work.

P.S. Check out our Notes on Jonathan Haidt’s great books for more powerful, sobering wisdom on the challenges we face in the 21st century: The Righteous Mind: Why Good People Are Divided by Politics and Religion, The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure, and The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

History isn’t the study of the past; it is the study of change.

Yuval Noah Harari

THE SORCERER’S APPRENTICE

Two thousand years later, when the Industrial Revolution was making its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled ‘The Sorcerer’s Apprentice.’ Goethe’s poem (later popularized as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, like fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an ax, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: ‘The spirits that I summoned, I now cannot rid myself of again.’ The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice—and to humanity—is clear: never summon powers you cannot control.

That’s from the Prologue in which Harari introduces us to the main themes of the book.

Shortly after that passage, he tells us: “Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. The main argument of this book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use the power unwisely. Our problem, then, is a network problem.”

In the next paragraph he says: “Even more specifically, it is an information problem.”

Then he proceeds to walk us through the primary ways we view information. He starts by describing what he calls “The Naive View of Information.”

This naive view, he tells us: “posits that in sufficient quantities information leads to truth, and truth in turn leads to power and wisdom.”

He continues by saying: “This naive view justifies the pursuit of ever more powerful information technologies and has been the semiofficial ideology of the computer age and the Internet.”

He quotes Google’s mission statement and AI optimists like Marc Andreessen and Ray Kurzweil to frame this world view. He concludes his exploration of “The Naive View of Information” by saying: “Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water.”

In the next section entitled “Weaponizing Information,” he tells us about a competing view of information. He calls this perspective “The Populist View of Information.”

He tells us: “In its most extreme versions, populism posits that there is no objective truth at all and that everyone has ‘their own truth,’ which they wield to vanquish rivals.”

He continues by saying: “Whenever and wherever populism succeeds in disseminating the view of information as a weapon, language itself is undermined. Nouns like ‘facts’ and adjectives like ‘accurate’ and ‘truthful’ become elusive. Such words are not taken as pointing to a common objective reality. Rather, any talk of ‘facts’ or ‘truth’ is bound to prompt at least some people to ask, ‘Whose facts and whose truths are you referring to?’”

He also makes the important point that “this power-focused and deeply skeptical view of information isn’t a new phenomenon and it wasn’t invented by anti-vaxxers, flat-earthers, Bolsonaristas, or Trump supporters.”

He concludes the Prologue by saying: “If we wish to avoid relinquishing power to a charismatic leader or inscrutable AI, we must first gain a better understanding of what information is, how it helps to build human networks, and how it relates to truth and power. Populists are right to be suspicious of the naive view of information, but they are wrong to think that power is the only reality and that information is always a weapon. Information isn’t the raw material of truth, but it isn’t a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely. This book is dedicated to exploring that middle ground.”

The rest of the book is dedicated to helping us discover that potential middle ground. The book has three parts: Part I: Human Networks (in which we define information and explore “A Brief History of Democracy and Totalitarianism”); Part II: The Inorganic Network (in which we explore “How Computers Are Different from Printing Presses”); and Part III: Computer Politics (in which we explore the prospective impact of AI on democracies, totalitarianism, and what he calls “The Silicon Curtain” that may divide our society in the future).

I repeat: It’s an intellectually compelling, sobering look at the challenges we face. It’s also IMPOSSIBLE to “summarize” here. My intention is to get you thinking about these Ideas and, if you feel so inspired, get you to read/listen to the book so more of us are aware of the challenges we face and working together to find the optimal solutions.

We humans rule the world not because we are so wise but because we are the only animals that can cooperate flexibly in large numbers.

Yuval Noah Harari

Scientific institutions are nevertheless different from religious institutions, in as much as they reward self-skepticism. Conspiracy theorists tend to be extremely skeptical regarding the existing consensus, but when it comes to their own beliefs, they lose all their skepticism and fall prey to confirmation bias. The trademark of science is not merely skepticism but self-skepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism.

Yuval Noah Harari

One of the recurrent paradoxes of populism is that it starts by warning us that all humans are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.

Yuval Noah Harari

Trusting only ‘my own research’ may sound scientific, but in practice it amounts to believing there is no objective truth. … science is a collaborative institutional effort rather than a personal quest.

Yuval Noah Harari

NEWSPAPER EDITORS VS. PRINTING PRESSES

Since the current information revolution is more momentous than any previous information revolution, it is likely to create unprecedented realities on an unprecedented scale.

It is important to understand this because we humans are still in control. We don’t know for how long, but we still have the power to shape these new realities. To do so wisely, we need to comprehend what is happening. When we write computer code, we aren’t just designing a product. We are redesigning politics, society, and culture. We also need to take responsibility for what we are doing.

Alarmingly, as in the case of Facebook’s involvement with the anti-Rohingya campaign, the corporations that lead the computer revolution tend to shift responsibility to customers and voters, or to politicians and regulators. When accused of creating social and political mayhem, they hide behind arguments like ‘We are just a platform. We are doing what our customers want and what the voters permit. We don’t force anyone to use our services, and we don’t violate any existing law. If customers didn’t like what we do, they would pass laws against us. Since the customers keep asking for more, and since no law forbids what we do, everything must be okay.’

These arguments are either naive or disingenuous. Tech giants like Facebook, Amazon, Baidu, and Alibaba aren’t just the obedient servants of customer whims and government regulations. They increasingly shape these whims and regulations. The tech giants have a direct line to the world’s most powerful governments, and they invest huge sums of money in lobbying efforts to throttle regulations that might undermine their business model. For example, they have fought tenaciously to protect Section 230 of the U.S. Telecommunications Act of 1996, which provides immunity from liability for online platforms regarding content published by their users. It is Section 230 that protects Facebook, for example, from being liable for the Rohingya massacre. In 2022 top tech companies spent close to $70 million on lobbying in the United States, and another €110 million on lobbying EU bodies, outstripping the lobbying expenses of oil and gas companies and pharmaceuticals. The tech giants also have a direct line to people’s emotional system, and they are masters at swaying the whims of customers and voters. If the tech giants obey the wishes of voters and customers, but at the same time also mold these wishes, then who really controls whom?

That’s from the first chapter in Part II in which Harari introduces us to “The New Members” of our network (AI!) and “How Computers Are Different from Printing Presses.”

Harari tells us that “A paradigmatic case of the novel power of computers is the role that social media algorithms have played in spreading hatred and undermining social cohesion in numerous countries.”

One of the many examples Harari uses to establish that computers/algorithms/AI are different from all previous forms of information networks is the HORRIFIC tragedy in Myanmar (Burma) that was fueled by Facebook.

He tells us: “The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. In 2016-17, Facebook’s algorithms were making active and fateful decisions by themselves. They were more akin to newspaper editors than to printing presses.”

How were the algorithms making decisions that led to the atrocities in Myanmar?

Harari tells us: “The more time people spent on the platform, the richer Facebook became. In line with the business model, human managers provided the company’s algorithms with a single overriding goal: increase user engagement. Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage.”

We’ll talk about another example of how algorithms programmed to achieve the goal of increasing user engagement above all other goals have led to an increase in moral outrage and hate in a moment. For now, I want to encourage you (again!) to get the book for the deeper dive AND to watch The Social Dilemma for more on “the unintended catastrophic consequences of the attention economy.”

For more on the PERSONAL effects of social media, smartphones, etc., please check out Jonathan Haidt’s The Anxious Generation.

You may also enjoy checking out our Notes on Irresistible by Adam Alter.

P.S. Another book I thought of as I read this one is called Metabolical: The Truth About Processed Food and How It Poisons People and the Planet. In *that* book, Dr. Robert Lustig walks us through just how much the *food* industry spends to convince us their ultraprocessed “food” and sugary-drink products are healthy.

To put it in perspective, in addition to the $38 million food companies spent in 2022 in the U.S. on lobbying the government, Coca-Cola ALONE spent $132.8 million between 2010 and 2015 on various health and fitness studies. 😕

The Founding Fathers committed enormous mistakes—such as endorsing slavery and denying women the vote—but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.

Yuval Noah Harari

To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.

Yuval Noah Harari

The history of print and witch-hunting indicates that an unregulated information market doesn’t necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth.

Yuval Noah Harari

OUTRAGE DRIVES ENGAGEMENT

As explained briefly in chapter 6, the process of radicalization started when corporations tasked their algorithms with increasing user engagement, not only in Myanmar, but throughout the world. For example, in 2012, users were watching about 100 million hours of videos every day on YouTube. That was not enough for company executives, who set their algorithms an ambitious goal: 1 billion hours a day by 2016. Through trial-and-error experiments on millions of people, the YouTube algorithms discovered the same pattern that Facebook algorithms also learned: outrage drives engagement up, while moderation tends not to. Accordingly, the YouTube algorithms began recommending outrageous conspiracy theories to millions of viewers while ignoring more moderate content. By 2016, users were indeed watching one billion hours every day on YouTube.

YouTubers who were particularly intent on gaining attention noticed that when they posted an outrageous video full of lies, the algorithm rewarded them by recommending the video to numerous users and increasing the YouTubers’ popularity and income. In contrast, when they dialed down the outrage and stuck to the truth, the algorithms tended to ignore them. Within a few months of such reinforcement learning, the algorithm turned many YouTubers into trolls.

That’s from another chapter in Part II called “Fallible: The Network Is Often Wrong.”
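To make that “trial-and-error” dynamic concrete, here’s a minimal, hypothetical sketch in Python of an epsilon-greedy recommender that optimizes nothing but watch time. Every name and number in it is my own illustrative assumption (this is NOT YouTube’s or Facebook’s actual system); the point is simply that if outrage happens to hold attention longer, an engagement-maximizing loop drifts toward it without anyone explicitly telling it to.

```python
import random

# Hypothetical, illustrative sketch only -- not any platform's real code.
# An epsilon-greedy bandit that "sees" only watch time. If outrage-heavy
# content holds attention longer, the loop drifts toward it on its own.

# Assumed average minutes watched per recommendation, per content category.
TRUE_AVG_WATCH_MINUTES = {"calm_explainer": 6.0, "outrage_conspiracy": 14.0}

def simulate_recommender(rounds: int = 10_000, epsilon: float = 0.1, seed: int = 42):
    rng = random.Random(seed)
    totals = {c: 0.0 for c in TRUE_AVG_WATCH_MINUTES}  # cumulative watch time per category
    counts = {c: 0 for c in TRUE_AVG_WATCH_MINUTES}    # times each category was recommended

    for _ in range(rounds):
        if rng.random() < epsilon or 0 in counts.values():
            # Explore: try a random category.
            choice = rng.choice(list(TRUE_AVG_WATCH_MINUTES))
        else:
            # Exploit: recommend whichever category has the best average watch time so far.
            choice = max(counts, key=lambda c: totals[c] / counts[c])

        # Noisy "user session": watch time drawn around the category's assumed mean.
        watch = max(0.0, rng.gauss(TRUE_AVG_WATCH_MINUTES[choice], 3.0))
        totals[choice] += watch
        counts[choice] += 1

    return counts

if __name__ == "__main__":
    print(simulate_recommender())
    # With these assumed numbers, the vast majority of recommendations end up
    # going to "outrage_conspiracy" -- nobody ever coded "spread outrage" explicitly.
```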

As you may recall, the basic idea of “The Naive View of Information” is that the more information you put out into the world, the closer we get to the truth and the wiser we get.

That would be AWESOME if it were true. But, alas, as Harari sternly advises us… It’s not.

In fact, when executives at the leading tech giants prioritize user engagement above everything else as they program their AI algorithms to drive profits by mining our attention, what happens?

Well… As Harari tells us…

THE MOST OUTRAGEOUS CONTENT IS SHARED WITH THE MOST PEOPLE.

Which leads to a more and more radicalized society.

He tells us: “In Myanmar in 2016, in Brazil in 2018, and in numerous other countries, the algorithms scored videos, posts, and all other content solely according to how many minutes people engaged with the content and how many times they shared it with others. An hour of lies or hatred was ranked higher than ten minutes of truth or compassion—or an hour of sleep. The fact that lies and hate tend to be psychologically and socially destructive, whereas truth, compassion, and sleep are essential for human welfare, was completely lost on the algorithms. Based on this very narrow understanding of humanity, the algorithms helped to create a new social system that encouraged our basest instincts while discouraging us from realizing the full spectrum of the human potential.”
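To see just how “narrow” that understanding is, here’s a minimal, hypothetical sketch of the kind of scoring rule Harari describes, with content ranked purely by minutes of engagement and share count. The field names and weights are my own illustrative assumptions, not actual platform code.

```python
from dataclasses import dataclass

# Hypothetical, illustrative sketch of an engagement-only scoring rule.
# Nothing in it ever asks "is this true?" or "is this harmful?"

@dataclass
class Post:
    title: str
    minutes_engaged: float  # total minutes users spent on the post
    shares: int             # how many times it was re-shared

def engagement_score(post: Post, share_weight: float = 5.0) -> float:
    # Attention is the only currency: minutes plus a weighted share count.
    return post.minutes_engaged + share_weight * post.shares

feed = [
    Post("Hour-long hate-filled conspiracy video", minutes_engaged=60.0, shares=40),
    Post("Ten-minute fact-checked report", minutes_engaged=10.0, shares=5),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
# The conspiracy video tops the feed every time -- the failure mode Harari
# points to: an hour of lies ranked higher than ten minutes of truth.
```

Whatever weights you choose, the score has no term for truth, compassion, or harm, which is exactly the “very narrow understanding of humanity” the passage describes.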

P.S. Alexandra spends more time watching YouTube videos than I do. (I read books, folks!) And we’ve been discussing how fascinating it is that nearly EVERYONE popular these days seems to have become so troll-ish. This passage captures one of the main reasons why. When troll-ish behavior gets you more likes and views, it’s tough not to conform and stoke the outrage.

P.P.S. Here’s another brilliant passage: “[Nick] Bostrom’s point [in Superintelligence] was that the problem with computers isn’t that they are particularly evil but that they are particularly powerful. And the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals. If we define a misaligned goal to a pocket calculator, the consequences are trivial. But if we define a misaligned goal to a superintelligent machine, the consequences could be dystopian.”

Facebook and other social media platforms didn’t consciously set out to flood the world with fake news and outrage. But by telling their algorithms to maximize user engagement, this is exactly what they perpetrated.

Yuval Noah Harari

One thing that is clear is that the future of employment will be very volatile. Our big problem won’t be an absolute lack of jobs, but rather retraining and adjusting to an ever-changing job market. … So people will need to retrain and reinvent themselves not just once but many times, or they will become irrelevant.

Yuval Noah Harari

The most important human skill for surviving the twenty-first century is likely to be flexibility. … While computers are nowhere near their full potential, the same is true of humans.

Yuval Noah Harari

CREATING WISER NETWORKS

Just as the law of the jungle is a myth, so also is the idea that the arc of history bends toward justice. History is a radically open arc, one that can bend in many directions and reach very different destinations. Even if Homo sapiens destroys itself, the universe will keep going about its business as usual. It took four billion years for terrestrial evolution to produce a civilization of highly intelligent apes. If we are gone, and it takes evolution another hundred million years to produce a civilization of highly intelligent rats, it will. The universe is patient.

There is, though, an even worse scenario. As far as we know today, apes, rats, and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created a nonconscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.

The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and populist views of information, put aside our fantasy of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer.

This wisdom is much older than human history. It is elemental, the foundation of organic life. The first organisms weren’t created by some infallible genius or god. They emerged through an intricate process of trial and error. Over four billion years, ever more complex mechanisms of mutation and self-correction led to the evolution of trees, dinosaurs, jungles, and eventually humans. Now we have summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other life-forms. The decisions we all make in the coming years will determine whether summoning this alien intelligence proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life.

The quote at the top of these Notes featured the very first words of the book, and the passage above features its very last words.

In the final chapter of the book (in which he discusses “The Silicon Curtain” that may separate cultures in the future), Harari challenges the idea that “the law of the jungle” is one of relentless conflict.

He tells us: “As [the primatologist Frans] de Waal and many other biologists documented in numerous studies, real jungles—unlike the one in our imagination—are full of cooperation, symbiosis, and altruism displayed by countless animals, plants, fungi, and even bacteria. Eighty percent of all land plants, for example, rely on symbiotic relationships with fungi, and almost 90 percent of vascular plant families enjoy symbiotic relationships with microorganisms. If organisms in the rain forests of Amazonia, Africa, or India abandoned cooperation in favor of an all-out competition for hegemony, the rain forests and all their inhabitants would quickly die. That’s the law of the jungle.”

To state the obvious: Wise networks will only be created by WISE HUMANS—which is why I work so hard to help create a world in which we each overcome our “complacency and despair” by striving to live with more Wisdom, Discipline, Courage and LOVE.

I repeat: YOU are the Hero we’ve been waiting for. Here’s to doing the hard work to forge excellence so we can activate our Heroic potential and change the world together.

Today’s the day. LET’S GO!

The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation. A hundred thousand years ago, Sapiens could cooperate only at the level of bands. Over the millennia, we have found ways to create communities of strangers, first on the level of tribes and eventually on the level of religions, trade networks, and states.

Yuval Noah Harari

The technologies of the twenty-first century are far more powerful—and potentially far more destructive—than those of the twentieth century. We therefore have less room for error. In the twentieth century, we can say that humanity got a C-minus in the lesson on using industrial technology. Just enough to pass. In the twenty-first century, the bar is set much higher. We must do better this time.

Yuval Noah Harari