In Defense of Humanity

On July 13, 1833, during a visit to the Cabinet of Natural History at the Jardin des Plantes, in Paris, Ralph Waldo Emerson had an epiphany. Peering at the museum’s specimens—butterflies, hunks of amber and marble, carved seashells—he felt overwhelmed by the interconnectedness of nature, and humankind’s place within it.

The experience inspired him to write “The Uses of Natural History,” and to articulate a philosophy that put naturalism at the center of intellectual life in a technologically chaotic age—guiding him, along with the collective of writers and radical thinkers known as transcendentalists, to a new spiritual belief system. Through empirical observation of the natural world, Emerson believed, anyone could become “a definer and map-maker of the latitudes and longitudes of our condition”—finding agency, individuality, and wonder in a mechanized age.

America was crackling with invention in those years, and everything seemed to be speeding up as a result. Factories and sugar mills popped up like dandelions, steamships raced to and from American ports, locomotives tore across the land, the telegraph connected people as never before, and the first photograph was taken, forever altering humanity’s view of itself. The national mood was a mix of exuberance, anxiety, and dread.

[From the June 2018 issue: Henry A. Kissinger on AI and how the Enlightenment ends]

The flash of vision Emerson experienced in Paris was not a rejection of change but a way of reimagining human potential as the world seemed to spin off its axis. Emerson’s reaction to the technological renaissance of the 19th century is worth revisiting as we contemplate the great technological revolution of our own century: the rise of artificial superintelligence.

Even before its recent leaps, artificial intelligence had for years been roiling the informational seas in which we swim. Early disturbances arose from the ranking algorithms that have come to define the modern web—that is, the opaque code that tells Google which results to show you, and that organizes and personalizes your feeds on social platforms like Facebook, Instagram, and TikTok by slurping up data about you as a way to assess what to spit back out.

Now imagine this same internet infrastructure but with programs that communicate with a veneer of authority on any subject, with the ability to generate sophisticated, original text, audio, and video, and the power to mimic individuals in a manner so convincing that people will not know what is real. These self-teaching AI models are being designed to become better at what they do with every single interaction. But they also sometimes hallucinate, and manipulate, and fabricate. And you cannot predict what they’ll do or why they’ll do it. If Google’s search engine is the modern-day Library of Alexandria, the new AI will be a mercurial prophet.

[From the May 2018 issue: The era of fake video begins]

Generative artificial intelligence is advancing with unbelievable speed, and will be applied across nearly every discipline and industry. Tech giants—including Alphabet (which owns Google), Amazon, Meta (which owns Facebook), and Microsoft—are locked in a race to weave AI into existing products, such as maps, email, social platforms, and photo software.

The technocultural norms and habits that have seized us during the triple revolution of the internet, smartphones, and the social web are themselves in need of a thorough correction. Too many people have allowed these technologies to simply wash over them. We would be wise to rectify the errors of the recent past, but also to anticipate—and proactively shape—what the far more radical technology now emerging will mean for our lives, and how it will come to remake our civilization.

Corporations that stand to profit off this new technology are already memorizing the platitudes necessary to wave away the critics. They’ll use sunny jargon like “human augmentation” and “human-centered artificial intelligence.” But these terms are as shallow as they are abstract. What’s coming stands to dwarf every technological creation in living memory: the internet, the personal computer, the atom bomb. It may well be the most consequential technology in all of human history.

People are notoriously terrible at predicting the future, and often slow to recognize a revolution—even when it is already under way. But the span of time between when new technology emerges and when standards and norms are hardened is often short. The Wild West, in other words, only lasts for so long. Eventually, the railroads standardize time; incandescent bulbs beat out arc lamps; the dream of the open web dies.

The window for effecting change in the realm of AI is still open. Yet many of those who have worked longest to establish guardrails for this new technology are despairing that the window is nearly closed.

Generative AI, just like search engines, telephones, and locomotives before it, will allow us to do things with levels of efficiency so profound, it will seem like magic. We may see whole categories of labor, and in some cases entire industries, wiped away with startling speed. The utopians among us will view this revolution as an opportunity to outsource busywork to machines for the higher purpose of human self-actualization. This new magic could indeed create more time to be spent on matters more deserving of our attention—deeper quests for knowledge, faster routes to scientific discovery, extra time for leisure and with loved ones. It may also lead to widespread unemployment and the loss of professional confidence as a more competent AI looks over our shoulder.

[Annie Lowrey: Before AI takes over, make plans to give everyone money]

Government officials, along with other well-intentioned leaders, are groping toward ethical principles for artificial intelligence—see, for example, the White House’s “Blueprint for an AI Bill of Rights.” (Despite the clunky title, the intention is to establish principles that protect human rights, though the question of civil rights for machines will eventually arise.) These efforts are necessary but not enough to meet the moment.

We should know by now that neither the government’s understanding of new technologies nor self-regulation by tech behemoths can adequately keep pace with the speed of technological change or Silicon Valley’s capacity to seek profit and scale at the expense of societal and democratic health. What defines this next phase of human history must begin with the individual.

Just as the Industrial Revolution sparked transcendentalism in the U.S. and romanticism in Europe—both movements that challenged conformity and prioritized truth, nature, and individualism—today we need a cultural and philosophical revolution of our own. This new movement should prioritize humans above machines and reimagine human relationships with nature and with technology, while still advancing what this technology can do at its best. Artificial intelligence will, unquestionably, help us make miraculous, lifesaving discoveries. The danger lies in outsourcing our humanity to this technology without discipline, especially as it eclipses us in apperception. We need a human renaissance in the age of intelligent machines.

In the face of world-altering invention, with the power of today’s tech barons so concentrated, it can seem as though ordinary people have no hope of influencing the machines that will soon be cognitively superior to us all. But there is tremendous power in defining ideals, even if they ultimately remain out of reach. Considering all that is at stake, we have to at least try.

[From the June 2023 issue: Never give artificial intelligence the nuclear codes]

Transparency should be a core tenet in the new human exchange of ideas—people ought to disclose whenever an artificial intelligence is present or has been used in communication. This ground rule could prompt discipline in creating more-human (and human-only) spaces, as well as a less anonymous web. Any journalist can tell you that anonymity should be used only as a last resort and in rare scenarios for the public good. We would benefit from cultural norms that expect people to assert not just their opinions but their actual names too.

Now is the time, as well, to recommit to making deeper connections with other people. Live videochat can collapse time and distance, but such technologies are a poor substitute for face-to-face communication, especially in settings where creative collaboration or learning is paramount. The pandemic made this painfully clear. Relationships cannot and should not be sustained in the digital realm alone, especially as AI further erodes our understanding of what is real. Tapping a “Like” button is not friendship; it’s a data point. And a conversation with an artificial intelligence is one-sided—an illusion of connection.

Someday soon, a child may not have just one AI “friend,” but more AI friends than human ones. These companions will not only be built to surveil the humans who use them; they will be tied inextricably to commerce—meaning that they will be designed to encourage engagement and profit. Such incentives warp what relationships ought to be.

Writers of fiction—Fyodor Dostoyevsky, Rod Serling, José Saramago—have for generations warned of doppelgängers that might sap our humanity by stealing a person’s likeness. Our new world is a wormhole to that uncanny valley.

Whereas the first algorithmic revolution involved using people’s personal data to reorder the world for them, the next will involve our personal data being used not just to splinter our shared sense of reality, but to invent synthetic replicas. The profit-minded music-studio exec will thrill to the notion of an AI-generated voice with AI-generated songs, not attached to a human with intellectual-property rights. Artists, writers, and musicians should anticipate widespread impostor efforts and fight against them. So should all of us. One computer scientist recently told me she’s planning to create a secret code word that only she and her elderly parents know, so that if they ever hear her voice on the phone pleading for help or money, they can ask for the code word and know whether they are really talking to her—or to an AI trained on her publicly available lectures to sound exactly like her and scam them.

Today’s elementary-school children are already learning not to trust that anything they see or hear through a screen is real. But they deserve a modern technological and informational environment built on Enlightenment values: reason, human autonomy, and the respectful exchange of ideas. Not everything should be recorded or shared; there is individual freedom in embracing ephemerality. More human interactions should take place only between the people involved; privacy is key to preserving our humanity.

Finally, a more existential consideration requires our attention, and that is the degree to which the pursuit of knowledge orients us inward or outward. The artificial intelligence of the near future will supercharge our empirical abilities, but it may also dampen our curiosity. We are at risk of becoming so enamored of the synthetic worlds that we create—all data sets, duplicates, and feedback loops—that we cease to peer into the unknown with any degree of true wonder or originality.

We should trust human ingenuity and creative intuition, and resist overreliance on tools that dull the wisdom of our own aesthetics and intellect. Emerson once wrote that Isaac Newton “used the same wit to weigh the moon that he used to buckle his shoes.” Newton, I’ll point out, also used that wit to invent a reflecting telescope, the beginnings of a powerful technology that has allowed humankind to squint at the origins of the universe. But the spirit of Emerson’s idea remains crucial: Observing the world, taking it in using our senses, is an essential exercise on the path to knowledge. We can and should layer on technological tools that will aid us in this endeavor, but never at the expense of seeing, feeling, and ultimately knowing for ourselves.

A future in which overconfident machines seem to hold the answers to all of life’s cosmic questions is not only dangerously misguided; it also takes away that which makes us human. In an age of anger, and snap reactions, and seemingly all-knowing AI, we should put more emphasis on contemplation as a way of being. We should embrace an unfinished state of thinking, the constant work of challenging our preconceived notions, seeking out those with whom we disagree, and sometimes still not knowing. We are mortal beings, driven to know more than we ever will or ever can.

The passage of time has the capacity to erase human knowledge: Whole languages disappear; explorers lose their feel for crossing the oceans by gazing at the stars. Technology continually reshapes our intellectual capacities. What remains is the fact that we are on this planet to seek knowledge, truth, and beauty—and that we only get so much time to do it.

As a small child in Concord, Massachusetts, I could see Emerson’s home from my bedroom window. Recently, I went back for a visit. Emerson’s house has always captured my imagination. He lived there for 47 years until his death, in 1882. Today, it is maintained by his descendants and a small staff dedicated to his legacy. The house is some 200 years old, and shows its age in creaks and stains. But it also possesses a quality that is extraordinarily rare for a structure of such historic importance: 141 years after his death, Emerson’s house still feels like his. His books are on the shelves. One of his hats hangs on a hook by the door. The original William Morris wallpaper is bright green in the carriage entryway. A rendering of Francesco Salviati’s The Three Fates, holding the thread of destiny, stands watch over the mantel in his study. This is the room in which Emerson wrote Nature. The table where he sat to write it is still there, next to the fireplace.

[From the October 1883 issue: Ralph Waldo Emerson’s ‘Historic Notes of Life and Letters in Massachusetts’]

Standing in Emerson’s study, I thought about how no technology is as good as going to the place, whatever the destination. No book, no photograph, no television broadcast, no tweet, no meme, no augmented reality, no hologram, no AI-generated blueprint or fever dream can replace what we as humans experience. This is why you make the trip, you cross the ocean, you watch the sunset, you hear the crickets, you notice the phase of the moon. It is why you touch the arm of the person beside you as you laugh. And it is why you stand in awe at the Jardin des Plantes, floored by the universe as it reveals its hidden code to you.


This article appears in the July/August 2023 print edition with the headline “In Defense of Humanity.”