The weaponization of the scientific and technological breakthroughs stemming from human genome research presents a serious global security challenge. Gene-editing pioneer and Nobel Laureate Jennifer Doudna often tells a story of a nightmare she once had. A colleague asked her to teach someone how her technology works. She went to meet the student and “was shocked to see Adolf Hitler, in the flesh.”
Doudna is not alone in being haunted by the power of science. Famously, having just returned home from Los Alamos in early 1945, John von Neumann awakened in panic. “What we are creating now is a monster whose influence is going to change history, provided there is any history left,” he told his wife, straining to speak. He surmised, however, that “it would be impossible not to see it through, not only for military reasons, but it would also be unethical from the point of view of the scientists not to do what they knew is feasible, no matter what terrible consequences it may have.”
According to biographer Ananyo Bhattacharya, von Neumann saw what was happening in Nazi Germany and the USSR and believed that “the best he could do is allow politicians to make those [ethical and security] decisions: to put his brain in their hands.” Living through a devastating world war, the Manhattan Project polymath “had no trust left in human nature.”
Doudna also decided that the work must go on, but for a different reason. According to biographer Walter Isaacson, the biochemist was “instinctively against” the idea of editing human genes. What changed her mind was hearing stories of people affected by genetic diseases. Doudna concluded that gene-editing decisions should be left to individual choice: “I’m an American, and putting a high priority on personal freedom and choice is part of our culture.” The global security environment, characterized by America’s relative global decline coupled with China’s interest in weaponizing biotechnology, was also likely not lost on her. According to Isaacson, the Defense Advanced Research Projects Agency (DARPA), the research and development agency of the U.S. Department of Defense, “already has a project going, in conjunction with Doudna’s lab, to study how to create genetically enhanced soldiers.”
Two enterprising scientists faced an extraordinary ethical-existential dilemma – one stemming from revolutionary breakthroughs in physics, the other in biology – and reached a similar conclusion: to continue. They reasoned that technological progress cannot be stopped; the genie was already out of the bottle. The best they could do was to continue leading the development, while at least exercising some power over it to do good. They also reasoned that, in a war-prone world, the safest bet was US scientific and technological dominance.
Some may take comfort in the fact that humanity has not yet destroyed itself with nuclear weapons. However, what worked for nuclear power may not work for genetic power. We got incredibly lucky (so far) with the former, philosopher Nick Bostrom has pointed out. It so happens that nuclear weapons are very difficult to produce and so only select states are capable of unleashing their civilization-ending potential. However, we may not be so lucky with genetic weapons. In 2016, a US intelligence community global threat assessment listed gene editing as a “weapon of mass destruction.” The gene editing technology CRISPR-Cas9, which lets scientists rewrite DNA sequences in any cell, may be that “black ball” invention that Bostrom conceptualizes in his Vulnerable World Hypothesis, an invention capable of democratizing mass destruction – making widely available the power to cause an existential catastrophe. What is to be done? Bostrom argues that our best bet is “a system of ubiquitous real-time worldwide surveillance.”
The problem with Bostrom’s Hobbesian-style emphasis on extreme surveillance and preventive policing is that it asks us to destroy the very civilization we are tasked with saving. We are asked to destroy our civilization’s most precious resources – our values (e.g. privacy) and achievements (e.g. civil liberty). These resources may not be universal, fully realized, or foolproof, but they are arguably our most innovative and hard-earned achievements. This raises the question of how we balance ethics and security in a world characterized by the security dilemma – that is, a world in which the most powerful states accrue ever-deadlier weapons as a kneejerk reaction to global anarchy, prompting other states to do the same and deepening global insecurity.
If gene editing is a ‘black ball’ invention, must we choose between our humanness and survival? Humanness refers to our current biological form, which also mysteriously includes “a sense of self,” or consciousness. This form, as H.G. Wells observes in The War of the Worlds, is our evolutionary “birthright.” We earned it through immense toil and suffering – “the toll of a billion deaths”; no life useless or “in vain,” each a gift to natural selection. Philosopher Michael Sandel uses the concept of “giftedness” to challenge the popular liberal eugenic and transhumanist ideas that who we are is something to enhance or transcend through the means of technology. Genetic engineering, he argues, “erodes our appreciation for the gifted character of human powers and achievements” and thus undermines three key features critical for our long-term survival: humility, responsibility, and solidarity.
The humanness-survival dilemma is beginning to haunt our culture. It appears, for example, in the American science fiction television series Raised by Wolves (2020-2022) as a clash between two androids named “Mother” and “Grandmother.” ‘Mother’ fights to protect humanness, while ‘Grandmother’ believes in doing whatever it takes for humans to survive – even if that means transforming them into grotesque sea creatures, or “a simpler, happier version of themselves,” as Grandmother puts it. The dilemma remains unresolved.
The medium-term risk of genetically reengineering humans is that it could make the 21st century the most unequal in history. Reflecting on advances in biotechnology and AI, Yuval Harari imagines a small class of “superhumans” dominating an underclass of “useless” people. Alternatively, this underclass may become a “standing reserve,” an orderly resource for technical application, or a mere source of energy.
The problem with finding a solution to the humanness-security dilemma is that it cannot be done entirely scientifically or perhaps even philosophically. It requires us to step outside of our intellectual comfort zone into the realm of free choice. Freedom of choice is what we are, or as the late French philosopher Jean-Paul Sartre put it, that which we are “condemned” to have as self-aware beings. Sandel observes that the genomic revolution has induced “a kind of moral vertigo.” We have struggled to articulate our “unease” with the speed of developments in genetic technologies. It has forced us to consider questions that “verge on theology,” from which “modern philosophers and political theorists tend to shirk… But our new powers of biotechnology make them unavoidable.”
Some believe that scientists should be at the core of decision-making, while others prefer that elected officials and bureaucratic experts take the helm. The private sector, nonstate groups, and super-empowered individuals also shape the global security environment, as do civil (and uncivil) society organizations. Rogue actors, such as He Jiankui, who used CRISPR to create the world’s first gene-edited babies, cannot be neglected either. What is to be done as CRISPR and other biotechnologies (in synergistic interaction with AI, neurotechnology, and nanotechnology) raise the possibility of almost anyone having godlike powers? This is perhaps the most ethically challenging security problem.
Assuming there is a solution, the big question is whether it resides in how we structure our society (i.e. politics) or in human nature (i.e. biology). If it is the former, a solution would likely require deep structural transformation of the international system into one that is capable not just of global surveillance but also of universal norm creation. Genetic weapons would need to become as universally morally repugnant as, say, cannibalism, at least for the class of people capable of creating them. Climate change has already exposed the inadequacy of our state-based system to meet existential challenges. It is a system that makes even the most powerful actors (like the United States, China, and Russia) feel insecure, thus rendering continuous collaboration unattainable. The United States is unlikely to reestablish its unipolar dominance, which it never fully enjoyed.
If it is not the structure of the international system but human nature that needs tweaking, we are in luck. For the first time in human history, biotechnology may make that possible. Then, three nightmarish questions emerge: Who is to decide whether and how to make these tweaks? If our cantankerous nature serves an evolutionary purpose, what else would we lose if we lost our desire to fight? What if it is the structure that requires transformation?
. . .
Yelena Biberman is an associate professor of political science at Skidmore College, new voice at the Andrew W. Marshall Foundation, nonresident senior fellow at the Atlantic Council’s South Asia Center, and associate at Harvard University’s Davis Center. Her book project focuses on the technologies and international politics of “genetic warfare” in the context of US-China rivalry. Her first book, Gambling with Violence: State Outsourcing of War in Pakistan and India, was published by Oxford University Press in 2019.