Author Archives: Ricky

The battleground of human genetic engineering

Science, ethics, ideology, and politics all clash fiercely over an innocent-sounding topic: the “designer baby”. This battle surged into public view after the recent announcement that a Chinese scientist had intentionally used a controversial genetic engineering technique on viable human embryos, a global first. There are a number of fascinating perspectives to explore, and the story comes with a sprinkling of geopolitical intrigue as well.

In November of 2018, scientist He Jiankui announced that he had used a burgeoning gene editing tool, CRISPR-Cas9 (or just CRISPR, for short), to genetically modify twin girls. He claims to have used this tool in an attempt to confer genetic resistance to HIV/AIDS by disabling one particular gene, CCR5. CRISPR, while extremely promising, remains poorly tested, and has been shown to delete much larger swaths of genetic material than intended. The main concern is that these deletions could eliminate sections of a cell’s genetic code that are crucial for its normal functions, which could lead to problems such as cancer. And because edits made at the embryonic stage end up in every cell of the body, including reproductive cells, there is also a concern that catastrophic errors could be passed on to descendants.

Molecular biologist Ellen Jorgensen explains CRISPR-Cas9’s mechanics and potential.
Source: TED


Unsurprisingly, then, He’s announcement was met with widespread backlash from the scientific community. An official investigation found that He had fabricated ethics approvals in order to recruit participants for his experiment, and he was subsequently fired from his university. Because of these dangers, many countries (including China) prohibit gene editing of human embryos for reproductive purposes.

However, He’s situation may not be quite what it seems. He has been painted as a rogue agent, conducting his research in relative secrecy in pursuit of fame or notoriety. His university, the hospital where the edited twins were born, and even his own government denounced his actions. Suspiciously, though, the Washington Post noted that, in an interview with the Associated Press, an executive from that same hospital applauded He’s research on camera, and that the university was listed as a sponsor on a copy of the informed consent form He used for his experiment. Furthermore, CCR5, the gene He attempted to modify, is associated with memory and cognition, raising the possibility that the modified twins could exhibit augmented intelligence.

He Jiankui speaking at the Second International Summit on Human Genome Editing. Source: Iris Tong (Voice of America)

Is it possible that the Chinese government is covertly supporting or encouraging unethical genetic engineering practices? Dr. Gregory Licholai of Yale’s School of Management notes that China has been much quicker than other countries to expedite human trials of CRISPR-enabled cancer treatments, and that China’s regulatory authorities have been “extremely permissive” regarding CRISPR clinical trials.

The genetic modification of humans carries enormous risks and rewards. With enough skill and some good luck, a country that supports early adoption of human gene editing could claim significant health and intellectual advantages over the rest of the world within a generation. Only time will tell if November’s announcement quietly ushered in a new age of geopolitical competition.

— Ricky C.

The technological singularity: Science fiction or science future?

What would happen if we programmed a computer to design a faster, more efficient computer? Well, if all went according to plan, we’d get a faster, more efficient computer. Now, we’ll assign this newly designed computer the same task: improve on your own design. It does so, faster (and more efficiently), and we iterate on this process, accelerating onwards. Towards what? Merely a better computer? Would this iterative design process ever slow down, ever hit a wall? After enough iterations, would we even recognize the hardware and software devised by these increasingly capable systems? As it turns out, these questions have extremely important ramifications for the field of artificial intelligence (AI), and perhaps for humanity’s continued survival.
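To see why this feedback loop accelerates, consider a toy simulation (a minimal sketch with arbitrary numbers, not a model of any real hardware) in which each generation of machine improves its successor at a rate proportional to its own capability:

```python
# Toy model of recursive self-improvement (illustrative only).
# The starting capability and per-generation gain are arbitrary
# assumptions chosen to make the compounding visible.

def simulate(generations: int, capability: float = 1.0,
             gain: float = 0.5) -> list[float]:
    """Return the capability reached after each design iteration."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # better designers design better
        history.append(capability)
    return history

for gen, cap in enumerate(simulate(10)):
    print(f"generation {gen:2d}: capability x{cap:8.1f}")
```

Because each step compounds on the last, even this modest 50% per-generation gain multiplies capability roughly 57-fold in ten iterations, and nothing inside the loop itself ever slows it down.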

Conceptual underpinnings

In 1965, Gordon Moore, then director of research and development at Fairchild Semiconductor (and later a co-founder of Intel), wrote a paper describing a simple observation: every year, the number of components in an integrated circuit (computer chip) seemed to double. This roughly corresponds to a doubling of performance, as manufacturers can fit twice the “computing power” on the same-sized chip. Ten years later, Moore’s observation remained broadly accurate (though he revised the doubling period to roughly two years), and around this same time, the eminent Caltech professor Carver Mead popularized the principle under the title of “Moore’s law”. Although current technology is brushing up against theoretical physical limits of size (there is a theoretical “minimum size” transistor, limited by quantum mechanics), Moore’s law has more-or-less held steady throughout the last four and a half decades.
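To make the doubling arithmetic concrete, here is a short sketch. The base year, starting count, and doubling period are assumptions chosen for illustration (65,000 components is in the right ballpark for mid-1970s chips), not a fit to historical data:

```python
# Illustrative Moore's-law arithmetic: fixed-period doubling.
# base_count and doubling_years are assumptions for the example,
# not measured transistor data.

def projected_components(year: int, base_year: int = 1975,
                         base_count: int = 65_000,
                         doubling_years: float = 2.0) -> float:
    """Components per chip implied by a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1975, 1985, 1995, 2005):
    print(f"{year}: ~{projected_components(year):,.0f} components")
```

Fifteen doublings over thirty years is a factor of more than thirty thousand, which is why the trend appears as a straight line only on the logarithmic axes of charts like the one below.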

Moore’s Law, illustrated. Source: Our World in Data

Accelerating returns

This performance trend represents an exponential increase over time. Exponential change underpins Ray Kurzweil’s “law of accelerating returns” — in the context of technology, accelerating returns means that a technology improves at a rate proportional to its current quality. Does this sound familiar? It is exactly the kind of acceleration we anticipated in our initial scenario. This is what is meant by the concept of a singularity — once the conditions for accelerating returns are met, the advances they bring begin to spiral beyond our understanding and, quite likely, beyond our control.
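Written out as an equation (a standard exponential-growth derivation; the symbols Q, k, and Q_0 are labels introduced here for illustration, not Kurzweil’s notation), the law of accelerating returns says:

```latex
\frac{dQ}{dt} = kQ, \quad k > 0
\qquad \Longrightarrow \qquad
Q(t) = Q_0 \, e^{kt}
```

A quantity whose growth rate is proportional to its current value is, by definition, exponential: it doubles every ln(2)/k units of time, and no amount of waiting makes the curve level off on its own.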

Losing control

As AI will almost certainly depend on some digital computing substrate, the concept of accelerating returns applies readily to AI. As Murray Shanahan puts it:

Let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith. — Murray Shanahan, The Technological Singularity, MIT Press

However, losing control of an exponentially accelerating machine intelligence could have catastrophic consequences. In his excellent TED Talk, the world-renowned AI philosopher Nick Bostrom discusses the “control problem” of general AI and suggests that, though the advent of machine superintelligence remains decades away, it would be prudent to address its lurking dangers as far in advance as possible.

Nick Bostrom delves into the existential implications imposed onto humanity by machine superintelligence. Source: TED


In his talk, Bostrom drives the point home with a striking analogy: “The fate of [chimpanzees as a species] depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.”

— Ricky C.
