Category Archives: Science Communication

The technological singularity: Science fiction or science future?

What would happen if we programmed a computer to design a faster, more efficient computer? Well, if all went according to plan, we’d get a faster, more efficient computer. Now, we’ll assign this newly designed computer the same task: improve on your own design. It does so, faster (and more efficiently), and we iterate on this process, accelerating onwards. Towards what? Merely a better computer? Would this iterative design process ever slow down, ever hit a wall? After enough iterations, would we even recognize the hardware and software devised by these ever-more-capable systems? As it turns out, these questions have extremely important ramifications for the field of artificial intelligence (AI) and for humanity’s continued survival.

Conceptual underpinnings

In 1965, Gordon Moore, then director of research and development at Fairchild Semiconductor (and later a co-founder of Intel), wrote a paper describing a simple observation: every year, the number of components in an integrated circuit (computer chip) seemed to double. This roughly corresponds to a doubling of performance, as manufacturers can fit twice the “computing power” on the same-sized chip. Ten years later, Moore’s observation remained accurate, and around this same time, Caltech professor Carver Mead popularized the principle under the title of “Moore’s law”. Although current technology is brushing up against theoretical physical limits of size (there is a theoretical “minimum size” of transistor, limited by quantum mechanics), Moore’s law has more-or-less held steady for over five decades.
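To see how quickly this doubling compounds, here is a minimal sketch. The starting point (the Intel 4004 of 1971, with roughly 2,300 transistors) is a real data point, but the clean fixed doubling period is an idealization for illustration:

```python
# Idealized Moore's law: component counts double every fixed period.
# The 1971 baseline (~2,300 transistors, Intel 4004) is real; the
# perfectly regular two-year doubling is a simplification.

def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Idealized transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, round(transistor_count(year)))
```

Even under this simple model, forty years of doubling multiplies the count by over a million, which is why exponential trends so quickly outrun intuition.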

Moore’s Law, illustrated. Source: Our World in Data

Accelerating returns

This performance trend represents an exponential increase over time. Exponential change underpins Ray Kurzweil’s “law of accelerating returns” — in the context of technology, accelerating returns mean that the technology improves at a rate proportional to its quality. Does this sound familiar? It is certainly the kind of acceleration we anticipated in our initial scenario. This is what is meant by the concept of a singularity — once the conditions for accelerating returns are met, the advances they bring begin to spiral beyond our understanding and, quite likely, beyond our control.
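The law of accelerating returns can be stated as a simple differential equation, dq/dt = k·q: quality improves at a rate proportional to itself. A minimal numerical sketch (with arbitrary illustrative constants) makes the consequence concrete:

```python
# Numerical sketch of "accelerating returns": quality q improves at a
# rate proportional to q itself (dq/dt = k*q). The constants k, q0, and
# the timestep are arbitrary values chosen for illustration.

def simulate(q0=1.0, k=0.5, dt=0.01, t_end=10.0):
    q, t = q0, 0.0
    history = []
    while t < t_end:
        q += k * q * dt      # improvement rate scales with current quality
        t += dt
        history.append((t, q))
    return history
```

The result tracks the exact solution q(t) = q0·e^(kt): growth that is not merely steady but exponential, which is precisely the runaway behaviour the singularity argument rests on.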

Losing control

As AI will almost certainly depend on some digital computer substrate, the concept of accelerating returns is readily applied to AI. However, losing control of an exponentially accelerating machine intelligence could have catastrophic consequences. In his excellent TED Talk, the world-renowned AI philosopher Nick Bostrom discusses the “control problem” of general AI and suggests that, though the advent of machine superintelligence likely remains decades away, it would be prudent to address its lurking dangers as far in advance as possible.

Nick Bostrom delves into the existential implications imposed onto humanity by machine superintelligence. Source: TED

 

In his talk, Bostrom makes a poignant illustrative analogy: “The fate of [chimpanzees as a species] depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.”

— Ricky C.

Recording the Cell? New technologies further uncover mysteries surrounding the cell.

Does anyone really know what life is like inside of a cell? Sure, we can all say that the mitochondria is the powerhouse of the cell, and we’ve learned mitosis more times than we can count, but do we really know about the intricacies of day-to-day cellular processes? Historically, the answer has been an overwhelming no, but that is something the researchers behind CAMERA are hoping to change.

CAMERA, or CRISPR-mediated analog multi-event recording apparatus, is a tool developed by David Liu and Weixin Tang of Harvard University to record molecular interactions within a cell, all of which are stored on the cell’s DNA. This new development allows scientists to observe, and therefore clarify, the processes that contribute to such things as the emergence of cancer, aging, environmental damage, and even embryonic development. CAMERA is only one of many developments based on the gene-cutting technology known as CRISPR-Cas9.

Thyroid Cancer Cell Line. Courtesy of NASA’s Marshall Space Flight Centre and Flickr Commons. 

What is CRISPR-Cas9, you ask? Well, it’s basically a really small pair of scissors, so small that it can even cut DNA. CRISPR-Cas9, or CRISPR for short, is a technology based on a natural defence mechanism found in bacteria, re-engineered for editing genomes. It can cut the double-helix strand of DNA, allowing researchers to easily alter DNA sequences and modify gene expression. Some of the major implications of this include the possible correction of genetic defects, and the treatment and prevention of cancer and other diseases.

Video recreating a CRISPR-mediated genome editing. Courtesy of McGovern Institute for Brain Research at MIT .

So how did scientists develop a cellular recording device from this cutting tool? When CRISPR cuts a DNA strand to alter the sequence, the strand will naturally repair itself, but in doing so it can occasionally introduce errors that make the targeted gene inactive. These random errors can sometimes be used as markers, mapping out the cell’s pattern of differentiation. Liu and Tang took this information and set out to regulate it, thereby creating a more detailed, continuous record of a cell’s life, documenting not only its responses to external factors but also the severity of each response and how long it lasts.
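A hypothetical toy model can convey the idea of an analog recorder written into DNA. This is not Liu and Tang’s actual chemistry; the sequence, edit rate, and “*” marker below are invented purely for illustration. The key property it captures is that the fraction of edited bases grows with both the strength and the duration of a stimulus:

```python
import random

# Toy model of analog cellular recording, loosely inspired by CAMERA.
# Each timestep of stimulus exposure gives each unedited base in a
# "recording locus" a small chance of being edited ('*'), so the edited
# fraction encodes cumulative exposure. All values here are invented.

def expose(locus, stimulus_strength, duration, rate=0.02, rng=random):
    """Return the locus after exposure; '*' marks an edited base."""
    p = rate * stimulus_strength          # per-timestep edit probability
    bases = list(locus)
    for _ in range(duration):
        for i, b in enumerate(bases):
            if b != "*" and rng.random() < p:
                bases[i] = "*"
    return "".join(bases)

def readout(locus):
    """Fraction of edited bases: an analog record of cumulative exposure."""
    return locus.count("*") / len(locus)
```

In this sketch, sequencing the locus afterwards (here, just counting the “*” bases) recovers not only whether a stimulus occurred but roughly how strong and how long it was, which is what makes the record “analog” rather than a simple on/off flag.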

Flowchart of CRISPR mediated gene alterations. Image courtesy of Flickr Commons

At this point in time, CAMERA is able to document cellular responses to light exposure, antibiotics, viral infections, and internal molecular interactions in as few as 10 cells. It can also record multiple events at once, making it an impressive candidate for future medical technologies involved in screening embryos for a wide variety of mutations during development. Despite these impressive feats, Liu and Tang are still working towards pinpointing the recording down to a single cell, which would one day allow scientists to observe the processes of each cell individually and efficiently isolate any mutations. Another big step is proving it works to the same detailed extent in the body of a living mammal as it does in a small cell group in a petri dish. There is still a lot to be done before we can confidently say we know how cells operate, but CAMERA is a step in the right direction.

-Tenanye Haglund

AI vs Humans

“Siri, please write my SCIE 300 blog post for me.” Unfortunately, Siri does not yet have the capability to form conscious thought and craft an engaging response…but this idea may not be so far-fetched.

In recent studies, Artificial Intelligence (AI) systems from Alibaba and Microsoft performed better than humans on reading comprehension tests. Although this AI innovation threatens to displace some human jobs, the practical applications of this technology in customer service and other professional sectors show extraordinary potential for saving time and human effort.

Source: https://cumanagement.com/sites/default/files/2018-09/AI-human-heads.jpg

In the study, AI machines were subjected to Stanford University’s SQuAD, a reading comprehension test based on Wikipedia articles. Humans scored an average of 82.304, while Alibaba’s machine learning model scored 82.44 and Microsoft’s scored 82.65. I found this innovation interesting because reading comprehension is a complex task involving language understanding, critical thinking, and problem solving. The thought of computers surpassing humans in these areas both scares and fascinates me.
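The SQuAD scores quoted above are “exact match” percentages: the fraction of questions for which a system’s answer, after light normalization, matches a human-written answer exactly. The sketch below is an illustrative approximation of that metric, not the official evaluation script, though it mirrors the same spirit (lowercasing, stripping punctuation and articles):

```python
import re
import string

# Simplified sketch of SQuAD-style "exact match" scoring. This is an
# illustrative approximation, not the official evaluation script.

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """1.0 if the prediction matches any reference answer, else 0.0."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))
```

Averaging `exact_match` over a full test set and multiplying by 100 gives a score comparable in form to the ~82-point figures quoted above.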

Alibaba’s AI software is a deep neural network model for Natural Language Processing (NLP) based on a Hierarchical Attention Network. It reads a passage to identify phrases that could contain potential answers. Currently, the model only works well with questions that have clear answers; if inquiries are too vague, or if no clear answer exists in the text, the system may not work. Despite these hiccups, the potential impact of this underlying technology is incredibly widespread. It is already being expanded and utilized in customer service roles, such as call centres, food service, retail, and online inquiry management. Alibaba has already employed this technology in its AI-powered customer service chatbot, which answers millions of online shoppers’ questions.

Since Alibaba and Microsoft announced their AIs’ abilities, there has been a looming fear that machines will take over human jobs. This new technology could indeed allow us to codify routine jobs, even those that require social interaction (like answering customer inquiries), into a series of machine-readable instructions.

As this technological automation occurs, companies may deploy more bot technology, potentially displacing human jobs. However, with the current technology, AIs are not yet capable of fully understanding and responding to customers as a human could, and are thus unable to fully replace most jobs. Entirely new job sectors will also arise as technology develops and grows, especially in fields such as data science and computer engineering. Looking further, this innovation could lead to more advanced bots capable of solving more complex problems, including social and political issues such as climate change or resource allocation.

– Angela Wei


No Resistance: An Introduction to Superconductivity

Technologies like magnetometry (the measurement of magnetism) and magnetic resonance imaging rely on the strength of magnetic fields. With an increasing need for experimental precision and control, new physics is sought to develop stronger electromagnets. One way is to eliminate a material’s electrical resistance, a phenomenon known as superconductivity. Herein we discuss what superconductivity is.

In physics we learn that electrons may be excited into higher energy states. We can excite electrons using energy carried by light (known as photoexcitation).

Semiconductors respond to photoexcitation. These are materials whose electrical resistance falls between that of insulators and conductors. Their electronic behaviour, the way that electrons “move” through the material, is often temperature dependent. All light carries energy, and if this energy is absorbed by matter, the matter heats up. It is this heat that excites electrons, so we can use light to change the electronic behaviour of semiconductors. This makes them useful objects of study.

For simplicity, we treat these semiconductors as patterned arrangements of atoms (called a lattice). Where there are “gaps” in the arrangement, for not every lattice is densely packed, we imagine varying densities of electrons. It is reasonable to imagine these electrons as a cloud that pervades the atomic arrangement. If we excite the semiconductor with light, this electron cloud may change, thereby changing the semiconductor’s electronic behaviour.

Where does superconductivity arise? As mentioned, semiconductors are temperature dependent, so they respond to heating (in our example, by way of photoexcitation). What if we cool a material instead? As a general rule, the electrical resistance of a semiconductor increases as its temperature decreases. However, when certain materials are cooled to a temperature near absolute zero (below their critical temperature), their electrical resistance vanishes entirely. All magnetic field lines, like those seen with iron filings dropped around a bar magnet, are expelled from the material’s interior (a phenomenon known as the Meissner effect).
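A toy sketch can make this two-regime behaviour concrete. The functional form and numbers below are invented for illustration; the default critical temperature of 4.2 K is roughly that of mercury, the first superconductor discovered (in 1911):

```python
# Toy model of the resistance behaviour described above. The functional
# form and numbers are invented for illustration; Tc = 4.2 K is roughly
# the critical temperature of mercury, the first known superconductor.

def resistance(T, Tc=4.2):
    """Toy resistance (arbitrary units) at temperature T (kelvin)."""
    if T < Tc:
        return 0.0              # below Tc: resistance vanishes entirely
    return 1.0 + 100.0 / T      # above Tc: resistance grows as T falls
```

Cooling from room temperature, the toy resistance climbs steadily as T falls, then drops abruptly to exactly zero below the critical temperature, mirroring the sharp transition described above.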

“Levitation of a magnet on a superconductor” (source: Wikimedia Commons, available under CC BY-SA 3.0)

This bizarre property of matter, though not universal, is a direct consequence of a lack of electron excitations. Recall that excited electrons (like those in an electronic circuit) emit energy as they relax. This energy is absorbed by matter, so energy is lost as heat. Electrons also need somewhere (i.e. a higher energy state) to be excited towards. This “somewhere” is unique to the material, so the material determines how and where electrons are excited.

“Overview of superconducting critical temperatures for a variety of superconducting materials since the first discovery in 1911” (source: Wikimedia Commons, available under CC BY-SA 4.0)

Supercooled materials lose many pathways to these higher energy states, so their electrons are never excited unintentionally and no energy is lost as heat. Since electrical resistance in these materials is temperature dependent, a supercooled and thereby “heat-less” material has zero electrical resistance. It is superconductive.

– Eric Easthope

The systematic study of candidate superconductors (like bismuth selenide) is ongoing at UBC’s Stewart Blusson Quantum Matter Institute.

Source: General Chemistry: Principles, Patterns, and Applications (2012). Saylor Academy. Available here under CC BY-NC-SA 3.0.

Artificial Intelligence: Should we be concerned?

Fast, efficient, and predictable. These are some of the qualities that make a computer better than humans at computation and data analysis. Ever since the first computer was made, the key difference between a human and a computer has been intelligence. It is the reason humans use computers and not the other way around. However, if a computer were to have intelligence, to what extent would it affect humans? And on how large a scale?

The most common conception of artificial intelligence is a computer of superhuman intelligence capable of outthinking a human. In reality, much of this is true. Take, for example, a complex game like Go: the world’s best players cannot beat DeepMind’s AlphaGo (an AI). AlphaGo was itself beaten 100-0 by its successor, AlphaGo Zero. OpenAI’s bot managed to beat the world’s top Dota (online multiplayer game) players in 1-v-1 games. It is on course to beating them in 5-v-5 games, where the five players on the computer’s side are really copies of one system.

Why should this be concerning? Professionals in these games have spent thousands of hours practicing. The computer has spent the equivalent of only a few hundred, if not fewer. The computer is not given strategies for these games in its code; it is left to form them itself, an act of intelligence. And it can train tirelessly against itself to get better.

Sebastian Thrun
Attribution: World Economic Forum [CC BY-SA 2.0], via Wikimedia Commons

The impact of artificial intelligence is not limited to games. Sebastian Thrun of Udacity (an online educational organization) and his colleagues have trained AI in various fields. One of them is an AI that drives cars autonomously, developed in a span of three months. Dermatologists, by contrast, train for several years to become proficient at identifying skin cancer. In late 2017, one of the world’s top dermatologists looked at a mole on a patient’s skin and deduced that it was not cancerous. To back their diagnosis, they used Thrun’s AI (different from the self-driving AI) through their phone, which concluded that it was skin cancer. A biopsy revealed an aggressive form of melanoma. Link

Elon Musk
Attribution: Steve Jurvetson [CC BY 2.0], via Wikimedia Commons

Why would this be a cause for concern? Elon Musk has been heavily involved in the field of artificial intelligence, and he has been recorded stating his concerns about AI on multiple occasions. He has claimed that AI is more dangerous than nuclear weapons. Link Why do some share this concern while others do not? This can be answered by explaining what AI is and what it is not.

AI in most cases deals with a specialized domain. It is trained through a process called deep learning. It can be trained to surpass humans, but only at specific tasks. For example, Thrun’s self-driving AI cannot control a motorcycle on the same road or beat someone at chess. An AI proficient in multiple domains does not exist at this time. Moreover, there is no governing body to monitor the development of AI.

In conclusion, better communication of the science behind AI can help curb concerns over it and, hopefully, lead to the formation of a governing body.

This video describes the common misconceptions about artificial intelligence.
Attribution: TED Talks, via YouTube

https://youtu.be/B-Osn1gMNtw

Elon Musk is seen here expressing his concerns about AI.
Attribution: SXSW, via YouTube

Combating Climate Change with Robotic Jellyfish

The backbone of any diverse ecosystem is a healthy coral reef. Image from Wikimedia Commons

A quick dive beneath the ocean’s surface reveals a completely different world. Our oceans’ coral reefs house some of nature’s most complex, diverse, and lively aquatic life. Alas, with global warming raising ocean temperatures, much of this coral is dying at an alarming rate.

Be that as it may, within this bustling community you might come across a robotic jellyfish or two. Have no fear, these ones don’t sting! In fact, these devices may be our solution to combating climate change.

What are robotic jellyfish?

The robotic jellyfish is a device that was developed by Erik Engeberg and his team of mechanical engineers at Florida Atlantic University. This robot mimics the gentle movements of a real jellyfish and collects data on ocean temperatures via built-in sensors. Ultimately, this allows for the study of the hidden impacts of climate change at sea.

The robotic jellyfish propelling itself gently through the ocean. Image from JENNIFER FRAME, NICK LOPEZ, OSCAR CURET AND ERIK D. ENGEBERG/IOP PUBLISHING

Can this robot save our reefs?

Yes! In fact, the Great Barrier Reef recently experienced widespread coral death following mass “bleaching”, a process in which stressed corals expel the algae they depend on, lose their colour, and often die. Consequently, the aquatic life that depended on coral as shelter from predators died off as well. The creation of the robotic jellyfish has since allowed scientists to develop better measures to protect these reefs from further damage.

Coral reefs become lacklustre and dull after bleaching. Bleached reefs no longer provide shelter for aquatic life. Image from Wikimedia Commons

How were coral reefs studied before?

In the past, drones were deployed to collect data on marine life; however, they were very destructive. For instance, drones produce a lot of noise, which can scare off marine life. On top of that, their propellers take in ocean water quite forcibly, tearing off coral, an essential habitat for these animals.

The soft movements of wild jellyfish were what inspired Engeberg and his team to develop quieter technology to monitor coral reefs. The robotic jellyfish allows us to collect data without posing a threat to animals or destroying the reef.

Underwater drones were used in the past. However, their propellers were quite noisy and posed a threat to the coral reefs. Image from Wikimedia Commons

The Future of the Robotic Jellyfish

Though the robotic jellyfish is still a work in progress, it has given scientists a better understanding of how to tackle the ongoing fight against climate change. To give you a better visual and understanding of the robotic jellyfish, this YouTube video summarizes the robot and all its technicalities:

-Christina Rayos
