Tag Archives: Artificial Intelligence

Immortality?

What defines immortality? If immortality is defined by “living” beyond the grave as a physical body with a personality and ability to interact with the world, then computer science is on the edge of this scary yet fascinating phenomenon.

https://www.sciencealert.com/images/articles/processed/shutterstock_225928441_web_1024.jpg

What is it:

In the past few years, researchers have developed many different types of AI technology to capture and store human data, with the potential of building Virtual Reality replicas of the deceased. This AI technology is based on the idea of “augmented reality,” where an AI programme uses the technological imprint – past social media – left behind by someone to build a digital replica of them. Lifenaut, a branch of the Terasem Movement, for example, gathers human personality data for free with the hope of creating a foundational database to one day transfer into a robot or hologram. While this technology is still in its experimental stages, at least 56,000 people have already stored mind-files online, each containing the person’s unique characteristics, including their mannerisms, beliefs, and memories. According to researchers, in about fifty years, millennials will have reached a point in their lives where they will have generated zettabytes (one zettabyte is a trillion gigabytes) of data, which is enough to create a digital version of themselves.

How:

The prospective application of this technology is that loved ones may use robot reincarnation as a way to grieve or commemorate someone who passed away. VR replicas will be able to speak with the same voice as the dead person, ask questions, and even perform simple tasks. They may be programmed to contain memories and personality, so family members could dynamically converse and interact with them.

https://www.youtube.com/watch?time_continue=89&v=KYshJRYCArE

Concerns:

Of course, digital-afterlife technology is a revolutionary concept that brings major ethical and practical implications. Some believe that VR replicas of loved ones are a normal, new way to mourn the deceased, similar to current ways people use technology to remember their loved ones, such as watching videos or listening to voice recordings. The problematic part of this application is that it does not seem like a healthy way to grieve. Allowing people to clutch onto digital personas of deceased individuals out of fear and delusion could effectively inhibit people from moving on with their lives. The other consequence that this AI technology threatens is the potential of robots achieving high intelligence, becoming so advanced that they could replace the human race. Some futurists thus believe that it is essential to embed chips with preventative technology into robots to battle this apocalyptic risk. There are also significant social implications to consider with VR replicas. Should the right to create these replicas be based solely on wealth? The prospect of people having the ability to buy immortality, even in digital form, is certainly problematic, as it perpetuates troubling societal disparity. Ultimately, there are far too many harmful individual and societal consequences of VR human replication technology for it to be a worthwhile or necessary AI innovation.

Do you believe in immortality?

“No, and one life is enough for me.” – Albert Einstein

~ Angela Wei

Artificial Intelligence: Should we be concerned?

Faster, more efficient, and predictable. These are some of the qualities that make a computer better than humans at computation and data analysis. Ever since the first computer was made, the key difference between a human and a computer has been intelligence. It is the reason humans use computers and not the other way around. However, if a computer were to have intelligence, to what extent would it affect humans? And on how large a scale?

The most common conception of artificial intelligence is a computer of superhuman intelligence capable of outthinking a human. In reality, much of this is already true. Take, for example, a complex game like chess: a chess grandmaster cannot beat DeepMind’s AlphaZero. In Go, AlphaGo Zero beat the original AlphaGo (the program that defeated the human world champion) 100 games to 0. OpenAI’s bot managed to beat the world’s top Dota 2 (an online multiplayer game) players in 1-v-1 matches. It is on course to beating them in 5-v-5 games, where the five players on the computer’s side are really a single AI system.

Why should this be concerning? Professionals in these games have spent thousands of hours practicing. The computer has spent only a few hundred, if not fewer. The computer does not have human strategies written into its code; it is allowed to form them on its own, an act of intelligence. The computer can train tirelessly against itself to get better.

Sebastian Thrun
Attribution: World Economic Forum [CC BY-SA 2.0], via Wikimedia Commons

The impact of artificial intelligence is not limited to games. Sebastian Thrun of Udacity (an online educational organization) and his colleagues have trained AI in various fields. One of these is an AI that drives cars autonomously, developed in a span of three months. Dermatologists, by contrast, train for several years to become proficient at identifying skin cancer. In late 2017, one of the world’s top dermatologists looked at a mole on a patient’s skin and deduced that it was not cancer. To back their diagnosis, they ran the mole through Thrun’s skin-cancer AI (separate from the self-driving AI) on a phone, which concluded that it was skin cancer. A biopsy revealed an aggressive form of melanoma. Link

Elon Musk
Attribution: Steve Jurvetson [CC BY 2.0], via Wikimedia Commons

Why would this be a cause for concern? Elon Musk has been heavily involved in the field of artificial intelligence, and he has been recorded stating his concerns about AI on multiple occasions. He has claimed that AI is more dangerous than nuclear weapons. Link Why do some share this concern while others do not? This can be answered by explaining what AI is and what it is not.

AI in most cases deals with a specialized domain. It is trained through a process called deep learning. It can be trained to surpass humans, but only at specific tasks. For example, Thrun’s self-driving AI cannot control a motorcycle on the same road or beat someone at chess. An AI proficient in multiple domains does not exist at this time. Moreover, there is no governing body to monitor the development of AI.

In conclusion, better communication of the science behind AI can help curb concerns over it and hopefully lead to the formation of a governing body.

This video describes the common misconceptions about artificial intelligence.
Attribution: TED Talks, via YouTube

https://youtu.be/B-Osn1gMNtw

Elon Musk is seen here expressing his concerns about AI.
Attribution: SXSW, via YouTube

The technological singularity: Science fiction or science future?

What would happen if we programmed a computer to design a faster, more efficient computer? Well, if all went according to plan, we’d get a faster, more efficient computer. Now, we’ll assign this newly designed computer the same task: improve on your own design. It does so, faster (and more efficiently), and we iterate on this process, accelerating onwards. Towards what? Merely a better computer? Would this iterative design process ever slow down, ever hit a wall? After enough iterations, would we even recognize the hardware and software devised by these ever-increasingly capable systems? As it turns out, these could potentially be some of the most important questions our species will ever ask.

In 1965, Gordon Moore, who would go on to co-found Intel, wrote a paper describing a simple observation: every year, the number of components in an integrated circuit (computer chip) seemed to double. This roughly corresponds to a doubling of performance, as manufacturers can fit twice the “computing power” on the same-sized chip. Ten years later, Moore’s observation remained accurate, and around this same time, the eminent Caltech professor Carver Mead popularized the principle under the title of “Moore’s law”. Although current technology is brushing up against the theoretical physical limits of size (there is a theoretical “minimum size” of transistor, limited by quantum mechanics), Moore’s law has more-or-less held steady throughout the last four and a half decades.
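As a rough illustration of what a fixed doubling period implies (the starting figure of ~2,300 transistors for a 1971-era chip and the two-year period are illustrative assumptions, not figures from this post), the trend can be sketched as:

```python
# Toy model of Moore's law: transistor counts double every ~2 years.
# Real chips deviate from this idealized exponential curve.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Idealized transistor count under a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Each decade multiplies the count by 32 (five doublings), which is why exponential trends so quickly dwarf their starting point.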

Moore’s Law, illustrated. Source: Our World in Data

This performance trend represents an exponential increase over time. Exponential change underpins Ray Kurzweil’s “law of accelerating returns” — in the context of technology, accelerating returns mean that the technology improves at a rate proportional to its quality. Does this sound familiar? This is certainly the kind of acceleration we anticipated with computers designing computers. This is what is meant by the concept of a singularity — once the conditions for accelerating returns are met, those advances begin to spiral beyond our understanding, if not our control.
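The “rate proportional to quality” idea can be made concrete with a toy simulation (the constants here are arbitrary illustration values, not taken from Kurzweil): a system that gains a fixed amount per step grows linearly, while one whose gain is proportional to its current capability grows exponentially.

```python
# Sketch of "accelerating returns": capability q improves at a rate
# proportional to q itself (dq/dt = k*q), versus a constant-rate improver.

def simulate(steps=50, k=0.1, dt=1.0):
    q_linear, q_accel = 1.0, 1.0
    for _ in range(steps):
        q_linear += k * dt            # fixed gain each step
        q_accel += k * q_accel * dt   # gain proportional to current capability
    return q_linear, q_accel

linear, accel = simulate()
print(f"constant improver:     {linear:.1f}x")
print(f"accelerating improver: {accel:.1f}x")
```

After fifty steps the constant improver has grown 6-fold while the accelerating one has grown more than a hundredfold, which is the runaway dynamic the singularity argument rests on.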

This concept is perhaps most easily applied to artificial intelligence (AI):

Let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith. — Murray Shanahan, The Technological Singularity, MIT Press

Clearly, there is reason to wade cautiously into these teeming depths. In his excellent TED Talk, the world-renowned AI philosopher Nick Bostrom suggests that, though the advent of machine superintelligence remains decades away, it would be prudent to address its lurking dangers as far in advance as possible.

Source: TED

— Ricky C.