Tag Archives: AI

Immortality?

What defines immortality? If immortality means “living” beyond the grave as a physical presence with a personality and the ability to interact with the world, then computer science is on the verge of this scary yet fascinating phenomenon.

Image source: https://www.sciencealert.com/images/articles/processed/shutterstock_225928441_web_1024.jpg

What is it:

In the past few years, researchers have developed many different types of AI technology to capture and store human data, with the potential of building virtual reality (VR) replicas of the deceased. This technology is based on the idea of “augmented eternity,” where an AI program uses the technological imprint – past social media, for example – left behind by someone to build a digital replica of them. Lifenaut, a branch of the Terasem Movement, gathers human personality data for free in the hope of creating a foundational database to one day transfer into a robot or hologram. While this technology is still in its experimental stages, at least 56,000 people have already stored mind-files online, each containing the person’s unique characteristics, including their mannerisms, beliefs, and memories. According to researchers, in about fifty years millennials will have generated zettabytes (a zettabyte is a trillion gigabytes) of data over their lifetimes, which is enough to create a digital version of themselves.

How:

The prospective application of this technology is that loved ones may use robot reincarnation as a way to grieve for or commemorate someone who has passed away. VR replicas will be able to speak in the dead person’s voice, answer questions, and even perform simple tasks. They may be programmed with the person’s memories and personality, so family members could converse and interact with them dynamically.

https://www.youtube.com/watch?time_continue=89&v=KYshJRYCArE

Concerns:

Of course, digital-afterlife technology is a revolutionary concept that carries major ethical and practical implications. Some believe that VR replicas of loved ones are a normal, new way to mourn the deceased, similar to the ways people already use technology to remember their loved ones, such as watching videos or listening to voice recordings. The problem with this application is that it does not seem like a healthy way to grieve: allowing people to clutch onto digital personas of deceased individuals out of fear and delusion could effectively inhibit them from moving on with their lives. Another threat this AI technology poses is the potential for robots to achieve such high intelligence that they could replace the human race; some futurists thus believe it is essential to program robots with preventative chips to guard against this apocalyptic risk.

There are also significant social implications to consider with VR replicas. Should the right to create them be based solely on wealth? The prospect of people being able to buy immortality, even in digital form, is certainly problematic, as it perpetuates troubling societal disparity. Ultimately, there are far too many harmful individual and societal consequences of VR human replication for it to be a worthwhile or necessary AI innovation.

Do you believe in immortality?

“No, and one life is enough for me.” – Albert Einstein

~ Angela Wei

OpenAI Creates Text Generator Too Dangerous To Release

OpenAI, a Silicon Valley company devoted to developing artificial intelligence that benefits humanity, has recently created an algorithm so good at generating fake text that they have deemed it too dangerous to release.

The first step in creating the text generator, GPT-2, was to collect text from the internet by following the most up-voted links on Reddit. This yielded 40 gigabytes of human-curated training text for the algorithm. The next step was to train the model to compute probabilities for the most likely next words. Given a sentence to begin with, GPT-2 then suggests a string of words to follow, and the quality of its suggestions is freakishly good.
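To make the mechanics concrete, here is a minimal sketch of that predict-the-next-word loop, using the small GPT-2 checkpoint that OpenAI did release publicly and the Hugging Face transformers library. The prompt and sampling settings are my own illustrative choices, not anything from OpenAI:

```python
# A minimal sketch of next-word sampling with the publicly released small
# GPT-2 model, via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# At each step the model assigns a probability to every possible next token;
# sampling from that distribution repeatedly extends the prompt.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,  # sample instead of always taking the single likeliest token
    top_k=40,        # consider only the 40 most probable tokens at each step
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```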

As I mentioned in my last article, this capacity for people to use computers to generate fake content is bad news in the era of fake news. GPT-2 is yet another tool that could flood our communication channels with mass-produced misinformation.

Researchers from Cambridge University have created a browser-based game that lets you rise to power using an army of fake-news-generating computer bots. Incredibly, the researchers are using the data generated from the game to fuel research on media literacy and education. This is one of the ways we can fight back: becoming more educated and learning to spot fake news.

A second, more concerning way that researchers are fighting back is by using machine learning to spot machine-generated text. Researchers at MIT, IBM, and Harvard have created an algorithm called GLTR (Giant Language model Test Room), whose inner workings are similar to those of OpenAI’s GPT-2, but whose job is to calculate the probability that a given text was written by a computer. They have also made a website where you can try it out. This battle of the machines is quite concerning: there is no malicious actor at the moment, and yet the race is on, with researchers trying to outdo each other at a rapid rate.
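GLTR’s core trick can be sketched in a few lines: run a language model over the text and ask, at each position, how highly the model ranked the word that actually appears. The snippet below is my own illustration of that idea using the public GPT-2 model, not GLTR’s actual code:

```python
# Illustration of the idea behind GLTR (not GLTR's actual code): score each
# token of a passage by how highly GPT-2 ranked it among all candidates.
# Machine-generated text tends to be made of consistently high-ranked
# (very predictable) tokens, while human writing is more surprising.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, sequence_length, vocabulary_size)

# For every position, find the rank of the token that actually appears next.
probs = torch.softmax(logits[0, :-1], dim=-1)
sorted_ids = probs.argsort(dim=-1, descending=True)
ranks = (sorted_ids == ids[0, 1:].unsqueeze(-1)).nonzero()[:, 1]

for token, rank in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()),
                       ranks.tolist()):
    print(f"{token!r}: rank {rank + 1}")
```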

While artificial intelligence does not seem dangerous yet, the spinning wheel of progress already looks unstoppable, with advances racing forward faster than safety measures can be put in place.

 https://www.youtube.com/watch?v=0n95f-eqZdw

Many experts suggest creating safety measures before creating dangerous algorithms: Source

For those who are interested in hearing more, Siraj Raval has an extremely informative (and funny, and not too long) video describing the implications of GPT-2 and how it works.

~Danny

P.S. The game is really cool!

Also, if you’re at all into computers, Siraj Raval has a great channel! (Seriously).

Here’s a little comedic relief

AI vs Humans

“Siri, please write my SCIE 300 blog post for me.” Unfortunately, Siri does not yet have the capability to form conscious thought and compose an engaging response…but this idea may not be so far-fetched.

In recent studies, Artificial Intelligence (AI) systems from Alibaba and Microsoft performed better than humans on reading comprehension tests. Although this AI innovation threatens to displace some human jobs, its practical applications in customer service and other professional sectors show extraordinary potential for saving time and human effort.

Source: https://cumanagement.com/sites/default/files/2018-09/AI-human-heads.jpg

In the study, AI machines were subjected to Stanford University’s SQuAD, a reading comprehension test based on Wikipedia articles. Humans scored an average of 82.304, while Alibaba’s machine learning model scored 82.44 and Microsoft’s scored 82.65. I found this innovation interesting because reading comprehension is a complex task involving language understanding, critical thinking, and problem solving. The thought of computers surpassing humans in these areas both scares and fascinates me.

Alibaba’s AI software is a deep neural network model for Natural Language Processing (NLP) built on a Hierarchical Attention Network. It reads passages to identify the phrases most likely to contain answers. Currently, the model only works well with questions that have clear answers: if inquiries are too vague, or if no answer is clearly stated in the text, the system may fail. Despite these hiccups, the underlying technology has incredibly widespread impact. It is already being expanded and utilized in customer service roles such as call centers, food service, retail, and online inquiry management. Alibaba has already employed this technology in its AI-powered customer service chatbot, which answers millions of online shoppers’ questions.
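As a rough illustration of what a SQuAD-style question-answering model does, here is a sketch using a generic pretrained model from the Hugging Face transformers library; this is not Alibaba’s or Microsoft’s proprietary system, and the example passage is my own:

```python
# A rough illustration of SQuAD-style extractive question answering.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

context = (
    "Alibaba's deep neural network model scored 82.44 on Stanford's SQuAD "
    "reading comprehension test, slightly above the human average of 82.304."
)
result = qa(question="What did Alibaba's model score on SQuAD?", context=context)
print(result["answer"])  # the model extracts a span from the passage: "82.44"
```

Note how the answer is a span copied out of the passage, which is why vague questions with no clearly stated answer trip such systems up.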

Since Alibaba and Microsoft announced their results, there has been a looming fear that machines will take over human jobs. This new technology could indeed mean that routine jobs, even those requiring social interaction (like answering customer inquiries), could be codified into a series of machine-readable instructions.

As this technological automation occurs, companies may deploy more bots, potentially displacing human jobs. However, with current technology, AIs are not yet capable of fully understanding and responding to customers as a human would, and are thus unable to fully replace most jobs. Entirely new job sectors will also arise as the technology develops and grows, especially in fields such as data science and computer engineering. Looking further ahead, this innovation could lead to more advanced bots capable of tackling more complex problems, including social and political issues such as climate change or resource allocation.

– Angela Wei

Artificial Intelligence: Should we be concerned?

Faster, more efficient, and predictable: these are some of the qualities that make a computer better than humans at computation and data analysis. Ever since the first computer was built, the key difference between a human and a computer has been intelligence; it is the reason humans use computers and not the other way around. But if a computer had intelligence, to what extent would it affect humans? And on how large a scale?

The most common conception of artificial intelligence is a computer of superhuman intelligence capable of outthinking a human. In reality, much of this is already true. Take, for example, a complex game like chess: a chess grandmaster cannot beat DeepMind’s AlphaZero. In Go, the original AlphaGo, which defeated the human world champion, was itself beaten 100-0 by its successor, AlphaGo Zero. OpenAI’s bot managed to beat the world’s top Dota (an online multiplayer game) players in 1-v-1 matches, and it is on course to beat them in 5-v-5 matches, where the five players on the computer’s side are really a single AI.

Why should this be concerning? Professionals in these games have spent thousands of hours practicing; the computer has spent the equivalent of only a few hundred, if not less. The computer is not given winning strategies written into its code: it is allowed to discover them itself, an act of intelligence, and it can train tirelessly against itself to get better.
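That “training tirelessly against itself” idea can be sketched with a toy example: tabular Q-learning on tic-tac-toe, where one program improves purely by playing both sides. This is my own minimal sketch; AlphaZero-style systems add deep neural networks and tree search on top of the same self-play principle:

```python
# A toy illustration of self-play learning: tabular Q-learning on tic-tac-toe.
import random
from collections import defaultdict

Q = defaultdict(float)   # learned value of each (board state, move) pair
EPS, ALPHA = 0.1, 0.5    # exploration rate and learning rate

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_one_game():
    board, player, history = [" "] * 9, "X", []
    while legal_moves(board):
        state = "".join(board)
        if random.random() < EPS:  # occasionally explore a random move
            move = random.choice(legal_moves(board))
        else:                      # otherwise play the best move found so far
            move = max(legal_moves(board), key=lambda m: Q[(state, m)])
        history.append((state, move, player))
        board[move] = player
        champ = winner(board)
        if champ:  # nudge every move in the game toward the final outcome
            for s, m, p in history:
                reward = 1.0 if p == champ else -1.0
                Q[(s, m)] += ALPHA * (reward - Q[(s, m)])
            return
        player = "O" if player == "X" else "X"
    # draws end the loop with no reward signal in this simple sketch

for _ in range(50_000):  # the program trains tirelessly against itself
    play_one_game()
print(f"state-action pairs explored: {len(Q)}")
```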

Sebastian Thrun
Attribution: World Economic Forum [CC BY-SA 2.0], via Wikimedia Commons

The impact of artificial intelligence is not limited to games. Sebastian Thrun of Udacity (an online educational organization) and his colleagues have trained AI systems in various fields. One of them is an AI that drives a car autonomously, developed in a span of three months. Dermatologists, by contrast, train for several years to become proficient at identifying skin cancer. In late 2017, one of the world’s top dermatologists looked at a mole on a patient’s skin and deduced that it was not cancer. To back up the diagnosis, they ran Thrun’s AI (a separate system from the self-driving one) on a phone, and it concluded that the mole was skin cancer. A biopsy revealed an aggressive form of melanoma. Link

Elon Musk
Attribution: Steve Jurvetson [CC BY 2.0], via Wikimedia Commons

Why would this be a cause for concern? Elon Musk has been heavily involved in the field of artificial intelligence, and he has been recorded stating his concerns about AI on multiple occasions; he has claimed that AI is more dangerous than nuclear weapons. Link Why do some share this concern while others do not? This can be answered by explaining what AI is and what it is not.

AI in most cases deals with a specialized domain. It is trained through a process called deep learning. It can be trained to become better than humans, but only at specific tasks. For example, Thrun’s self-driving AI cannot control a motorcycle on the same road or beat someone at chess. An AI proficient in multiple domains does not exist at this time. Moreover, there is no governing body to monitor the development of AI.
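To make “specialized domain” concrete, here is a hedged sketch of what deep-learning training looks like: a small PyTorch network fit to one synthetic toy task of my own invention. The resulting weights are good at exactly this task and useless for anything else:

```python
# A minimal sketch of deep learning on one narrow task (synthetic data).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(500, 2)             # toy inputs with two features
y = (X[:, 0] * X[:, 1] > 0).long()  # the narrow task: do the features share a sign?

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):             # gradient-descent training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"accuracy on its one narrow task: {accuracy:.2f}")
```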

In conclusion, better communication of the science behind AI can help curb concerns about it and hopefully lead to the formation of a governing body.

This video describes the common misconceptions about artificial intelligence.
Attribution: TED Talks, via YouTube

https://youtu.be/B-Osn1gMNtw

Elon Musk is seen here expressing his concerns about AI.
Attribution: SXSW, via YouTube

The technological singularity: Science fiction or science future?

What would happen if we programmed a computer to design a faster, more efficient computer? Well, if all went according to plan, we’d get a faster, more efficient computer. Now, we’ll assign this newly designed computer the same task: improve on your own design. It does so, faster (and more efficiently), and we iterate on this process, accelerating onwards. Towards what? Merely a better computer? Would this iterative design process ever slow down, ever hit a wall? After enough iterations, would we even recognize the hardware and software devised by these ever-increasingly capable systems? As it turns out, these could potentially be some of the most important questions our species will ever ask.

In 1965, Gordon Moore, then director of research at Fairchild Semiconductor (and later a co-founder of Intel), wrote a paper describing a simple observation: every year, the number of components in an integrated circuit (computer chip) seemed to double. This roughly corresponds to a doubling of performance, as manufacturers can fit twice the “computing power” on the same-sized chip. Ten years later, Moore’s observation remained accurate, and around this same time an eminent Caltech professor popularized the principle under the title of “Moore’s law”. Although current technology is brushing up against theoretical physical limits of size (there is a theoretical “minimum size” transistor, limited by quantum mechanics), Moore’s law has more-or-less held steady throughout the last four and a half decades.
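Written out (a standard formalization of the observation, with the doubling period T left general, since Moore later revised it from one year to two), the component count after t years is:

```latex
N(t) = N_0 \cdot 2^{t/T}
```

With T = 2 years, four decades of doubling gives a factor of 2^20, roughly a million-fold increase in components per chip.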

Moore’s Law, illustrated. Source: Our World in Data

This performance trend represents an exponential increase over time. Exponential change underpins Ray Kurzweil’s “law of accelerating returns” — in the context of technology, accelerating returns mean that the technology improves at a rate proportional to its quality. Does this sound familiar? This is certainly the kind of acceleration we anticipated with computers designing computers. This is what is meant by the concept of a singularity — once the conditions for accelerating returns are met, those advances begin to spiral beyond our understanding, if not our control.
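In symbols (a common exponential-growth reading of the law, not a formula quoted from Kurzweil): if a technology’s quality Q improves at a rate proportional to Q itself, then

```latex
\frac{dQ}{dt} = kQ \quad\Longrightarrow\quad Q(t) = Q_0 e^{kt}
```

so the better the technology gets, the faster it gets better: the signature of a feedback loop with no built-in brake.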

This concept is perhaps most easily applied to artificial intelligence (AI):

Let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith. — Murray Shanahan, The Technological Singularity, MIT Press

Clearly, there is reason to wade cautiously into these teeming depths. In his excellent TED Talk, the world-renowned AI philosopher Nick Bostrom suggests that, though the advent of machine superintelligence remains decades away, it would be prudent to address its lurking dangers as far in advance as possible.

Source: TED

— Ricky C.