
OpenAI Creates Text Generator Too Dangerous To Release

OpenAI, a Silicon Valley company devoted to developing artificial intelligence that benefits humanity, has recently created an algorithm so good at generating fake text that they have deemed it too dangerous to release.

The first step in creating the text generator, GPT-2, was to collect text from the internet by following the most up-voted links on Reddit. This produced 40 gigabytes of human-selected training text for the algorithm. The next step was to train it to compute probabilities over what the next most likely words would be. So, given a sentence to begin with, GPT-2 recommends a string of words to follow, and the quality of those continuations is freakishly good.
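To make that next-word idea concrete, here is a minimal sketch using the small GPT-2 model that OpenAI did release, via the Hugging Face transformers library. This is just an illustration of the mechanism, not OpenAI's own code; the prompt and sampling settings are arbitrary examples I chose.

```python
# A minimal sketch of next-word prediction with the small, publicly
# released GPT-2 model, using the Hugging Face "transformers" library.
# Illustrative only; the prompt and sampling settings are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Scientists have recently discovered that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Extend the prompt by repeatedly sampling from the model's
    # probability distribution over the next word.
    output_ids = model.generate(
        input_ids,
        max_length=60,
        do_sample=True,   # sample rather than always taking the single most probable word
        top_k=40,         # only consider the 40 most probable candidates at each step
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```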

As I mentioned in my last article, this capacity to use computers to generate fake content is worrying in an era already struggling with fake news. GPT-2 is yet another tool that could flood our communication channels with fabricated stories.

Researchers from Cambridge University have created a browser-based game that lets you rise to power using an army of fake-news-generating bots. Incredibly, the researchers are using the data generated by the game to fuel research on media literacy and education. This is one of the ways we can fight back: by becoming more educated and learning to spot fake news.

A second, and perhaps more concerning, way that researchers are fighting back is by using machine learning to spot machine-generated text. Researchers at MIT, IBM, and Harvard have created an algorithm called GLTR (Giant Language Model Test Room), whose inner workings are similar to those of OpenAI's GPT-2, but whose job is to calculate the probability that a given text was written by a computer. They have also made a website where you can try it out. This battle of the machines is quite concerning: even though there is no malicious actor on the scene yet, the race is already on, with researchers trying to outdo one another at a rapid rate.
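For the curious, here is a rough sketch of the idea behind that kind of detection, under the assumption that it boils down to checking how highly a language model ranks each word that actually appears in the text. The helper function below is my own illustration, not GLTR's code; the real tool adds visualisation and further statistics.

```python
# A rough sketch of the detection idea: run a language model over some text
# and record, for each word, the rank the model assigned it among all
# possible next words. Machine-generated text tends to consist almost
# entirely of words the model itself ranks very highly.
# (Illustrative only; not the actual GLTR implementation.)
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """For each token after the first, return the rank the model gave it."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, sequence_length, vocabulary_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual_next = ids[0, pos + 1]
        # Rank 1 means the word that appeared was the model's top prediction.
        rank = int((logits[0, pos] > logits[0, pos, actual_next]).sum()) + 1
        ranks.append(rank)
    return ranks

sample = "The quick brown fox jumps over the lazy dog."
ranks = token_ranks(sample)
print("Fraction of words in the model's top 10 predictions:",
      round(sum(r <= 10 for r in ranks) / len(ranks), 2))
```

If nearly every word lands in the model's top predictions, that is a hint a machine may have written the text; human writing tends to surprise the model more often.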

While artificial intelligence does not seem to be dangerous in any direct way yet, it does look like the spinning wheel of progress is already unstoppable, with advances racing forward faster than safety measures can be put in place.

 https://www.youtube.com/watch?v=0n95f-eqZdw

Many experts suggest creating safety measures before creating dangerous algorithms: Source

For those who are interested in hearing more, Siraj Raval has an extremely informative (and funny, and not too long) video describing the implications of GPT-2 and how it works.

~Danny

P.S. The game is really cool!

Also, if you’re at all into computers, Siraj Raval has a great channel! (Seriously).


Here’s a little comedic relief