Tag Archives: Computer Science

Moving Beyond Silicon (Part Three): The Holy Grail, Quantum Computing.

“This is a revolution not unlike the early days of computing. It is a transformation in the way computers are thought about.”

– Ray Johnson, Lockheed Martin

In Part One of this series, we discussed how photonics could extend Moore’s Law by allowing conventional computers to send information at light speed. In Part Two, we discussed how graphene could extend Moore’s Law by creating computers that operate thousands of times faster, cheaper, cooler, and in a way that is friendlier to the environment. But what if the solution isn’t harnessing a new technology or implementing some new material? What if the only way to make Moore’s Law obsolete is to go back to the drawing board and rethink how information is computed? Welcome to the world of Quantum Computing.

A chip constructed by D-Wave Systems, a company in Burnaby, B.C., designed to operate as a 128-qubit quantum optimization processor. Credit: D-Wave Systems (Wikimedia Commons)

In order to appreciate the impact of quantum computing, it will first be necessary to understand how it differs from classical computing. To get a decent overview, please watch the following short explanation by Isaac McAuley.

YouTube Preview Image

Now, with a better understanding of Quantum Computing and how it differs from classical computing, we can ask, “Why is this development so important?”

In order to answer this, consider that Quantum Computers can solve certain problems much more efficiently than our fastest classical computers can. For instance, suppose you have a budget for buying groceries and you want to work out which items at the store will give you the best value for your money; a quantum computer can solve this task much faster than a classical one. But let’s try a less trivial example. Suppose you take that very same problem, only now you are a hydro company with a limited amount of electricity, and you want to find the best way of supplying power to everyone in your city at all hours of the day. Or consider that you might be a doctor who wants to destroy as much of the cancer in a patient’s body as possible, using the smallest amount of radioisotopes and compromising as little of their immune system as possible. All of these are optimization problems that a quantum computer could solve at breakneck speed. Think about how much time and money is spent trying to solve these problems, and how much scientific progress could be made if all of them could be solved exponentially faster. For further consideration, check out the following video by Lockheed Martin (one of the first buyers of a Quantum Computer) below:

YouTube Preview Image
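To make the grocery-budget example concrete, here is a minimal classical sketch of that kind of optimization problem: a tiny knapsack search that simply tries every possible basket. The items and prices are made up for illustration; the point is that the number of combinations doubles with every extra item, which is why classical computers struggle as these problems grow and why quantum optimizers like the D-Wave machine are aimed at them.

```python
from itertools import combinations

# Hypothetical grocery items: (name, price in dollars, "value" to the shopper).
items = [("rice", 4, 7), ("apples", 3, 4), ("cheese", 6, 8),
         ("coffee", 9, 10), ("beans", 2, 3), ("salmon", 12, 15)]
budget = 15

# Brute force: try every subset of items (2**len(items) possibilities).
best_value, best_basket = 0, ()
for r in range(len(items) + 1):
    for basket in combinations(items, r):
        cost = sum(price for _, price, _ in basket)
        value = sum(v for _, _, v in basket)
        if cost <= budget and value > best_value:
            best_value, best_basket = value, basket

print("Best basket:", [name for name, _, _ in best_basket], "value =", best_value)
```

Six items mean only 64 combinations, but every additional item doubles that count, and real scheduling or treatment-planning problems involve thousands of variables.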

Now that we are familiar with how Quantum Computing differs from classical computing, and with what it could do for scientific research, the question one might ask is, “Why do we not have Quantum Computers yet?” The simplest answer is that, while some Quantum Computers are for sale at exorbitant prices (the D-Wave One 128-qubit computer costs a hefty $10,000,000 USD), they remain highly prone to errors.

Recently, researchers at the Martinis Lab at the University of California, Santa Barbara developed a new technology that allows a Quantum Computer to check itself for errors without compromising how the system operates. One of the fundamental obstacles when working with Quantum Computers is that measuring a Qubit changes its inherent state. Therefore, any operation performed on a Qubit, such as checking that the Qubit stores the information you want, would defeat the purpose of the system altogether.

Why? Well, because Quantum Physics, that’s why.

This new system preserves information by storing it across several Qubits, each backing up its neighbours, so that the Qubits work together to ensure nothing is lost. According to researcher Julian Kelly, this development gives Quantum Computers the ability to

“pull out just enough information to detect errors, but not enough to peek under the hood and destroy the quantum-ness”

This development could give Quantum Computers the reliability needed to ensure they work as intended, and it could also drive down their price, since much of the cost of a Quantum Computer lies in the environmental controls that surround the machine to prevent errors from occurring.
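As a loose classical analogy to “pulling out just enough information to detect errors,” here is a minimal sketch of a three-bit repetition code. It checks the parity of neighbouring bits to locate a flipped bit without ever asking what the stored value is. The real experiment uses quantum parity measurements on superconducting Qubits, so this is only an illustration of the principle, not the Martinis Lab’s actual protocol.

```python
# Three-bit repetition code: one logical bit is stored redundantly as (b, b, b).
# Error checks compare neighbouring bits (parities) instead of reading the data.

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def syndrome(bits: list[int]) -> tuple[int, int]:
    """Parities of neighbouring pairs: (0, 0) means no error detected; anything
    else points to the flipped bit without revealing the stored value."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits: list[int]) -> list[int]:
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1  # repair the single flipped bit
    return bits

word = encode(1)
word[2] ^= 1                         # a stray error flips one bit
print("syndrome:", syndrome(word))   # -> (0, 1): error located, data never read
print("repaired:", correct(word))    # -> [1, 1, 1]
```

Roughly speaking, the quantum version performs these parity checks with extra measurement Qubits, which is what lets the system spot errors without collapsing the state of the data Qubits.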

If you are interested in learning more about Quantum Computing, I highly recommend the following articles as introductions to what will surely be a revolution in Computer Science:

1. Quantum Computing for Everyone by Michael Nielsen (a co-author of the standard text on Quantum Computing)
2. The Limits of Quantum Computers by Scott Aaronson (an MIT Professor of Computer Science) in Scientific American
3. The Revolutionary Quantum Computer that May Not Be Quantum at All from Wired Science

If you have any questions, please feel free to comment. I hope you all enjoyed this three-part series on how the future of computation may move beyond Moore’s Law. Whichever way you look at it, the future looks bright indeed!

– Corey Wilson

Autopilot Vehicles

Sitting in a car that drives itself sounds like a dream, and that dream is closer to coming true as more and more manufacturers introduce semi-autonomous systems that help human drivers drive better and more safely. A car that operates entirely by itself would be a huge step, but the industry is getting there through many smaller steps. Autopilot appears to be the next generation of vehicle technology, and many manufacturers are rushing to develop their own self-driving cars. This technology will reduce human error and therefore reduce the chance of traffic accidents.

Although self-driving cars may seem to belong only in science fiction, some of the underlying technology is already in use today, such as highway lane assist and adaptive cruise control. So how does autopilot work in a car? First of all, the car has to sense its surroundings, using advanced imaging systems to gather information about the vehicle’s environment, which is then cross-checked against detailed GPS and map data. Other manufacturers use only camera- and laser-based sensor systems, which are much more affordable but do not perform as well.
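As a rough illustration of what cross-checking sensor data against a map might look like, here is a minimal sketch that fuses a camera-based estimate of the car’s position in its lane with a GPS/map-based estimate and flags any disagreement. Every function name, weight, and threshold here is hypothetical; production systems use many more sensors and far more sophisticated filtering, but the basic idea of comparing independent estimates is the same.

```python
# Hypothetical cross-check: two independent estimates of the car's lateral
# offset from the lane centre, in metres.

CAMERA_WEIGHT = 0.7          # trust the camera more in good conditions
DISAGREEMENT_LIMIT = 0.5     # metres; beyond this, fail safe

def fuse_lane_offset(camera_offset: float, map_gps_offset: float):
    """Blend the two estimates, or flag a fault if they disagree too much."""
    if abs(camera_offset - map_gps_offset) > DISAGREEMENT_LIMIT:
        return None, "sensor disagreement: alert driver, disengage autopilot"
    fused = CAMERA_WEIGHT * camera_offset + (1 - CAMERA_WEIGHT) * map_gps_offset
    return fused, "ok"

print(fuse_lane_offset(0.20, 0.32))   # close agreement -> fused estimate
print(fuse_lane_offset(0.20, 1.10))   # large mismatch  -> hand back control
```

The disagreement branch is the kind of fail-safe that keeps today’s systems “semi” autonomous: when the independent estimates stop agreeing, the human driver takes over.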

How a Tesla Model S P85D autopilot system works

Autopilot cars have had some success in testing, but they are still at an early stage of real-world use. Governments and the public may not yet be ready to accept the technology, but self-driving cars could profoundly affect, and even change, our lives. With computer systems running the cars, vehicles could travel at higher speeds and the existing road network could accommodate more of them. This would greatly reduce the time spent in transit, and the time we do spend in the car could be used for other things. Another major advantage of autonomous vehicles is that they take human error out of driving. Each year, many traffic accidents are caused by driver distraction (texting, phoning, loss of concentration, falling asleep). These would no longer be a problem once autopilot cars arrive. The video below shows how an Audi A7 parks itself very efficiently.

 

Audi A7 auto parking system

 

With more research and continued improvements in technology, autopilot cars may no longer be a dream in a few years’ time. Once the technology matures, it will be up to the public and governments to decide whether to accept such an idea. In the near future, we may be able to watch TV or play games on our way to work or school in a self-driving car.

Works Cited

Can I See Your License, Registration and C.P.U.?

Autos on autopilot: the evolution of the driverless car.

Car autopilot would end text danger while driving, says Volvo. 

Moving Beyond Silicon (Part Two): The Unlimited Potential of Graphene

In Part One of this series, I discussed an overarching trend in computer science called Moore’s Law. This law (think of it as a law of computer nature) states that roughly every two years, the overall processing power of the conventional computer will double. While this may be exciting for consumers who cannot wait to get their hands on a faster computer for the same price, the consequences of this law have never been more challenging for the computer engineers who create those devices.

The most difficult of these challenges is that as more components are packed into the central processing unit (CPU) of a computer, each component must shrink until it eventually approaches the size of a single atom! Once at that hard limit, there will simply be no room left on the microchip for more components. Consequently, the way we manufacture computers will need to be drastically reimagined if technological innovation is to continue in the foreseeable future.

Moore’s Law can be directly linked to technological innovation. As our computers become more powerful, cutting-edge technologies proliferate. Credit: Humanswlord (WordPress)

That said, while many novel options for how to compute information have emerged, some scientists wonder whether the real problem lies in what we compute our information with. In particular, what if extending Moore’s Law for the next century meant only changing the material we make our computers from? Enter the miracle material, graphene.

Put simply, graphene is a very thin layer of carbon, measuring only one atom thick. These single carbon atoms are packed together tightly to form what is known as a hexagonal honeycomb lattice.

Graphene in a hexagonal honeycomb lattice. Each carbon atom (represented by the “C”) is perfectly bonded to its neighbours. Credit: Karl Bednarik (Wikimedia Commons).

This unique structure of carbon atoms makes graphene the thinnest, lightest, strongest, and best heat- and electricity-conducting material known to science. Not only that, but because carbon is the fourth most abundant element in the universe, graphene could very well be the most sustainable material as well. However, it isn’t what graphene is that makes it so spectacular, but what it can do when put to the task of computation.

In 2013, IBM demonstrated its first generation of graphene-based integrated circuits (ICs). Just last year, IBM announced another breakthrough with its next generation of ICs built with graphene. In this new generation, IBM layered graphene into the channels of a microchip (the spots where electricity is conducted and electrons are moved around). By applying graphene in this way, IBM found the microchip to be 10,000 times faster than the current silicon alternative, which uses copper. From this, IBM claims that graphene-based electronics have the potential to reach speeds upwards of 500 GHz (that is, 500 billion cycles per second, well over a hundred times the clock speed of the conventional laptops sold today). This is made possible because graphene has little to no electrical resistance, which means it can move electrons around the processor far more efficiently than copper ever could.

That said, there are still many hurdles to clear before graphene makes it into your next mobile device. For one, graphene-based ICs remain incredibly difficult to build using traditional processes for manufacturing microchips, and IBM has stated that current methods of producing graphene for use in ICs remain expensive and inefficient. Even so, it is only a matter of time before manufacturing processes are streamlined and the great graphene revolution in computer science begins!

For more information on graphene, check out this video by SciShow below.

YouTube Preview Image

Vive la graphene!

– Corey Wilson

Moving Beyond Silicon: Taking on Moore’s Law with Photonics

“It can’t go on forever. The nature of exponentials is that you push them out and eventually disaster happens”

This stark comment, made in 2005 by Gordon E. Moore, a co-founder of Intel, has served as a wake-up call for computer scientists, who have known for nearly forty years that mainstream manufacturing processes for computer circuitry will soon become obsolete.

Ever since its original conception in 1965, Moore’s Law has predicted that roughly every two years, the number of transistors put into a computer’s central processing unit (CPU) will double. Each time the number of transistors doubles, so does the computer’s overall processing power. Why should you care about this trend? First, progress in the development of the increasingly intelligent technologies that affect our lives relies heavily on it: over time, we depend on our computers becoming faster while staying cool, small, and economical to operate, so that we can keep innovating with them.
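As a back-of-the-envelope illustration of what doubling every two years means, the sketch below projects transistor counts forward from the roughly 2,300 transistors of the 1971 Intel 4004. The projection is just the exponential trend, not any manufacturer’s roadmap.

```python
# Moore's Law as arithmetic: transistor counts double roughly every two years.
START_YEAR, START_COUNT = 1971, 2_300   # Intel 4004, a commonly cited baseline

def projected_transistors(year: int) -> float:
    doublings = (year - START_YEAR) / 2
    return START_COUNT * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
# Fifty years of doubling every two years is 2**25: a 33-million-fold increase,
# which is why modern chips carry tens of billions of transistors.
```

That a fifty-year-old trend line still lands in the right ballpark is exactly why Moore’s observation gets treated as a “law,” and why its approaching limits are taken so seriously.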

Much of human progress, from consumer electronics to medical breakthroughs, relies on Moore’s Law continuing for the foreseeable future. Credit: Ray Kurzweil and Kurzweil Technologies, Inc. (Wikimedia Commons)

Second, Intel has predicted that as early as 2021, new strategies for designing computer hardware will need to be implemented, or the development of exciting new technologies will be stunted dramatically. Consequently, computer scientists have been researching the future of CPU manufacturing, and the prospects are encouraging.

An electro-optics researcher experiments with routing lasers. Credit: Adelphi Lab Center (Wikimedia Commons)

 

Take, for instance, silicon photonics, a development in CPU design that allows signals to be processed within the computer using lasers that guide photons, rather than traditional electronic circuits that pass information using electrons. Silicon photonics advances computing in two key ways. First, a hybrid silicon laser can encode information as pulses of light and send those pulses through waveguides to transmit information quickly to other parts of the computer.

IBM’s new technology allows electrical signals to be combined with the light produced by a laser, creating short pulses of light. These pulses can then be routed around the inside of a computer to transmit information at speeds much faster than modern computers can manage today. Credit: ibmchips (Flickr Commons)

 

Second, a laser can be passed through specially designed optical logic gates, made from crystals with a non-linear refractive index, to perform arithmetic and logical operations within the computer’s processing unit at light speed.

By using specially manufactured crystals, IBM’s new technology can create logic gates, the fundamental circuitry that makes decisions inside the CPU, that use light rather than traditional electronic circuitry. Not only does this allow fast-as-light speeds, but the circuitry also runs cooler than a modern computer’s and uses far less electricity. Credit: Programmazione.it2010 (Flickr Commons)
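To make concrete what these gates compute, regardless of whether they are built from transistors or from light, here is a minimal sketch of a half adder, one of the simplest arithmetic circuits, built from the same AND and XOR logic that an optical gate would have to reproduce. The gate functions are standard; treating them as stand-ins for optical hardware is purely illustrative.

```python
# A half adder from basic logic gates: the same logical functions an optical
# gate made from non-linear crystals would need to implement.

def AND(a: int, b: int) -> int:
    """Outputs 1 only when both inputs are 1."""
    return a & b

def XOR(a: int, b: int) -> int:
    """Outputs 1 when exactly one input is 1."""
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Adds two one-bit numbers, returning (sum, carry)."""
    return XOR(a, b), AND(a, b)

# Truth table: chaining gates like these is how a CPU performs arithmetic.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chains of such gates, switched billions of times per second, are what a CPU’s arithmetic ultimately reduces to; the research described here aims to switch them with light instead of electrons.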

 

In December 2012, IBM announced that it had designed and created a hybrid silicon photonics-electronic chip, and, what’s more, it had managed to do so using the same monolithic manufacturing process used to make CPUs today.

This breakthrough by IBM in silicon photonics brings two key benefits. First, there is the difference in performance: where traditional CPUs move data around the computer at mere gigabytes per second, tests on the new IBM photonics chip show speeds in the terabytes per second. From this, IBM predicts that communication between computers, or between CPUs within a computer, could become roughly a thousand times faster. Second, because IBM was able to use a manufacturing process not too different from the way CPUs are made today, the technology could be offered commercially quickly and cheaply, and integrate with current computer hardware almost seamlessly.

So what does this mean for the future of computation? Will silicon photonics contribute to the forthcoming revolution in computer manufacturing? Tell me what you think and stay tuned for part two when I look at developments in new materials that will shape the computers of the future.

For a more detailed breakdown of silicon photonics, check out the presentation given by John Bowers, Director of the Institute for Energy Efficiency and Kavli Professor of Nanotechnology at the University of California, Santa Barbara. Bowers’ presentation at the 2014 European Conference on Optical Communications covers some of the finer details of this exciting new technology in the video below:

YouTube Preview Image

– Corey Wilson

Artificial Intelligence: Humanity’s greatest achievement or biggest threat?

Imagine a world where our every need is catered to by an army of sentient machines: robots in charge of menial tasks such as cleaning and cooking, responsible for driving us safely to work, and tasked with growing our food more efficiently. These are just some of the potential uses of artificial intelligence, a field of computer science whose goal is to create intelligent machines that can learn and act on their own. Some argue that as we develop more powerful artificial intelligence, we will be able to tackle problems such as world poverty and hunger.

But not everyone is keen on these technologies, including renowned theoretical physicist Stephen Hawking and inventor/entrepreneur Elon Musk, who has gone as far as saying that “…with artificial intelligence we are summoning the demon”.

YouTube Preview Image

Video uploaded to YouTube by Kostas Keramaris

Critics of these technologies are not questioning the potential benefits, but are weighing them against the inherent risks to humanity as a whole. One of the biggest risks is what is referred to as the technological singularity: a point at which artificial intelligence exceeds human capacity and control, which could potentially end civilization or humanity as we know it. In a nutshell, the machines would be able to learn at a rate beyond the limitations of human biology, and once outside the control of their creators, their behaviour may not be as intended and may even be harmful to mankind.

Will AI ever reach singularity? The Sci-Fi series Battlestar Galactica explores this possibility, depicting a future in which humans are in a perpetual state of war with their former robotic minions.
Photo credit: “Big Frakkin Toaster” by ⣫⣤⣇⣤, licensed under CC BY 2.0

Doomsday scenarios aside, advances in AI technology will undoubtedly have other negative effects on society. A recent report by an American non-partisan think tank looked at how AI would affect the workforce. Based on the opinions of over 1,900 experts, the report predicts that by the year 2025, AI and robotics will permeate every aspect of our lives, and foresees that increasing automation will put downward pressure on blue-collar jobs and, to some extent, white-collar jobs.

Having AI take over everyday tasks such as driving will also have interesting ramifications for our legal frameworks – who bears the responsibility if a vehicle driven by an algorithm crashes and kills a human? Should the blame rest with the software engineer, or should we take a robot to court?

The potential benefits of AI research are clear, so how can we ensure that its risks are dealt with accordingly? The Future of Life Institute, a volunteer-run group, has been trying to address these issues for a number of years. They have recently put forth a proposal delineating some of the research that could be done to ensure AI remains beneficial and aligned with human interests. They are pushing to increase research on ways to make AI safer, and to better understand its effects on society. Their proposal has been backed by top AI researchers and academics, including Stephen Hawking and Elon Musk, who has also decided to donate $10 million USD to fund such research.

What’s your take on this issue? Do you cozy up with Siri every night and welcome the era of robotics with open arms? Or do you lie awake at night in fear of the robot uprising?

Hasta la vista baby? Some prominent scientific minds are not so keen on Artificial Intelligence.
(Image: Wikimedia Commons)

Written by Armando Dorantes Bollain y Goytia