
Friday, June 05, 2015

Google Car

In this article, we talk about something a little different from what we have seen before: Google Cars, or more generally, self-driving cars. This differs from our previous examples, where we talked about doing one thing managed more or less by a single machine learning algorithm. Here, such a car is an agglomeration of new technologies, assembled so that it has everything it needs to drive itself.
The declared objective of a self-driving car is to reduce accidents, which is why Google did not stop at developing a car that merely assists drivers: they want the human to stop taking part in driving altogether. Of course, for the tests they ran, the car had a steering wheel to allow a human to take over in case of a problem, but the final design of the car will not offer such a possibility. This kind of car will also enable people who are physically disabled and cannot drive to use a car again without anyone's help. In this video, they took a nearly blind man for a ride in their car; he was the one sitting behind the steering wheel.

How does it work? I will not go into details, firstly because Google has not unveiled all the information about how its car works, and secondly because it might become a bit complicated for everyone, including me, to explain. So, quickly: if you have not yet seen a self-driving car, it has a device rotating on its roof. This device sends out laser beams in order to generate a detailed 3D map of the car's environment. We can see the result in the picture below, where every box represents an obstacle for the car.
Google's self-driving car using its laser-generated maps of the conditions around it to guide its path
To these laser beams we add several cameras, so that the car can detect a red light or a stop sign, and then some radar devices, so that the car can estimate precisely how far away an approaching obstacle is. Mix all of this with an algorithm that determines the driving behavior, and we get a self-driving car!
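
To make this concrete, here is a minimal sketch in Python of such a sense-then-decide loop. Everything here is hypothetical and invented for illustration; Google has not published the actual interfaces of its driving software.

    # Toy sketch of a perception/decision loop; not Google's actual software.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        distance_m: float       # from radar: how far away the obstacle is
        closing_speed: float    # m/s, positive if it is approaching us

    def plan_speed(obstacles, red_light_seen, current_speed):
        """Choose a target speed from the fused sensor inputs."""
        if red_light_seen:              # detected by the cameras
            return 0.0
        target = current_speed
        for ob in obstacles:            # detected by the lidar and radar
            # Keep at least a two-second gap to anything closing in on us.
            if ob.closing_speed > 0 and ob.distance_m / ob.closing_speed < 2.0:
                target = min(target, current_speed - 5.0)
        return max(target, 0.0)

    # A car 6 m ahead, closing at 4 m/s: slow down.
    print(plan_speed([Obstacle(6.0, 4.0)], red_light_seen=False, current_speed=15.0))
    # A red light: stop, whatever else is happening.
    print(plan_speed([], red_light_seen=True, current_speed=15.0))
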
Back in 2011, the technology was already doing amazing things, as this TED talk shows, but in 2015 the track record is even better. The technology is maturing, and the car can now handle most driving situations, such as lane closures in construction zones or even a bike running a red light. The 23 current self-driving test cars have been involved in only 12 minor accidents on public roads, and in each of these accidents either the self-driving car was not at fault or it was being driven manually.

Self-driving cars are getting ready, and their release to the public is planned for 2020. There are still lots of things to do, especially in terms of legislation. In the USA, only four states have so far allowed driverless cars, so most of the world is still not prepared for the introduction of self-driving cars. Let us hope it will be done in time!

Deep learning and neural networks

Today we focus on Deep Learning and on an important algorithm within it: the neural network. Deep Learning is a category of machine learning methods characterized by applying several layers of complex functions to some inputs in order to obtain outputs that solve a problem. While deep learning is highly complex and difficult to understand, the interesting thing about it is that it is inspired by neuroscience: our brain works the same way, applying complex functions to the electrical signals coming from our nerves. One of the algorithms trying to copy our brain is the neural network.
Modelling a simple neural network with one hidden layer

The first layer corresponds to the inputs, modelling the signals we get from our nerves. The last layer corresponds to the outputs, characterizing the solution of the problem the neural network is solving. The layers in between are called hidden layers. Each cell of these layers takes the outputs of the previous layer as arguments, applies a function to them, and passes the value of this function on to the next layer, until the output layer is reached. These cells represent the neurons of our brain communicating through synapses. What is remarkable about these algorithms is that they work pretty well for automatic speech recognition, natural language processing and facial recognition. These problems are hard to solve mathematically, using equations, yet when we know a language we easily understand someone talking to us, whatever their pitch or pronunciation. It shows how much closer we are getting to modelling a human brain and reaching Artificial Intelligence.
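
As a concrete illustration, here is a minimal sketch in Python of a forward pass through such a network: one hidden layer, with a sigmoid as the function each cell applies. The sizes and weights are arbitrary, just to show the mechanics; a real network would learn its weights from examples.

    import numpy as np

    def sigmoid(x):
        # The function each "neuron" applies to its combined inputs.
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    W_hidden = rng.normal(size=(4, 3))  # 3 inputs feeding 4 hidden cells
    W_output = rng.normal(size=(2, 4))  # 4 hidden cells feeding 2 outputs

    def forward(inputs):
        hidden = sigmoid(W_hidden @ inputs)  # hidden layer: combine the inputs
        return sigmoid(W_output @ hidden)    # output layer: combine hidden values

    print(forward(np.array([0.5, -1.0, 2.0])))  # two outputs between 0 and 1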

For example, neural networks have been used since the 80's to decipher the amounts written on the checks you give to the bank. More recently, thanks to the increased power of our computers, more complex neural networks can be trained, with up to one billion neurons and tens of hidden layers. In 2014, Facebook unveiled an algorithm called DeepFace that can recognize specific human faces in images around 97% of the time, even when those faces are partly hidden. A human would have a hard time being as good as this algorithm.

For more illustrations of what can be done with these deep learning algorithms, I recommend watching an incredible TED video by Jeremy Howard, the former president of Kaggle, a community and competition platform of over 200,000 data scientists. The last point he presents is unsupervised algorithms, in other words, algorithms that can learn without human help. Algorithms were developed this way because helping the machine learn by giving it labeled examples can be very costly in human work. These kinds of algorithms are amazing because, after going through lots of images, they can separate photos of sleeping cats from photos of jumping cats. Without human intervention, the only thing left to do is to give the corresponding labels to the classified photos so that the computer can communicate with us in words.
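
The flavor of this can be sketched with a classic unsupervised algorithm, k-means clustering. This is a simplification that assumes each image has already been reduced to a feature vector; the deep networks Howard describes learn those features too, but the grouping-without-labels step looks roughly like this.

    import numpy as np

    def kmeans(points, k, steps=20, seed=0):
        """Group unlabeled feature vectors into k clusters; no labels needed."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(steps):
            # Assign each point to its nearest center...
            labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
            # ...then move each center to the mean of its assigned points.
            centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        return labels

    # Two made-up blobs standing in for "sleeping cat" and "jumping cat" features.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
    print(kmeans(data, k=2))  # the two groups are separated without any labels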

Now these machines show no human behavior, but add this technology to glasses and we get augmented-reality glasses capable of giving you information about who or what is standing in front of you.


Friday, May 29, 2015

Watson, the new Jeopardy Champion

Watson is a supercomputer developed by IBM to be good at question answering. And it is not only good at question answering in general, but also at playing Jeopardy, where the questions are sometimes tricky, if not total nonsense, even for humans.
What does it have to do with Artificial Intelligence? Watson is fully autonomous: during a game of Jeopardy it does not need any human intervention to understand the question and choose an answer for it.

We could say that Watson is the natural evolution of Deep Blue. Deep Blue was able to play chess, which is, strategy aside, easy to implement on a computer. After this success, IBM wanted to aim for another challenge. They first thought about developing a machine able to pass the Turing test, but felt that the public would not be that receptive to such an achievement. That is when they thought about the well-known television quiz Jeopardy. At that time, Ken Jennings was on his winning streak, which is still the longest ever reached. They made the bet to develop Watson, a computer able to win against such a champion.

To reach this goal, Watson has to understand natural language, including sentences that are not easy to understand even for humans, find the clues in them, and provide the corresponding answer.

At first, Watson had a basic behavior. It tried to find the key words in the clue given by the Jeopardy host, looked through its large database for texts related to these words, tried to extract possible responses from them, each with an associated probability, and chose as its final answer the one with the largest probability. If this seems an easy task, it was already a challenge, because Watson had to do it in a limited amount of time so that it could answer before the humans it was playing against. Parallelization was therefore used to make it as fast as possible.
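
A heavily simplified sketch of that first pipeline might look like the Python below. The two-document database and the keyword-overlap score are stand-ins invented for illustration; Watson's real retrieval and evidence scoring were far more sophisticated and have not been fully published.

    # Toy retrieve-and-rank question answering loop; not IBM's actual code.
    DOCUMENTS = {
        "Who is Bram Stoker?":
            "bram stoker wrote dracula inspired by wallachia and moldavia",
        "What is Transylvania?":
            "transylvania is a region of romania tied to the dracula legend",
    }

    def answer(clue):
        keywords = set(clue.lower().split())
        scored = []
        for candidate, text in DOCUMENTS.items():
            # Score = keyword overlap, a crude stand-in for a probability model.
            overlap = len(keywords & set(text.split()))
            scored.append((overlap / len(keywords), candidate))
        return max(scored)  # (confidence, best candidate response)

    clue = "William Wilkinson's account of Wallachia and Moldavia inspired this author"
    print(answer(clue))  # the highest-probability candidate wins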

When this had been done, Watson could barely beat a ten-year-old. It had not yet reached the level of a Jeopardy champion: there were too many questions it answered wrong, even if it was fast. The problem was often that Watson did not understand what kind of answer it should give. For example, when a month was required, Watson could give the name of a person as its answer. That is when machine learning came in. Watson was fed lots of Jeopardy questions together with their right answers. An associated algorithm then enabled it to give more importance to some words compared to others, and to understand what kind of answer was expected for a given category.
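
One can picture this step as a small supervised classifier over past clues, trained to predict the expected answer type. The sketch below uses a naive word-count score, and both the training clues and the type labels are invented for illustration; Watson's actual type detection was far richer.

    from collections import Counter, defaultdict

    # Invented training pairs: (clue, expected answer type).
    TRAINING = [
        ("in this month we celebrate independence", "MONTH"),
        ("this month begins the school year", "MONTH"),
        ("this author wrote dracula", "PERSON"),
        ("this author won the nobel prize", "PERSON"),
    ]

    word_counts = defaultdict(Counter)
    for clue, answer_type in TRAINING:
        word_counts[answer_type].update(clue.split())

    def predict_type(clue):
        # Score each answer type by how often its past clues used these words.
        words = clue.lower().split()
        scores = {t: sum(c[w] for w in words) for t, c in word_counts.items()}
        return max(scores, key=scores.get)

    print(predict_type("This month Columbus reached America"))  # MONTH, not PERSON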

Was it good enough? Not yet. The last critical point was to add some online learning, that is, learning in real time during a Jeopardy game. Sometimes, Watson would give the same answer that a contestant had given previously, even when that answer had been judged wrong. Watson had to eliminate such answers from its computations. Online learning also helped the computer understand, from the first answers revealed in a given category, what kind of answer that category expected.

Watson competing against the two greatest Jeopardy champions. Image: IBM

In early 2011, Watson was ready for its well-known exhibition match. Its opponents were Ken Jennings, who had the longest unbeaten run at 74 winning appearances, and Brad Rutter, who had earned the biggest prize pot with a total of $3.25 million. Ironically, some considered Rutter and Jennings themselves to be Jeopardy-winning machines.

By the end of the first show of the special exhibition match, the score was tied and no one could guess the final outcome. Then Double Jeopardy started. Watson powered through the questions, winning even with answers it was far from convinced about and placing odd bets that paid off. By the end of the second episode, it had $25,000 more than its closest opponent, Rutter.

At the end of the third episode, all three correctly answered the last question, "William Wilkinson's 'An account of the principalities of Wallachia and Moldavia' inspired this author's most famous novel", with "Who is Bram Stoker?", but Jennings appended his response with: "I for one welcome our new computer overlords". He and Rutter had lost to Watson.

Now IBM is trying to adapt Watson for more business-like uses. For example, they are trying to transform it into a healthcare assistant for doctors: after you describe your symptoms, Watson would find the disease you probably have and propose the medicines you should use.

Friday, May 08, 2015

Deep Blue, a champion at chess

Like the Turing test I talked about in my article "Genesis", playing chess well is also considered a measure of how intelligent a machine is, mostly because it used to be a measure of how intelligent a man was. Moreover, chess is a simple game with well-defined rules, so it is easy to make a computer play chess. It is much more complex, however, to teach a computer how to win at chess (i.e. to implement a strategy), especially against a world chess champion.

Deep Blue was a project at IBM that started in 1989 with Feng-hsiung Hsu and Murray Campbell. It was based on their previous work on a chess-playing machine called ChipTest. Deep Blue became their new project to build a computer able to beat any human at chess. Deep Blue was a supercomputer for its time, with a computing power of 11.38 GFLOPS. Thanks to this computing power, Deep Blue was able to explore up to 200 million possible chess positions per second.

Here, machine learning was used so that Deep Blue could learn from 700,000 grandmaster games. This technique is nowadays well known and broadly used by scientists, especially with the advent of data mining, to uncover patterns and hidden relationships in large databases. Machine learning enabled Deep Blue to learn abstract notions from those games, quantified by parameters such as how important a safe king position is compared to a space advantage in the center.
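
In spirit, this combines a game-tree search with a weighted evaluation function. Below is a minimal, purely illustrative sketch in Python: a minimax search with alpha-beta pruning over a toy game tree, scoring leaves with hand-picked feature weights. The features and weights are invented; Deep Blue's real evaluation had thousands of parameters and ran largely on custom chess chips.

    # Toy minimax search with alpha-beta pruning and a weighted evaluation.
    # Features and weights are invented; Deep Blue's evaluation was far richer.
    from dataclasses import dataclass, field

    WEIGHTS = {"material": 1.0, "king_safety": 0.6, "center_space": 0.3}

    @dataclass
    class Position:
        features: dict                                 # feature scores here
        children: list = field(default_factory=list)   # positions one move away

    def evaluate(pos):
        # Weighted sum of features, like "king safety vs. space in the center".
        return sum(w * pos.features.get(f, 0.0) for f, w in WEIGHTS.items())

    def minimax(pos, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if depth == 0 or not pos.children:
            return evaluate(pos)
        if maximizing:
            best = float("-inf")
            for child in pos.children:
                best = max(best, minimax(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if beta <= alpha:      # prune: the opponent avoids this line
                    break
            return best
        best = float("inf")
        for child in pos.children:
            best = min(best, minimax(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

    # A tiny two-ply tree: trade king safety for material, or grab the center.
    line_a = Position({}, [Position({"material": 1.0, "king_safety": -1.0})])
    line_b = Position({}, [Position({"center_space": 2.0})])
    print(minimax(Position({}, [line_a, line_b]), depth=2))  # 0.6: takes the center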

The first match of six games between the reigning world champion Garry Kasparov and Deep Blue started on February 10, 1996. Though Deep Blue won the first game, Kasparov won three of the following games and drew the two others, taking the match 4–2. It was a loss for Deep Blue, and yet it had become the first machine to ever win a game against a reigning world champion. After some improvements, a rematch took place in May 1997. Deep Blue became the first machine to win a match against a reigning world champion, with a final score in the six-game rematch of 3.5–2.5 (wins count 1 point, draws count 0.5 point).

After the loss, Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, a sign that Deep Blue was a success: an intelligent machine had been created. The project ended after this win, but it inspired other IBM projects, as we will see with Watson in the next article.

Friday, May 01, 2015

Genesis

When did people start to think that Artificial Intelligence was something reachable for humanity?
It is hard to put a precise date on it, but artificial intelligence surely became more than just a dream with the remarkable work of the mathematician Alan Turing in the 30's. He was a genius who theorized a machine that could run any algorithm: the well-known Turing machine. This theoretical machine can be considered as a computer with infinite memory.

For those who have seen the movie The Imitation Game (spoiler alert), the machine Turing invented to break the Nazis' Enigma code in World War II was one of the first ancestors of our computers. But beyond the building of such a machine, what interests us more is its functionality. The "bombe" was used to find the daily settings of the German cipher machine that encrypted messages. It tried out the possible settings on encrypted messages whose plaintext was known, and stopped when the plaintext, freshly ciphered with a candidate setting, matched the original encrypted message. Indeed, the machine's behavior was rather simple, since it only had to answer the question "does our current ciphered message correspond to the original encrypted message?" and pass to the next setting if it did not. However, this was a job that took far too long for a human, and here a machine was able to surpass a man.
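
The logic can be sketched as a brute-force known-plaintext search. The "cipher" below is a simple letter rotation, nothing like the real Enigma, and the names are illustrative; the point is only the shape of the loop: try a setting, re-encrypt the known plaintext, compare, move on.

    # Brute-force known-plaintext attack on a toy rotation cipher.
    # A huge simplification: Enigma had vastly more settings, and the bombe
    # tested them electromechanically rather than one by one in software.

    def encrypt(plaintext, setting):
        # Toy stand-in for the cipher machine: shift each letter by `setting`.
        return "".join(chr((ord(c) - 65 + setting) % 26 + 65) for c in plaintext)

    def find_setting(known_plaintext, intercepted_ciphertext):
        for setting in range(26):           # try every possible daily setting
            if encrypt(known_plaintext, setting) == intercepted_ciphertext:
                return setting              # the messages match: stop here
        return None

    intercepted = encrypt("WETTERBERICHT", 7)          # "weather report", a known crib
    print(find_setting("WETTERBERICHT", intercepted))  # recovers the setting: 7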

From this starting point, Computer Science was born. Several scientists invented algorithms which allowed computers to "think". These algorithms constitute the basis of machine learning theory, in which a machine, after "learning" the correlation between a set of input values and the resulting output values, is able to choose what will most probably be the output for new inputs. Machine learning is currently the most efficient tool humans have to mimic the human brain.
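
That input-to-output idea fits in a few lines of Python. Here is a minimal sketch, assuming a roughly linear relationship between inputs and outputs, that learns from known pairs and then predicts the output for a new input.

    import numpy as np

    # Known input/output pairs: the machine "learns" their correlation...
    inputs = np.array([1.0, 2.0, 3.0, 4.0])
    outputs = np.array([2.1, 3.9, 6.2, 7.8])    # roughly outputs = 2 * inputs

    # Least-squares fit of a line: find the slope a and intercept b.
    a, b = np.polyfit(inputs, outputs, deg=1)

    # ...and can then predict the most probable output for a new input.
    print(a * 5.0 + b)                          # close to 10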

In parallel, the Turing Test was invented. This test consists of a conversation between a human and a machine, observed by an evaluator who does not know which one is which. If the evaluator is not reliably able to tell the machine from the human, then the machine passes the test. It can be considered a measure of the artificial intelligence of a machine. Though only a few machines have attempted this test, it shows how strongly researchers believed that artificial intelligence had become something possible. (See: CAPTCHAs)

In the next articles we will see what intelligent machines have been created and how artificial intelligence could evolve in the years to come.

Monday, April 27, 2015

What is Artificial Intelligence?

When speaking about Artificial Intelligence, one often thinks of humanoid robots, as in the movies A.I. or I, Robot. But we are still far from having such technology in our hands. In this blog we will explain how artificial intelligence has developed through the inventions and machines used by our society. Thus, before going into detail, we need to define our central subject: what is artificial intelligence?

Defined as the "study and design of intelligent agents", artificial intelligence characterizes a non-living thing able to make decisions in order to maximize its chances of success. Artificial intelligence is not only about reproducing human reactions but, more generally, about making "intelligent" decisions.

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race." However, machines still have a long way to go before taking over humanity.

This blog won't talk about military projects, which have obviously helped the development of artificial intelligence a lot thanks to almost unlimited funds. I made this choice because we have enough "public" examples, and mostly because I am one of those who think that intelligent machines would be more useful designed for everyone than designed for military applications. I hope that this blog will convince you of the importance of artificial intelligence and of how helpful it can become for humans.