Artificial intelligence learns to predict what will happen next by creating videos of the future


It is true that we have reached a point where artificial intelligence is developing capabilities that might seem fictitious or even futuristic. Beyond measuring its abilities and possibilities, and leaving aside the classic 'Skynet' joke, these advances could serve in applications within our daily life.

Today MIT, through its Computer Science and Artificial Intelligence Laboratory (CSAIL), is presenting a new project: a deep learning-based algorithm able to predict the future. Granted, it is only a rudimentary attempt to guess what will happen next in a scene, but it is certainly very attractive.

Image Source: Google Image

Creating videos of the future

The system was put through a long training process that consisted of watching over two million videos from Flickr, equivalent to about two years of footage, with the aim that it learn to study scenes and guess how an action will develop in the near future.

An important part of this training was showing the videos without any labels, so that the system could associate what it sees with actions without the aid of accompanying text telling it what it is looking at. This ability comes naturally to humans, who can anticipate the likely consequence of an act: if we see someone preparing food, for example, it is natural to think that someone will soon eat.


This may be obvious to humans, but for an artificial intelligence system it is something entirely new. The goal is to understand the present, which would help it recognize when someone is about to fall or get hurt, or even give a warning ahead of a potential automobile accident.

The experiment is based on what is known as an adversarial network, in which two neural networks are pitted against each other: one generates videos, and the other judges how plausible those videos are. While the first tries to create scenes and events that look as real as possible, the second tries to prove that what it is seeing is fake.
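To make the adversarial idea concrete, here is a minimal sketch of that kind of training loop in PyTorch. It is not MIT's actual model, which works on real video with convolutional networks; this toy version generates a tiny clip from random noise, and every layer size, clip dimension, and name (Generator, Discriminator, NOISE_DIM) is an illustrative assumption.

```python
# Minimal adversarial (GAN-style) training sketch, NOT the MIT/CSAIL model:
# a generator produces short fake "video clips" from noise, a discriminator
# tries to tell them apart from real clips. All sizes and data are toy values.
import torch
import torch.nn as nn

FRAMES, H, W = 16, 32, 32      # a tiny clip: 16 frames of 32x32 grayscale
NOISE_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, FRAMES * H * W), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, FRAMES, H, W)   # fake clip

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(FRAMES * H * W, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),                       # realness score (logit)
        )
    def forward(self, clip):
        return self.net(clip)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_clips = torch.rand(8, FRAMES, H, W) * 2 - 1     # stand-in for real videos

for step in range(100):
    # 1) Train the discriminator: label real clips 1 and generated clips 0.
    z = torch.randn(real_clips.size(0), NOISE_DIM)
    fake_clips = G(z).detach()
    d_loss = (bce(D(real_clips), torch.ones(real_clips.size(0), 1)) +
              bce(D(fake_clips), torch.zeros(real_clips.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator call fakes real.
    z = torch.randn(real_clips.size(0), NOISE_DIM)
    g_loss = bce(D(G(z)), torch.ones(real_clips.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The sketch illustrates the tug-of-war described above: the discriminator's loss rewards telling real from fake, while the generator's loss rewards fooling it, so each improvement on one side pushes the other to improve as well.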

So far the videos generated by the system are low resolution and last little more than a second, but in the first tests it has managed to capture the movement or action that follows from a scene: for example, the motion of waves on the sea, the trajectory of a train moving in a straight line, or the possible facial expressions of a baby.

Despite the appeal of the project, those responsible say they are still far from an artificial intelligence system that can accurately predict a complete or more complex action, because so far there is no way to give the platform "common sense," which limits its ability to "understand": everything it does is based on experience and learning.