NVIDIA, from the video game industry to Deep Learning

May 23, 2017 - Tech News

The name NVIDIA will always be linked to the video game industry and graphics. The company was responsible for a real revolution: the first GPUs, better known as graphics cards, which brought realism to 2D and 3D graphics and the ability to display ever more polygons on screen. Do you remember the GeForce 256, capable of moving more than 10 million polygons? Today that figure seems insignificant.

Few are still surprised when NVIDIA's CEO, Jen-Hsun Huang, appears at a conference proclaiming that much of the company's revenue now comes from other sources: Deep Learning, cloud computing, or the development of automotive systems (some of them autonomous). NVIDIA has managed to enter and drive many other industries, just as it once did with video games. Deep Learning and Machine Learning are among its main focuses today.

The fact is that NVIDIA is not arriving at this new revolution as a beginner merely interested in diversifying its revenue. NVIDIA has been the engine behind these technologies for years now. And as many analysts say, this has only just begun.



2006, the year the NVIDIA revolution began beyond video games

2006 was a turning point for NVIDIA: that year it launched CUDA (Compute Unified Device Architecture), a development kit that marked a before and after in how GPUs were programmed. To simplify the concept, the idea was to open up the many independent calculations a GPU performs to render each pixel, such as shadows, reflections, lighting, or transparencies, for general-purpose computation.

Until then it was unthinkable that scientists would use GPUs for their work, but that changed. CUDA makes it possible to use high-level languages such as C++ or Python to program complex calculations and algorithms on GPUs, running jobs in parallel over large amounts of data.
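As a rough illustration of the parallel model CUDA popularized, the sketch below (plain Python, illustrative only; a real GPU version would use a toolkit such as Numba's cuda.jit on an NVIDIA card) computes each output element independently of all the others, which is exactly the property that lets thousands of GPU threads work at once:

```python
# Sketch of the data-parallel style CUDA popularized: every output
# element depends only on its own index, so each one could be computed
# by a separate GPU thread. Plain Python stands in for the kernel here.

def saxpy_kernel(i, a, x, y):
    # One "thread" of work: computes a single element,
    # independent of every other element.
    return a * x[i] + y[i]

def saxpy(a, x, y):
    # On a GPU this loop disappears: each index i runs concurrently.
    return [saxpy_kernel(i, a, x, y) for i in range(len(x))]

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0]
    y = [10.0, 20.0, 30.0]
    print(saxpy(2.0, x, y))  # [12.0, 24.0, 36.0]
```

Because no element of the result depends on any other, the loop in `saxpy` can be replaced by thousands of concurrent GPU threads, each handling one index.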

Currently, the CUDA platform is used in thousands of GPU-accelerated applications and has been the driving force behind thousands of research articles. This computational paradigm allows a “co-processing” distributed between the CPU and the GPU. CUDA is included in GeForce, ION, Quadro, and Tesla GPUs, and developers can choose among the various CUDA programming solutions; NVIDIA supports an immense number of tools and platforms within the ecosystem.



When did Deep Learning knock on NVIDIA's door?

Andrew Ng, of whom we have already published an extensive profile, predicted the use of GPUs in the field of artificial intelligence and how they would make it possible to accelerate Deep Learning. He published a paper on the subject back in 2008, but it was not until a few years later that several experiments using NVIDIA GeForce cards confirmed it. One example came from Alex Krizhevsky, then a PhD student at the University of Toronto, who in 2012 used two ordinary NVIDIA GeForce graphics cards to process about 1.2 million images with an error rate of around 15%, far better than anyone had achieved to date.

At that time, Google Brain, led by Andrew Ng, achieved one of the first milestones in Deep Learning: a system able to recognize a cat across more than 10 million YouTube videos, with the inconvenience of practically needing a data center with more than 2,000 CPUs. Later, Bryan Catanzaro, a researcher at NVIDIA Research, would reproduce a similar experiment replacing those 2,000 CPUs with only 12 NVIDIA GPUs.

Today companies like Google, Facebook, Microsoft, and Amazon base parts of their infrastructure on GPUs developed by NVIDIA. And it is estimated that, with the Artificial Intelligence boom, around 3,000 startups worldwide are working on the NVIDIA platform.

NVIDIA GPUs have enabled parallel work processes that reduce the training of Machine Learning models from weeks to days. This acceleration even exceeds the predictions of Moore's famous law: these same neural networks have reached a 50x performance improvement in just three years.



Building data centers and the smallest AI boards

NVIDIA has a profitable line of components ready to integrate into the most demanding data centers. There is no need to mount each GPU separately: NVIDIA markets one of the most powerful systems that currently exist, the NVIDIA DGX-1. It is a supercomputer configured with eight NVIDIA Tesla P100 GPUs, two Xeon processors, and 7 TB of SSD storage, delivering 170 teraflops of performance, roughly equivalent to the power of 250 conventional servers, all within a box the size of a small desk.

It even made a fitting gift for projects like OpenAI, backed by Elon Musk, which a few months ago received a unit from the hands of the company's CEO at a special event. It will serve to promote the development of tools ranging from basic tasks to advanced work on language learning and image recognition, as well as the interpretation of expressions.



Robots, drones, and IoT devices powered by NVIDIA

In 2015 NVIDIA released the Jetson TX1 development kit, integrating a 64-bit ARM processor with a GPU based on the NVIDIA Maxwell architecture. With this board, the company entered squarely into smaller devices: mainly drones, small robots, or any device connected to the Internet of Things.

It recently evolved into the fabulous Jetson TX2, the size of a credit card, doubling its power while consuming only 7.5 W. It integrates Gigabit Ethernet, 802.11ac WiFi, and Bluetooth connectivity, and plenty of memory: 8 GB of RAM and 32 GB of eMMC storage.

Designed to handle two 4K video streams or manage up to six cameras at the same time, this board is a good fit for intelligent security systems. Soon we will see devices that include this hardware among their specifications, as well as whatever we makers want to build with it, since it is designed for experimenting and building things.



The democratization of Deep Learning through Cloud computing

The democratization of development tools in the cloud has also brought these compute-oriented GPUs into such systems, with the possibility of scaling and squeezing out every last bit of processing thanks to the load balancing and resource management these platforms enable.

It is no longer necessary to own impressive infrastructure: just take a look at TensorFlow, which lets you apply Deep Learning and other Machine Learning techniques in a powerful way, or at other platforms such as IBM Watson Developer Cloud, Amazon Machine Learning, or Azure Machine Learning.
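As a taste of the kind of workload these platforms express and that cloud GPUs accelerate, here is a minimal sketch (plain Python, illustrative only, not real TensorFlow code) of gradient descent fitting a one-parameter model, the basic loop underlying Deep Learning training:

```python
# A minimal, illustrative sketch of the kind of training loop that
# frameworks like TensorFlow express as tensor operations and that
# cloud GPUs accelerate: gradient descent on a one-parameter model.

def train(xs, ys, lr=0.05, steps=200):
    w = 0.0  # single weight, fitting y = w * x
    n = len(xs)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step downhill along the gradient
    return w

if __name__ == "__main__":
    # Data generated from y = 3x; training should recover w close to 3.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [3.0, 6.0, 9.0, 12.0]
    print(round(train(xs, ys), 3))  # 3.0
```

In a real framework the same loop is written as tensor operations over millions of weights, which is precisely what the framework offloads to GPUs in the cloud.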

Browsing Google Cloud Platform, we see how proud they are of offering GPUs like the Tesla K80 or Tesla P100 to accelerate large data-analysis workloads, announced right on the service's front page.

We also see it in Azure, where NVIDIA and Microsoft have signed a strong alliance.



Self-taught cars well-taught thanks to Deep Learning

One of the most exciting technologies is the autonomous car. We may always cite Tesla as the innovator in that field, but NVIDIA is making very interesting technical advances, some in collaboration with Tesla and others independently with other manufacturers.

There are agreements such as the one with Mercedes-Benz to develop the brand's digital system, through the Artificial Intelligence operations center it wants to equip each vehicle with. And it is not the only one: Honda, Audi, and BMW are also integrating NVIDIA technology.

In addition, it has alliances such as the one with Bosch, very focused on IoT, which sees NVIDIA GPUs as a differentiating element for building an on-board supercomputer able to identify pedestrians or cyclists and to warn us within seconds when we have made maneuvers that put our safety at risk.

NVIDIA DRIVE PX2 is its AI platform for accelerating the production of autonomous cars. The size of the palm of a hand, it is beginning to be integrated into many new models. With a consumption of just 10 watts, it runs complex neural network computations and adds a huge number of features, such as auto cruise and analysis of HD maps updated in milliseconds with surrounding information.

The risks of a future that more manufacturers want to enter

NVIDIA's success in this blue ocean has not gone unnoticed. Deep Learning is the technology everyone currently desires, and whoever dominates it will have a great advantage, above all because the technology bets of the coming years run through Deep Learning. Dozens of startups focused on application development have emerged thanks to this new chip architecture.

Allies past and present, such as Google, are increasingly intent on building their own hardware around TensorFlow and, of course, their text search and maps algorithms. After years of learning on NVIDIA hardware, we will soon see something like the Tensor Processing Unit, built exclusively by Google.

And let’s not forget Intel and AMD, still CPU-centric but watching how the GPU market crowned by NVIDIA over these years brings in huge sums of money and shifts computing toward the processing units preferred by the main players in technology. Intel is pursuing this with its Xeon Phi chip, optimized for Deep Learning.

Meanwhile, it is worth remembering a black spot in the company's history: its failure to hold on with the Tegra chip in that first wave of smartphones, where it was a strong ally of Android. Perhaps we will have to wait to see its chips in smartphones again; AI and VR would be good reasons for its return.

