Fueling the AI revolution with gaming

Posted on July 28th, 2018

Alison B Lowndes

Alison B Lowndes AI DevRel | EMEA – NVIDIA

After spending her first year with NVIDIA as a Deep Learning Solutions Architect, Alison is now responsible for NVIDIA’s Artificial Intelligence Developer Relations in the EMEA region. She is a mature graduate in Artificial Intelligence combining technical and theoretical computer science with a physics background & over 20 years of experience in international project management, entrepreneurial activities and the internet.

She consults on a wide range of AI applications, including planetary defence with NASA, ESA and the SETI Institute, and continues to manage the community of AI and machine learning researchers around the world, remaining knowledgeable about the state of the art across all areas of research. She also travels, advises on and teaches NVIDIA’s GPU computing platform around the globe.

Abstract

Artificial Intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. AI won’t be an industry, it will be part of every industry. NVIDIA invests both in internal research and platform development to enable its diverse customer base, across gaming, VR, AR, AI, robotics, graphics, rendering, visualization, HPC, healthcare & more.

Alison’s article will introduce the hardware and software platform at the heart of this Intelligent Industrial Revolution: NVIDIA GPU Computing. She’ll provide insights into how academia, enterprise and startups are applying AI, as well as offer a glimpse into state-of-the-art research from labs worldwide and internally at NVIDIA, demoing, for example, the combination of robotics with VR and AI in an end-to-end simulator to train intelligent machines. Beginners might like to try our free online 40-minute class using GPUs in the cloud: www.nvidia.com/dli

Introduction

My name is Alison and I am with NVIDIA. My field is artificial intelligence; I went back to university as a mature student, so I concentrated a lot harder than most people. What I’m going to do is explain who NVIDIA are, why I joined them, why we’re all over artificial intelligence, and also give you some detail on our software, hardware and future plans. My first year with NVIDIA was spent as a deep learning solutions architect across all of the major verticals, which are pretty much daily life for the whole of the world, and that consulting role let me do a great deal of applied deep learning. AI permeates everything that both we and all our customers do across gaming, graphics, virtual reality, augmented reality, simulation and medical bioinformatics, even planetary defence, so I wear many hats.

I’m most proud of the Frontier Development Lab. Basically NASA came and said that, despite everything they are capable of doing, they really needed help with AI. What’s really important in today’s age is cross-discipline collaboration: combining their skill sets in planetary science with the skill sets of data scientists and coders like yourselves. NVIDIA pioneered a new form of computing, one loved by some of the world’s most demanding users, gamers, but also scientists and designers, and it’s fuelled by an insatiable demand for better and better 3D graphics, for much more realism, and so we evolved the GPU into this computing brain. NVIDIA was formed in 1993; we invented the GPU and introduced it to the world in 1999, and this sparked the growth of the PC gaming market, which is now worth over a hundred billion dollars. Gaming is still over 60 percent of our revenue, despite the fact that we’ve pretty much turned our focus completely to AI.

Supercomputing

We continuously reinvent ourselves; we have to. Adaptability is absolutely key to survival in today’s world: you have to be able to pivot and adapt to what’s being done, and coding makes that really simple. We were already working with every car company out there, so it was easy for us to pivot to the self-driving car side, because they were already using us for infotainment, for visualization and for the design space, including VR. Obviously GPUs help with all of this. Our supercomputing capability spans the US, Japan and Europe. We are a learning machine ourselves, one that constantly evolves to solve problems that really matter to the world. Through sheer physics and mathematics, AI can actually help predict tornadoes; below is the Oklahoma “finger of God” that killed 24 people and injured over 350 others, and what we’re doing here is simulating it. The simulation itself demands supercomputer capability, something called RVCA, and eight of them. DigitalGlobe and Recon demoed this in 3D and in virtual reality just a few months back. Can you imagine being able to walk through a live tornado simulation? That is basically the state of play now.

Essentially, the year before, a similar disruptive force was just starting to build strength. I’m a massive fan of Feynman; I studied a bit of physics. This is a Feynman-style diagram that basically shows the coming together of the GPU, the data that we’re providing as a society, which is key, and the other ingredient, the existing algorithms that we already had. It’s really important to understand the definitions. You hear a whole load of hype about AI and a lot of it, thanks to the movies, is not true. We don’t have Terminators yet, but it’s really important that you understand: AI actually includes logic and rule-based systems as well as machine learning, and deep learning is itself a subset of machine learning. Deep just means there are more than two hidden layers. There are a few other intricacies, but the terms are related, not equivalent.

Neural Networks

The timeline here is really important as well; this is not new technology. It basically started in 1956 at something called the Dartmouth conference, where Claude Shannon and a group of colleagues put together the term artificial intelligence. They actually thought they could probably solve it that summer, yet here we are now. If you don’t know the GPU: the graphics processing unit, which is NVIDIA’s lifeblood, is a coprocessor, so you still need the CPU. You just pass over the part of the code that can be parallelized. We even have things like OpenACC, where you can literally shunt the parallelizable sections of existing legacy code onto the GPU, as in the sketch below, and this frees up the CPU to take on its typical serial jobs, running operating systems and so on.
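
To make the coprocessor idea concrete, here is a minimal sketch (not from the talk) using CuPy, a NumPy-compatible Python library that runs array operations on an NVIDIA GPU; the array sizes and maths are just placeholders.

```python
# Minimal sketch: the CPU keeps the serial work, the GPU takes the parallelizable part.
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and CuPy installed

# Serial setup stays on the CPU.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Ship only the data-parallel section to the GPU...
a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
c_gpu = cp.sqrt(a_gpu * a_gpu + b_gpu * b_gpu)  # runs across thousands of GPU cores

# ...then bring the result back so the CPU can carry on with its serial jobs.
c = cp.asnumpy(c_gpu)
print(c[:5])
```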

CUDA

I don’t have time to go into the intricacies of CUDA, but we’ve got a stack of resources online and we run a very technical blog. CUDA itself is at the heart of AI because it’s at the heart of our GPUs; even Intel, if you look at some of the publicity going around today, seems to be pivoting that way. Without guys and girls like you reading this, the industry has no chance whatsoever of harnessing AI, so take a further look, take some more courses, and pivot towards this, because AI is now central even to the people who don’t know it yet, to every single business out there today. It’s probably the most profound thing since the transistor was invented. There is a whole lot that can happen, and what you have to realize is that once you get the hang of deep learning, or AI, or any of the hype terms, being here will actually help you in your job.

It’s about getting great coders into the workplace and also letting you run free. AI is going to take a lot of the laborious tasks away from us and allow us lots of time to play in sandbox areas, to break things, and to do what humans do really well, which is get creative. When you start getting creative in code you can do some incredible things. I’m going to do a quick 101 on deep learning for those who don’t know it, so try to think about the differences between these two bikes: that’s my first-ever GSX-R750 on the left, and this, of course, is the Ducati Monster.

What a computer will basically do is translate these images into pixels, into maths, into vectors, and work out the differences at a pixel level. It will be able to capture things that we would never have thought about; it will pick up nuances about backgrounds. It’s like an alien’s eyes being used to look at every single problem that humanity currently has. The catch is that to teach an AI system with supervised learning you need a large labelled data set, and for this particular problem, as far as I know, we don’t yet have one.
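
As a tiny illustration of “images into pixels into maths into vectors”, here is a hypothetical sketch in Python; the filename and image size are placeholders.

```python
# Hypothetical sketch: how an image becomes numbers a network can compare.
from PIL import Image
import numpy as np

img = Image.open("gsxr750.jpg").convert("RGB").resize((224, 224))  # placeholder file
pixels = np.asarray(img, dtype=np.float32) / 255.0  # shape (224, 224, 3), values in [0, 1]
vector = pixels.reshape(-1)                         # flattened into a 150,528-dimensional vector
print(pixels.shape, vector.shape)
```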

I could perhaps go and create a million pictures of motorbikes from Google, but then I would have to label each one. It was things like Stanford running the ImageNet competition, and a large part of that workload was actually labelling the data. I’m currently working with Samasource, who are literally pulling people out of slums in Kenya and India and teaching them how to help us provide these kinds of data services, like labelling data; it’s really quite profound. Then you’ve got things like regression, where you could take a million data points and divide them up with a line, but to draw that line through the data set you still need something called a loss function. This measures how poor your system is at making a prediction, and the key is to converge on a solution that’s acceptable, to a certain level of accuracy, which depends on the application itself. The workhorse behind that is gradient descent.
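
A minimal sketch of those three ideas, a prediction, a loss function, and gradient descent, fitting a straight line to noisy data (the numbers are made up for illustration):

```python
# Fit y = w*x + b by repeatedly measuring the loss and stepping "downhill" on it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=1000)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=1000)   # noisy points around a "true" line

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    y_hat = w * x + b
    loss = np.mean((y_hat - y) ** 2)        # how poor the current prediction is
    grad_w = np.mean(2 * (y_hat - y) * x)   # gradient of the loss w.r.t. each parameter
    grad_b = np.mean(2 * (y_hat - y))
    w -= lr * grad_w                        # gradient descent: step against the gradient
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}, loss = {loss:.4f}")   # converges near w = 3.0, b = 0.5
```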

Chris Olah of Google’s work is really cool, and reading research papers is a really great way to keep up with this field. Take a look at this still slightly scary maths diagram: training a neural network is all about trying to find a good minimum on an error surface. What deep learning systems are doing is exploring huge problem spaces, and as humans we can’t cope with anything beyond 3D, maybe 4D, yet here we’re working in very high-dimensional spaces, so we need AI to help us through this, to convert it into maths, and we need computers to do the computation. GPUs are the workhorse; they take the brunt of it.

Deep Learning

Deep learning is split between two workloads. You have the computationally intensive training part where, although the model is loosely based on the brain, you’re in a supervised learning setup and feeding in lots and lots of labelled data. Once you’ve trained and reached the accuracy you want, you have something called inference. Inference is basically just the forward pass: you’re not doing forward, backward, changing the weights and repeating the process, you’re just doing the forward pass, so it’s very cheap, and that’s how you can deploy on something like a mobile phone. Again, I don’t have enough time to go deep into it, but one thing you have to realize is that the world is not static. Images are great, and convolutional neural networks are very good at static problems like image recognition, but for everything else, which is dynamic, we need recurrent neural networks: things like speech recognition and pattern recognition in sequences, looking at lots and lots of historical data.
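
To make the two workloads concrete, here is a small sketch assuming PyTorch (my choice here, not something specific to the talk): training repeats forward, backward and weight updates over labelled data, while inference is a single forward pass with gradients switched off.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: the compute-heavy loop over labelled data (toy random data here).
inputs, labels = torch.randn(64, 10), torch.randint(0, 2, (64,))
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)   # forward pass
    loss.backward()                         # backward pass
    optimizer.step()                        # change the weights, then repeat

# Inference: forward pass only, cheap enough for a phone or an edge device.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10)).argmax(dim=1)
print(prediction)
```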

Recurrent neural networks

Recurrent neural networks are really vital because they grasp the structure of data dynamically over time. There were several problems with the early implementations, but that was over 25 years ago, and Sepp Hochreiter, who was actually the first PhD student of Jürgen Schmidhuber, now a director of the Swiss AI lab IDSIA in Lugano, Switzerland, solved the problem. He created something called long short-term memory (LSTM), which is used in pretty much every kind of dynamic AI problem you see today. I’m proud to know him, and his team are also working on healthcare problems; they’re winning things like the Tox21 challenge, where deep learning is used to assess how toxic various chemicals are to humans. Take a look at his paper from that work, as it’s a really good one: it gives you an indication of a real understanding of deep learning and how networks build up representations layer by layer.
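
Here is a short sketch, again assuming PyTorch, of an LSTM applied to a dynamic problem such as classifying a spoken command from a sequence of audio features; the sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # long short-term memory
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (batch, time steps, features)
        outputs, (h_n, c_n) = self.lstm(x)  # the LSTM carries state across the sequence
        return self.head(h_n[-1])           # classify from the final hidden state

model = SequenceClassifier()
batch = torch.randn(8, 100, 40)             # 8 sequences, 100 time steps, 40 features each
logits = model(batch)                        # shape (8, 10)
print(logits.shape)
```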

Otto Friedrich Karl Deiters (November 15, 1834 – December 5, 1863) was a German neuroanatomist. He was born in Bonn, studied at the University of Bonn, and spent most of his professional career in Bonn. He is remembered for his microscopic research of the brain and spinal cord.

Around 1860, Deiters provided the most comprehensive description of a nerve cell that was known to exist at the time. He identified the cells’ axon, which he called an “axis cylinder”, and its dendrites, which he referred to as protoplasmic processes. He postulated that dendrites must fuse to form a continuous network.

This diagram was drawn in 1865 by Otto Deiters, and it shows the human nerve cell body with all the synapses and dendrites that come off it. There is a huge body of work now mapping neuroscience and AI together. Geoff Hinton, considered one of the godfathers of AI, British-born but now based in Toronto, is changing the way we use layers: instead of just stacking them, they’re putting layers within layers, and this basically allows you to map a whole lot more information at a cellular level. Again I don’t have time to get into this, but here is the really key thing, whatever architecture you are using, convolutional, recurrent or otherwise: the real power comes from these AI systems working alongside people. There is nothing more powerful than a human being assisted or augmented by AI. I prefer the term augmented intelligence to artificial intelligence, because it brings us to what I consider the next stage.

Reinforcement learning

The theory is reinforcement learning; the hardware is both the GPUs and CPUs that run it and what we’re deploying into, which is robotics of varying shapes and forms, both physical and virtual. Everybody has heard about AlphaGo. That was reinforcement learning combined with dynamic programming and also supervised learning. AlphaGo Zero now doesn’t even need the supervised learning part. It made a lot of headlines for doing everything itself, but when you go behind the scenes it couldn’t do anything without coders like you; whether or not you’re working for Google makes no difference, nothing happens without the coders. Reinforcement learning is basically about learning a policy, the best next move to make, and this applies across the board. The key thing, though, is that AlphaGo Zero, whether it’s learning entirely on its own or not, still can’t play noughts and crosses; it can’t play anything other than Go. So what they needed to do next was a huge engineering effort.
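
To pin down “learning a policy”, here is a minimal sketch (my own toy example, not DeepMind’s setup) of tabular Q-learning on a five-state corridor where the only reward is at the far right; the agent learns that the best next move is always to step right.

```python
import numpy as np

n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right; reward only at state 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.5   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for t in range(100):                # cap the episode length
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Move Q(s, a) towards reward + discounted value of the best next move.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q.argmax(axis=1)[:-1])            # learned policy for the non-terminal states: all 1s (go right)
```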

DeepMind have their own DeepMind Lab. They needed to provide an environment where AI agents could learn generalization: learning to play lots of different games at the same time, with games engines providing an infinite amount of training data. DeepMind’s latest work is where I actually learned the word parkour; I’d never heard it before, but basically it means stringing together lots of different actions like jumping, running and so on. The agent is doing all of this by itself; it’s learning to run and jump without any prior instructions whatsoever.

Imitation Learning

Imitation learning is another approach: you show a robot how to do something only once, and this of course is getting closer to how we learn. We might need to see something twice or three times, but ultimately we don’t need to see it a thousand times to be able to do it. A researcher left Berkeley and, along with some colleagues, formed a company called Embodied Intelligence, where he wants to work on exactly this. Since then this field has just exploded. I know because I live right in the middle of it, and it’s part of my job to cover the research and stay up with the progress. It’s drug design, it’s astronomy, it’s use cases all over the place. As much as I would love to, I don’t have time to go into every single one. Just in healthcare alone there is a plethora of really impactful work going on by harnessing AI. The Deep Learning Toolkit (DLTK), from Imperial College, is a really great toolkit if you are working in the medical space.

This is Jamie, by a company called Soul Machines, founded by Mark Sagar, who was responsible for technology behind Avatar and also won Oscars for King Kong. What he basically did was get Cate Blanchett on board and record lots and lots of her dialogue, and they’ve teamed up with a huge health insurance company in Australia to create this avatar that individuals can use. The avatar talks to you just the same as Siri would, but it’s actually got a face, and she can read and understand emotion in voices. It’s an ongoing learning cycle; I’ve got a Samsung here, whose assistant is called Bixby, and it tells you over and over again that it’s still learning, and that the more I use it the better it will be. But there we go.

Even Unity3D now has an AI lab, and here’s a tip: take a look at Dopamine Games, because they’re doing some very cool stuff considering they’re on the smaller end of games houses. EA’s CEO Andrew Wilson was recently talking about feeding an AI system every acting performance in every war film we’ve ever had. Feed that into the system: how much is that going to improve the game? How much is it going to improve your experience? They’re putting AI into characters as well; the game studio Respawn is responsible for things like Titanfall, so you’re going to start to see AI in those games very shortly, and of course we’re all over this, and have been for quite a few years. We recently launched Holodeck, simply because we were already doing lots of work in photorealistic rendering and already talking to the car manufacturers, who we’ve worked with for decades. Holodeck is about being able to get together with your collaborators wherever they are in the world, put on a VR headset, and be there with a full-resolution model of whatever you’re working on. We demoed this with a supercar, and built on the back of it is Isaac. Isaac is a robotics learning platform that incorporates reinforcement learning and lets you bring in any robot you want.

Back in 2013 I was working with the National Nuclear Laboratory, and we were coding a virtual robot and controlling it with just an Xbox controller. You can bring anything into this platform, teach it, get a trained system, and then deploy it in a real robot. This is just a screenshot of Isaac in the nursery we built within Holodeck; in there it can interact with you, play dominoes and learn from you, all using reinforcement learning. We’re only just scratching the surface here, and it’s only in beta access, but there are so many different opportunities, and then of course there is the ultimate robot, the self-driving vehicle, which we’ve been working on for quite a few years. This is a slightly different problem set: with robots it’s all about touch, grasp and haptics, whereas here you’re talking about collision avoidance. The problem is that the rest of the world is so complex that we literally had to put together a whole new chipset. We built it over several iterations, starting with DRIVE PX 2 and then moving to Xavier.

Pegasus is now capable of 320 TOPS, that’s trillion operations per second. It can perceive the world through high-resolution, 360-degree surround cameras and localize the vehicle to within centimetre accuracy, but this is such a huge problem space, and we want to see the fastest possible adoption of AI technology, that since we can’t address everything ourselves we have open-sourced the deep learning accelerator part of the chipset. We’re also right across the high-performance computing space, and to prove it we have over 500 GPU-ready apps. You can go and look at them; I don’t have time to list all 500, but they span each of the verticals in the slide right at the beginning.

Just to give you an indication of how we enable people: all our software is free. You simply go online to developer.nvidia.com, and we work with every single framework. The frameworks are basically the building blocks; they’re a way of creating the layers of the neural network you’re going to use in code, in some cases in something close to natural language. There are over 60 different frameworks, but there’s probably a top ten, and we work with as many as we possibly can. I don’t really want to recommend one, but Caffe2 is really rocking along at the moment, and I personally started in Torch, so PyTorch of course is a big favourite of mine. We work with all the teams directly. Apache MXNet, backed by Amazon, is also widely used, so those are good places to start if you’re wondering which framework to pick out of those 60-odd; there’s lots of information online. Basically all our software is free, just go to developer.nvidia.com and sign up. The gains come from CUDA, our programming platform for the GPU, which we launched in 2006, and this is exactly why you’re seeing a revolution in AI now: people are able to program the GPUs.

Deep Learning Frameworks

Caffe2

Caffe2 is a deep-learning framework designed to easily express all model types, for example CNNs, RNNs, and more, in a friendly Python-based API, and execute them using a highly efficient C++ and CUDA back-end. Users have the flexibility to assemble their model using combinations of high-level and expressive operations in Python, allowing for easy visualization, or to serialize the created model and directly use the underlying C++ implementation. Caffe2 supports single- and multi-GPU execution, along with support for multi-node execution.

Cognitive Toolkit

The Microsoft Cognitive Toolkit, formerly known as CNTK, is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs.

MATLAB

MATLAB makes deep learning easy for engineers, scientists and domain experts. With tools and functions for managing and labeling large data sets, MATLAB also offers specialized toolboxes for working with machine learning, neural networks, computer vision, and automated driving. With just a few lines of code, MATLAB lets you create and visualize models, and deploy models to servers and embedded devices without being an expert. MATLAB also enables users to generate high-performance CUDA code for deep learning and vision applications automatically from MATLAB code.

MXNet

MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix the flavors of symbolic programming and imperative programming to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.

NVIDIA Caffe

Caffe is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. NVIDIA Caffe, also known as NVCaffe, is an NVIDIA-maintained fork of BVLC Caffe tuned for NVIDIA GPUs, particularly in multi-GPU configurations.

PyTorch

PyTorch is a Python package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.
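
A tiny illustration of those two features, GPU-accelerated tensors and tape-based autograd (a generic example, not taken from the article):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)  # NumPy-like tensor, GPU-accelerated
y = (x ** 2).sum()
y.backward()        # the tape-based autograd system fills in gradients automatically
print(x.grad)       # dy/dx = 2x
```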

TensorFlow

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. For visualizing TensorFlow results, TensorFlow offers TensorBoard, a suite of visualization tools.
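
A minimal data-flow-graph example in the TensorFlow 1.x style that was current when this article was written (nodes are operations, edges carry tensors); the values are arbitrary:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

a = tf.placeholder(tf.float32, shape=(None, 3))      # graph edge: a tensor fed in at run time
b = tf.constant([[1.0], [2.0], [3.0]])
c = tf.matmul(a, b)                                  # graph node: a mathematical operation

with tf.Session() as sess:                           # the same graph can run on CPUs or GPUs
    print(sess.run(c, feed_dict={a: [[1.0, 0.0, 2.0]]}))   # -> [[7.]]
```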

Chainer

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach, also known as dynamic computational graphs, as well as object-oriented high-level APIs to build and train neural networks. It supports CUDA and cuDNN using CuPy for high performance training and inference.

PaddlePaddle

PaddlePaddle provides an intuitive and flexible interface for loading data and specifying model structures. It supports CNN, RNN, multiple variants and configures complicated deep models easily. It also provides extremely optimized operations, memory recycling, and network communication. PaddlePaddle makes it easy to scale heterogeneous computing resources and storage to accelerate the training process.

NVIDIA Deep Learning SDK

The NVIDIA Deep Learning SDK provides powerful tools and libraries for designing and deploying GPU-accelerated deep learning applications. It includes libraries for deep learning primitives, inference, video analytics, linear algebra, sparse matrices, and multi-GPU communications.

Graph analytics comes up across the board in many massive problem sets, so we’ve sped all of that up with nvGRAPH, and there’s a visualization tool as well. When I was doing research at university I was just at the command line, working from the very good output that Torch gives you, but then DIGITS came along and you get to see exactly what’s going on in the layers: you get a visualization, a graphical view of the accuracy and of how the loss is evolving. There’s also now a ton of pre-trained models. Data curation for the problem set is actually the majority of the work, something like 70 percent of it; on the AI side you can often just pick a pre-trained model and the job is largely done. And as I said, TensorRT is for vastly faster inference.

DeepStream is for intelligent video analytics, and a lot of people are doing that. On the hardware side we are spending literally billions to build the best that we possibly can. The recent launch, Volta, is 21 billion transistors. We are at the limit now, right at the edge of what is possible, so what we’ve had to do is make fine tunings right down at the instruction-set level to provide even more speed-up. When you’re using TensorRT, these are the differences over just our previous chip, which was Pascal.

AlexNet, which is a type of convolutional neural network, is a lot bigger now, so it creates a lot more demand. The Tensor Core, as I said, comes with a brand new instruction. AI is pretty much just matrix multiplication and accumulation, or summation; that is really at the heart of it, so what we did was perform the 4x4 multiply-accumulate as a single operation, and that in itself already gives you 12 times the throughput. There are 640 of these Tensor Cores alongside the more than 5,000 CUDA cores on our Volta chips. All of that sounds great, but how do you really cope with massive problems? We put multiple cards into one unit: the DGX family was launched back in 2016, and it is eight high-end cards with our own interconnect, called NVLink, because PCIe just doesn’t cut it anymore.
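
For reference, here is the arithmetic a Tensor Core performs, sketched in plain NumPy: a fused 4x4 matrix multiply-accumulate, D = A x B + C, with half-precision inputs and a full-precision accumulator (the hardware does this in one operation; the sketch just shows the maths).

```python
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # FP16 input matrices
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)   # FP32 accumulator

# One Tensor Core computes this whole multiply-accumulate per operation.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```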

We’ve containerized all that software. I’ve spent hours and hours trying to get all the dependencies together to do deep learning work; it is all now inside containers, optimized for all the major frameworks, on board the DGX. It’s now Volta-based, and what’s really important is that the software is optimized: you simply log in to the system, and it’s designed to get tasks done very quickly. Pascal is 170 teraflops, that’s trillion floating-point operations per second, and Volta has just blown that out of the window.

This is addressing the simple fact that there are now teams of people working on these problems, hence the ability to work with containers. At the moment it’s Docker, but we are looking at all the other use cases, because HPC users will prefer things like Singularity, and Kubernetes is very popular, so we’re looking at all of that and implementing it as fast as we can. The DGX is actually a server, so obviously it needs to be rack-mounted; if you don’t have that capability you can now get a desk-side version with Volta. That one has four cards and is water-cooled, so it’s nice and quiet; the DGX server is not quiet, because you’ve got eight massive cards doing a lot of work.

Alternatively, you can access Volta right now via Amazon Web Services in the cloud. We have something called NGC, the NVIDIA GPU Cloud, where in three simple steps you sign up, choose to use either the cloud or local compute, whether that’s some of our GeForce cards or a DGX, and then simply pull one of these containers down. We now have an entire registry of containers for all of the frameworks and all the possible combinations and versions of CUDA, but the real key is that everything is up to date. There’s no more going through the whole rigmarole: you simply click on the up-to-date container, and it really just makes life a lot easier. We build the products for training, things like DGX, and we build the products for inferencing in the data centre, like our Tesla cards.

Then there is the other part, inference at the edge, and this is huge. We’re talking billions of cameras, billions of edge devices; we actually have no idea how big this problem is going to be, but we have to address it now. These are just some of the use cases we’re dealing with on a daily basis. Of course we need an embedded GPU, and we have a credit-card-sized module called Jetson, now the TX2, which is capable of between 1 and 1.5 trillion floating-point operations per second in a credit-card footprint, and lots of people are using it for all sorts of use cases. You can buy it very cheaply in a dev-kit format; it’s got loads of I/O and a camera on board, or if you’re an academic we actually give them away with an entire robotics and embedded teaching kit alongside.

TX2 Development Kit

We have these teaching kits, developed with NYU and Yann LeCun specifically for deep learning, and ServoCity work with us on them too. Then there is the entire embedded space: there are only around 11,000 of us at NVIDIA, which is nowhere near enough to address it directly, so we have put the whole embedded space online. That means you can get every single piece of information, and the hardware itself, purely via the website, including JetPack; even a whole second article would not be enough to cover the various parts of JetPack and what it can do. Everything we do is aimed at letting you develop and deploy right across the same hardware architecture, with the same software you’ve been using and prototyping on.

We do so much training now that we developed something called the Deep Learning Institute, which uses IPython, or Jupyter, notebooks so you can do hands-on coding on our GPUs in Amazon Web Services right now. We also work with Microsoft Azure, and our GPUs are in every cloud provider there is, so this will eventually branch out. We have over 200 different classes; you can go online to that website now and do at least three of them for free, with hands-on coding in a variety of different frameworks. You just go to a setup called Qwiklabs, which was actually bought by Google, and Google’s entire cloud platform is going to be accessible via Qwiklabs.

There are about 200 different use cases, and a few of them are free for you to have a go at; you can get right down into the code, work with it, and start to understand it. If you’re thinking of setting up a startup, then please do a very quick sign-up to NVIDIA’s Inception program, where we can give you a ton of support and big discounts on hardware, and of course all the software. I just want to leave you with the fact that we are hiring massively right now, so if you have any kind of deep learning or machine learning skills, or you’re simply a ninja coder, please contact NVIDIA.