NVIDIA Corporation (NASDAQ:NVDA) Q3 2019 Earnings Conference Call - Final Transcript
Nov 14, 2019 • 05:30 pm ET
sequential and year-on-year growth. This holiday season our partners are addressing the growing demand for high-performance laptops for gamers, students and prosumers by bringing more than 130 NVIDIA-powered gaming and studio laptop models to market. This includes many thin and light form factors enabled by our Max-Q technology, tripling the number of Max-Q laptops from last year.
In late October, we announced the GeForce GTX 1660 SUPER and the 1650 SUPER, which refresh our mainstream desktop GPUs with more performance, faster memory and new features. The 1660 SUPER delivers 50% more performance than our prior-generation Pascal-based 1060, the best-selling gaming GPU of all time. It began shipping on October 29 priced at just $229. PCWorld called it the best GPU you can buy for 1080p gaming.
We also announced the next generation of our streaming media player with two new models, SHIELD TV and SHIELD TV Pro, which launched on October 28. These bring AI to the streaming market for the first time with the ability to upscale video in real time from high definition to 4K using NVIDIA-trained deep neural networks. SHIELD TV has been widely recognized as the best streamer on the market.
Finally, we made progress in building out our cloud gaming business. Two global service providers, Taiwan Mobile and Russia's Rostelecom with GFN.RU, joined SoftBank and Korea's LG as partners for our GeForce NOW game streaming service. Additionally, Telefonica will kick off a cloud gaming proof of concept in Spain.
Moving to data center, revenue was $726 million, down 8% year-on-year and up 11% sequentially. Our hyperscale revenue grew both sequentially and year-on-year, and we believe our visibility is improving. Hyperscale activity is being driven by conversational AI, the ability for computers to engage in human-like dialog, capturing context and providing intelligent responses.
Google's breakthrough introduction of the BERT model, with its superhuman levels of natural language understanding, is driving a wave of neural networks for language understanding. That in turn is driving demand for our GPUs on two fronts. First, these models are massive and highly complex. They have 10x to 20x, in some cases 100x, more parameters than image-based models. As a result, training these models requires V100-based compute infrastructure that is orders of magnitude beyond what was needed in the past. Model complexity is expected to grow significantly from here.
Second, real-time conversational AI requires very low latency and multiple networks running in quick succession, from denoising to speech recognition, language understanding, text-to-speech and voice encoding. While conventional approaches fail at these tasks, NVIDIA's GPUs can handle the entire inference chain in less than 30 milliseconds. This is the first AI application where inference requires acceleration. Conversational AI is a major driver for GPU-accelerated inference.
In addition to this type of internal hyperscale activity, our T4 GPU continues to gain adoption in public clouds. In September, Amazon AWS announced general availability of the T4 globally, following the T4 rollout on Google Cloud Platform earlier in the year.