The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
GTC 2018 took place last week, so we have a special session with five items. Check out our notes on the DGX-2 and the Nvidia/ARM deal.
As always, if you like our newsletter, feel free to forward it to your friends/colleagues!
This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
The V100 now also comes with 32GB of HBM2 memory. PCGames was disappointed because there is no GeForce [GTX 11XX series upgrade](https://www.digitaltrends.com/computing/nvidia-geforce-gtx-11-series-not-20-series/), as rumored.
The NVSwitch switching architecture within DGX-2,
The Isaac SDK, a robotics toolkit built on top of Jetson,
Kubernetes containers with CUDA acceleration.
Wow, quite a keynote. Many of these new products will have an impact on deep learning.
"Monstrous" and "beastly" are the adjectives outlets have used to describe the new DGX-2. Indeed, upgraded from the DGX-1, the DGX-2 now packs 16 Volta V100s into a single machine.
Perhaps worth a mention is the NVSwitch architecture. As you can imagine, building a DGX-2 is not just about putting 16 cards in a single machine. To truly utilize 16 Voltas, you also need to increase the throughput of data transfers to and from the GPU cards. NVSwitch seems to be the answer.
(Note: We also learned that AIRI and the DGX-2 are different products with similar price tags. AIRI was created by an Nvidia partner, Pure Storage, and costs $600k, whereas the DGX-2 is Nvidia's own and costs $400k. Also see "Deals".)
This was a quiet launch, and it didn't receive as much attention as GTC 2018: Nvidia is partnering with ARM to port NVDLA to IoT devices. NVDLA was first designed as an SoC for the DriveDX platform. The partnership deal is to integrate NVDLA into Arm's Project Trillium, which includes the Arm machine learning (ML) processor and Arm object detection (OD) processor.
This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 120,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65. Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760
Artificial Intelligence and Deep Learning Weekly