
AIDL Weekly Issue 14 – Google I/O 2017, AMD Vega and Review of Five Basic Deep Learning Classes

Editorial

The Two Battlefronts of Nvidia

This week, we cover Google I/O: TPU v2 is a monster, equivalent to 32 P100s. There are also new free TPU research clusters, TensorFlow Lite – a package for easy TF development on embedded devices – and automatic search of network architectures. All amazing features!

Google could have taken quite a bit of attention away from Nvidia's GTC by announcing TPU v2 last week. As The Next Platform's Nicole Hemsoth nicely put it:

[……] we have to harken back to Google’s motto from so many years ago… “Don’t be evil.” Because let’s be honest, going public with this beast during the Volta unveil would have been…yes, evil.

In other news, AMD also announced a competing product, the Radeon RX Vega. However, AMD has been having a hard time against Nvidia due to software issues (more details in this issue), even though its specs are slightly better and its cards are cheaper. This is the power of the software moat: hardware commoditizes fast, but software makes things sticky.


Besides Google I/O and AMD's new GPU card, we also include several nice resources and links this week, including:

  • Arthur’s review of five basic deep learning classes
  • Adit Deshpande’s GitHub repository on using TensorFlow

As always, if you like our letter, feel free to subscribe and forward it to your colleagues!!
We also just reached 20,000 members in our AIDL forum, so come join us!

Artificial Intelligence and Deep Learning Weekly


A Correction on Issue #13 Editorial

About the Editorial of Issue #13: Peter Morgan was kind enough to correct us – both Nervana and the TPU are based on ASICs, rather than FPGAs. We have corrected the web version since.

Artificial Intelligence and Deep Learning Weekly

News




Blog Posts


Open Source


Jobs

Video

Member’s Question

Thought/Anecdote from a participant of GTC 2017

Ajay Juneja shared this on AIDL:
“Thoughts from the admin of the Self-Driving Car group this week (I attended the Nvidia Conference):

  1. Bi-LSTMs (Bi directional LSTMs) are everywhere, and working quite well. If you aren’t using them yet, you really should. For those of us from the mechanical engineering world, think of them a bit like making closed-loop feedback control systems.
  2. The convergence of AI, VR, AR, Simulation, and Autonomous Driving. It’s happening. Need to generate good data for your neural nets, quickly? Build a realistic simulator using Unreal Engine or Unity, and work with gaming developers to do so. Want to make your VR and AR worlds more engaging? Add characters with personality and a witty voice assistant with emotion to them, while using cameras and audio to determine the emotional state of the players. Want to prototype a new car or building or surgery room? Build it in VR, create a simulator out of it. We need to cross pollinate these communities and have everyone working together 🙂
  3. Toyota signed with Nvidia. That’s 8 of 14 car companies… and they have signed the 2 largest ones (VW and Toyota). I hear rumblings from the AI community saying “If you want to build a self driving car TODAY, your choices are Nvidia and… nothing. What can I actually buy from Intel and Mobileye? Where are the engineers to support it? Qualcomm may have something for the 845 but they are drunk on mobile profits and again, no one knows their tools.”

500K Nvidia Developers vs. next to nothing for Intel and Qualcomm’s solutions.

I believe Nvidia has its moat now.”
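
For readers who have not tried the bidirectional LSTMs mentioned in point 1 above, here is a minimal sketch of a Bi-LSTM sequence classifier using the tf.keras API. The data, shapes, and hyperparameters are made up purely for illustration; this is our own sketch, not code from the talk or the newsletter.

```python
# Minimal sketch of a bidirectional LSTM sequence classifier (tf.keras).
# All shapes and data below are toy values for illustration only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy setup: 1000 sequences, 50 timesteps, 8 features, one binary label each.
x = np.random.rand(1000, 50, 8).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

model = keras.Sequential([
    # Bidirectional runs one LSTM forward and one backward over the
    # sequence, then concatenates their outputs.
    layers.Bidirectional(layers.LSTM(64), input_shape=(50, 8)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32)
```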

Artificial Intelligence and Deep Learning Weekly

About Us

 
