The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Humble Curators
This week we cover Google I/O. In particular, Google Duplex has drawn a surprising backlash. Check out our fact-checking section.
As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!
This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook’s most active A.I. group with 140,000+ members and host an occasional “office hour” on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65. Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760
News
CMU launches AI Undergraduate degree
CMU has always been one of the top schools in computer science, artificial intelligence and machine learning. They were early to start specialized graduate programs in machine learning and AI, such as those offered by the Machine Learning Department and the Robotics Institute. They are now starting an entire undergraduate degree devoted to AI.
Google Research is now Google AI
Google Research changed its name to Google AI. Google and other Alphabet subsidiaries are deeply involved in AI, and many Google products are now deep-learning based. If Google wasn't all in before, it certainly is now.
Deals
- Tailor Brands raises $15.5M
- Gamalon raises $20M – Sounds like Gamalon has moved away from Bayesian program synthesis.
- Nokia bought SpaceTime Insight
AI Summary of Google I/O 2018
Google I/O, just like last year, kept churning out interesting products:
- Duplex, a dialogue system that sounds remarkably close to a human and has natural-language understanding capability,
- Smart Compose – automatic suggestions in Gmail based on machine learning,
- TPU 3.0 – yet another upgrade of Google's revolutionary ASIC-based neural-network chip,
- Google ML Kit – a new interface for adding Google AI capabilities to apps.
Duplex has drawn a surprising backlash since Google I/O. Some of it, in our view, could stem from misunderstandings of the technical details. See our fact-checking section for our analysis. In particular, as impressive as it is, Duplex is not AGI.
Fact-checking
Fact-checking: Google Duplex/AI
Google Duplex has been making waves over the last few days. WaPo has an article asking whether such intelligent-assistant technology is getting too close to human. And of course, you will see tens of thousands of posts in the next few months saying "AGI is coming!" So here is some pre-emptive fact-checking.
First things first: Google Duplex is not an AGI. Here is an excerpt from the blog:
“One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.” (Emphasis is mine.)
Google is Google, and their researchers are an honest bunch. I guess that after learning how the public has perceived AI in the past (e.g. Facebook was said to have "killed" its own AI), they are always very careful when they write.
But you might ask: is Duplex still a good technical achievement? Unequivocally, yes. In some sense, the Google researchers are understating it. Unlike past commercial dialogue systems, their method is trainable. More interestingly, they use an end-to-end neural network to come up with the best response. So this approach goes beyond the two usual methods we know: the first is hand-crafted dialogue responses, which are costly to build; the second is the seq2seq approach.
More on the second approach: many of you might have learned it from deeplearning.ai (Course 5) already. An NMT system can be used as a dialogue system as well, but that makes a big assumption, namely that a dialogue can be modeled as machine translation. Google is not taking that approach. From the diagram in Google's blog post, you can also see that the network takes conversation context as an input. This is an old idea in dialogue systems, but it's interesting to see it applied with neural networks.
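To make the contrast concrete, here is a minimal sketch of a seq2seq-style response model that also conditions on conversation context, in the spirit of the diagram in Google's post. This is purely illustrative and not Google's actual Duplex architecture; the vocabulary size, context-feature dimension and layer sizes are our own assumptions.

```python
# A minimal sketch (NOT Google's actual Duplex architecture) of a seq2seq-style
# response model that also conditions on conversation context.
# Vocabulary size, context-feature dimension and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, Model

VOCAB = 5000            # assumed token vocabulary size
CTX_DIM = 16            # assumed size of a hand-built context feature vector (task state, slots, ...)
MAX_IN, MAX_OUT = 40, 40

# Encoder: embed and summarize the caller's utterance.
utt_in = layers.Input(shape=(MAX_IN,), name="utterance_tokens")
utt_emb = layers.Embedding(VOCAB, 128, mask_zero=True)(utt_in)
_, h, c = layers.LSTM(256, return_state=True)(utt_emb)

# Context features are folded into the state that initializes the decoder,
# so the response depends on more than the last utterance alone.
ctx_in = layers.Input(shape=(CTX_DIM,), name="context_features")
h = layers.Dense(256, activation="tanh")(layers.Concatenate()([h, ctx_in]))

# Decoder: generate the system's response tokens (teacher forcing at training time).
dec_in = layers.Input(shape=(MAX_OUT,), name="response_tokens_shifted")
dec_emb = layers.Embedding(VOCAB, 128, mask_zero=True)(dec_in)
dec_out = layers.LSTM(256, return_sequences=True)(dec_emb, initial_state=[h, c])
probs = layers.Dense(VOCAB, activation="softmax")(dec_out)

model = Model([utt_in, ctx_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

In a plain NMT-as-dialogue setup, the `context_features` input would simply not exist; feeding the network context is the difference the blog post's diagram points to.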
That is all we have so far, since Google hasn't released any technical paper on Duplex. If one comes out, we will look into the issue more deeply.
Blog Posts
How Ian Goodfellow Failed
Ian Goodfellow gave an honest and heartfelt interview about how he failed before he was famous. The part we like the most? Here:
“12. What is the best piece of advice you could give to your past self?
I wish I’d used some of those GPUs I bought for deep learning to mine some bitcoin.”
Video
Emojify by Dibakar Saha
One of our members at AIDL, Dibakar Saha, always churns out interesting projects in computer vision. This time it is Emojify, which analyzes human facial expressions and matches them with emoji. See his thread on AIDL too.
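For readers who want to try something similar, here is a minimal sketch of the emojify idea: detect a face, classify its expression, then map the predicted label to an emoji. This is our own illustration, not Dibakar's actual code; the model file name, the 48x48 input size and the label set are assumptions.

```python
# A minimal sketch of the emojify idea, not Dibakar's actual implementation.
# Assumes a pre-trained facial-expression classifier saved as "expression_cnn.h5"
# that takes 48x48 grayscale crops; the label set below is also an assumption.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["angry", "happy", "neutral", "sad", "surprise"]
EMOJI = {"angry": "😠", "happy": "😄", "neutral": "😐", "sad": "😢", "surprise": "😲"}

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
expr_model = load_model("expression_cnn.h5")  # hypothetical model file

def emojify(frame):
    """Return (emoji, label) for the first detected face in a BGR frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = expr_model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        return EMOJI[label], label
    return None
```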
About Us