The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Curators
This week on AIDL:
- AI news from CES 2019
- Facebook’s poaching of Edward Grefenstette from DeepMind
- The significance of Mozilla's Common Voice initiative
Join our community for real-time discussions here: Expertify
This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook’s most active A.I. group with 193,000+ members and host an occasional “office hour” on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
News
Edward Grefenstette is now with FAIR
How much does a top AI researcher make? We did not know until OpenAI made the salaries of several top researchers, such as Ilya Sutskever, public. We learned that a top-level researcher can make a million dollars a year even at a non-profit. That explains why there is so much buzz when a researcher like Edward Grefenstette moves from one lab to another.
What does Dr. Grefenstette do? He started out as a researcher in mathematical logic, but in 2014 he co-authored a landmark paper with Nal Kalchbrenner (now at Google Brain) and Phil Blunsom (now at Oxford) which suggested that convolutional neural networks could be used to model sentences. For context, before that paper most researchers believed the best sentence model was probably the BLSTM. Bringing CNNs into the mix opened up many interesting possibilities for research and development, because CNNs parallelize much better than BLSTMs.
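To make the contrast concrete, here is a minimal sketch of a 1-D convolutional sentence classifier in tf.keras. It is not the DCNN from the paper; the vocabulary size, sequence length, and layer settings are illustrative placeholders.

```python
# A minimal 1-D CNN sentence classifier, in the spirit of (but much simpler
# than) the Kalchbrenner-Grefenstette-Blunsom DCNN. All hyperparameters are
# illustrative placeholders, not values from the paper.
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed maximum sentence length (in tokens)

model = tf.keras.Sequential([
    # Map token ids to dense word vectors.
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    # Convolve over the word dimension; every window position is computed
    # independently of the others.
    tf.keras.layers.Conv1D(filters=256, kernel_size=5, activation="relu"),
    # Pool over time to get a fixed-size sentence representation.
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    # Binary sentiment-style output head, just as an example task.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, ...)  # x_train: (num_sentences, MAX_LEN) int token ids
```

Because the Conv1D layer computes every window position independently, the whole sentence can be processed in parallel, whereas a BLSTM must step through the tokens sequentially in both directions.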
Pulse of AI Last Week
Intel and Alibaba partner on 3-D athlete tracking
DARPA tries to use AI to find hidden patterns in events
Blog Posts
Auto Keras and AutoML by Adrian Rosebrock.
Adrian Rosebrock investigated Auto-Keras, the Keras-based take on AutoML. Beyond his usual meticulous step-by-step walkthrough, he found that the automatically tuned network was not as good as his hand-defined architecture.
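For readers who want to try the same kind of comparison, here is a minimal sketch of an Auto-Keras run. It assumes the 1.x-style ImageClassifier API (the interface has changed between releases), and the dataset, max_trials, and epochs values are illustrative rather than the settings from the blog post.

```python
# Minimal Auto-Keras sketch on MNIST. This assumes the 1.x-style API;
# the interface used in the original blog post may differ.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Let Auto-Keras search over a small number of candidate architectures.
clf = ak.ImageClassifier(max_trials=3)   # illustrative search budget
clf.fit(x_train, y_train, epochs=10)     # illustrative training budget

print("test metrics:", clf.evaluate(x_test, y_test))

# Export the best model found so you can compare it with a hand-defined one.
best_model = clf.export_model()
best_model.summary()
```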
Quote From Prof. Hinton
The future depends on some graduate student who is deeply suspicious of everything I have said. – Geoffrey Hinton
Open Source
Mozilla’s Common Voice
Many people ask the question: "Where is the ImageNet of speech recognition?" The closest thing we know of is VoxForge.org, which collects voice data from volunteers who speak different languages. The open secret of voice data collection is that the data has to match the target modality closely: if your domain is telephone speech, collecting speech from desktop microphones will give you a very poor model.
Once you have the data, you also have to transcribe it. That is the equivalent of labeling in a general machine learning task, but because speech is a sequence of sounds, you also need to get the word order right. You may also want deeper annotations, such as the speakers' accents or mispronunciations. All these variations are why transcription is expensive.
Since collecting speech is hard and transcription is expensive, very few companies want to share their data, even when they are happy to open source their recognizers.
All of this should make you appreciate why Mozilla's Common Voice effort is so important. What they are trying to do is build a large-scale, open speech database so that researchers around the world can build their own models.
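As a concrete illustration of what such a database looks like to a model builder, here is a minimal sketch that pairs each clip with its transcript from a Common Voice-style manifest. The file name validated.tsv and the column names path, sentence, and accent are assumptions about the release format and may differ between Common Voice versions.

```python
# Read a Common Voice-style manifest and pair each audio clip with its
# transcript. File and column names below are assumptions about the
# release format and may differ between Common Voice versions.
import csv

def load_clips(manifest_path="validated.tsv"):
    """Yield (audio_path, transcript, accent) tuples from a TSV manifest."""
    with open(manifest_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            yield row["path"], row["sentence"], row.get("accent", "")

if __name__ == "__main__":
    for audio_path, transcript, accent in load_clips():
        print(f"{audio_path}\t{transcript}\t{accent}")
        break  # just show the first clip
```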
Other News
Other Interesting Stories
- Deep500: A group of researchers set out to create a deep learning benchmark for supercomputers.
- 1% of Finland is learning AI: What happens when a significant portion of a population learns the basics of AI?
- Quartz AI open-sources a platform to help journalists: Mostly about searching for relevant events, but it does sound like a good feature for journalists.
About Us
Join our community for real-time discussions here: Expertify