The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Humble Curators
In this issue, we take a closer look at AI researchers’ salaries as reported by the NYT. In the Blog section, we discuss the article by Prof. Michael I. Jordan.
To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65. Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760
Sponsor
Comet.ml – Machine Learning Experimentation Management
Comet.ml allows tracking of machine learning experiments with an emphasis on collaboration and knowledge sharing.
It allows you to keep your current workflow and report your results from any machine, whether it’s your laptop or a GPU cluster.
It automatically tracks your hyperparameters, metrics, code, and model definition, and lets you easily compare and analyze models.
Just add a single line of code to your training script:
experiment = comet_ml.Experiment(api_key="YOUR_API_KEY")
Comet is completely free for open source projects. We also provide 100% free unlimited access for students and academics. Click here to sign up!
News
Salaries of Deep Learning Researchers
OpenAI made the salaries of its top researchers public: we learn that Ilya Sutskever makes more than $1.9 million and Ian Goodfellow more than $800k. These numbers are jaw-dropping even by Silicon Valley standards.
Speaking as old-timers in the business: machine learning engineers (MLEs) weren’t always compensated well. Why?
In the past, MLEs were ranked alongside software engineers. Yet because machine learning tasks have less predictable timelines and success rates, their pay grade was usually lower than programmers’. So, as employees, MLEs really got the worst of it: babysitting both machine learning and programming tasks, yet often seen as poor performers. Only when AI/ML became better understood did we finally see superstar earners among MLEs. Of course, we are glad that they can make a good living.
We also learn from the NYT article that other companies are spending 2-3 times more than OpenAI. So maybe we should ask: how can small startups compete when even a non-profit is willing to pay ~$2M for one researcher? Part of the answer may be stock options, but more important is whether you can give an MLE a cause to fight for, a passion so to speak.
Blog Posts
AI Revolution Hasn’t Happened Yet
When Prof. Michael Jordan speaks, people listen. But if he is not the basketball player, who is he?
Prof. Michael I. Jordan is a machine learning researcher; one of his influential works is “Learning in Graphical Models,” which used to be a must-read when studying models such as the hidden Markov model or the Markov random field. Concepts from graphical models are still widely used in deep learning (though perhaps not widely known among beginners). His work should be very interesting to you if you are a serious student of machine learning.
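For readers who have never met these models, the forward algorithm for a hidden Markov model is a compact taste of the kind of inference graphical models support. This is our own illustrative sketch, not code from the article; the toy parameters are made up:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm for a hidden Markov model: computes the
    probability of an observation sequence by summing over all
    hidden-state paths, a classic graphical-model inference routine.

    pi:  initial state distribution, shape (S,)
    A:   state transition matrix, shape (S, S)
    B:   emission probabilities, shape (S, O)
    obs: sequence of observation indices
    """
    alpha = pi * B[:, obs[0]]          # joint of first obs and each state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
    return alpha.sum()                 # marginalize out the final state

# Toy example: two hidden states, two possible observations
pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
print(hmm_forward(pi, A, B, [0, 0, 1]))
```

The same alpha-recursion pattern underlies inference in many graphical models.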
Back to the article: while his conclusion is that the “AI Revolution hasn’t happened yet,” his major complaint is really that the term “A.I.” has assumed a much broader meaning and has started to eclipse fields such as statistics and operations research. In fact, according to him, this confusion is one of the reasons current AI progress has stalled.
The article is making the rounds on social media, and it is a great read even for experienced practitioners.
An Overview of Proxy-label Approaches for Semi-supervised Learning
Sebastian Ruder wrote an overview of semi-supervised learning that discusses several approaches based on proxy labels. He also discusses his own findings from a paper at ACL 2018.
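Self-training, the simplest proxy-label family Ruder covers, is easy to sketch: a model pseudo-labels the unlabeled points it is most confident about and retrains on them. The nearest-centroid classifier and the confidence threshold below are our own toy choices, not from his post:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, n_rounds=3, threshold=0.8):
    """Toy self-training: repeatedly pseudo-label the unlabeled points a
    nearest-centroid classifier is confident about, then retrain on them.
    Returns the augmented training set (features, labels)."""
    X, y = X_lab.copy(), y_lab.copy()
    classes = np.unique(y_lab)
    for _ in range(n_rounds):
        if len(X_unlab) == 0:
            break
        # "Fit": one centroid per class
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        # "Predict": softmax over negative distances as a confidence score
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), classes[p.argmax(axis=1)]
        keep = conf >= threshold
        if not keep.any():
            break
        # Absorb confident pseudo-labels into the training set
        X = np.vstack([X, X_unlab[keep]])
        y = np.concatenate([y, pred[keep]])
        X_unlab = X_unlab[~keep]
    return X, y
```

The confidence threshold is what keeps early mistakes from snowballing; Ruder's post surveys more robust variants of this idea.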
Write an AI to win at Pong from scratch with Reinforcement Learning
Perhaps the nicest beginner tutorial on deep reinforcement learning.
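A core step in the policy-gradient approach such Pong tutorials use is turning a sparse game score into a per-step learning signal. A minimal sketch of that reward-discounting step (function and variable names are ours):

```python
import numpy as np

def discount_rewards(rewards, gamma=0.99):
    """Turn per-step rewards (mostly zeros in Pong, with +1/-1 when a
    point is scored) into discounted returns: each step is credited
    with the geometrically decayed sum of the rewards that follow it."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = running * gamma + rewards[t]
        returns[t] = running
    return returns

# A 3-step episode that ends with winning a point:
print(discount_rewards([0.0, 0.0, 1.0]))
```

Each return is then used to scale the gradient of the log-probability of the action taken at that step, so moves leading up to a won point get reinforced.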
Open Source
Awesome Text Summarization
mathsyouth’s GitHub repository is not strictly about deep learning per se, but we found it a great resource for text summarization. Check out the section on abstractive summarization: most major papers on DL-based methods are there.
PyTorch 0.4 is out
PyTorch 0.4.0 brings many interesting improvements. Here are a few highlights:
- Merging Tensor and Variable,
- C++ extensions,
- A new autograd container class that lets you trade compute for memory,
- Various ONNX improvements.
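The Tensor/Variable merge means autograd now works on plain tensors. A minimal sketch, assuming PyTorch 0.4 or later is installed:

```python
import torch

# Before 0.4 you had to wrap tensors in Variable to track gradients;
# in 0.4 a plain Tensor carries requires_grad itself.
x = torch.ones(3, requires_grad=True)
y = (x * x).sum()   # y = sum of squares of x
y.backward()        # autograd runs directly on the tensor
print(x.grad)       # dy/dx = 2x, i.e. tensor([2., 2., 2.])
```

Code that imports `torch.autograd.Variable` still runs in 0.4, but the wrapper is now a no-op.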
About Us