The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
Interesting week for AI and deep learning: ICLR 2017 results are out, deeplearning.ai's Course 5 is also out. We have two items on ICLR 2017, and you will also see Arthur's "Quick Impression" on Course 5.
We also learned about the third act of Andrew Ng: a $175M AI Fund. We'll cover the Fund in our News section.
As always, if you like our newsletter, feel free to forward it to your friends/colleagues!
This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
Driverless AI speeds up data science workflows by automating feature engineering, model tuning, ensembling, and model deployment. Use Driverless AI to avoid common mistakes such as under- or overfitting, data leakage, or improper model validation. Try Driverless AI today - request a free 21-day trial.
After deeplearning.ai and landing.ai, we finally see the third act of Andrew: a $175M AI fund. We first heard about the fund in mid-August.
Unlike other VC firms, AI Fund is similar to Betaworks, which can be seen as an incubator, an accelerator, and a fund (see this article for a discussion). Not surprisingly, the first company the AI Fund is supporting is landing.ai, which Andrew himself created and manages.
One way this could evolve is this fund/incubator ends up effectively being an outsourced AI R&D arm for large companies where spin-outs get acquired by them. We fully expect the fund to return many times its capital.
Wow, someone stitched Nicolas Cage's face onto characters in many famous movies, such as Terminator 2, Superman, and Raiders Of The Lost Ark. As you know, there have been concerns about whether such deep-learning-based video editing technology could be abused, e.g. we just learned that AI can create fake pornography using a similar technique. A good problem for future research: how can we differentiate natural images from those generated by AI?
A new version of fast.ai is out, and this time it is based on the popular DL framework PyTorch. It looks very interesting - given that fast.ai generally gets very good reviews, maybe we should check the course out as well!
This is a link shared by our members recently on matrix calculus. Matrix calculus is a crucial tool for understanding proofs of machine learning techniques, especially for parameter estimation. Yet good texts are rare; we can name the Matrix Cookbook and a few others, but these sources require some preliminary knowledge to read. This text by Terence Parr and Jeremy Howard fills the gap and presents a comprehensive overview of the topic.
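To get a feel for the kind of identity such a text covers, here is a small sketch (our own illustration, not from the article) that numerically checks the standard matrix-calculus result that the gradient of x^T A x with respect to x is (A + A^T) x, by comparing it against central finite differences:

```python
import numpy as np

# Illustrative check of a standard matrix-calculus identity:
#   d/dx (x^T A x) = (A + A^T) x
# We compare the analytic gradient against a central
# finite-difference approximation on random inputs.

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def f(v):
    # Quadratic form x^T A x
    return v @ A @ v

analytic = (A + A.T) @ x

eps = 1e-6
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)  # perturb one coordinate at a time
])

assert np.allclose(analytic, numeric, atol=1e-5)
```

When A is symmetric, the identity reduces to the familiar 2Ax, which is the form most often seen in least-squares derivations.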
This is a great piece written by Douglas Hofstadter on the limitations of current deep-learning-based machine translation, or neural network-based machine translation (NNMT). While he is negative about the technology, he does point out many specific weaknesses of current NNMT, such as fluency and word order. These sound like interesting research directions for AIDLers in the future.