Artificial Intelligence and Deep Learning Weekly – Issue 85
The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Curators
We were on vacation over Christmas and are back this week. We bring you several interesting links:
- What does Prof. Hinton think about AGI?
- The MIT courses on deep learning, reinforcement learning, and self-driving cars (SDC),
- 2018 in review.
Join our community for real-time discussions here: Expertify
This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook’s most active A.I. group with 193,000+ members and host an occasional “office hour” on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
News
Pulse of AI (Since Last Issue)
Funding:
- Ada AI raised $14M
- Pronto.ai, at $4,999 for a “special introductory program”
- Dataiku raised $100M
- Baraja raised $32M
- Sophia Genetics raised a $77M Series E
Investment:
Blog Posts
AGI is Nowhere Near, According to Demis Hassabis and Geoffrey Hinton
In ML reporting, VentureBeat outdoes itself from time to time, and this piece, though it appears as a blog post, deserves your attention.
There are two parts to the piece. The first is how Hassabis thinks about AGI. As you know, DeepMind created a superhuman-level Go engine with deep reinforcement learning. With AlphaZero, researchers also found that they can configure the engine to learn multiple board games, such as shogi, at a superhuman level. This should make you ask: can we use the same technology to create AGI as well?
Hassabis says no:
Despite DeepMind’s impressive achievements, Hassabis cautions that they by no means suggest AGI is around the corner — far from it. Unlike the AI systems of today, he says, people draw on intrinsic knowledge about the world to perform prediction and planning. Compared to even novices at Go, chess, and shogi, AlphaGo and AlphaZero are at a bit of an information disadvantage.
Now let’s look at Prof. Hinton’s view, which is even more interesting. Say you create a robotic agent and put it into society. Can’t you just run a reinforcement learning algorithm to train the robot to behave like a human, i.e., to create an AGI?
It’s not that easy: you run into a scalability issue, because most real-life reward signals are weak. As Prof. Hinton said,
“Every so often you get a scalar signal that tells you that you did good, and it’s not very often, and there’s not very much information, and you’d like to train the system with millions of parameters or trillions of parameters just based on this very wimpy signal,”
So the kicker here is how to solve this scalability issue in reinforcement learning. Prof. Hinton thinks that a hierarchy of goals could be the answer:
“By creating subgoals, and paying off people to achieve these subgoals, you can magnify these wimpy signals by creating many more wimpy signals,” he added.
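To make the subgoal idea concrete, here is a minimal toy sketch in Python. This is our own illustration rather than anything from the article; the corridor, the subgoal positions, and the bonus values are all made up. It only counts how many reward signals an agent receives with and without subgoal bonuses, which is the “magnify these wimpy signals” effect Hinton describes.

```python
import random

# Toy sketch (our own illustration, not from the article): an agent walks a
# 1-D corridor from 0 to N and normally gets a single "wimpy" scalar reward
# only if it reaches the end. Adding hypothetical subgoal positions pays out
# extra small rewards, so each episode carries many more training signals.

N = 20                    # corridor length (arbitrary toy value)
SUBGOALS = {5, 10, 15}    # hypothetical intermediate subgoal positions

def run_episode(step_right_prob, use_subgoals, max_steps=200):
    """Random-walk one episode; return the list of reward signals received."""
    pos, rewards, reached = 0, [], set()
    for _ in range(max_steps):
        pos += 1 if random.random() < step_right_prob else -1
        pos = max(pos, 0)
        if use_subgoals and pos in SUBGOALS and pos not in reached:
            reached.add(pos)
            rewards.append(0.2)   # small bonus on first visit of a subgoal
        if pos == N:
            rewards.append(1.0)   # the original sparse terminal reward
            break
    return rewards

random.seed(0)
episodes = 100
sparse = sum(len(run_episode(0.6, use_subgoals=False)) for _ in range(episodes))
shaped = sum(len(run_episode(0.6, use_subgoals=True)) for _ in range(episodes))
print("reward signals over", episodes, "episodes without subgoals:", sparse)
print("reward signals over", episodes, "episodes with subgoals:   ", shaped)
```

On this toy setup the subgoal variant typically produces several times more (still small) signals over the same number of episodes, without changing the underlying task.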
In any case, the article has a lot of gems, so it deserves your time for a closer look. We also think both Hassabis’s and Prof. Hinton’s views are based on their extensive experience with existing deep learning. To us, their arguments are more convincing than, say, those of pure futurists who blindly believe in the coming of the singularity.
2018 in Review
Here are a couple of posts that review AI developments in 2018.
Not exactly a review, but this is a good read about convolutional neural networks.
MIT Courses on Deep Learning, Reinforcement Learning and Autonomous Vehicles
The very popular MIT course on SDC has now expanded into three more classes, which cover the fundamentals of deep learning and reinforcement learning.
Strang on Deep Learning Activation Functions
We saw Prof. Strang’s essay on deep learning activation functions, and of course we are excited about whatever he writes. More interestingly, Prof. Strang is also writing a new book, “Linear Algebra and Learning from Data,” which will be published soon (!). It is interesting because there has always been a void in how to learn the mathematics of machine learning, including topics such as parameter estimation and more basic techniques in matrix calculus.
You may now order the book; the publisher says they will confirm orders in January.
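As a flavor of the matrix calculus involved, here is a short sketch of the least-squares gradient identity that shows up in basic parameter estimation: the gradient of f(w) = ||Xw − y||² is 2Xᵀ(Xw − y). This is our own example, not material from Prof. Strang’s book; we simply check the identity numerically with NumPy.

```python
import numpy as np

# Our own example (not from the book): verify that the gradient of
# f(w) = ||Xw - y||^2 equals 2 * X^T (Xw - y) by comparing against
# central finite differences.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # toy design matrix
y = rng.normal(size=50)        # toy targets
w = rng.normal(size=3)         # point at which we evaluate the gradient

analytic = 2 * X.T @ (X @ w - y)

f = lambda v: np.sum((X @ v - y) ** 2)
eps = 1e-6
numeric = np.zeros_like(w)
for i in range(len(w)):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    numeric[i] = (f(w_plus) - f(w_minus)) / (2 * eps)

print("max difference between analytic and numeric gradient:",
      np.max(np.abs(analytic - numeric)))
```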
lexfridman.com
AIDL members frequently ask whether there are interesting podcasts. The answer is yes. One of our recommendations is Prof. Lex Fridman’s “AI Podcast”. The guests he invites are prominent researchers and developers who have contributed to AI, so what they say should interest you.
Grow your expertise with these 5 out-of-field reads
AIDL member and data scientist Briana Brownell came up with 5 interesting reads that are supposed to be out-of-field. Yet, once you look at the list closely, all the topics are deeply relevant to what MLEs and data scientists work on.
The one we like most? Perhaps Homo Deus by Yuval Noah Harari. We are reading his “Sapiens,” which is equally thought-provoking.
Other News
NYT wrote an obituary for Prof. Karen Sparck Jones
SDC:
- Mini Car Delivery Groceries
- The Audi SDC Program
- Uber’s SDC is heading back to Pennsylvania
- Zoox gets a license
- Cruise partnered with GM
Alexa:
Others:
Lighthouse is shutting down. Also, a story about Babylon Health.
About Us
Join our community for real-time discussions here: Expertify