Especially towards Waymo's SDC in Arizona.
In ML reporting, VentureBeat outdoes itself from time to time. And this piece, though it reads like a blog post, deserves your attention.
There are two parts to the piece. The first is how Hassabis thinks about AGI. As you know, DeepMind created a superhuman Go engine with deep reinforcement learning. And with AlphaZero, researchers found they could configure the engine to learn multiple board games, such as shogi, at a superhuman level. This should make you ask: can we use the same technology to create AGI as well?
Hassabis says no:
Despite DeepMind’s impressive achievements, Hassabis cautions that they by no means suggest AGI is around the corner — far from it. Unlike the AI systems of today, he says, people draw on intrinsic knowledge about the world to perform prediction and planning. Compared to even novices at Go, chess, and shogi, AlphaGo and AlphaZero are at a bit of an information disadvantage.
Now let's look at Prof. Hinton's view, which is even more interesting. Say you create a robotic agent and put it into society. Can't you just run a reinforcement learning algorithm to train the robot to behave like a human, i.e., create an AGI?
It's not that easy: you have a scalability issue, because most real-life reward signals are weak. As Prof. Hinton said,
“Every so often you get a scalar signal that tells you that you did good, and it’s not very often, and there’s not very much information, and you’d like to train the system with millions of parameters or trillions of parameters just based on this very wimpy signal,”
So the kicker here is: how do you solve this scalability issue in reinforcement learning? Prof. Hinton thinks that a hierarchy of goals could be the answer:
"“By creating subgoals, and paying off people to achieve these subgoals, you can magnify these wimpy signals by creating many more wimpy signals,” he added."
In any case, this article has a lot of gems, so it deserves your time for a closer look. We also think both Hassabis's and Prof. Hinton's views are grounded in their extensive experience with existing deep learning. To us, their arguments are more convincing than those of, say, pure futurists who blindly believe in the coming of the singularity.
The very popular MIT course on SDC has now expanded into three more classes, which cover the fundamentals of deep learning and reinforcement learning.
We saw Prof. Strang's essay on deep learning activation functions, and of course we are excited about whatever he writes. More interestingly, Prof. Strang is also writing a new book called "Linear Algebra and Learning from Data," to be published soon. (!) It is interesting because there has always been a void in how to learn the mathematics of machine learning, including topics such as parameter estimation and more basic techniques in matrix calculus.
You may order the book now; the publisher says they will confirm orders in January.
AIDL members frequently ask whether there are interesting podcasts. The answer is yes. One of our recommendations is Prof. Lex Fridman's "AI Podcast". All the guests he invites are prominent researchers or developers who have contributed to AI, so what they say should interest you.
AIDL member and data scientist Briana Brownell came up with 5 interesting reads that are supposed to be out-of-field. Yet, once you look at the list closely, all the topics are deeply relevant to what MLEs and data scientists work on.
The one we like most? Perhaps Homo Deus by Yuval Noah Harari. We are reading his "Sapiens," which is equally thought-provoking.
Notable AI Blog Posts