deeplearning.ai just announced a new class which caters to non-technical people. So if you are thinking of preparing material for non-technical staff at your company, this could be a good resource. Coming in 2019.
If you started in AI only in the last few years, one thing you might have heard of is the possibility of AGI. These ideas may feel remote to you because they are so different from ideas such as supervised learning or reinforcement learning.
We feel the same way: AI practitioners these days are quite a different group from AGI advocates. So when OpenAI co-founder Ilya Sutskever said short-term AGI is a real possibility, you should probably take notice.
So why do people believe AGI is possible in the first place? Setting aside the fact that some people just blindly believe in AGI, most AGI advocates actually base their belief on projections from technological progress; Moore's Law is one of the most often quoted. Here, Sutskever's arguments are mostly based on the recent progress of deep learning. One of his arguments is that deep learning has "repeatedly and rapidly broken through 'insurmountable' barriers," and he enumerated several important advances in deep learning such as ImageNet, GANs and AlphaGo.
Would his prediction come true, though, you ask? Of course, just because we can now achieve something that was impossible a few years ago doesn't always mean we can achieve the next thing. So, speaking of deep learning: can we come up with a network architecture which integrates multiple senses together? And if we can think of one, do we have enough computational power to train it?
Perhaps this begs a relevant question: how do you judge a given prediction about AI? In our view, since they are predictions, you cannot quite verify now whether they are true. Perhaps a better way to judge them is to look at the feasibility of their underlying theory. Much of the controversy about AI predictions, such as the more popular "Singularity," comes down to whether we can extrapolate AI progress from progress in other technologies.
We don't pretend to have the answer. We suggest you think independently and come up with your own view.
Here is an interesting interview of Prof. Vladimir Vapnik by MIT's Prof. Lex Fridman. Just from this two-minute segment we have several thoughts, and you may listen to the whole podcast here.
The first is what Vapnik really means. It seems to us that when Vapnik said "Math is a language that uses God," the "God" has a sense of "deterministic," i.e. close to the meaning when Einstein said to Niels Bohr: "God doesn't play dice." So the saying was more like "Math is a language which is deterministic." And later, when he wondered how reality can be described by the simple language of math, it gave us the sense that he believes reality can be described by math.
Perhaps the more important question is, "Should reality in machine learning be described by math?" If we see reality as data we can observe and measure, then we should be more reserved about blindly pursuing a simple equation as a description. For example, when we talk about using regression to describe a set of data, what we are looking for is the best description according to some criteria. Say, when we choose the optimal number of mixtures in a Gaussian mixture model (GMM), we can use the likelihood difference, or BIC, etc. to decide. We wonder whether, other than actual performance metrics, we have anything else upon which to decide among these criteria.
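As an aside, here is a minimal sketch of what BIC-based mixture selection looks like in practice. The use of scikit-learn and the synthetic two-cluster data are our own illustration, not something from the interview:

```python
# Sketch: picking the number of GMM mixtures by BIC.
# Assumes scikit-learn; the data here is synthetic for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 1-D Gaussian clusters, 200 points each.
data = np.vstack([
    rng.normal(loc=-4.0, scale=1.0, size=(200, 1)),
    rng.normal(loc=4.0, scale=1.0, size=(200, 1)),
])

# Fit GMMs with 1..6 components and record the BIC of each.
bics = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics[k] = gmm.bic(data)

# Lower BIC is better: it rewards likelihood but penalizes
# extra parameters, so it guards against over-fitting with
# too many mixtures.
best_k = min(bics, key=bics.get)
print(best_k)
```

Note that BIC itself is just another criterion with its own assumptions, which is exactly the point above: without an external performance metric, the choice among such criteria is ultimately a judgment call.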
All in all, this is a very interesting interview. We also asked Prof. Fridman how Prof. Vapnik thinks about the rise of deep learning. Prof. Fridman promised to post another clip on AIDL. So stay tuned.
AI Blogs Last Week