As you all know, Prof. Ng has a new specialization on Deep Learning. I wrote about the course extensively yet informally, which includes two "Quick Impressions" written before and after I finished Courses 1 to 3 of the specialization. I also wrote three posts just on Heroes of Deep Learning, covering Prof. Geoffrey Hinton, Prof. Yoshua Bengio, and Prof. Pieter Abbeel and Dr. Yuanqing Lin. And Waikit and I started a study group, Coursera deeplearning.ai (C. dl-ai), focused on just the specialization. This is my full review of Course 1 after finishing watching all the videos. I will give a description of what the course is about, and why you would want to take it. There are already a few very good reviews (from Arvind and Gautam), so my perspective will be based on my experience as the admin of AIDL, as well as a learner of deep learning myself.
The Most Frequently Asked Question in AIDL
If you don't know, AIDL is one of the most active Facebook groups on the matter of A.I. and deep learning. So what is the most frequently asked question (FAQ) in our group then? Well, nothing fancy:
How do I start deep learning?
In fact, we get asked that question daily, and I have personally answered it more than 500 times. Eventually I decided to create an FAQ - which basically points back to "My Top-5 List", a list of resources for beginners.
The Second Most Important Class
That brings us to the question: what should be the most important class to take? Oh well, for 90% of learners these days, I would first recommend Andrew Ng's "Machine Learning", which is good both for beginners and for more experienced practitioners like me. Luckily for me, I took it around 2 years ago and have benefited from the class ever since. It is a good basic class if you want to take any other ML/DL classes.
But then what's next? What would be a good second class? That's always a question in my mind. Karpathy's cs231n comes to mind, or maybe Socher's cs224[dn] is another choice. But they are too specialized in their subfields. If you view them from the perspective of general deep learning, the material on architectures is incomplete. Or you can think of a general class such as Hinton's NNML. But that class confuses even PhD friends I know. And asking beginners to learn restricted Boltzmann machines is just too much. The same can be said for Koller's PGM. Hinton's and Koller's classes, to be frank, are quite advanced. It's better to take them if you already know what you are doing.
That narrows us to several choices which you might already be considering: first is fast.ai by Jeremy Howard, second is the deep learning nanodegree from Udacity. But in my view, are those really the best for beginners? It is always a good idea to approach a technical subject from the ground up. e.g. If I want to study string search, I would want to rewrite some classic algorithms such as KMP. And for deep learning, I always think you should start with a good implementation of back-propagation.
That's why, for a long time, my Top-5 List picked cs231n and cs224d as the second and third classes. They are the best I can think of after viewing the ~20 DL classes I know of. Of course, this changes with deeplearning.ai.
Learning Deep Learning by Program Verification
So what's so special about deeplearning.ai? Just like Andrew's Machine Learning class, deeplearning.ai follows an approach I would call program verification. What that means is that instead of guessing whether your algorithm is correct just by staring at it, deeplearning.ai gives you an opportunity to come up with an implementation on your own, provided that it matches the official algorithm.
Why is it an important feature then? Well, let's just say that not everyone believes this is the right approach. e.g. Back around when I started, many well-intentioned senior scientists told me that such a matching approach is not really good experimentally, because if you have randomness, you should simply run your experiment N times and calculate the variance.
So I certainly understand the scientists' point. But then, it was a huge pain in the neck to verify whether your program is correct.
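Gradient checking, which Ng's classes also teach, is the standard way out of that pain: compare your analytic gradients against finite differences on a small input. Here is a minimal sketch in numpy; the function names are mine, not from any course:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Approximate df/dx element-by-element with central differences."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        i = it.multi_index
        old = x[i]
        x[i] = old + eps
        f_plus = f(x)
        x[i] = old - eps
        f_minus = f(x)
        x[i] = old  # restore the original value
        grad[i] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad

# Check the analytic gradient of f(x) = sum(x^2), which should be 2x.
x = np.array([1.0, -2.0, 3.0])
analytic = 2 * x
numeric = numerical_grad(lambda v: np.sum(v ** 2), x)
assert np.allclose(analytic, numeric, atol=1e-6)
```

If the two gradients agree to within a small tolerance, your backprop code is very likely correct; if not, you at least know which parameter block to stare at.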
But can you learn any other way? Nope, you have to have some practical experience in implementation. So that's the strength of deeplearning.ai - a guided approach to implementing deep learning algorithms.
What do you Learn in Course 1?
For the most part, implementing the feed-forward (FF) algorithm and the back-propagation (BP) algorithm from scratch. That should be a very valuable experience for the majority of us, because other than people who have the fortune/misfortune to write BP in production, you really don't have any opportunity to write one yourself. But this class gives you a chance.
Another important aspect of the class is that the mathematical formulation of BP is fine-tuned so that it is suitable for implementation in Python/numpy, the course's designated language.
Wow, Implementing Back Propagation from scratch? Wouldn't it be very difficult?
Not really; in fact, many members finish the class in less than a week. So here is the key: while many of us call it a from-scratch implementation, it is in fact highly guided. All the tough matrix differentiation is done for you. There are also strong hints on which numpy functions you should use. At least for me, the homework is very simple. (Also see Footnote )
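To give you a flavor of what you end up writing, here is a compact sketch of FF and BP for a one-hidden-layer binary classifier in numpy. This is my own illustration, not the course's official solution; only the Z/A/dW style of notation mimics the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples with 3 features each, binary labels.
X = rng.standard_normal((3, 4))       # shape (n_features, n_samples)
Y = np.array([[0.0, 1.0, 1.0, 0.0]])  # shape (1, n_samples)

# One hidden layer (5 tanh units), sigmoid output unit.
W1, b1 = rng.standard_normal((5, 3)) * 0.01, np.zeros((5, 1))
W2, b2 = rng.standard_normal((1, 5)) * 0.01, np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses, lr, m = [], 0.5, X.shape[1]
for step in range(1000):
    # Feed-forward pass.
    A1 = np.tanh(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    losses.append(-np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)))

    # Back-propagation (cross-entropy loss with sigmoid output).
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The guided homework essentially hands you the derivative formulas (like `dZ2 = A2 - Y` above) and asks you to fill in the numpy lines; the truly hard part, the matrix calculus, is already done.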
Do you need to take Ng's "Machine Learning" before you take this class?
That's preferable but not mandatory, although I found that without knowing the more classical view of ML, you won't be able to understand some of the ideas in the class, e.g. the difference in how bias and variance are viewed. In general, all good-old machine learning (GOML) techniques are still used in practice. Learning them doesn't seem to have any downsides.
You may also notice that both "Machine Learning" and deeplearning.ai cover neural networks. So is the material duplicated? Not really. deeplearning.ai guides you through the implementation of multi-layer deep neural networks, which IMO requires a more careful and consistent formulation than a simple network with one hidden layer. So doing both won't hurt, and in fact it's likely that you will have to implement a certain method multiple times in your life anyway.
Wouldn't this class be too Simple for Me?
So another question you might ask: if the class is so simple, does it even make sense to take it? The answer is a resounding yes. I am quite experienced in deep learning (~4 years by now) and in machine learning, which I have been learning since college. I still found the course very useful, because it offers many insights which only industry experts know. And of course, when a luminary such as Andrew speaks, you do want to listen.
In my case, I also wanted to take the course so that I can write reviews about it. Of course, my colleagues at Voci would benefit from my knowledge as well. But even with that in mind, I still learned several new things from listening to Andrew.
That's what I have so far. Follow us on Facebook at AIDL, and I will post reviews of the later courses in the future.
 So what is a true from-scratch implementation? Perhaps you write everything in C, even the matrix manipulation part?
NMT tutorial written by Thang Luong - my impression is that it is a shorter tutorial with a step-by-step procedure. The part which is slightly disappointing is that it doesn't quite record exactly how the benchmarking experiments were run and evaluated. Of course, it's kind of trivial to fix, but it did take me a bit of time.
A bit special: Tensor2Tensor uses a novel architecture instead of a pure RNN/CNN encoder/decoder. It gives a surprisingly large amount of gain, so it's likely that it will become a trend in NMT in the future.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation by Cho et al. (link) - A very innovative and smart paper by Kyunghyun Cho. It also introduces the GRU.
Sequence to Sequence Learning with Neural Networks by Ilya Sutskever et al. (link) - By Google's researchers; perhaps it shows for the first time that an NMT system can be comparable to the traditional pipeline.
Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (link)
Neural Machine Translation by Jointly Learning to Align and Translate by Dzmitry Bahdanau et al. (link) - The paper which introduced attention
Neural Machine Translation by Minh-Thang Luong (link)
Effective Approaches to Attention-based Neural Machine Translation by Minh-Thang Luong et al. (link) - On how to improve the attention approach based on local attention.
Massive Exploration of Neural Machine Translation Architectures by Britz et al (link)
Recurrent Convolutional Neural Networks for Discourse Compositionality by Kalchbrenner and Blunsom (link)
Quick Impression on deeplearning.ai's "Heroes of Deep Learning". This time it is the interview with Prof. Yoshua Bengio. As always, don't post any copyrighted material here at the forum!
* Out of the 'Canadian Mafia', Prof. Bengio is perhaps the least known among the three. Prof. Hinton and Prof. Lecun have their own courses, and as you know they work for Google and Facebook respectively. While Prof. Bengio does work for Microsoft, his role is more that of a consultant.
* You may know him as one of the coauthors of the book "Deep Learning". But then again, who really understands that book, especially Part III?
* Whereas Prof. Hinton strikes me as an eccentric polymath, Prof. Bengio is more of a conventional scholar. He was influenced by Hinton in his early study of AI, which at the time was mostly expert-system based.
* That explains why everyone seems to leave his interview out, which I found very interesting.
* He named several of his group's contributions: most of what he named were fundamental results, like Glorot and Bengio 2010 on what is now widely called Xavier initialization, attention in machine translation, his early work on language models using neural networks, and of course GANs from Goodfellow. All are more technical results. But once you think about these ideas, they are about understanding, rather than about trying to beat the current records.
* Then he said a few things about early deep learning research which surprised me. First is on depth: as it turns out, the benefit of depth was not as clear in the early 2000s. That's why when I graduated from my Master's (2003), I had never heard of the revival of neural networks.
* And then there was the doubt about using ReLU, which is a current-day staple of convnets. But the reason makes so much sense - ReLU is not smooth at all points of R. So would that cause a problem? Anyone who knows some calculus would rationally have that doubt.
* His idea on learning deep learning is also quite on point - he believes you can learn DL in 5-6 months if you have the right training, i.e. a good computer science and math education. Then you can just pick up DL by taking courses and reading proceedings from ICML.
* Finally, there is his current research on the fusion of neural networks and neuroscience. I found this part fascinating. Is backprop really used in the brain as well?
Following experienced guys like Arvind Nagaraj and Gautam Karmakar, I just finished all the coursework for deeplearning.ai. I haven't finished all the videos yet, but it's a good idea to write another "impression" post.
* It took me about 10 days of clock time to finish all the coursework. The actual work only took me around 5-6 hours. I guess my experience speaks for many veteran members at AIDL.
* python numpy has its quirks, but if you know R or matlab/octave, you are good to go.
* The assignments of Course 1 guide you through building an NN "from scratch". Course 2 guides you through implementing several useful initialization/regularization/optimization algorithms. They are quite cute - you mostly just fill in the right code in python numpy.
* I quoted "from scratch" because you actually don't need to write your own matrix routines. So this "from scratch" is quite different from people who try to write an NN package "from scratch using C", in which you probably need to write a bit of matrix-manipulation code and derive a set of formulae for your codebase. Still, Ng's course gives you a taste of how these programs feel. In that regard, perhaps the next best thing is Michael Nielsen's NNDL book.
* Course 3 is quiz-only, so it is by far the easiest to finish. Just like Arvind and Gautam, I think it is the most intriguing course within the series (so far), because it gives you a lot of big-picture advice on how to improve an ML system. Some of this advice was new to me.
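As an illustration of the kind of routine the Course 2 initialization assignment has you fill in, here is a hedged sketch of the common schemes (the naive small-random baseline, Xavier/Glorot, and He). The function name and layout are mine, not the course's:

```python
import numpy as np

def initialize_layers(layer_dims, method="xavier", seed=0):
    """Return {'W1':..., 'b1':..., ...} for a fully connected net.

    layer_dims, e.g. [3, 5, 1]: input size 3, hidden size 5, output size 1.
    """
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        fan_in, fan_out = layer_dims[l - 1], layer_dims[l]
        if method == "xavier":      # Glorot & Bengio 2010, suits tanh layers
            scale = np.sqrt(1.0 / fan_in)
        elif method == "he":        # He et al. 2015, suits ReLU layers
            scale = np.sqrt(2.0 / fan_in)
        else:                       # naive baseline: small random numbers
            scale = 0.01
        params[f"W{l}"] = rng.standard_normal((fan_out, fan_in)) * scale
        params[f"b{l}"] = np.zeros((fan_out, 1))
    return params

params = initialize_layers([3, 5, 1], method="he")
```

The point of the assignment is then to train the same network under each scheme and watch how much the initialization choice alone changes convergence.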
Anyway, that's what I have. Once I watch all the videos, I will also come up with a full review. Before that, go check out our study group "Coursera deeplearning.ai"!
So I was going through deeplearning.ai. You know we started a new FB group on it? We haven't made it public yet, but yes, we are very excited.
Now, one thing you might notice about the class is that there are these optional lectures in which Andrew Ng interviews luminaries of deep learning. Those lectures, in my view, are very different from the course lectures. Most of the topics mentioned are research, and beginners would find them very perplexing. So I think these lectures deserve separate sets of notes. I still call it a "quick impression" because usually I will do around 1-2 layers of literature search before I'd say I grok a video.
* Sorry I couldn't post the video because it is copyrighted by Coursera, but it should be very easy for you to find it. Of course, respect our forum rules and don't post the video here.
* This is a very interesting 40-min interview of Prof. Geoffrey Hinton. Perhaps it should also be seen as an optional material after you finish his class NNML on coursera.
* The interview is at a research level. That means you would understand more of it if you took NNML or read part of Part III of the "Deep Learning" book.
* There is some material you may have heard from Prof. Hinton before, including how he became an NN/brain researcher, how he came up with backprop, and why he is not the first one who came up with it.
* There is also some material which was new to me, like why his and Rumelhart's paper was so influential. Oh, it has to do with his early experience of his marriage (Lecture 2 of NNML).
* The role of Prof. Ng in the interview is quite interesting. Andrew is also a giant in deep learning, but Prof. Hinton is more the founder of the field. So you can see that Prof. Ng was trying to understand several of Prof. Hinton's thoughts, such as 1) Does back-propagation appear in the brain? 2) The idea of capsules, which is a distributed representation of a feature vector, allowing a kind of what Hinton calls "agreement". 3) Unsupervised learning such as VAE.
* On Prof. Hinton's favorite ideas - and these were no surprise to me:
1) Boltzmann machines, 2) stacking RBMs into an SBN, 3) variational methods. I frankly don't fully understand Pt. 3, but then Lectures 10 to 14 of NNML are all about Pts. 1 and 2. Unfortunately, not everyone loves to talk about Boltzmann machines - they are not as hot as GANs, and are perceived as not useful at all. But if you want to understand the origin of deep learning, and one way to pre-train your DNN, you should take NNML.
* Prof. Hinton's advice on research is also very entertaining - he suggests you don't always read the literature first - which, according to him, is good for creative researchers.
* The part I like most is Prof. Hinton's view of why computer science departments are not catching up on teaching deep learning. As always, his words are penetrating. He said, "And there's a huge sea change going on, basically because our relationship to computers has changed. Instead of programming them, we now show them, and they figure it out."
* Indeed, when I first started out at work, thinking as an MLer was not regarded as cool - programming was cool. But things are changing, and we at AIDL are embracing the change.
Fellows, as you all know by now, Prof. Andrew Ng has started a new Coursera Specialization on Deep Learning. So many of you came to me today and asked for my take on the class. As a rule, I usually don't comment on a class unless I know something about it. (Search for my "Learning Deep Learning - Top 5 Lists" for more details.) But I'd like to make an exception for the Good Professor's class.
So here is my quick take after browsing through the specialization curriculum:
* Only Courses 1 to 3 are published now; they are short classes, more like 2-4 weeks each. It feels like the Data Science Specialization, which is good for beginners. Assume that Courses 4 and 5 are long: 4 weeks each. Then we are talking about 17 weeks of study.
* Unlike Ng's standard ML class, python is the default language. That's good in my view, because close to 80-90% of practitioners are using python-based frameworks.
* Courses 1-3 have around 3 weeks of curriculum overlapping with "Intro to Machine Learning" Lectures 2-3. Course 1's goal seems to be implementing an NN from scratch. Course 2 is on regularization. Course 3 is on different methodologies of deep learning, and it's short - only 2 weeks long.
* Course 4 and 5 are about CNN and RNN.
* So my general impression here is that it is more of a comprehensive class, comparable with Hugo Larochelle's lectures, as well as Hinton's. Yet the latter two classes are known to be more difficult. Hinton's class, in particular, is known to confuse even PhDs. So that shows one of the values of this new DL class: it is a great transition from "Intro to ML" to more difficult classes such as Hinton's.
* But how does it compare with other similar courses, such as Udacity's DL nanodegree? I am not sure yet, but the price seems more reasonable if you go the Coursera route. Assuming we are talking about 5 months of study, you are paying $245.
* I also found that many existing beginner classes advocate too much running of scripts, while avoiding linking more fundamental concepts such as bias/variance with DL, or going deep into models such as convnets and RNNs. cs231n did a good job on convnets, and cs224n teaches you RNNs, but they seem to be more difficult than Ng's or Udacity's classes. So again, Ng's class sounds like a great transition class.
* My current take: 1) I am going to take the class myself. 2) It's very likely this new deeplearning.ai class will change my recommendations of class on Top-5 list.
What is the Difference between Deep Learning and Machine Learning?
Usually I don't write a full blog post to answer a member's question. But what is "deep" is such a fundamental concept in deep learning, and yet there are many well-meaning but incorrect answers floating around. So I think it is a great idea to answer the question clearly, and hopefully disabuse some of the misconceptions as well. Here is a cleaned-up and expanded version of my comment on the thread.
Deep Learning is Just a Subset of Machine Learning
First of all, as you might read on the internet, deep learning is just a subset of machine learning. There are many "Deep Learning Consultant" types who would tell you deep learning is completely different from machine learning. When we talk about "deep learning" these days, we are really talking about "neural networks which have more than one layer". Since neural networks are just one type of ML technique, it doesn't make any sense to call DL "different" from ML. It might work for marketing purposes, but the thought is clearly misleading.
Deep Learning is a kind of Representation Learning
So now we know that deep learning is a kind of machine learning. We still can't quite answer why it is special. So let's be more specific: deep learning is a kind of representation learning. What is representation learning? Representation learning is the opposite of another school of thought/practice: feature engineering. In feature engineering, humans are supposed to hand-craft features to make the machine work better. If you have Kaggled before, this should be obvious to you: sometimes you just want to manipulate the raw inputs and create new features to represent your data.
Yet in some domains which involve high-dimensional data, such as images, speech or text, hand-crafting features was found to be very difficult. e.g. Getting HOG-type approaches to work in computer vision usually takes a PhD student 4-5 years. So here we come back to representation learning - can the computer automatically learn good features?
What is a "Deep" Technique?
Now we come to the part about why deep learning is "deep" - usually we call a method "deep" when we are optimizing a nested function in the method. So, for example, if you express such a function as a graph, you would find that it has multiple layers. The term "deep" really describes this "nestedness". That should explain why we typically call any artificial neural network (ANN) with more than 1 hidden layer "deep", or, as the general saying goes, "deep learning is just neural networks with more layers".
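To make the "nestedness" concrete: a network with two hidden layers is literally a nesting of per-layer functions, f3(f2(f1(x))). A toy numpy sketch (all the names here are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Each layer is just a function; "depth" is how many times they nest.
W1 = rng.standard_normal((4, 3))  # layer 1: 3 inputs -> 4 hidden units
W2 = rng.standard_normal((4, 4))  # layer 2: 4 hidden -> 4 hidden units
W3 = rng.standard_normal((1, 4))  # layer 3: 4 hidden -> 1 output
f1 = lambda x: relu(W1 @ x)
f2 = lambda h: relu(W2 @ h)
f3 = lambda h: W3 @ h

x = rng.standard_normal((3, 1))
y = f3(f2(f1(x)))  # a "deep" (3-layer) model is just this nested call
```

Drawn as a graph, each application of f adds one layer, which is exactly the picture behind calling a multi-layer ANN "deep".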
(Another appropriate term is "hierarchical". See footnote  for more detail.)
This is also the moment when Karpathy, in cs231n, shows you the multi-layer CNN, in which features are automatically learned, from the simplest to the more complex ones. Eventually your last layer can differentiate them using just a linear classifier, because the "deep" structure has learned the right features for that last layer. Note the key term here is "automatic": all these Gabor-filter-like features are not hand-made. Rather, they are the results of back-propagation.
Is there Anything which is "Deep" but not a Neural Network?
Yes and no. It depends on who you talk to. If you talk with ANN researchers/practitioners, they would just tell you "deep learning is just neural networks with more than 1 hidden layer". Indeed, if you think from their perspective, the term "deep learning" could just be a short-form. Yet, as we just said, you can also call other methods "deep", so the adjective is not totally void of meaning. But many people would also tell you that because "deep learning" has become such a marketing term, it can now mean many different things. I will say more in the next section.
Also note that the term "deep learning" has been around for a long time. Check out Prof. Schmidhuber's thread for more details.
"No Way! X is not Deep but it is also taught in Deep Learning Class, You made a Horrible Mistake!"
I said it with much authority and I know some of you guys would just jump in and argue:
"What about word2vec? It is nothing deep at all, but people still call it Deep learning!!!" "What about all wide architectures such as "wide-deep learning"?" "Arthur, You are Making a HORRIBLE MISTAKE!"
Indeed, the term "deep learning" is being abused these days. More learned people, on the other hand, are usually careful about calling certain techniques "deep learning". For example, in the cs224d 2015/2016 lectures, Dr. Richard Socher was quite cautious about calling word2vec "deep". His supervisor, Prof. Chris Manning, an authority in NLP, is known to dispute whether deep learning is always useful in NLP, simply because some recent advances in NLP are not really due to deep learning.
I think these cautions make sense. Part of it is that calling everything "deep learning" just blurs what should really be credited for a certain technical improvement. The other part is that we shouldn't see deep learning as the only type of ML we want to study. There are many ML techniques, some of them more interesting and practical than deep learning in practice. For example, deep learning is not known to work well in small-data scenarios. Would I just yell at my boss and say "Because I can't use deep learning, I can't solve this problem!"? No, I would just test out random forests, support vector machines, GMMs and all these nifty methods I have learned over the years.
Misleading Claim About Deep Learning (I) - "Deep Learning is about Machine Learning Methods which use a lot of Data!"
So now we come to the arena of misconceptions. I am going to discuss two claims which many people have been drumming about deep learning. Neither of them is the right answer to the question "What is the Difference between Deep Learning and Machine Learning?"
The first one you have probably heard all the time: "Deep learning is about ML methods which use a lot of data." Or people would tell you, "Oh, deep learning just uses a lot of data, right?" This sounds about right - deep learning these days does use a lot of data. So what's wrong with the statement?
Here is the answer: while deep learning does use a lot of data, before deep learning, other techniques used tons of data too! e.g. Speech recognition before deep learning, i.e. HMM+GMM, could use up to 10k hours of speech. The same goes for SMT. And you could do SVM+HOG on Imagenet. And more data was always better for those techniques as well. So if you say "deep learning uses more data", you are forgetting that the older techniques could also use more data.
What you can claim is that "deep learning is a more effective way to utilize data". That's very true, because once you get into either GMMs or SVMs, they have scalability issues. GMMs scale badly when the amount of data is around 10k hours. SVMs (with the RBF kernel in particular) are super tough/slow to use when you have ~1 million data points.
Misleading Claim About Deep Learning II - "Deep Learning is About Using GPU and Having Data Center!"
This particular claim is different from the previous "data requirement" claim, but we can debunk it in a similar manner. Why is it wrong? Again, before deep learning, people were already using GPUs to do machine learning. For example, you can use a GPU to speed up GMM. Before deep learning was hot, you needed a cluster of machines to train acoustic models or language models for speech recognition. You also needed tons of RAM to train a language model for SMT. So calling GPUs/data centers/RAM/ASICs/FPGAs a differentiator of deep learning is just misleading.
What you can say is, "Deep learning has changed the computational model from a distributed network model to a more single-machine-centric paradigm (in which each machine has one GPU). But later approaches also tried to combine CPU and GPU processing."
Conclusion and "What you say is Just Your Opinion! My Theory makes Equal Sense!"
Indeed, you should always treat what you read online with a grain of salt. Being critical is a good thing, and having your own opinion is good. But you should also try to avoid equivocating on an issue. Meaning: sometimes things have only one side, but you insist there are two equally valid answers. If you do so, you are perhaps making a logical error in your thinking. And a lot of people who make claims such as "deep learning is learning which uses more data and a lot of GPUs" are probably making such thinking errors.
That said, I would suggest you read several good sources to judge my answer.
In any case, I hope that this article helps you. I thank Bob for asking the question. Armaghan Rumi Naik has debunked many misconceptions in the original thread - his understanding of machine learning is clearly above mine, and he was able to point out mistakes from other commenters. It is worth your reading time.
 See "Last Words: Computational Linguistics and Deep Learning"
 Generally, whether DL is useful in NLP is a widely disputed topic. Take a look at Yoav Goldberg's view on some recent GAN results on language generation. AIDL Weekly #18 also gave an exposé on the issue.
 Perhaps another useful term is "hierarchical". In the case of ConvNets the term is right on. As Eric Heitzman commented at AIDL: "(deep structures) They are *not* necessarily recursive, but they *are* necessarily hierarchical since layers always form a hierarchical structure." After Eric's comment, I think both "deep" and "hierarchical" are fair terms to describe methods in "deep learning". (Of course, "hierarchical learning" is a much poorer marketing term.)
 In an earlier draft, I used the term "recursive" to describe the term "deep", which, as Eric Heitzman at AIDL pointed out, is not entirely appropriate. "Recursive" gives people the feeling that the function calls itself, but the actual functions are more "nested". As a result, I removed the term "recursive" and just call them "nested functions".
Of course, you should be aware that my description is not too mathematically rigorous either. (I guess it is a fair wordy description though.)
20170709 at 6: fix some typos.
20170711: fix more typos.
20170711 at 7:05 p.m.: I got feedback from Eric Heitzman, who pointed out that the term "recursive" can be deceiving. Thus I wrote footnote .
I have been self-learning deep learning for a while - informally from 2013, when I first read Hinton's "Deep Neural Networks for Acoustic Modeling in Speech Recognition" and worked through Theano, and more "formally" from various classes since the summer of 2015, when I was freshly promoted to Principal Speech Architect. It's not an exaggeration to say that deep learning changed my life and career. I have been more active than in my previous life. e.g. If you are reading this, you were probably directed here from the very popular Facebook group AIDL, which I admin.
So this article was written at the time I finished watching an older version of Richard Socher's cs224d online. That, together with Ng's, Hinton's, Li and Karpathy's, and Silver's, are the 5 classes I recommended in my now widely-circulated "Learning Deep Learning - My Top-Five List". I think it's fair to give this set of classes a name - the Basic Five - because IMO they are the first five classes you should go through when you start learning deep learning.
In this post I will say a few words on why I chose these five as the Basic Five. Compared to more established bloggers such as Karpathy, Olah or Denny Britz, I am more of a learner in the space - experienced perhaps, yet still a learner. So this article, like my others, stresses learning. What can you learn from these classes? And, less talked about but just as important: what are the limitations of learning online? As a learner, I think these are interesting discussions, so here you go.
What are the Five?
Just to be clear, here are the classes I'd recommend:
And the ranking is the same as I wrote in the Top-Five List. Out of the five, four have official video playlists published online for free. With a small fee, you can finish Ng's and Hinton's classes with certification.
How much I actually Went Through the Basic Five
Many beginner articles come with a gigantic set of links. The authors usually expect you to click through all of them (and learn through them?). When you scrutinize such a list, it can amount to more than 100 hours of video watching, and perhaps up to 200 hours of work. I don't know about you, but I doubt the authors really went through the lists themselves.
So it's fair for me to first tell you what I've actually done with the Basic Five as of the first writing (May 13, 2017):
Ng's "Machine Learning"
Finished the class in entirety without certification.
Li and Karpathy's "Convolutional Neural Networks for Visual Recognition" or cs231n
Listened through the class lectures about ~1.5 times. Haven't done any of the homework
Socher's "Deep Learning for Natural Language Processing" or cs224d
Listened through the class lecture once. Haven't done any of the homework.
Silver's "Reinforcement Learning"
Listened through the class lecture 1.5 times. Only worked out few starter problems from Denny Britz's companion exercises.
Hinton's "Neural Network for Machine Learning"
Finished the class in its entirety with certification. Listened through the class ~2.5 times.
This table is likely to be updated as I go deeper into a certain class, but it should tell you the limitations of my reviews. For example, while I have watched through all the class videos, only in Ng's and Hinton's classes have I finished the homework. That means my understanding of two of the three "Stanford Trinity" classes is weaker, and my understanding of reinforcement learning is not solid either. Together with my work at Voci, Hinton's class gives me stronger insight than the average commenter on topics such as unsupervised learning.
Why The Basic Five? And Three Millennial Machine Learning Problems
Taking classes is for learning, of course. The five classes certainly give you the basics, especially if you love to learn the fundamentals of deep learning. And take a look at the footnote: the five are not the only classes I sat through in the last 1.5 years, so their choice is not arbitrary. So oh yeah, those are the things you want to learn. Got it? That's my criterion. 🙂
But that's what a thousand other bloggers would tell you as well. I want to give you a more interesting reason. Here you go:
Go back in time to the year 2000. That was when Google had just launched its search engine; there was no series of Google products, and surely there was no ImageNet. What were the most difficult problems for machine learning? I think you would see three of them:
Object classification, statistical machine translation, and automatic speech recognition.
So what's so special about these three problems then? If you think about it, back in 2000, all three were known to be hard problems. They represent three seemingly different data structures -
Object classification - 2-dimensional, dense arrays of data
Statistical machine translation (SMT) - discrete symbols, seemingly related by loose rules humans call grammars and translation rules
Automatic speech recognition (ASR) - 1-dimensional time series, similar to object classification (through the spectrogram), and loosely bound by rules such as dictionaries and word grammars.
And you would recall that all three problems drew interest from governments, big institutions such as the Big Four, and startup companies. If you mastered one of them, you could make a living. Moreover, once you learned them well, you could transfer the knowledge to other problems. For example, handwritten character recognition (HWR) resembles ASR, and conversational agents work similarly to SMT. That's because the three problems are great metaphors for many other machine learning problems.
Now, okay, let me tell you one more thing: even now, there are people still making (or trying to make) a living by solving these three problems, because I never said they are solved. E.g. what if we increase the number of classes from 1,000 to 5,000? What if, instead of Switchboard, we work on conference speech or speech from YouTube? What if I ask you to translate so well that even humans cannot tell the difference? That should convince you: "Ah, if there is one method that could solve all three problems, learning that method would be a great idea!"
And as you can guess, deep learning is that one method, which has revolutionized all three fields. Now that's why you want to take the Basic Five. The Basic Five is not meant to make you a top researcher in deep learning; rather, it teaches you just the basics. And at this point of your learning, knowing a powerful template for solving problems is important. You would also find that going through the Basic Five enables you to read the majority of the deep learning literature these days.
So here's why I chose the Five: Ng's class and NNML cover the essential basics of deep learning. Li and Karpathy's class teaches you object classification up to the state of the art, whereas Socher's teaches you where deep learning stands on NLP; it forays into SMT and ASR a little bit, but gives you enough to start.
My explanation excludes Silver's reinforcement learning class, which, admittedly, is the odd one out of the herd. I added Silver's class because RL is increasingly used even in traditionally supervised learning tasks. And of course, to know the place of RL, you need a solid understanding of it. Silver's class is perfect for that purpose.
What You Actually Learn
In a way, this also reflects what's really important when learning deep learning. I will list out 8 points here, because they are repeated among the different courses.
Basics of machine learning: this is mostly from Ng's class, but themes such as bias-variance are repeated in NNML and Silver's class.
Gradient descent: its variants (e.g. Adam), its alternatives (e.g. second-order methods) - it's a never-ending study.
Backpropagation: how do you view it? As optimizing a function, as a computational graph, as the flow of gradients? Different classes give you different points of view, so don't skip them even if you have learned it once.
Architecture: The big three families are DNN, CNN, and RNN - why some of them have emerged and re-emerged in history, and the details of how they are trained and structured. None of the courses teaches you everything, but going through the five will teach you enough to survive.
Image-specific techniques: not just classification, but localization/detection/segmentation (as in cs231n 2016 L8, L13). Not just convolution, but "deconvolution" - and why we don't like that it is called "deconvolution". 🙂
NLP-specific techniques: word2vec, GloVe, and how they are applied in NLP problems such as sentiment classification.
(Advanced) Basics of unsupervised learning: mainly from Hinton's class, and mainly about techniques from 5 years ago such as RBMs, DBNs, DBMs, and autoencoders - but they are the basics if you want to learn more advanced ideas such as GANs.
(Advanced) Basics of reinforcement learning: mainly from Silver's class, from DP-based methods to Monte Carlo and TD.
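To make points 2 and 3 above concrete, here is a minimal sketch, in plain Python with no frameworks, of backpropagation viewed as the chain rule applied node by node on a tiny computational graph, driving a gradient descent loop. The model (a one-weight linear fit), the data, and the learning rate are all made up for illustration.

```python
def forward(w, b, x, y):
    """Forward pass through a tiny graph: pred = w*x + b, loss = (pred - y)^2."""
    pred = w * x + b
    loss = (pred - y) ** 2
    return pred, loss

def backward(w, b, x, y):
    """Backward pass: push the gradient through each node via the chain rule."""
    pred, _ = forward(w, b, x, y)
    dloss_dpred = 2.0 * (pred - y)   # d/dpred of (pred - y)^2
    dloss_dw = dloss_dpred * x       # pred = w*x + b  =>  dpred/dw = x
    dloss_db = dloss_dpred * 1.0     # dpred/db = 1
    return dloss_dw, dloss_db

# Gradient descent: repeatedly step against the gradient.
w, b, lr = 0.0, 0.0, 0.1
x, y = 1.0, 2.0                      # a single toy training example
for _ in range(50):
    dw, db = backward(w, b, x, y)
    w -= lr * dw
    b -= lr * db

_, final_loss = forward(w, b, x, y)  # loss shrinks toward zero
```

Variants such as Adam differ only in how the `w -= lr * dw` step is scaled; the backward pass stays the same, which is why the courses treat the two topics separately.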
The Limitation of Autodidacts
By the time you finish the Basic Five, if you have genuinely learned something from them, recruiters will start to knock on your door. What you think and write about deep learning will appeal to many people. Perhaps you will start to answer questions on forums, or even write LinkedIn articles that get many Likes.
All good, but be cautious! During my year of administering AIDL, I've seen many people who purportedly took many deep learning classes, yet after a few minutes of discussion I could point out holes in their understanding. Some, after probing, turned out to have taken only one class in its entirety, so they don't really grok deeper concepts such as backpropagation. In other words, they could still improve, but they just refuse to. No wonder: with the hype of deep learning, many smart fellows just choose to start a company or code without taking the time to grok the concepts well.
That's a pity. All of us should be aware that self-learning is limited. If you take a formal education path, like going to grad school, most of the time you will sit with people who are as smart as you and willing to point out your issues daily, so any of your weaknesses will be revealed sooner.
You should also be aware that while deep learning is hyped, the holes in your understanding are unlikely to be uncovered. That has nothing to do with whether you have a job: many companies just want to hire someone to work on a task and expect you to learn while working.
So what should you do? My first advice is to be humble and be aware of the Dunning-Kruger effect. Self-learning usually gives people an intoxicating feeling that they have learned a lot. But learning a lot doesn't mean you know everything. There are always higher mountains; you are doing yourself a disservice if you stop learning.
My second advice is to try out your skills. E.g. it's one thing to know about CNNs; it's another to run training on ImageNet data. If you are smart, the former takes a day. The latter takes much planning, a powerful machine, and some practice to get even AlexNet trained.
My final advice is to talk with people and understand your own limitations. E.g. after reading many posts on AIDL, I noticed that while many people understand object classification well enough, they don't really grasp the basics of object localization/detection. In fact, neither did I, even after the first pass through the videos. So what did I do?
I just went through the videos on localization/detection again and again until I understood them.
After the Basic Five.......
So some of you will ask, "What's next?" Yes, you've finished all these classes - as if there were nothing more to learn! Shake that feeling off! There are tons of things you still want to learn. Here are several directions you can go:
Completionist: As of the first writing, I still haven't done all the homework for all five classes. Doing the homework can really help your understanding, so if you are like me, I suggest you go back to the homework and test yourself.
Drilling the Basics of Machine Learning: This goes in another direction - working on your fundamentals. You can work on Math topics forever; I would say the more important and non-trivial parts are perhaps linear algebra, matrix differentiation, and topology. Also check out this very good link on how to learn college-level Math.
Specializing in one field: If you want to master just one field out of the Three Millennial Machine Learning Problems I mentioned, it's important to keep looking at specialized classes on computer vision or NLP. Since I don't want to clutter this point, I will discuss the relevant classes/material in future articles.
Writing: That's what many of you have been doing, and I think it helps further your understanding. One thing I would suggest is to always write something new, something you would want to read yourself. For example, there are too many blog posts on computer vision with TensorFlow in the world. So why not write one about what people don't know? For example, practical transfer learning for object detection, or what deconvolution is, or a literature review of a non-trivial architecture such as Mask R-CNN, comparing it with existing encoder-decoder structures. Writing this kind of article takes more time, but remember: quality trumps quantity.
Coding/GitHubbing: There is a lot of room for re-implementing ideas from papers and open-sourcing them. It is also a very useful skill, as many companies need it to reproduce trendy deep learning techniques.
Research: If you genuinely understand deep learning, you might see that many techniques need refinement. Indeed, there are currently plenty of opportunities to come up with better techniques. Of course, writing papers at the level of a professional researcher is tough and out of my scope. But only when you can publish will people respect you as part of the community.
Frameworks: Hacking a framework at the C/C++ level is not for the faint of heart. But if you are my type, who loves low-level coding, trying to build a framework yourself could be a great way to learn more. E.g. check out Darknet, which is, surprisingly, written in C!
So here you go: the complete Basic Five - what the classes are, why they are basic, and where you go from here. In a way, it's also a summary of what I have learned so far from various classes since June 2015. As with my other posts, if I learn more in the future, I will keep this post updated. I hope it keeps you learning deep learning.
 Before 2017, there was no coherent set of Socher's class videos available online. Sadly, there was also no legitimate version, so the version I refer to is a mixture of the 2015 and 2016 classes. Of course, you may now find a legitimate 2017 version of cs224n on YouTube.
 My genuine expertise is speech recognition; unfortunately, that's not a topic I can share much about due to IP issues.
 Some of you (e.g. from AIDL) would jump up and say, "No way! I thought NLP hadn't been solved by deep learning yet!" That's because you are one lost soul, misinformed by misinformed blog posts. ASR was the first field tackled by deep learning, dating back to 2010. And most systems you see in SMT are seq2seq-based.
 I have been in the business of speech recognition since 1998, when I worked on a voice-activated project for my undergraduate degree back at HKUST. It was a mess, but that's how I started.
 And for the last one, you can always search for it on YouTube. Of course, it is not legit for me to share it here.