I have been self-learning deep learning for a while: informally since 2013, when I first read Hinton's "Deep Neural Networks for Acoustic Modeling in Speech Recognition" and worked through Theano, and more "formally" through various classes since Summer 2015, when I was freshly promoted to Principal Speech Architect. It's no exaggeration to say that deep learning changed my life and career. I have been more active than in my previous life; e.g. if you are reading this, you were probably directed here from the very popular Facebook group AIDL, which I admin.
So this article was written at the time I finished watching an older version of Richard Socher's cs224d online. That class, together with Ng's, Hinton's, Li and Karpathy's, and Silver's, makes up the five classes I recommended in my now widely-circulated "Learning Deep Learning - My Top-Five List". I think it's fair to give this set of classes a name - the Basic Five - because, IMO, they are the first five classes you should go through when you start learning deep learning.
In this post I will say a few words on why I chose these five classes as the Five. Compared to more established bloggers such as Karpathy, Olah or Denny Britz, I am more of a learner in this space - experienced, perhaps, yet still a learner. So this article, like my others, stresses learning: what can you learn from these classes? And, less talked about but just as important: what are the limitations of learning online? As a learner, I find these interesting discussions, so here you go.
What are the Five?
Just to be clear, here are the classes I'd recommend:
And the ranking is the same as in my Top-Five List. Out of the five, four have official video playlists published online for free. With a small fee, you can finish Ng's and Hinton's classes with certification.
How Much I Actually Went Through the Basic Five
Many beginner articles come with a gigantic set of links, and the authors usually expect you to click through all of them (and learn from them?). When you scrutinize such a list, it can amount to more than 100 hours of video watching, and perhaps up to 200 hours of work. I don't know about you, but I would doubt whether the authors really went through the list themselves.
So it's only fair for me to first tell you what I've actually done with the Basic Five as of this first writing (May 13, 2017):
Ng's "Machine Learning"
Finished the class in its entirety, without certification.
Li and Karpathy's "Convolutional Neural Networks for Visual Recognition" or cs231n
Listened through the class lectures about 1.5 times. Haven't done any of the homework.
Socher's "Deep Learning for Natural Language Processing" or cs224d
Listened through the class lectures once. Haven't done any of the homework.
Silver's "Reinforcement Learning"
Listened through the class lectures 1.5 times. Only worked out a few starter problems from Denny Britz's companion exercises.
Hinton's "Neural Network for Machine Learning"
Finished the class in its entirety, with certification. Listened through the class about 2.5 times.
This table will likely be updated as I go deeper into a certain class, but it should tell you the limitations of my reviews. For example, while I have watched all the class videos, I have finished the homework only for Ng's and Hinton's classes. That means my understanding of two of the three "Stanford Trinity" classes is weaker, and my understanding of reinforcement learning is not as solid. On the other hand, together with my work at Voci, Hinton's class gives me stronger insight than the average commenter on topics such as unsupervised learning.
Why The Basic Five? And Three Millennial Machine Learning Problems
Taking classes is for learning, of course, and the five classes certainly give you the fundamentals of deep learning. As the footnote shows, the five are not the only classes I sat through in the last 1.5 years, so the choice is not arbitrary. "Those are the things you want to learn" - that's my criterion. 🙂
But that's what a thousand other bloggers would tell you as well. I want to give you a more interesting reason. Here you go:
Go back in time to the year 2000. That was when Google had just launched its search engine; there was no series of Google products, and there was surely no ImageNet. What were the most difficult problems in machine learning then? I think you would see three of them:
Object classification,
Statistical machine translation, and
Automatic speech recognition.
So what's so special about these three problems? If you think about it, back in 2000 all three were known to be hard problems. They represent three seemingly different data structures:
Object classification - 2-dimensional, dense arrays of data.
Statistical machine translation (SMT) - discrete symbols, seemingly related by loose rules humans call grammars and translation rules.
Automatic speech recognition (ASR) - a 1-dimensional time series, which has similarities to object classification (through the spectrogram) and is loosely bound by rules such as the dictionary and word grammar.
And you would recall that all three problems drew interest from governments, big institutions such as the Big Four, and startup companies. If you mastered one of them, you could make a living. Moreover, once you learn them well, you can transfer the knowledge to other problems. For example, handwritten character recognition (HWR) resembles ASR, and conversational agents work similarly to SMT. That is because the three problems are great metaphors for many other machine learning problems.
Now, okay, let me tell you one more thing: even now, there are people who still make (or try to make) a living by solving these three problems, because I never said they are solved. E.g. what if we increase the number of classes from 1000 to 5000? What if, instead of Switchboard, we work on conference speech or speech from Youtube? What if I ask you to translate so well that even a human cannot tell your output from a human translation? That should convince you: "Ah, if there is one method that could solve all three of these problems, learning that method would be a great idea!"
And as you can guess, deep learning is that one method which revolutionized all three fields. Now that's why you want to take the Basic Five. The Basic Five is not meant to make you a top researcher in deep learning; rather, it teaches you just the basics. And at this point of your learning, knowing a powerful template for solving problems is important. You will also find that going through the Basic Five lets you read the majority of deep learning papers these days.
So here's why I chose the Five: Ng's class and NNML are the essential basics of deep learning. Li and Karpathy's teaches you object classification up to the state of the art. Socher's teaches you where deep learning stands on NLP; it forays a little into SMT and ASR, but you will have enough to start.
My explanation excludes Silver's reinforcement learning class, which is admittedly the goat out of the herd. I added Silver's class because RL is increasingly used even in traditionally supervised learning tasks. And of course, to know the place of RL, you need a solid understanding of it. Silver's class is perfect for that purpose.
What You Actually Learn
In a way, this list also reflects what's really important when learning deep learning. I will list out 8 points here, because they are repeated among the different courses.
Basics of machine learning: this is mostly from Ng's class, but themes such as bias-variance are repeated in NNML and Silver's class.
Gradient descent: its variants (e.g. ADAM), its alternatives (e.g. second-order methods) - it's a never-ending study.
Backpropagation: how to view it? As optimizing a function, as a computational graph, as the flowing of gradients. Different classes give you different points of view, and you shouldn't skip them even if you have learned it once.
Architectures: the big three families are DNN, CNN and RNN - why some of them emerged and re-emerged in history, and the details of how they are trained and structured. None of the courses teaches you everything, but going through the five will teach you enough to survive.
Image-specific techniques: not just classification, but localization/detection/segmentation (as in cs231n 2016 L8, L13). Not just convolution, but "deconvolution" - and why we don't like that it is called "deconvolution". 🙂
NLP-specific techniques: word2vec, GloVe, and how they are applied to NLP problems such as sentiment classification.
(Advanced) Basics of unsupervised learning: mainly from Hinton's class, and mainly about techniques from 5 years ago such as RBMs, DBNs, DBMs and autoencoders - but they are the basics if you want to learn more advanced ideas such as GANs.
(Advanced) Basics of reinforcement learning: mainly from Silver's class, from DP-based methods to Monte Carlo and TD.
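To make the gradient descent and backpropagation points above concrete, here is a minimal sketch in plain NumPy: a one-hidden-layer network trained on XOR, with the backward pass written out by hand. The network size, seed, learning rate and the XOR task are my own illustrative choices, not code from any of the classes.

```python
import numpy as np

# Tiny one-hidden-layer network trained on XOR with plain gradient descent.
# All sizes, the seed and the learning rate are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each line below is one node of the computational graph.
    h = np.tanh(X @ W1 + b1)
    p = np.clip(sigmoid(h @ W2 + b2), 1e-12, 1 - 1e-12)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: gradients flow through the same graph in reverse.
    dlogits = (p - y) / len(X)      # d(loss)/d(pre-sigmoid logits)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dpre1 = dh * (1.0 - h ** 2)     # derivative of tanh
    dW1 = X.T @ dpre1; db1 = dpre1.sum(axis=0)

    # Plain (full-batch) gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", float(loss))
print("predictions:", (p > 0.5).astype(int).ravel().tolist())
```

Notice how each line of the backward pass mirrors one node of the forward pass - this is the "flowing of gradient" view of backprop that several of the classes keep revisiting from different angles.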
The Limitation of Autodidacts
By the time you finish the Basic Five, if you have genuinely learned something from them, recruiters will start knocking on your door, and what you think and write about deep learning will appeal to many people. Perhaps you will start to answer questions on forums? Or you might even write LinkedIn articles that get many Likes.
All good, but be cautious! During my year of administering AIDL, I've seen many people who purportedly took many deep learning classes, yet within a few minutes of discussion I could point out holes in their understanding. Some, after some probing, turned out to have taken only one class in its entirety, so they didn't really grok deeper concepts such as backpropagation. In other words, they could still improve, but they just refuse to. No wonder: with the hype around deep learning, many smart fellows simply choose to start a company or to code without taking the time to grok the concepts well.
That's a pity, and all of us should be aware that self-learning has limits. If you take a formal educational path, like going to grad school, most of the time you will sit with people who are as smart as you and willing to point out your issues daily, so your weaknesses will be revealed sooner.
You should also be aware that while deep learning is being hyped, your holes of misunderstanding are unlikely to be uncovered. That has nothing to do with whether you hold a job: many companies just want to hire someone to work on a task, and expect you to learn while working.
So what should you do? My first advice is to be humble and be aware of the Dunning-Kruger effect. Self-learning usually gives people an intoxicating feeling of having learned a lot. But learning a lot doesn't mean you know everything. There are always higher mountains, and you do yourself a disservice if you stop learning.
My second advice is to try out your skills. E.g. it's one thing to know about CNNs; it's another to run a training on ImageNet data. If you are smart, the former takes a day. The latter takes much planning, a powerful machine, and some experience to get even AlexNet trained.
My final advice is to talk with people and understand your own limitations. E.g. after reading many posts on AIDL, I noticed that while many people understand object classification well enough, they don't really grasp the basics of object localization/detection. In fact, neither did I, even after my first pass through the videos. So what did I do? I just went through the videos on localization/detection again and again until I understood.
After the Basic Five...
So some of you will ask, "What's next?" Yes, you finished all these classes - as if you couldn't learn any more! Shake that feeling off! There are tons of things you still want to learn. So let me list several directions you can go:
Completionist: as of this first writing, I still haven't done all the homework for all five classes. Doing the homework really helps your understanding, so if you are like me, I would suggest you go back to the homework and test your understanding.
Drilling the basics of machine learning: this goes in another direction - working on your fundamentals. For that, you can work on Math topics forever; I would say the more important and non-trivial parts are perhaps Linear Algebra, Matrix Differentiation and Topology. Also check out this very good link on how to learn college-level Math.
Specializing in one field: if you want to master just one of the Three Millennial Machine Learning Problems I mentioned, it's important to keep looking at specialized classes on computer vision or NLP. Since I don't want to clutter this point, I will discuss the relevant classes/materials in future articles.
Writing: that's what many of you have been doing, and I think it furthers your understanding. One thing I would suggest is to always write something new, something you would want to read yourself. For example, there are too many blog posts on "Computer Vision Using Tensorflow" in the world, so why not write one about what people don't know? For example: practical transfer learning for object detection; or what "deconvolution" is; or a literature review of a non-trivial architecture such as Mask R-CNN, comparing it with existing encoder-decoder structures. Writing such articles takes more time, but remember: quality trumps quantity.
Coding/Githubbing: there is a lot of room for re-implementing ideas from papers and open-sourcing them. It is also a very useful skill, as many companies need it to reproduce trendy deep learning techniques.
Research: if you genuinely understand deep learning, you might see that many techniques need refinement. Indeed, there are currently plenty of opportunities to come up with better techniques. Of course, writing papers at the level of a professional researcher is tough and out of my scope. But only when you can publish will people respect you as part of the community.
Frameworks: hacking a framework at the C/C++ level is not for the faint of heart. But if you are my type, someone who loves low-level coding, trying to write a framework yourself could be a great way to learn more. E.g. check out Darknet, which is, surprisingly, written in C!
So here you go: the complete Basic Five - what they are, why they are basic, and where you go from here. In a way, this is also a summary of what I have learned from various classes since June 2015. As with my other posts, if I learn more in the future, I will keep this post updated. I hope this post keeps you learning deep learning.
Before 2017, there was no coherent set of Socher's class videos available online; sadly, there was also no legitimate version. So the version I refer to is a mixture of the 2015 and 2016 classes. Of course, you may now find a legitimate 2017 version of cs224n on Youtube.
My genuine expertise is speech recognition; unfortunately, that's not a topic I can share much about, due to IP issues.
Some of you (e.g. from AIDL) will jump up and say, "No way! I thought NLP wasn't solved by deep learning yet!" That's because you are a lost soul, misinformed by misinformed blog posts. ASR was the first field tackled by deep learning, dating back to 2010. And most systems you see in SMT today are seq2seq-based.
I have been in the business of speech recognition since 1998, when I worked on a voice-activated project for my undergraduate degree back at HKUST. It was a mess, but that's how I started.
And for the last one, you can always search for it on Youtube. Of course, it is not legit for me to share it here.
Many of you may know about the MIT DL4SDC class by Lex Fridman. Recently I listened through its 5 videos and decided to write a "quick impression" post. I usually write these "impression" posts when I have only gone through part of a class's material. So here you go:
* 6.S094, compared to Stanford's cs231n or cs224d, is more of a short class: it takes less than 6 hours to watch all the materials.
* About 40-50% of the class is spent on basic material such as backprop or Q-learning. Mostly because the class is short, the treatment of these topics feels incomplete. E.g. you might want to listen to Silver's class to understand RL systematically and the place of Q-learning within it, and you might want to listen to Karpathy in cs231n to learn the basics of backprop, then finish Hinton's or Socher's to completely grok it. But again, this is a short class; you really can't expect too much.
Actually, I like Fridman's stand on these standard algorithms: he asks the audience tough questions about whether the human brain ever behaves like backprop or RL.
* The rest of the class is mostly on SDC: planning with RL, steering with an all-in-one CNN. The gem (Lecture 5) is Fridman's own research on driver state. If you don't have much time, I think that's the lecture you want to sit through.
* Now, my experience doesn't quite cover the two very interesting homework assignments, DeepTraffic and DeepTesla. I heard great stories about both from students; unfortunately, I never got to play with them.
That's what I have. Hope the review is useful for you. 🙂
(Redacted from a conversation between me and Gautam.)
Q: "Guys, what is the difference between an ML engineer and a data scientist? How do they work together? How do their work activities differ? Can you walk through a use-case example?"
A: (From Arthur)
"Generally, it is hard to decide what a title means unless you know a bit about the nature of the job; usually it is described in the job description.
But then you can ask what these terms usually imply. So here is my take:
ML vs. data: usually there is the part of testing/integrating an algorithm and the part of analyzing the data, and it's hard to say what the proportion is on each side. But high-dimensional data is less amenable to simple exploratory analysis, so people tend to use the term "ML" there, which means "your job is to run/tune algorithms for us - fun for you, right?" If you are looking at table-based data, then it's more likely to be a "data" type of job.
Engineer vs. scientist: in a large organization, there is usually a difference between the one who comes up with the mathematical model (scientist) and the one who controls the production platform (engineer). E.g. if you are solving a prediction problem, usually the "scientist" is the one who comes up with a model, while the "engineer" is the one who creates the production system. So you can think of them as the "R" and the "D" in the organization.
IMO, healthy companies balance R&D, so you will find that a lot of companies have "junior", "senior", "principal", "director" and "VP" prefixed to both tracks of titles.
You will sometimes see terms such as "Programmer" or "Architect" replacing "engineer"/"scientist". "Programmer" implies the job is more coding-related, i.e. being the one who actually writes code. "Architect" is rarer; they usually oversee big-picture issues among programmers, or act as a bridge between the R and D organizations."
For years, my social networks didn't follow a single theme. For the most part I have a variety of interests and didn't feel like pushing any particular news... until deep learning came along. As Waikit and I started AIDL on FB, with a newsletter and a Youtube channel, I also started to see more traffic coming to thegrandjanitor, and, of course, more legit followers on Twitter.
From Ardian Umam (shortened, rephrased):
"Now I'm taking an AI course at my university using Peter Norvig and Stuart J. Russell's textbook. At the same time, I'm learning about DNNs (deep neural networks) for visual recognition by watching Stanford's lectures on CNNs (convolutional neural networks), and seeing how powerful a DNN is at learning from a dataset. Meanwhile, in the AI class, I'm learning about KBs (knowledge bases), including topics such as logical agents and first-order logic - in short, inferring some "certain x" from a KB, for example using "propositional resolution".
My question: "Are techniques like the ones I describe from my AI class good for solving real AI problems?" I still don't have a strong intuition about how what I study in the AI class applies to real AI problems."
My answer: "We usually call the kind of technique you describe GOFAI (Good Old-Fashioned Artificial Intelligence). So your question is whether GOFAI is still relevant.
Yes, it is. Let's take search as an example. More complicated systems usually have some components built on search. E.g. many speech recognizers these days still use the Viterbi algorithm, which is large-scale search. NMT-type techniques still require some kind of stack decoding. (Edit: this originally said beam search, but I am not quite sure.)
More importantly, you can see many things as search. E.g. take optimization of a function: you can solve it by calculus, but in practice you often use a search algorithm to find the best solution. Of course, in real life you rarely implement beam search for optimization, but the idea of search will give you a better feel for what many ML algorithms are like."
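To make the "optimization as search" idea concrete, here is a toy sketch I'm adding for this post (everything in it is a made-up illustration, not code from any real system): minimizing a simple quadratic by greedy local search instead of calculus.

```python
import random

# Toy illustration of "optimization as search": minimize f(x) = (x - 3)^2
# by greedy local search, without touching calculus.
def f(x):
    return (x - 3.0) ** 2

random.seed(0)
x = 0.0            # start far from the minimum
for _ in range(1000):
    candidate = x + random.uniform(-1.0, 1.0)  # propose a nearby state
    if f(candidate) < f(x):                    # keep it only if it improves
        x = candidate

print("best x found:", x)   # ends up near the true minimum at 3.0
```

A real system would use something smarter (gradient descent, beam search, Viterbi), but the skeleton is the same search idea: propose a neighboring state, keep it if it scores better.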
AU: "Ah, I see. Thank you, Arthur Chan, for your reply. Yes, for search it is: many real problems are still solved using search approaches. What about "knowledge and reasoning" (Chapter 3 in the Norvig book), for example using "propositional resolution" to do inference from a KB (knowledge base) - is that still relevant?"
My Answer: "I think the answer is that it is and it is not. Here is a tl;dr answer:
It is not: many practical systems these days are probabilistic, which makes Part V of Norvig's book *feel* more relevant now. Most people in this forum are ML/DL fans; that's probably the first impression you should have these days.
But then, it is also relevant. In what sense? There are perhaps three reasons. The first is that it allows you to talk with people who learned A.I. in the last generation, because people in their 50s-60s (a.k.a. your boss) learned to solve AI problems with logic. So if you want to talk with them, learning logic/knowledge-type systems helps. Also, in AI, no one knows which topic will revive. E.g. fractals are now among the least talked-about topics in our community, but you never know what will happen in the next 10-20 years. So keeping breadth is a good thing.
Then there is the part about how you think about search: in Norvig and Russell's book, the first few search problems are logic problems such as first-order logic and chess. While these are used in fewer systems, compared to search that requires probabilities they are much easier to understand. E.g. you may have heard of people in their teens writing their first chess engine, but I have heard of no one writing a (good) speech recognizer or machine translator before grad school.
The final reason is perhaps more theoretical: many DL/ML systems you use are powerful, yes, but not all of them make decisions humans can understand, i.e. they are not *interpretable*. That's a big problem. So how to link these systems to GOFAI-type work is still a research problem."
Speech Recognition, Machine Learning, and Random Musing of Arthur Chan