Usually I would write an impression post when I audit a class, but a full blog post when I complete all the homework.
In this case, while I actually finished "Computational Neuroscience", I am not qualified enough to comment on some neuroscientific concepts such as Hodgkin-Huxley models, cable theory or brain plasticity, so I will stay at the "impression" level.
Strictly speaking, CN is more of an OoT for AIDL, but we are all curious about the brain, aren't we?
It's a great class if you know ML but want to learn more about the brain. It's also great if you know something about the brain but want to know how the brain is similar to modern-day ML.
You learn nifty concepts such as spike-triggered averages and neuronal coding/decoding, and of course main dishes such as HH models and cable theory.
I only learned these topics amateurishly, and there are around 3-4 classes I might take to further my knowledge. But it is absolutely interesting. e.g. This class is very helpful if you want to understand the difference between biological and artificial neural networks. You will also get insights into why ML people don't just use more biologically realistic models for ML problems.
My take: while this is not a core class for us ML/DLers, it is an interesting class to take, especially if you want to sound smarter than "We just run a CNN on our CIFAR-10 data." In my case, it humbles me a lot, because I now know that we humans just don't really understand our brains that well (yet).
I have been self-learning deep learning for a while: informally from 2013, when I first read Hinton's "Deep Neural Networks for Acoustic Modeling in Speech Recognition" and worked through Theano, and more "formally" from various classes since the summer of 2015, when I got freshly promoted to Principal Speech Architect. It's not an exaggeration that deep learning changed my life and career. I have been more active than in my previous life. e.g. If you are reading this, you were probably directed here from the very popular Facebook group, AIDL, which I admin.
So this article was written at the time I finished watching an older version of Richard Socher's cs224d online. That, together with Ng's, Hinton's, Li and Karpathy's and Silver's, are the 5 classes I recommended in my now widely-circulated "Learning Deep Learning - My Top-Five List". I think it's fair to give this set of classes a name - the Basic Five - because IMO they are the first five classes you should go through when you start learning deep learning.
In this post I will say a few words on why I chose these five classes as the Five. Compared to more established bloggers such as Karpathy, Olah or Denny Britz, I am more of a learner in the space - experienced, perhaps, yet still a learner. So this article, like my others, stresses learning: what can you learn from these classes? And, less talked about but just as important: what are the limitations of learning online? As a learner, I think these are interesting discussions, so here you go.
What are the Five?
Just to be clear, the classes I'd recommend are the five I just mentioned: Ng's, Hinton's, Li and Karpathy's, Socher's and Silver's.
And the ranking is the same as I wrote in the Top-Five List. Out of the five, four have official video playlists published online for free. With a small fee, you can finish Ng's and Hinton's classes with certification.
How Much I Actually Went Through the Basic Five
Many beginner articles come with a gigantic set of links. The authors usually expect you to click through all of them (and learn through them?). When you scrutinize such a list, it can amount to more than 100 hours of video watching, and perhaps up to 200 hours of work. I don't know about you, but I would suspect whether the authors really went through the lists themselves.
So it's fair for me to first tell you what I've actually done with the Basic Five as of this first writing (May 13, 2017).
Ng's "Machine Learning"
Finished the class in entirety without certification.
Li and Karpathy's "Convolutional Neural Networks for Visual Recognition" or cs231n
Listened through the class lectures about ~1.5 times. Haven't done any of the homework
Socher's "Deep Learning for Natural Language Processing" or cs224d
Listened through the class lecture once. Haven't done any of the homework.
Silver's "Reinforcement Learning"
Listened through the class lecture 1.5 times. Only worked out few starter problems from Denny Britz's companion exercises.
Hinton's "Neural Network for Machine Learning"
Finished the class in entirety with certification. Listen through the class for ~2.5 times.
This table is likely to be updated as I go deeper into a certain class, but it should tell you the limitations of my reviews. For example, while I have watched through all the class videos, only for Ng's and Hinton's classes have I finished the homework. That means my understanding of two of the three "Stanford trinity" classes is weaker, and my understanding of reinforcement learning is not solid either. Together with my work at Voci, Hinton's class gives me stronger insight than the average commenter on topics such as unsupervised learning.
Why The Basic Five? And Three Millennial Machine Learning Problems
Taking classes is for learning, of course. The five classes certainly give you the basics, and they suit you if you love to learn the fundamentals of deep learning. And take a look at the footnote: the five are not the only classes I sat through in the last 1.5 years, so their choice is not arbitrary. So oh yeah, those are the stuffs you want to learn. Got it? That's my criterion. 🙂
But that's what a thousand other bloggers would tell you as well. I want to give you a more interesting reason. Here you go:
If you go back in time to the year 2000 - the time Google had just launched their search engine, there was no series of Google products and surely there was no ImageNet - what were the most difficult problems for machine learning? I think you would see three of them:
Object classification,
Statistical machine translation,
Automatic speech recognition.
So what's so special about these three problems then? If you think about it, back in 2000, all three were known to be hard problems. They represent three seemingly different data structures -
Object classification - a 2-dimensional, dense array of data.
Statistical machine translation (SMT) - discrete symbols, seemingly related by loose rules humans call grammars and translation rules.
Automatic speech recognition (ASR) - a 1-dimensional time series, with similarity to object classification (through the spectrogram), and loosely bound by rules such as the dictionary and word grammar.
And you would recall that all three problems drew interest from governments, big institutions such as the Big Four, and startup companies. If you master one of them, you can make a living. Moreover, once you learn them well, you can transfer the knowledge to other problems. For example, handwritten character recognition (HWR) resembles ASR, and conversational agents work similarly to SMT. That is because the three problems are great metaphors for many other machine learning problems.
Now, okay, let me tell you one more thing: even now, there are people still making (or trying to make) a living by solving these three problems, because I never said they are solved. e.g. What if we increase the number of classes from 1000 to 5000? What if, instead of Switchboard, we work on conference speech or speech from YouTube? What if I ask you to translate so well that even humans cannot tell the difference? That should convince you: "Ah, if there is one method that could solve all three of these problems, learning that method would be a great idea!"
And as you can guess, deep learning is that one method that revolutionized all three fields. Now that's why you want to take the Basic Five. The Basic Five is not meant to make you a top researcher in the field of deep learning; rather, it teaches you just the basics. And at this point of your learning, knowing a powerful template for solving problems is important. You would also find that going through the Basic Five makes you able to read about the majority of deep learning problems these days.
So here's why I chose the Five: Ng's and NNML are the essential basics of deep learning. Li and Karpathy's teaches you object classification up to the state of the art. Socher's teaches you where deep learning stands in NLP; it forays into SMT and ASR a little, but gives you enough to start.
My explanation excludes Silver's reinforcement learning. That, admittedly, is the odd one out of the herd. I add Silver's class because RL is increasingly used even in traditionally supervised learning tasks. And of course, to know the place of RL, you need a solid understanding of it. Silver's class is perfect for that purpose.
What You Actually Learn
In a way, it also reflects what's really important when learning deep learning. So I will list out 8 points here, because they are repeated among the different courses.
Basics of machine learning: this is mostly from Ng's class. But themes such as bias-variance are repeated in NNML and Silver's class.
Gradient descent: its variants (e.g. ADAM), its alternatives (e.g. second-order methods) - it's a never-ending study.
Backpropagation: how to view it? As optimizing a function, as a computational graph, as a flow of gradients. Different classes give you different points of view, and don't skip them even if you have learned it once. (See the sketch after this list.)
Architecture: the big three families are DNN, CNN and RNN. Why some of them emerge and re-emerge in history. The details of how they are trained and structured. None of the courses will teach you everything, but going through the five will teach you enough to survive.
Image-specific techniques: not just classification, but localization/detection/segmentation (as in cs231n 2016 L8, L13). Not just convolution, but "deconvolution" and why we don't like that it is called "deconvolution". 🙂
NLP-specific techniques: word2vec, GloVe, and how they are applied in NLP problems such as sentiment classification.
(Advanced) Basics of unsupervised learning: mainly from Hinton's, and mainly about techniques from 5 years ago such as RBMs, DBNs, DBMs and autoencoders, but they are the basics if you want to learn more advanced ideas such as GANs.
(Advanced) Basics of reinforcement learning: mainly from Silver's class, from DP-based methods to Monte Carlo and TD.
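To make the backpropagation point above concrete, here is a minimal numpy sketch (my own toy example, not taken from any of the five classes) of a one-hidden-layer network viewed as a computational graph: the forward pass caches intermediate values, and the backward pass flows gradients back through them.

```python
import numpy as np

# Toy data: 4 samples, 3 features, binary labels.
X = np.array([[0.1, 0.2, 0.7],
              [0.9, 0.1, 0.3],
              [0.4, 0.5, 0.1],
              [0.8, 0.8, 0.2]])
y = np.array([[0.0], [1.0], [0.0], [1.0]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: each node of the graph caches its output.
    z1 = X @ W1 + b1          # linear
    h  = np.tanh(z1)          # hidden non-linearity
    z2 = h @ W2 + b2          # linear
    p  = sigmoid(z2)          # output probability
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: gradients flow back through the same graph.
    dz2 = (p - y) / len(X)            # dLoss/dz2 for sigmoid + cross-entropy
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh  = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)           # tanh'(z1)
    dW1, db1 = X.T @ dz1, dz1.sum(0)

    # Plain gradient descent update.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(f"final loss: {loss:.4f}")
```

Every framework you meet later automates exactly this bookkeeping, which is why the classes keep coming back to it.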
The Limitation of Autodidacts
By the time you finish the Basic Five, if you genuinely learn something from them, recruiters will start knocking on your door. What you think and write about deep learning will appeal to many people. Perhaps you start answering questions on forums? Or you might even write LinkedIn articles which get many Likes.
All good, but be cautious! During my year of administering AIDL, I've seen many people who purportedly took many deep learning classes, but within a few minutes of discussion I can point out holes in their understanding. Some, after some probing, turned out to have taken only one class in its entirety, so they don't really grok deeper concepts such as backpropagation. In other words, they could still improve, but they just refuse to. No wonder: with the hype of deep learning, many smart fellows just choose to start a company, or to code, without really taking the time to grok the concepts well.
That's a pity. And all of us should be aware that self-learning is limited. If you take a formal education path, like going to grad school, most of the time you will sit with people who are as smart as you and willing to point out your issues daily, so any of your weaknesses will be revealed sooner.
You should also be aware that, as deep learning is being hyped, the holes in your understanding are unlikely to be uncovered. That has nothing to do with whether you hold a job: many companies just want to hire someone to work on a task, and expect you to learn while working.
So what should you do then? I guess my first advice is: be humble, and be aware of the Dunning-Kruger effect. Self-learning usually gives people an intoxicating feeling that they have learned a lot. But learning a lot doesn't mean you know everything. There are always higher mountains; you do yourself a disservice if you stop learning.
The second thought is that you should try out your skills. e.g. It's one thing to know about CNNs; it's another to run a training with ImageNet data. If you are smart, the former takes a day. The latter takes much planning, a powerful machine, and some practice to get even an AlexNet trained.
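To give a feel for the gap, here is roughly what the skeleton of such a training run looks like, assuming PyTorch/torchvision (my own choice of tools, not something the classes prescribe) and assuming the ImageNet images are already downloaded and unpacked under data_root - which is itself a non-trivial chore:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumption: ImageNet is already downloaded/unpacked under this path.
data_root = "/data/imagenet"

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageNet(data_root, split="train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True,
                          num_workers=8, pin_memory=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.alexnet(num_classes=1000).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                      weight_decay=5e-4)

for epoch in range(90):            # AlexNet-era schedules ran for ~90 epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```

Even this stripped-down sketch hides the learning-rate schedule, evaluation, checkpointing and the days of GPU time the real run takes.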
My final advice is to talk with people and understand your own limitations. e.g. After reading many posts on AIDL, I noticed that while many people understand object classification well enough, they don't really grasp the basics of object localization/detection. In fact, neither did I, even after my first pass of the videos. So what did I do?
I just went through the videos on localization/detection again and again until I understood.
After the Basic Five.......
So some of you will ask, "What's next?" Yes, you finished all these classes, as if there is nothing left to learn! Shake that feeling off! There are tons of things you still want to learn. So I list out several directions you can go:
Completionist: as of this first writing, I still haven't done all the homework for all five classes. Note that doing the homework really helps your understanding, so if you are like me, I would suggest you go back to the homework and test your understanding.
Drilling the Basics of Machine Learning: this goes in another direction - working on your fundamentals. For that, you can drill on Math topics forever. I would say the more important and non-trivial parts are perhaps Linear Algebra, Matrix Differentiation and Topology. Also check out this very good link on how to learn college-level Math.
Specialize in one field: if you want to master just one single field out of the Three Millennial Machine Learning Problems I mentioned, it's important to keep looking at specialized classes on computer vision or NLP. Since I don't want to clutter this point, let's say I will discuss the relevant classes/materials in future articles.
Writing: that's what many of you have been doing, and I think it helps further your understanding. One thing I would suggest is to always write something new, something you would want to read yourself. For example, there are too many blog posts on Computer Vision Using Tensorflow in the world. So why not write one about what people don't know? For example, practical transfer learning for object detection. Or what deconvolution is. Or a literature review of some non-trivial architecture such as Mask-RCNN, comparing it with existing encoder-decoder structures. Writing this kind of article takes more time, but remember: quality trumps quantity.
Coding/Githubbing: there is a lot of room for re-implementing ideas from papers and open-sourcing them. It is also a very useful skill, as many companies need it to reproduce trendy deep learning techniques.
Research: if you genuinely understand deep learning, you might see that many techniques need refinement. Indeed, there are currently plenty of opportunities to come up with better techniques. Of course, writing papers at the level of professional researchers is tough and out of my scope. But only when you can publish will people give you respect as part of the community.
Framework: hacking a framework at the C/C++ level is not for the faint of heart. But if you are my type, someone who loves low-level coding, trying to come up with a framework yourself could be a great way to learn more. e.g. Check out Darknet, which is, surprisingly, written in C!
So here you go: the complete Basic Five, what they are, why they are basic, and where to go from here. In a way, it's also a summary of what I have learned so far from various classes since June 2015. As with my other posts, if I learn more in the future, I will keep this post updated. Hope this post keeps you learning deep learning.
Before 2017, there was no coherent set of Socher's class videos available online. Sadly, there was also no legitimate version. So the version I refer to is a mixture of the 2015 and 2016 classes. Of course, you may now find a legitimate 2017 version of cs224n on YouTube.
My genuine expertise is speech recognition; unfortunately that's not a topic I can share much about due to IP issues.
Some of you (e.g. from AIDL) would jump up and say, "No way! I thought that NLP wasn't solved by deep learning yet!" That's because you are a lost soul, misinformed by misleading blog posts. ASR was the first field tackled by deep learning, dating back to 2010. And most systems you see in SMT are seq2seq-based.
I have been in the business of speech recognition since 1998, when I worked on a voice-activated project for my undergraduate degree back at HKUST. It was a mess, but that's how I started.
And for the last one, you can always search for it on YouTube. Of course, it is not legit for me to share it here.
For me, finishing Hinton's deep learning class, or Neural Networks for Machine Learning (NNML), is a long overdue task. As you know, the class was first launched back in 2012. I was not so convinced by deep learning back then. Of course, my mind changed around 2013, but the class was archived. Not until 2 years later did I decide to take Andrew Ng's class on ML, and finally I was able to loop through Hinton's class once. But only last October, when the class relaunched, did I decide to take it again, i.e. watch all the videos a second time, finish all the homework and get passing grades for the course. As you will read in my journey, this class is hard: some videos I watched 4-5 times before grokking what Hinton said, and some assignments made me take long walks to think them through. Finally I made it through all 20 assignments, and even bought a certificate for bragging rights. It was a refreshing, thought-provoking and satisfying experience.
So this piece is my review of the class: why you should take it and when. I also discuss one question which has been floating around forums from time to time: given all these deep learning classes now, is Hinton's class outdated? Or is it still the best beginner class? I will chime in on the issue at the end of this review.
The Old Format Is Tough
I admire people who could finish this class in Coursera's old format. NNML is well known to be much harder than Andrew Ng's Machine Learning, as multiple reviews said (here, here). Many of my friends who have PhDs cannot quite follow what Hinton said in the last half of the class.
No wonder: when Karpathy reviewed it in 2013, he noted that there was an influx of non-MLers working on the course. For newcomers, topics such as energy-based models, which many people have a hard time following, must be bewildering. Or what about deep belief networks (DBNs), which people these days still mix up with deep neural networks (DNNs)? And quite frankly, I still don't grok some of the proofs in lecture 15 after going through the course, because deep belief networks are difficult material.
The old format only allowed 3 attempts per quiz, with tight deadlines, and you only had one chance to finish the course. One homework requires deriving the matrix form of backprop from scratch. All of this makes the class unsuitable for busy individuals (like me), and more suitable for second- or third-year graduate students, or experienced practitioners who have plenty of time (but who does?).
The New Format Is Easier, but Still Challenging
I took the class last October, when Coursera had changed most classes to the new format, which allows students to re-take. It strips out some of the difficulty, and is more suitable for busy people. That doesn't mean you can go easy on the class: for the most part, you will need to review the lectures, work out the Math, draft pseudocode, etc. The homework that requires you to derive backprop is still there. The upside: you can still have all the fun of deep learning. 🙂 The downside: you shouldn't expect to get through the class without spending 10-15 hours/week.
Why the Class is Challenging - I: The Math
Unlike Ng's class and cs231n, NNML is not easy for beginners without a background in calculus. The Math is still not too difficult - mostly differentiation with the chain rule, intuition on what the Hessian is, and, more importantly, vector differentiation - but if you have never learned it, the class will be over your head. Take at least Calculus I and II before you join, and know some basic equations from the Matrix Cookbook.
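To give a flavor of the vector differentiation involved (my own illustrative example, not an excerpt from the course), here is one step of the chain rule in matrix form, for a single linear layer $z = Wx + b$ followed by a non-linearity $a = \sigma(z)$ and a loss $L$:

```latex
\frac{\partial L}{\partial W} = \delta \, x^{\top},
\qquad
\frac{\partial L}{\partial b} = \delta,
\qquad
\text{where } \delta = \frac{\partial L}{\partial z}
            = \frac{\partial L}{\partial a} \odot \sigma'(z).
```

The backprop homework is essentially this kind of manipulation, repeated layer by layer.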
Why the Class is Challenging - II: Energy-based Models
Another reason why the class is difficult is that the last half of the class is all based on so-called energy-based models, i.e. models such as the Hopfield network (HopfieldNet), the Boltzmann machine (BM) and the restricted Boltzmann machine (RBM). Even if you are used to the math of supervised learning methods such as linear regression, logistic regression or even backprop, the Math of RBMs can still throw you off. No wonder: many of these models have physical origins such as the Ising model. Deep learning research also frequently uses ideas from Bayesian networks such as explaining away. If you have no basic background in either physics or Bayesian networks, you will feel quite confused.
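For reference, the object at the heart of those lectures is the RBM's energy function; in the standard notation (visible units $v$, hidden units $h$, biases $a$ and $b$, weights $W$, partition function $Z$) it reads:

```latex
E(v, h) = -a^{\top} v - b^{\top} h - v^{\top} W h,
\qquad
P(v, h) = \frac{e^{-E(v, h)}}{Z},
\quad
Z = \sum_{v', h'} e^{-E(v', h')}.
```

Low energy means high probability, and training is about shaping that energy landscape; that is the mental shift from the usual loss-minimization picture.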
In my case, I spent quite some time Googling and reading through the relevant literature. That powered me through some of the quizzes, but I don't pretend I understand those topics, because they can be deep and unintuitive.
Why the Class is Challenging - III: Recurrent Neural Network
If you learn RNNs these days, probably from Socher's cs224d or by reading Mikolov's thesis, LSTM would easily be your only thought on how to resolve exploding/vanishing gradients in RNNs. Of course, there are other ways: echo state networks (ESN) and Hessian-free methods. They are seldom talked about these days. Again, their formulation is quite different from your standard methods such as backprop and gradient descent. But learning them gives you breadth, and makes you think whether the status quo is the right thing to do.
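The problem itself can be stated in one line (this is the standard BPTT analysis, not anything specific to one class). For hidden states $h_i = \phi(W_{hh} h_{i-1} + W_{xh} x_i)$, the gradient through a simple RNN contains a long product of Jacobians:

```latex
\frac{\partial L_t}{\partial h_{t-k}}
  = \frac{\partial L_t}{\partial h_t}
    \prod_{i=t-k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}},
\qquad
\frac{\partial h_i}{\partial h_{i-1}}
  = \operatorname{diag}\!\big(\phi'(z_i)\big)\, W_{hh}.
```

Whether that product shrinks or blows up depends on the recurrent weight matrix; LSTM, ESN and Hessian-free optimization are three very different answers to this same problem.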
But is it Good?
You bet! Let me explain why in the next section.
Why is it good?
Suppose you just want to use some of the fancier tools in ML/DL. I guess you can just go through Andrew Ng's class, test out a bunch of implementations, then claim yourself an expert - that's what many people do these days. In fact, Ng's Coursera class is designed to give you a taste of ML, and indeed, you should be able to wield many ML tools after the course.
That said, you should realize your understanding of ML/DL is still .... rather shallow. Maybe you are thinking, "Oh, I have a bunch of data, let's throw it into Algorithm X!" or "Oh, we just want to use XGBoost, right? It always gives you the best results!" You should realize that performance numbers aren't everything. It's important to understand what's going on with your model. You can easily make costly, short-sighted and ill-informed decisions when you lack understanding. It happens to many of my peers, to me, and sadly even to some of my mentors.
Don't make that mistake! Always seek better understanding! Try to grok. If you only do Ng's neural network assignment, by now you would still wonder how it can be applied to other tasks. Go for Hinton's class, feel perplexed by what the Prof. said, and iterate. Then you will start to build up a better understanding of deep learning.
Another, more technical note: if you want to learn deep unsupervised learning, I think this should be the first course as well. Prof. Hinton teaches you the intuition behind many of these machines, and you will also have the chance to implement them. For models such as the Hopfield net and the RBM, it's quite doable if you know basic Octave programming.
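As a taste of how doable it is, here is a bare-bones CD-1 update for a binary RBM in numpy (a minimal sketch of the standard contrastive-divergence recipe, written by me rather than taken from the course assignments, which are in Octave):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: (batch, n_visible) binary data
    W:  (n_visible, n_hidden) weights; a, b: visible/hidden biases.
    """
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one step of Gibbs sampling back to a "reconstruction".
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)

    # Parameter updates: difference between data and model correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return np.mean((v0 - pv1) ** 2)   # reconstruction error, for monitoring

# Toy usage: 6 visible units, 4 hidden units, random binary "data".
W = 0.01 * rng.normal(size=(6, 4))
a, b = np.zeros(6), np.zeros(4)
data = (rng.random((100, 6)) < 0.5).astype(float)
for epoch in range(20):
    err = cd1_step(data, W, a, b)
print(f"reconstruction error: {err:.3f}")
```

The course assignments wrap more machinery around this (mini-batches, momentum, weight decay), but the core update is exactly this short.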
So it's good, but is it outdated?
Learners these days are perhaps luckier; they have plenty of choices to learn deep topics such as deep learning. Just check out my own "Top 5 List". cs231n, cs224d and even Silver's class are great contenders to be the second class.
But I still recommend NNML. There are four reasons:
It is deeper and tougher than other classes. As I explained before, NNML is tough, not exactly mathematically (Socher's and Silver's Math is also non-trivial), but conceptually. e.g. energy-based models and the different ways to train RNNs are some examples.
Many concepts in ML/DL can be seen in different ways. For example, bias/variance is a trade-off for frequentists, but it's seen as a "frequentist illusion" by Bayesians. The same can be said about concepts such as backprop and gradient descent. Once you think about them, they are tough concepts. So one reason to take a class is not just to be taught a concept, but to be allowed to look at things from a different perspective. In that sense, NNML fits perfectly into that bucket. I found myself thinking about Hinton's statements during many long promenades.
Hinton's perspective - Prof. Hinton has been mostly on the losing side of ML during the last 30 years. But he persisted. From his lectures, you get a feeling of how and why he started a certain line of research, and perhaps ultimately how you would research something yourself in the future.
Prof. Hinton's delivery is humorous. Check out his view in Lecture 10 about why physicists worked on neural networks in the early 80s. (Note: he was a physicist before working on neural networks.)
Conclusion and What's Next?
All in all, Prof. Hinton's "Neural Networks for Machine Learning" is a must-take class. All of us, beginners and experts included, will benefit from the professor's perspective and the breadth of the subject.
I do recommend you first take Ng's class if you are an absolute beginner, and perhaps some Calculus I or II, plus some Linear Algebra, Probability and Statistics; it would make the class more enjoyable (and perhaps doable) for you. In my view, both Karpathy's and Socher's classes are perhaps easier second classes than Hinton's.
If you finish this class, make sure you check out other fundamental classes. Check out my post "Learning Deep Learning - My Top 5 List" and you will have plenty of ideas for what's next. A special mention here is perhaps Daphne Koller's Probabilistic Graphical Models, which I found equally challenging, and which will perhaps give you some insight into very deep topics such as deep belief networks.
Another suggestion for you: maybe you can take the class again. That's what I plan to do about half a year later - as I mentioned, I don't understand every single nuance in the class. But I think understanding will come on my 6th or 7th time going through the material.
To me, this makes a lot of sense for both the course's preparers and the students, because students can take more time to really go through the homework, and the course's preparers can monetize their class for an indefinite period of time.
(20170410) First writing
(20170411) Fixed typos. Smooth up writings.
(20170412) Fixed typos
(20170414) Fixed typos.
I have been refreshing myself on various aspects of machine learning and data science. For the most part it has been a very nice experience. What I like most is that I am finally able to grok many of the machine learning jargon terms people talk about. They gave me a lot of trouble, even as a practitioner of machine learning, because most people just assume you have some understanding of what they mean.
Here is a little secret: all this jargon can be taken from very shallow to very deep. For instance, "lasso" just means setting the regularization term's exponent to 1. I always think it's just that people don't want to say the mouthful, "penalize with a regularization term of exponent 1", so they came up with "lasso".
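Spelled out in the standard notation (my own writing, not a quote from any course), the lasso estimate is just least squares plus a penalty whose exponent is 1:

```latex
\hat{\beta}_{\text{lasso}}
  = \arg\min_{\beta}
    \; \frac{1}{2n} \sum_{i=1}^{n} \big(y_i - x_i^{\top}\beta\big)^2
    \; + \; \lambda \sum_{j=1}^{p} |\beta_j|.
```

The "deep" end of the same word is why that absolute-value penalty drives coefficients exactly to zero, which is a much longer story.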
Then there is the bias-variance trade-off. Now here is a concept which is very hard to explain well. What opened my mind is what Andrew Ng said in his Coursera lecture: "just forget the terms bias and variance". Then he moves on to talk about over- and under-fitting, a much easier concept to understand. And then he leads you to think: when a model underfits, we have an estimator with "huge bias", and when the model overfits, the estimator allows too much "variance". Now that's a much easier way to understand it. Over- and under-fitting can be visualized; anyone who understands polynomial regression understands what overfitting is. That easily leads you to a eureka moment: "Oh, complex models can easily overfit!" That's actually the key to understanding the whole phenomenon.
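For the record, the textbook decomposition behind all this (standard form, not Ng's exact notation) is:

```latex
\mathbb{E}\!\left[\big(y - \hat{f}(x)\big)^2\right]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

with underfitting corresponding to the bias term dominating and overfitting to the variance term dominating.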
Not only are people getting better at explaining different concepts; several important ideas are also enunciated better. e.g. reproducibility is huge, and it should be huge in machine learning as well. Yet even now you see junior scientists at entry level ignore all the important measures to make sure their work is reproducible. That's a pity. In speech recognition, for example, I remember there was a dark time when training a broadcast news model was so difficult, despite the fact that we knew people had done it before. How much time do people waste repeating other people's work?
Nowadays, perhaps I would just ask younger scientists to take Johns Hopkins' "Reproducible Research". No kidding. Pay $49 to finish that class.
Anyway, that's my rambling for today. Before I go: I have been actively engaged in the Facebook Deep Learning group. It turns out many of the forum users love to hear more about how to learn deep learning. Perhaps I will write up more in the future.
I have been refreshing myself on the general topic of machine learning, mostly motivated by job requirements as well as my own curiosity. That's why you saw my review post on the famed Andrew Ng class. I have also been taking Dragomir Radev's NLP class, as well as the Machine Learning Specialization by Emily Fox and Carlos Guestrin. When you are at work, it's tough to learn, but so far I have managed to learn something from each class and was able to apply it in my job.
So, one question you might ask is: how applicable are online or even university machine learning courses in real life? Short answer: they are quite different. Let me try to answer this question by giving an example that came up for me recently.
It is a gender detection task based on voice. This came up at work, and I was tasked to improve the company's existing detector. For the majority of my time, I tried to divide the data set, which has around 1 million data points, into train/validation/test sets. Furthermore, from the beginning of the task I decided to create a series of datasets of increasing size - for example, 2k, 5k, 10k ..... and up to 1 million. This simple exercise, done mostly in python, took me close to a week.
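In sketch form, the splitting step looked roughly like the Python below (a simplified, synthetic stand-in: the feature dimension, label coding and file names here are made up for illustration, and the real pipeline had far more audio-specific bookkeeping):

```python
import numpy as np

# Stand-in for the real data: ~1 million labelled voice recordings,
# represented here by random 13-dim "acoustic features" and toy labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(1_000_000, 13))
y = (rng.random(1_000_000) < 0.5).astype(int)   # 0/1 gender labels (toy)

# One fixed shuffle, then an 80/10/10 train/validation/test split.
idx = rng.permutation(len(X))
n_hold = len(X) // 10
test_idx = idx[:n_hold]
val_idx = idx[n_hold:2 * n_hold]
train_idx = idx[2 * n_hold:]

# Progressively sized training subsets for quick prototyping.
for size in (2_000, 5_000, 10_000, 50_000, 200_000, len(train_idx)):
    subset = train_idx[:size]
    np.save(f"train_{size}_features.npy", X[subset])
    np.save(f"train_{size}_labels.npy", y[subset])
```

The code is trivial; what took the week was deciding how to split without leaking speakers across sets, handling malformed files, and similar unglamorous details.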
Training, aka the fun part, was comparatively short and anticlimactic. I just chose a couple of well-known methods in the field and tested them on the progressively sized datasets. Since prototyping a system this way is so easy, I was able to weed out weaker methods very early and come up with a better system, getting a high relative performance gain. Before I submitted the system to my boss, I also worked out an analysis of why the system doesn't give 100%. No surprise: it turns out the volume of the speech matters, and some individuals simply don't fit their sex stereotypes. But so far the task is still quite well done, because we get better performance and we know why certain things don't work well. That is good knowledge to have in practice.
One twist here: after finishing the system, I found that the method which gives the best classification performance doesn't give the best speed performance. So I decided to choose a cheaper but still rather effective method. It hurts my heart to see the best method go unused, but that's the way it is sometimes.
Eventually, as one of the architects of the system, I also spent time making sure the integration was correct. That took coding, much of it done in C/C++/python. Since there were a couple of bugs in some existing code, I spent about a week tracing code with gdb.
The whole thing took me about three months. Around 80% of my time was spent on data preparation and coding. The machine learning you do in class does happen, but it only took me around 2 weeks to determine the best model, and I could have made those 2 weeks shorter by using more cores. Compared to the other tasks, the machine learning you do in class - which usually comes in the very nice form, "Here is a training set, go train and evaluate it with the evaluation set." - seldom appears in real life. Most of the time, you are the one who prepares the training and evaluation sets.
So if you happen to work on machine learning, do expect to work on tasks such as web crawling and scraping if you work on text processing, listening to thousands of waveforms if you work on speech or music processing, or watching videos that you might not like to watch if you try to classify videos. That's machine learning in real life. If you happen to be the one who decides which algorithm to use, yes, you will have some fun. If you happen to design a new algorithm, then you will have a lot of fun. But most of the time, practitioners need to handle issues which can just be .... mundane. Tasks such as web crawling are certainly not as enjoyable as applying advanced mathematics to a problem. But they are incredibly important, and they will take up most of your time, or your organization's as a whole.
Perhaps that's why you hear the term "data munging", or in Bill Howe's class, "data jujitsu". This is a well-known skill, but not very advertised and unlikely to be seen as important. Yet in real life, such data processing skill is crucial. For example, in my case, if I didn't have the progressively sized datasets, prototyping could have taken a long time, and I might have needed to spend 4 to 5 times more experimental time to determine the best method. Of course, debugging will also be slower if you only have one huge data set.
In short, data scientists and machine learning practitioners spend the majority of their time as data janitors. I think that's been a well-known phenomenon for a long time. But now, as machine learning becomes a thing, there is more awareness. I think this is a good thing, because it helps with scheduling and division of labor if you want to manage a group of engineers on a machine learning task.
I had heard about Prof. Andrew Ng's Machine Learning class for a long time. As MOOCs go, this is a famous one; you could say the class actually popularized MOOCs. Many people seem to have benefited from the class and it has a ~70% positive rating. I have no doubt that Prof. Ng has done a good job of teaching non-data scientists a lot of difficult concepts in machine learning.
On the other hand, if you are a more experienced practitioner of ML, i.e. like me, someone who has worked on a subfield of the industry (eh, speech recognition......) for a while, would the class be useful for you?
I think the answer is yes for several reasons:
You want to connect the dots: most of us work on a particular machine learning problem for a while, and it's easy to fall into the tunnel vision inherent to a certain type of machine learning. e.g. For a while, people thought that using 13 dimensions of MFCCs was the norm in ASR. So if you learn machine learning through ASR, it's natural to think that feature engineering is not important. That couldn't be more wrong! If you look at reviews by Kaggle winners, most will tell you they spent the majority of their time engineering features. So learning machine learning from the ground up gives you a new perspective.
You want to learn the language of machine learning properly: one thing I found useful in Ng's class is that it doesn't assume you know everything (unlike many postgraduate-level classes). e.g. I found that Ng's explanation of the terms bias vs. variance makes a lot of sense - because the terms have to be interpreted differently to make sense. Before his class, I always had to conjure up the equations for bias and variance in my head. True, it's more elegant that way, but for the most part an intuitive feeling is more crucial at work.
You want to practice: suppose you are like me, someone who has been focusing on one area of ASR - e.g. in my case, I spent quite a portion of my time just working on the codebase of the in-house engine. Chances are you will lack opportunities to train yourself on other techniques. e.g. I had never implemented linear regression (a one-liner) or logistic regression before. So this class gives you an opportunity to play with these things hands-on (see the sketch after this list).
Your knowledge is outdated: you might have learned pattern recognition or machine learning once back in school, but technology has changed, so you want to keep up. I think Ng's class is a good starter class. There are more difficult ones such as Hinton's Neural Networks for Machine Learning, the Caltech class by Prof. Yaser Abu-Mostafa, or the CMU class by Prof. Tom Mitchell. If you are already proficient, yes, maybe you should jump to those first.
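On the "practice" point: the whole of logistic regression trained by batch gradient descent fits in a dozen lines of numpy (a toy sketch in the spirit of the Ng homework, which itself uses Octave):

```python
import numpy as np

def train_logistic_regression(X, y, lr=0.1, n_iter=2000):
    """Batch gradient descent on the logistic (cross-entropy) loss.

    X: (n, d) features, y: (n,) labels in {0, 1}.
    """
    X = np.hstack([np.ones((len(X), 1)), X])   # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad = X.T @ (p - y) / len(y)          # gradient of the mean loss
        w -= lr * grad
    return w

# Toy usage on synthetic, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = train_logistic_regression(X, y)
print("learned weights:", w)
```

Trivial as it looks, writing it yourself once fixes the mechanics in your head far better than only calling a library.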
So this is how I see Ng's class. It is deliberately simple and leans on the practical side. Math is minimal and calculus is nada. There is no deep learning, and you don't have to implement an algorithm to train SVMs. There are no newer techniques such as random forests and gradient boosting. But it's a good starter class. It also gives you a good warm-up if you haven't learned for a while.
Of course, this also speaks to the downsides of the class: there are just too many practical techniques which are not covered. For example, if you work on a few real machine learning problems, you will notice that SVM with an RBF kernel is not the most scalable option; random forests and gradient boosting are usually better choices. And even when using SVMs, a linear kernel with the right packages (such as Pegasos) gives you a much faster run. In practice, that can mean the difference between delivering and not delivering. So this is what Ng's class is lacking: it doesn't cover many important modern techniques.
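As a rough illustration of the scalability point, here is a sketch using scikit-learn (my own choice of package for the example; not something covered in the class) that contrasts a kernel SVM with a linear one on a synthetic problem:

```python
import time

from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

# Synthetic data, large enough for the timing difference to show.
X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)

for name, clf in [("RBF-kernel SVC", SVC(kernel="rbf")),
                  ("LinearSVC", LinearSVC(max_iter=5000))]:
    t0 = time.time()
    clf.fit(X, y)
    print(f"{name}: trained in {time.time() - t0:.1f}s, "
          f"train accuracy {clf.score(X, y):.3f}")
```

On data sets with hundreds of thousands of rows the gap widens dramatically, which is exactly when the linear solvers earn their keep.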
In a way, you should see it as your first machine learning class. The realistic expectation should be that you need to keep on learning. (Isn't that true of everything?)
Issues aside, I feel very grateful to learn something new in machine learning again. The last ML class I took was back in 2002, and the landscape of the field was so different back then. For that, let's give thanks to Prof. Ng! And happy learning.
I had a couple of vacation days last week. For fun, I decided to train a statistical machine translation (SMT) system. Since I wanted to use open-source tools, the natural choice was Moses with GIZA++. So this note is about how you can start smoothly. I don't plan to write a detailed tutorial, because Moses' own tutorial is nice enough already. What I note here is more about how to deal with the different stumbling blocks.
Which Tutorial to Follow?
If you have never run an SMT training before, perhaps the most solid way to start is to follow the "Baseline System" link (a better name would be "How to train a baseline system"). There you will find a rather detailed tutorial on how to train a set of models from the WMT13 mini news commentary data.
I found that the most difficult part of the process is compiling Moses. I don't blame anybody; C++ programs can generally be difficult to compile.
Build boost from source, and make sure libbz2 is installed first. Then life will be much easier.
While it is not mandatory, I would highly recommend you install cmph before compiling Moses, because having cmph triggers compilation of the phrase-table compression tools processPhraseTableMin and processLexicalTableMin. Without them, decoding will take a long, long time.
Running ./bjam --with-boost=<boost_dir> --with-cmph=<cmph_dir> -j 4 worked fairly well for me until I tried to compile the ./misc directory; there I found I needed to manually add the boost path to the compilation.
Training is fairly trivial once you have Moses compiled correctly and everything put in your home directory.
On the other hand, if you compiled your code somewhere other than ~/, do expect some debugging to be necessary. e.g. mert-moses.pl requires a full path in the --mertdir argument.