
A Review of Hinton's Coursera "Neural Networks for Machine Learning"

Cajal's drawing of chick cerebellum cells, from Estructura de los centros nerviosos de las aves, Madrid, 1905

For me, finishing Hinton's deep learning class, Neural Networks for Machine Learning (NNML), was a long-overdue task. As you know, the class first launched back in 2012. I was not so convinced by deep learning back then. Of course, my mind changed around 2013, but the class was archived. Not until 2 years later did I decide to take Andrew Ng's class on ML, and finally I was able to loop through Hinton's class once. But only last October, when the class relaunched, did I decide to take it again, i.e. watch all the videos a second time, finish all the homework and get passing grades for the course. As you will see from my journey, this class is hard. Some videos I watched 4-5 times before grokking what Hinton said. Some assignments made me take long walks to think them through. Finally I made it through all 20 assignments, and even bought a certificate for bragging rights. It was a refreshing, thought-provoking and satisfying experience.

So this piece is my review of the class: why you should take it, and when. I will also discuss one question which has been floating around forums from time to time: given all the deep learning classes available now, is Hinton's class outdated? Or is it still the best beginner class? I will chime in on that issue at the end of this review.

The Old Format Is Tough

I admire people who could finish this class in Coursera's old format. NNML is well known to be much harder than Andrew Ng's Machine Learning, as multiple reviews have said (here, here). Many of my friends who have PhDs cannot quite follow what Hinton says in the second half of the class.

No wonder: when Karpathy reviewed it back in 2013, he noted that there was an influx of non-MLers working on the course. For newcomers, it must be mesmerizing to face topics such as energy-based models, which many people have a hard time following. Or what about the deep belief network (DBN), which people these days still mix up with the deep neural network (DNN)? And quite frankly, I still don't grok some of the proofs in lecture 15 after going through the course, because deep belief networks are difficult material.

The old format only allowed 3 attempts per quiz, with tight deadlines, and you only had one chance to finish the course. One homework assignment requires deriving the matrix form of backprop from scratch. All of this made the class unsuitable for busy individuals (like me), and better suited to second- or third-year graduate students, or experienced practitioners who have plenty of time (but who does?).

The New Format Is Easier, but Still Challenging

I took the class last October, when Coursera had changed most classes to the new format, which allows students to re-take them. [1] It strips out some of the difficulty, which makes it more suitable for busy people. That doesn't mean you can go easy on the class: for the most part, you need to review the lectures, work out the math, draft pseudocode, etc. The homework that requires you to derive backprop is still there. The upside: you still get all the fun of deep learning. 🙂 The downside: you shouldn't expect to get through the class without spending 10-15 hours a week.

Why the Class is Challenging –  I: The Math

Unlike Ng's class and cs231n, NNML is not easy for beginners without a background in calculus. The math is still not too difficult: mostly differentiation with the chain rule, intuition about what a Hessian is, and more importantly vector differentiation. But if you have never learned these, the class will be over your head. Take at least Calculus I and II before you join, and know some basic identities from the Matrix Cookbook.
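
To give a concrete sense of the level of math involved, this is the kind of manipulation the course expects you to do comfortably: pushing a gradient through a linear layer with the chain rule in vector form (standard results, written here only as a reminder, not anything specific to the course's notation):

```latex
% For a linear layer z = Wx + b and a scalar loss L:
\frac{\partial L}{\partial x} = W^{\top}\frac{\partial L}{\partial z},
\qquad
\frac{\partial L}{\partial W} = \frac{\partial L}{\partial z}\, x^{\top},
\qquad
\frac{\partial L}{\partial b} = \frac{\partial L}{\partial z}
```

If these identities look alien, spend some time with the Matrix Cookbook before you enroll.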

Why the Class is Challenging – II:  Energy-based Models

Another reason why the class is difficult is that the second half is all based on so-called energy-based models, i.e. models such as the Hopfield network (HopfieldNet), the Boltzmann machine (BM) and the restricted Boltzmann machine (RBM). Even if you are used to the math of supervised learning methods such as linear regression, logistic regression or even backprop, the math of the RBM can still throw you off. No wonder: many of these models have a physical origin, such as the Ising model. Deep learning research also frequently uses ideas from Bayesian networks, such as explaining away. If you have no background in either physics or Bayesian networks, you will feel quite confused.
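
To give a flavor of what "energy-based" means, here is a minimal sketch of an RBM's energy function and one step of Gibbs sampling, in Python/NumPy. This is my own toy illustration (random weights, arbitrary sizes), not course code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 6 visible units, 3 hidden units, random weights
W = rng.normal(scale=0.1, size=(6, 3))   # visible-to-hidden weights
b = np.zeros(6)                          # visible biases
c = np.zeros(3)                          # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v'Wh - b'v - c'h ; low energy means a "preferred" configuration
    return -v @ W @ h - b @ v - c @ h

def gibbs_step(v):
    # Sample the hidden units given the visible ones, then resample the visible
    h = (sigmoid(v @ W + c) > rng.random(3)).astype(float)
    v_new = (sigmoid(W @ h + b) > rng.random(6)).astype(float)
    return v_new, h

v = rng.integers(0, 2, size=6).astype(float)
v, h = gibbs_step(v)
print("energy of the sampled configuration:", energy(v, h))
```

Training an RBM (e.g. with contrastive divergence) is where the real subtlety lives, but the point is that the model is defined by an energy over configurations rather than by a feed-forward mapping.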

In my case, I spent quite some time Googling and reading through the relevant literature, which powered me through some of the quizzes, but I don't pretend I understand those topics, because they can be deep and unintuitive.

Why the Class is Challenging – III: Recurrent Neural Network

If you learn RNN these days, probably from Socher's cs224d or by reading Mikolov's thesis, the LSTM would easily be your only thought on how to resolve exploding/vanishing gradients in an RNN. Of course, there are other ways: the echo state network (ESN) and Hessian-free methods. They are seldom talked about these days. Again, their formulation is quite different from standard methods such as backprop and gradient descent. But learning them gives you breadth, and makes you question whether the status quo is the right thing to do.
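
As one example of the alternatives, here is a toy echo state network in Python/NumPy (my own sketch, with an arbitrary sine-prediction task): the recurrent "reservoir" is random and fixed, and only the linear readout is fitted by least squares, so there is no backprop through time and hence no exploding or vanishing gradient to fight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Echo state network sketch: the recurrent "reservoir" weights stay fixed,
# only the linear readout is trained (an ordinary least-squares fit).
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep spectral radius below 1

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit the readout only
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```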

But is it Good?

You bet! Let me justify that statement in the next section.

Why is it good?

Suppose you just want to use some of the fancier tools in ML/DL. I guess you could just go through Andrew Ng's class, test out a bunch of implementations, then declare yourself an expert; that's what many people do these days. In fact, Ng's Coursera class is designed to give you a taste of ML, and indeed, you should be able to wield many ML tools after the course.

That said, you should realize that your understanding of ML/DL is still rather shallow. Maybe you are thinking, "Oh, I have a bunch of data, let's throw it into Algorithm X!" or "Oh, we just want to use XGBoost, right? It always gives the best results!" You should realize that performance numbers aren't everything. It's important to understand what's going on with your model. You can easily make costly, short-sighted and ill-informed decisions when you lack understanding. It happens to many of my peers, to me, and sadly even to some of my mentors.

Don't make that mistake! Always seek better understanding! Try to grok. If you only do Ng's neural network assignment, by now you would still wonder how it can be applied to other tasks. Go for Hinton's class, feel perplexed by what the Prof says, and iterate. Then you will start to build up a better understanding of deep learning.

Another, more technical note: if you want to learn deep unsupervised learning, I think this should be your first course as well. Prof. Hinton teaches you the intuition behind many of these machines, and you will also have the chance to implement them. For models such as the Hopfield net and the RBM, it's quite doable if you know basic Octave programming.
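
To back up the "quite doable" claim, here is how small a Hopfield network can be. A toy sketch of my own, in Python/NumPy rather than Octave: it stores one bipolar pattern with the Hebbian rule and recovers it from a corrupted copy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one bipolar pattern with the Hebbian rule, then recall it
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = len(pattern)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

# Corrupt two bits and run asynchronous updates until the state settles
state = pattern.copy()
state[[1, 4]] *= -1
for _ in range(5):                          # a few sweeps are plenty here
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("pattern recovered:", np.array_equal(state, pattern))
```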

So it’s good, but is it outdated?

Learners these days are perhaps luckier; they have plenty of choices for learning a deep topic such as deep learning. Just check out my own "Top 5 List". cs231n, cs224d and even Silver's class are great contenders for the second class.

But I still recommend NNML.  There are four reasons:

  1. It is deeper and tougher than other classes.  As I explained before, NNML is tough, not exactly mathematically (Socher's and Silver's math are also non-trivial), but conceptually; energy-based models and the different ways to train RNNs are examples.
  2. Many concepts in ML/DL can be seen in different ways.  For example, bias/variance is a trade-off for frequentists, but it is seen as a "frequentist illusion" by Bayesians.  The same can be said about concepts such as backprop and gradient descent.  Once you really think about them, they are tough concepts.  So one reason to take a class is not just to be taught a concept, but to be allowed to look at things from a different perspective.  In that sense, NNML fits the bill perfectly.  I found myself thinking about Hinton's statements during many long promenades.
  3. Hinton's perspective – Prof. Hinton has been mostly on the losing side of ML during the last 30 years.  But he persisted, and from his lectures you get a feeling for how and why he started a certain line of research, and perhaps ultimately for how you would research something yourself in the future.
  4. Prof. Hinton's delivery is humorous.  Check out his view in Lecture 10 about why physicists worked on neural networks in the early 80s.  (Note: he was a physicist before working on neural networks.)

Conclusion and What’s Next?

All in all, Prof. Hinton's "Neural Networks for Machine Learning" is a must-take class. All of us, beginners and experts included, will benefit from the professor's perspective and the breadth of the subject.

I do recommend you first take Ng's class if you are an absolute beginner, and perhaps some Calculus I and II, plus some linear algebra, probability and statistics; that would make the class more enjoyable (and perhaps doable) for you. In my view, both Karpathy's and Socher's classes are perhaps easier second classes than Hinton's.

If you finish this class, make sure you check out other fundamental classes. Check out my post "Learning Deep Learning – My Top 5 List"; you will get plenty of ideas for what's next. A special mention here goes to Daphne Koller's Probabilistic Graphical Models, which I found equally challenging, and which will perhaps give you some insight into very deep topics such as the Deep Belief Network as well.

Another suggestion for you: maybe you can take the class again. That's what I plan to do about half a year from now; as I mentioned, I don't understand every single nuance of the class, but I think understanding will come by my 6th or 7th time through the material.

Arthur Chan

[1] To me, this makes a lot of sense for both the course's preparers and the students, because students can take more time to really go through the homework, and the course's preparers can monetize their class indefinitely.

History:

(20170410) First writing
(20170411) Fixed typos. Smoothed wording.
(20170412) Fixed typos
(20170414) Fixed typos.

If you like this message, subscribe to the Grand Janitor Blog's RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.


AIDL Pinned Post V2

(Just want to keep a record for myself.)

Welcome! Welcome! We are the most active FB group for Artificial Intelligence/Deep Learning, or AIDL. Many of our members are knowledgeable, so feel free to ask questions.

We have a tied-in newsletter: https://aidlweekly.curated.co/

and a YouTube channel, with a (kinda) weekly show, "AIDL Office Hour":
https://www.youtube.com/channel/UC3YM5TEbSqIpFGH85d6gjKg

Posting rules are strict at AIDL: your post has to be relevant, accurate and non-commercial (FAQ Q12). Commercial posts are only allowed on Saturday. If you don't follow this rule, you might be banned.

FAQ:

Q1: How do I start with AI/ML/DL?
A: Step 1: Learn some math and programming.
Step 2: Take some beginner classes, e.g. try out Ng's Machine Learning.
Step 3: Find some problems to play with; Kaggle has tons of such tasks.
Iterate the above 3 steps until you get bored. From time to time you can share what you learn.

Q2: What is your recommended first class for ML?
A: Ng's Coursera class; the Caltech edX class and the UW Coursera class are also pretty good.

Q3: What are your recommended classes for DL?
A: Go through at least 1 or 2 ML classes, then go for Hinton's, Karpathy's, Socher's, Larochelle's and de Freitas'. For deep reinforcement learning, go with Silver's and Schulman's lectures. Also see Q4.

Q4: How do you compare different resources on machine learning/deep learning?
A: (Shameless self-promotion) Here is an article, "Learning Deep Learning – Top-5 Resources", written by me (Arthur) on different resources and their prerequisites. I refer to it a couple of times at AIDL, and you might find it useful: http://thegrandjanitor.com/…/learning-deep-learning-my-top…/ . Reddit's machine learning FAQ has another list of great resources as well.

Q5: How do I use machine learning technique X with language L?
A: Google is your friend. You might also see a lot of us referring you to Google from time to time. That's because your question is best solved by Google.

Q6: Explain concept Y. List 3 properties of concept Y.
A: Google. Also, we don't do your homework. If you couldn't Google the term, though, it's fair to ask questions.

Q7: What is the most recommended resource on deep learning for computer vision?
A: cs231n; the 2016 edition is the one I recommend. Most other resources you will find are derivative in nature or have glaring problems.

Q8: What are the prerequisites of Machine Learning/Deep Learning?
A: Mostly linear algebra and Calculus I-III. In linear algebra, you should be good at eigenvectors and matrix operations. In calculus, you should be quite comfortable with differentiation. You might also want a primer on matrix differentiation before you start, because it is a topic seldom touched in an undergraduate curriculum.
Some people will also argue that topology is important, and that a physics or biology background could help. But they are not crucial to start.
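
As an example of the kind of matrix-calculus identity that rarely shows up in an undergraduate course but is used constantly in ML derivations (standard results, stated here only for reference):

```latex
\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{a}^{\top}\mathbf{x}\right) = \mathbf{a},
\qquad
\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{x}^{\top} A\,\mathbf{x}\right) = \left(A + A^{\top}\right)\mathbf{x}
```

Both appear in the Matrix Cookbook mentioned above.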

Q9: What are the cool research papers to read in Deep Learning?
A: We think songrotek's list is pretty good: https://github.com/son…/Deep-Learning-Papers-Reading-Roadmap. Another classic is deeplearning.net's reading list: http://deeplearning.net/reading-list/.

Q10: What is the best/most recommended language for Deep Learning/AI?
A: Python is usually cited as a good language because it has the best library support. Most ML libraries for Python link to C/C++, so you get the best of both flexibility and speed.
Others also cite Java (deeplearning4j), Lua (Torch), Lisp, Golang and R. It really depends on your purpose; practical concerns such as code integration and your familiarity with a language usually dictate your choice. R deserves special mention because it is widely used in related fields such as data science, and it is gaining popularity.

Q11: I am bad at Math/Programming. Can I still learn A.I/D.L?
A: Mostly you can tag along, but at a certain point, if you don't understand the underlying math, you won't be able to understand what you are doing. The same goes for programming: if you never implement an algorithm, or trace through one yourself, you will never truly understand why it behaves a certain way.
So what if you feel you are bad at math? Don't beat yourself up too much. Take Barbara Oakley's class "Learning How to Learn"; you will learn how to approach tough subjects such as mathematics, physics and programming.

Q12: Would you explain more about AIDL's posting requirements?
A: This is a frustrating topic for many posters, despite their good intentions. I suggest you read through this blog post http://thegrandjanitor.com/2017/01/26/posting-on-aidl/ before you post anything.



Thoughts From Your Humble Administrators – Feb 5, 2017

Last week:

Libratus was the biggest news item this week. In retrospect, it's probably as huge as AlphaGo. The surprising part is that it has nothing to do with deep learning. So it's worth our time to look at it closely.

  • We learned that Libratus crushed human professional players in heads-up no-limit hold'em (NLH).  How does it work?  Perhaps the Wired and the Spectrum articles tell us the most.
    • First of all, NLH is not as widely covered as Go, but it is interesting because people play it for real money.  And we are talking about big money: the World Series of Poker holds a yearly tournament in which all top-10 finishers become instant millionaires.  Among pros, hold'em is known as the "Cadillac of Poker", a term coined by Doyle Brunson, which implies that mastering hold'em is the key skill in poker.
    • Limit hold'em is generally thought of by pros as a "chess"-like game.  Polaris from the University of Alberta bested humans with three wins back in 2008.
    • NLH had not been cracked until now, so let's think about how you would model NLH in general.  In NLH, the number of game states is 10^165, close to Go's.  Since the game has only 5 streets, you easily get into what game players call the end-game.  It's just that, given the large number of possible bet sizes, the game states blow up very easily.
    • So at run-time you can only evaluate a portion of the game tree.  Since betting is continuous, the bets are usually discretized so that the evaluation is tractable with your compute; this is known as "action abstraction", and an actual bet size outside the abstraction is usually called an "off-tree" bet.  These off-tree bets are then translated to in-tree actions at run-time, which is known as "action translation" (see the sketch after this list).  Of course, there are different ways of evaluating the tree.
    • Now, what is the merit of Libratus; why does it win?  There seem to be three distinct factors, the first two of which are about the end-game.
      1. There is a new end-game solver (http://www.cs.cmu.edu/~noamb/papers/17-AAAI-Refinement.pdf) which features a new criterion for evaluating the game tree, called Reach-MaxMargin.
      2. Also in the paper, the authors suggest a way to solve an end-game given the player's actual bet size, so they no longer use action translation to map an off-tree bet into the game abstraction.  This considerably reduces regret.
    • What is the third factor?  As it turns out, in past human-computer matches, humans were able to exploit the machine easily by noticing its betting patterns.  So the CMU team used an interesting strategy: every night, the team would manually tune the system so that repeated betting patterns were removed.  That confused the human pros, and Dong Kim, the best player against the machine, felt like he was dealing with a different machine every day.
    • These seem to be the reasons why the pros were crushed.  Notice that this was a rematch: the pros won by a small margin back in 2015, but the result this time shows that there is a 99.8% chance the machine is beating the humans.  (I am handwaving here because you need to talk about big-blind sizes to talk about winnings; unfortunately I couldn't look it up.)
    • To me, this Libratus win comes very close to saying that a computer can beat the best heads-up tournament players.  But poker players will tell you the best players are cash-game players, and heads-up play is not representative, because the bread-and-butter games are usually 6- to 10-player games.  So we will probably hear more about poker bots in the future.
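
To make "action abstraction" and "action translation" a bit more concrete, here is a toy sketch of the idea (my own illustration in Python, not Libratus code; real systems use more careful, often probabilistic mappings such as the pseudo-harmonic one):

```python
# Toy action translation: snap an "off-tree" bet to the nearest bet size
# in a fixed action abstraction.  Only for illustration.
POT = 100
ABSTRACTION = [0.5 * POT, 1.0 * POT, 2.0 * POT, 4.0 * POT]   # allowed bet sizes

def translate(off_tree_bet):
    """Map an arbitrary bet to the closest in-tree action."""
    return min(ABSTRACTION, key=lambda b: abs(b - off_tree_bet))

for bet in [37, 120, 333]:
    print(f"opponent bets {bet} -> treated as a bet of {translate(bet):.0f}")
```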

Anyway, that’s what I have this week.  We will resume our office hour next week.  Waikit will tell you more in the next couple of days.



Thoughts From Your Humble Administrators – Jan 29, 2017

This week at AIDL:

Must-read: I would read the Stanford article and the Deep Patient paper in tandem.



Facebook Artificial Intelligence/Deep Learning Group @ 1000 Members

I (Arthur) always remember comp.speech and comp.speech.research, where I was able to cross paths with many great developers and researchers. Another fond memory of mine related to discussion forums was CMU Sphinx, a large-vocabulary speech recognizer, where many users later became very advanced and spawned numerous projects. You always learn something new from people around the world. That was the reason why the Internet is really, really great.

Fast-forward to now: wow, searching for a solid discussion forum on deep learning is hard. Many of them, on Facebook or LinkedIn, are really spammy. I tried Plus for a while, but for the most part no one dug my messages. (My writing style? 🙂 ) So when Waikit Lau, an old friend and veteran startup investor/mentor/helper, asked me to help admin the group, I was more than happy to oblige.

Yes, you heard it right: the Artificial Intelligence & Deep Learning group is a curated discussion forum. We reject spammers and ads, and only blog posts which are relevant to us are allowed.

Alright, everyone does it, so I might as well:
WE ARE 1000 MEMBERS STRONG!
WE ARE 1000 MEMBERS STRONG!
WE ARE 1000 MEMBERS STRONG!

(Just kidding, we are not really chasing a bigger group, but more quality discussion.)

So come join us. We are very happy to chat with you about deep learning.

Arthur and Waikit

You might also like Learning Machine Learning,  Some Personal Experience and Learning Deep Learning, My Top-5 List.



Learning Deep Learning – My Top-Five List

Many people have been nagging me to write a beginner's guide to deep learning. Geez, that's a difficult task: there are so many tutorials, books and lectures to start with, and the best way to start depends heavily on your background, knowledge and skill set. So it's very hard to give a simple guideline.

In this post, I will do something less ambitious: I gather what I think are the top-5 most important resources to get you started on learning deep learning. Check out the "Philosophy" section on why this list is different from other lists you have seen.

Philosophy

There are many lists of deep learning resources. To name a few: the "Awesome" list and the Reddit machine learning FAQ. I think they are quality resources, and it's fair to ask why I started "Top-Five" a year ago.

Unlike all the deep learning resource lists you have seen, "Top-Five" is not meant to be an exhaustive list. Rather, it assumes you have only a limited amount of time to study and gather resources while learning deep learning. For example, suppose you like to learn through online classes. Each machine/deep learning class would likely take you 3 months to finish, so it would take a year to finish them all. As a result, having priorities is good. For instance, without any guidance, reading Goodfellow's Deep Learning would confuse you; a book such as Bishop's Pattern Recognition and Machine Learning (PRML) would likely be a better "introductory book".

Another difference between the Top-Five list and other resource lists is that the resources are curated. Unless specified otherwise, I have finished the material myself: for classes I have at least audited the whole lecture series once, and for books I have at least browsed through once. In a way, this is more an "Arthur's list" than a pile of disorganized links. You will also see a short commentary on why (IMO) each resource is useful.

Which Top-Five?

As the number of sections in my list grows, it's fair to ask which resources you should spend time on first. That's a tough question, because people differ in how they prefer to learn. My suggestion is to start with the following:

  1. Taking classes – by far, I think it is the most effective way to learn.  Listening plus doing homework usually teaches you a lot.
  2. Book reading – this is important because lectures usually only summarize a subject.  Only when you read through a subject do you start to gain a deeper understanding.
  3. Playing with frameworks – this allows you to actually create some deep learning applications, and turn some of your knowledge into real life.
  4. Blog reading – this is useful, but you had better know which blogs to read (look at the section "Blogs You Should Read").  In general, there are just too many blog writers these days, and many have only a murky understanding of the topic.  Reading those would only make you more confused.
  5. Joining forums and asking questions – this is where you can dish out some of your ideas and ask for comments.  Once again, the quality of the forum matters, so take a look at the section "Facebook Forums".

Lectures/Courses

Basic Deep Learning (Also check out “The Basic-Five“)

These are the must-take courses if you want to learn the basic jargon of deep learning. Ng's, Karpathy's and Socher's classes teach you basic concepts, but with a theme of building applications. Silver's class links deep learning concepts with reinforcement learning. After these classes, you should be able to talk about deep learning well and work on some basic applications.

  1. Andrew Ng’s Coursera Machine Learning class
    • You need to walk before you run.   Ng’s class is the best beginner class on machine learning in my opinion.  Check out this page for my review.
  2. Andrew Ng’s deeplearning.ai Specialization
  3. Fei-Fei Li and Andrew Karpathy’s Computer Vision class (Stanford cs231n 2015/2016)
    • I listened through the lectures once.  Many people just call this Karpathy's class, but it is also co-taught by another experienced graduate student, Justin Johnson.  For the most part this is the class for learning CNN; it also brings you up to date on more difficult topics such as image localization, detection and segmentation.
  4. Richard Socher's Deep Learning and Natural Language Processing (Stanford cs224d)
    • I listened to the whole lecture series once; the first few lectures were very useful when I tried to understand RNN and LSTM.  This might also be the best set of lectures for learning Socher's recursive neural networks.  Compared to Karpathy's class, Socher places more emphasis on mathematical derivation.  So if you are not familiar with matrix differentiation, this would be a good class to start with and get your feet wet.
  5. David Silver’s Reinforcement Learning
    • This is a great class taught by the main programmer of AlphaGo.  It starts from the basics of reinforcement learning, such as DP-based methods, then proceeds to more difficult topics such as Monte Carlo and TD methods, as well as function approximation and policy gradients.  It takes quite a bit of understanding, even if you already have a background in supervised learning.  As RL is used in more and more applications, this class should be a must-take for all of you.

You should also consider:

  • Fast.ai‘s Deep Learning for Coders
    • a class which has generally good reviews.  I would suggest you read Arvind Nagaraj's post, which compares deeplearning.ai and fast.ai.
  • Theories of Deep Learning
    • Or Stanford Stat 385, which is one of the theory classes on deep learning.
  • Hugo Larochelle’s Neural Network class
    • by another star-level innovator in the field.  I have only heard Larochelle lecture in a deep learning class, but he is more succinct and to the point than many.
  • MIT Self Driving 6.S094
    • See the description in the session of Reinforcement Learning.
  • Nando de Freitas' class on Machine/Deep Learning
    • I haven't had a chance to go through this one, but it is for both beginners and more advanced learners.  It covers topics such as reinforcement learning and siamese networks.  I also think this is the class to take if you want to use Torch as your deep learning language.
Intermediate Deep/Machine Learning

The intermediate courses are meant to be the more difficult sets of classes. They are much harder to finish, and math is necessary. There are also many confusing concepts, even if you already have a Master's.

  1. Hinton's Neural Networks for Machine Learning
    •  While the topics are advanced, Prof. Hinton's class is probably the one which can teach you the most about the philosophical differences between deep learning and general machine learning.  The first time I audited the class, in October 2016, his explanation of models based on statistical mechanics blew my mind.  I finished the course around April 2017, which resulted in a popular review post.  Unfortunately, due to the difficulty of the class, it is ranked lower in this list.  (It was ranked 2nd, then 4th in the Basic Five, but I found that it requires deeper understanding than Karpathy's, Socher's and Silver's.  Later on, when deeplearning.ai came out, I shifted Prof. Hinton's course to the Intermediate classes.)
  2. Daphne Koller’s Probabilistic Graphical Model
    • If you want to understand tougher concepts in models such as DBN, you want to have some background in Bayesian networks as well.  If that's the route you like, Koller's class is for you.  But this class, just like Hinton's NNML, is notoriously difficult and not for the faint of heart: you will be challenged on probability concepts (Course 1), graph theory and algorithms (Course 2) and parameter estimation (Course 3).
Reinforcement Learning

Reinforcement learning has a deep history of its own, and you can think of it as having heritage from both computer science and electrical engineering.

My understanding of RL is fairly shallow, so I can only tell you which are the easier classes to take, but all of these classes are more advanced. Georgia Tech CS8803 should probably be your first. Silver's is fun, and it's based on Sutton's book, but be ready to read the book in order to finish some of the exercises.

  1. Udacity’s Reinforcement Learning 
    • This is a class jointly published with Georgia Tech, and you can take it as the advanced course CS8803.  I took Silver's class first, but I found that this class provides a non-deep-learning take on the material, which is quite refreshing if you are starting out in reinforcement learning.
  2. David Silver’s Reinforcement Learning
    • See description in the “Introductory Deep Learning” section.
  3. MIT Self Driving 6.S094
    • A specialized class on self-driving.  The course is mostly computer vision, but there is one super-entertaining exercise on self-driving, which most likely you will want to solve with RL.  (Here is a quick impression of the class.)

You should also consider:

I heard good things about them……
  • Oxford Deep NLP 2017 
    • This is perhaps the second class on deep learning for NLP.  I found the material interesting because it covers things that weren't covered in Socher's class.  I haven't taken it yet, so I will comment later.
  • Statistical Computing by Nicholas Zabara  
    • looks super interesting, and most of the material is actually ML-based.
  • CMU CS11-747 Neural Networks for NLP
    •  A great set of lectures by Graham Neubig.  Neubig has written a few useful tutorials on DL for NLP, so I add his class as a promising candidate here as well.
  • NYU Deep Learning class from 2014
    • by Prof. Yann LeCun.  To me this is an important class, of similar importance to Prof. Hinton's, mostly because Prof. LeCun was one of the earliest experimenters with backprop and SGD.  Unfortunately the NYU lectures were removed, but do check out the slides.
  • Also from Prof. Yann LeCun, Deep Learning inaugural lectures.
  • Berkeley's Seminar on Deep Learning: by Prof. Ruslan Salakhutdinov, an early researcher on unsupervised learning.
  • University of Amsterdam Deep Learning
    • If you have already audited cs231n and cs224d, perhaps the material here is not too new, but I found it useful to have a second source when looking at some of the material.  I also like its presentation of back-propagation, which is more mathematical than in most beginner classes.
  • Special Topics in Deep Learning
    • I found it a great resource if you want to drill into more esoteric topics in deep learning.
  • Deep Learning for Speech and Language
    • out of my own curiosity about speech recognition.  This course is perhaps the only one I can find on DL for ASR.  If you happen to stumble on this paragraph, I'd say most software you find online is not really applicable in real life.  The only exceptions are discussed in this very old article of mine.
For reference
Great Preliminaries
More on Basic Machine Learning (Unsorted)
More AI than Machine Learning (Unsorted)
More about the Brain:

I don't have much, but you can take a look at my other list on Neuroscience MOOCs.

Books

I wrote quite a bit on the Recommended Books page. In a nutshell, I found that classics such as PRML and Duda and Hart are still must-reads in the world of deep learning. But if you still want a list, alright then……

  1. Michael Nielsen's Neural Networks and Deep Learning: or NNDL, highly recommended by many.  This book is very suitable for beginners who want to understand the basic insights of simple feed-forward networks and their setups.  Unlike most textbooks, it doesn't go through the math until it has given you some intuition.  While I only went through it recently, I highly recommend all of you to read it.  Also see my take on the book.
  2. PRML : I love PRML!  Do go to read my Recommended Books Page to find out why.
  3. Duda and Hart:  I don’t like it as much as PRML, but it’s my first machine learning Bible.  Again, go to my Recommended Books Page to find out why.
  4. The Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville:  This is the book for deep learning, but it's hardly for beginners.  I recently browsed through the book; here is a quick impression.
  5. Natural Language Understanding with Distributed Representation by Kyunghyun Cho.  This is mainly for NLP people, but it's important to note how differently NLP is seen from a deep learning point of view.

Others: Check out my Recommended Books page. For beginners, I found Mitchell's and Domingos' books quite interesting.

Frameworks

  1. Tensorflow : the most popular, though it can be daunting to install; also check out TFLearn.  Keras has become the de facto high-level layer lately.
  2. Torch :  very easy to use even if you don't know Lua.  It also leads you to great tutorials.  Also check out PyTorch.
  3. Theano : the grandfather of deep learning frameworks; also check out Lasagne.
  4. Caffe : probably the fastest among the generic frameworks.  It takes a while to understand the setup/syntax.
  5. Neon : the very speedy neon is optimized for modern cards.  I don't have a benchmark comparing Caffe and Neon yet, but its MNIST training feels very fast.

Others:

  • deeplearning4j: obviously in Java, but I hear it has great support for enterprise machine learning.

Tutorials

  1. Theano Tutorial:  a great set of tutorials, and you can run them on a CPU.
  2. Tensorflow Tutorial : a very comprehensive set of tutorials.  I don't like it as much as Theano's because some tasks require compilation, which can be fairly painful.
  3. char-rnn:  not exactly a tutorial, but if you want to have fun with deep learning, you should train at least one char-rnn.  Note that a word-based version is available.  The package has also been optimized as torch-rnn.  I think char-rnn is also great starting code for intermediate learners who want to learn Torch.
  4. Misc: generally, running the examples of a package can teach you a lot.  Let's say this is one item.

Others: I also found the Learning Guide from YerevaNN's lab to be fairly impressive. It has a ranked resource list on several different topics, which is similar in spirit to my list.

Mailing Lists

  1. (Shameless Plug) AIDL Weekly  Curated by me and Waikit Lau, AIDL Weekly is the tied-in newsletter of the AIDL Facebook group. We provide in-depth analysis of the week's events in AI and deep learning.
  2. Mapping Babel  Curated by Jack Clark.  I find it entertaining and well curated.  Clark is more in the journalism space, and I find his commentary thoughtful.
  3. Data Machina  This is a links-only letter, but the links are of good quality.

Of course, there are more newsletters than these three, but I don't normally recommend them. One reason is that many "curators" don't always read the original sources before they share the links, which sometimes inadvertently spreads fake news to the public. In Issue #4 of AIDL Weekly, I described one such incident. So you are warned!

Facebook Forums

That's another category I am going to plug shamelessly. It has to do with the fact that most Facebook forums have too much noise and the administrators pay too little attention to the group.

  1. (Shameless Plug) AIDL  This is the forum curated by me and Waikit.  We like our forum because we actively curate it, delete spam and facilitate discussion within the group.  As a result it has become one of the most active groups, with 10k+ members.  As of this writing, we have a tied-in mailing list as well as a weekly show.
  2. Deep Learning  Deep Learning is comparable in size to AIDL, but less active, perhaps because the administrators post in Korean.  I still find some of the links interesting, and I used the group a lot before administering AIDL.
  3. Deep Learning/AI  Curated by Sid Dharth and Ish Girwan.  DLAI follows a very similar philosophy, and Sid controls posting tightly.  I think it will be one of the up-and-coming groups next year.
  4. Strong Artificial Intelligence  This is less about deep learning and more about AI in general.  It is perhaps the biggest FB group on AI; its membership has stabilized, but posting is solid and there is still some life in the discussion.  I like the more philosophical end of the posts, which AIDL usually refrains from.

Non-trivial Mathematics You should Know

Due to popular demand, in this section I will say a bit about the most relevant math you need to know. Everyone knows that math is useful, and yes, stuff like calculus, linear algebra, probability and statistics is super useful too. But I think those are too general, so I will name several specific topics which turn out to be very useful, but are not very well taught in school.

  1. Bayes' Theorem:  Bayes' theorem is important not only as a simple rule which you will use all the time.  The high-school version usually just asks you to reverse the order of two probabilities.  But once it is applied in reasoning, you need to be very clear about how to interpret terms such as likelihood and prior.  It is also very important to know what the term Bayesian really means, and why people see it as better than frequentist.  Without knowing Bayes' rule, all of this thinking will leave you very confused.
  2. Properties of the Multivariate Gaussian Distribution:  The one-dimensional Gaussian distribution is an interesting mathematical quantity.  If you try to integrate it, it will be one of the first integrals you cannot do in a trivial way; that is the point where you want to learn the probability integral and how it is evaluated.  Once you need to work with the multivariate Gaussian, you will need further properties such as diagonalizing the covariance matrix and all that jazz.  That is non-trivial math, but if you master it, it will help you work through the more difficult problems in PRML.
  3. Matrix differentiation:  You can differentiate all right, but once it comes to vectors and matrices, even the notation seems different from your college calculus.  No doubt, matrix differentiation is seldom taught in school, so always refer to a useful guide such as the Matrix Cookbook, and you will be less confused.  (The Matrix Reference Manual is also good.)
  4. Calculus of Variations:  If you want to find the value which optimizes a function, you use calculus; if you want to find the function or path which optimizes a functional, you use the calculus of variations.  For the most part, the Euler-Lagrange equation is what you need.
  5. Information theory:  Information theory is widely used in machine learning, and more importantly its style of reasoning can be found everywhere.  E.g. why do you optimize cross-entropy instead of square error?  It is not only that square error over-penalizes incorrect outputs; you can also think of cross-entropy as learning from the surprise of a mistake.  (See the short note after this list.)
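
To make the cross-entropy point concrete: for a binary label $y \in \{0,1\}$ and a predicted probability $\hat{y}$, the cross-entropy loss is

```latex
H(y, \hat{y}) = -\,y \log \hat{y} \;-\; (1-y)\log(1-\hat{y})
```

which grows without bound as the prediction becomes confidently wrong; that is the "surprise" reading above. (A standard identity, not specific to any class listed here: with a sigmoid output, the gradient of this loss with respect to the pre-activation is simply $\hat{y} - y$, which is one common argument for preferring it over square error.)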

Blogs You Should Read

  1. Chris Olah's Blog  Olah has a great ability to explain very difficult mathematical concepts to a lay audience.  I have benefited greatly from his articles on LSTM and computational graphs.  He also made me realize that learning topology is fun and profitable.
  2. Andrej Karpathy's Blog  If you haven't read "The Unreasonable Effectiveness of Recurrent Neural Networks", you should.  Karpathy's articles show both great enthusiasm for the topic and a very good grasp of the principles.  I also like his article on reinforcement learning.
  3. WildML  Written by Denny Britz.  He is perhaps less well known than either Olah or Karpathy, but he explains many topics well.  For example, I enjoyed his explanation of GRU/LSTM a lot.
  4. Tombone's Computer Vision Blog  Written by Tomasz Malisiewicz.  This was one of the first blogs I read about computer vision.  Malisiewicz has great insight into machine learning algorithms and computer vision, and many of his articles give insightful comments on the relationships between ML techniques.
  5. The Spectator  Written by Shakir Mohamed.  This is my go-to page on mathematical statistics as well as the theoretical basis of deep learning techniques.  Check out his thoughts on what makes an ML technique deep, as well as his tricks in machine learning.

That's it for now. Check out this page from time to time; I might update it with more content. Arthur

This post is first published at http://thegrandjanitor.com/2016/08/15/learning-deep-learning-my-top-five-resource/.

You might also like Learning Machine Learning,  Some Personal Experience.


 

(20160817): I changed the title a couple of times, because this is more like a top-5 list of lists. So I retitled the post "top-five resource", then "top-five", and now I have settled on "top-five list", which is a misnomer but close enough.

(20160817): Fixed couple of typos/wording issues.

(20160824): Add a section on important Math to learn.

(20160826): Fixed Typos, etc.

(20160904): Fixed Typos

(20161002): Changed the section on books to link to my article on NNDL.   Added a section on must-follow blogs.

(20170128): As I went deeper into Socher's lectures, I boosted his class's ranking to number 3.  I also moved Karpathy's lectures to rank number 2.  I think Silver's class is important but the material is too advanced, and perhaps less important for deep learning learners.  (It is more about reinforcement learning when you look at it closely.)  Hinton's class is absolutely crucial but it requires more mathematical understanding than Karpathy's class.  Thus the ranking.

I also added 2 more classes (NYU, MIT) to check out and 2 more as references (VTech and UA).

(20161207): Added descriptions of Li, Karpathy and Johnson's class.  Added a description of Silver's class.

(20170310): Add “Philosophy”, “Top-Five of Top-Five”, “Top-Five Mailing List”, “Top-Five Forums”.  Adjusted description on Socher’s class, linked a quick impression on GoodFellow’s “Deep Learning”.

(20170312): Add Oxford NLP class, Berkeley’s Deep RL into the mix.

(20170319): Add the Udacity’s course into the mix.  I think next version I might have a separate section on reinforcement learning.

(20170326): I did another rewrite over the last two weeks, mainly because many new lectures were released during Spring 2017. Here is a summary:

  •  I separated the "Courses/Lectures" section into two tracks: "Basic Deep Learning" and "Reinforcement Learning". It's more a decluttering of links. I also believe reinforcement learning should be a separate track because it requires more specialized algorithms.
  • On the "Basic Deep Learning" track, the ranking has changed. It was Ng's, cs231n, cs224d, Hinton's, Silver's; now it is Ng's, cs231n, cs224d, Silver's, Hinton's. As I went deeper into Hinton's class, I found that it has more difficult concepts. Both Silver's and Hinton's classes are more difficult than the first 3, IMO.
  • I also give a basic description of the U. of Amsterdam's class. I don't know much about it yet, but it's refreshing because it gives a different presentation from the "Basic 5" I recommend.

(20170412): I finished Hinton's NNML, and added Berkeley CS294-131 into the mix.

(20170620): Links up “Top-5” List with “Basic 5”.  Added a list of AI, added link to my MOOC list.

(20170816): Added deeplearning.ai into Basic 5.  It becomes the new official recommendation to AIDL newcomers.

(20171126): Added several ML classes. Added Stats 385 into the considered list.

Appendix:
Links to process: http://ai.berkeley.edu/lecture_videos.html


How To Get Better At X (X = Programming, Math, etc ) ……

Here are some of my reflections on how to improve at work.

So how would you get better at X?

X = Programming

  • Trace the code of smart programmers and learn their tricks,
  • Learn how to navigate codebases with your favorite editor,
  • Learn algorithms better, learn math better,
  • Join an open source project; first contribute, then see if you can maintain it,
  • Always be open to learn a new language.

X = Machine Learning

X = Reading Literature

  • Read every day; make it a habit.
  • Browse arxiv's summaries; treat them as more than daily news.
  • Ask questions on social networks, Plus or Twitter, and listen to other people,
  • Teach people a concept; it makes you consolidate your thoughts and helps you realize when you don't really know something.

X = Unix Administration

  • Google is your friend.
  • Listen to experienced administrators; their perspective can be very different – e.g. admins usually care about security more than you do.  Listen to them and think about whether your solution incorporates their concerns.
  • Every time you solve a problem, put it in a notebook.  (Something Tadashi Yonezaki at Scanscout taught me.)

X = Code Maintenance

  • Understand the code-building process; see it as part of your job to learn it intimately,
  • Learn multiple types of build systems; learn autoconf, cmake, bazel.  Learn them, because knowing them lets you compile, and eventually really hack, a codebase.
  • Learn version control; learn Git.  Don't say you don't need it; that would only slow you down.
  • Learn multiple types of version control systems: CVS, SVN, Mercurial and Git.  Learn why some of them are bad (CVS) and some are better but still bad (SVN).
  • Send out a mail whenever you make a release, and make sure you communicate clearly what you plan to do.

X = Math/Theory

  • Focus on one topic.  For example, I am very interested in machine learning these days, so I am reading Bishop.
  • Don't be cheap; buy the bibles of the field.  Get Thomas Cover if you are studying information theory.  Read Serge Lang on linear algebra.
  • Solve one problem a day, maybe more if you are bored and sick of raising dumbbells.
  • Re-read the formulation of a certain method.  Re-read a proof.  Look up the different ways people formulate and prove something.
  • To rephrase Ian Stewart: you always look silly in front of your supervisor, but remember that once you study at the graduate level, you cannot be too stupid.  So what learning math/theory takes is gumption and perseverance.

X = Business

  • Business has its own mechanisms, so don't dismiss it as fluffy before you learn the details,
  • Listen to your BD, your sales and your marketing friends.  They are important colleagues and friends.

X = Communication

  • Stand in other people's shoes, that is to say: be empathetic,
  • I think it was Atwood who said (to rephrase): it's easy to be empathetic toward people in need, but it's difficult to be empathetic toward annoying and difficult people.  Ask yourself these questions:
    • Why would a person become difficult and annoying in the first place?  Do they have a reason?
    • Are you big enough to help these difficult and annoying people, even if they could be toxic?
  • That said, communication is a two-way street, and there are indeed hopeless situations.  Take them in stride, and spend your time helping friends and colleagues who are in need.

X = Anything

Learning is a life-long process, so be humble and ready to be humbled.

Arthur

 

 

 


Learning Machine Learning – Some Personal Experience

Introduction

Some context: a good friend of mine, Waikit Lau, started a Facebook group called "Deep Learning". It is a gathering place for many deep learning enthusiasts around the globe, and so far it is almost 400 members strong. Waikit kindly gave me admin rights to the group; I have been able to interact with all the members since, and have had a lot of fun.

When asked "Which topics would you like to see in Deep Learning?", surprisingly enough, "Learning Deep Learning" was the topic most members wanted to see more of. So I decided to write a post summarizing my own experience of learning deep learning, and machine learning in general.

My Background

Not everyone could predict the advent of deep learning, and neither did I. I was trained as a specialist in automatic speech recognition (ASR), with half my time focused on research (at HKUST, CMU, BBN) and the other half on implementation (Speechworks, CMUSphinx). That is reflected in my current role, Principal Speech Architect, in which my research-to-implementation split is around 50-50. If you are being nice to me, you could say I was quite familiar with standard modeling in speech recognition, with passable programming skills. Perhaps what I gained from ASR is more an understanding of languages and linguistics, which I would describe as cool party tricks. But real-life speech recognition uses only a little linguistics. [1]

To be frank, though, while ASR used a lot of machine learning techniques such as GMM, HMM and n-grams, my skills in general machine learning were clearly lacking. For a while, I didn't have an acute sense of dangerous issues such as over- and under-fitting, nor was I able to foresee the rise of deep neural networks in so many different fields. So when my colleagues started to tell me, "Arthur, you've got to check out this Microsoft work using deep neural networks!", I was mostly suspicious at the time and couldn't really fathom its importance. Obviously I was too specialized in ASR: if I had ever given deeper thought to the "universal approximation theorem", the rise of DNN would have made a lot of sense to me. I can only blame myself for my ignorance.

That was a long digression. So, long story short: I woke up about 4 years ago and said "screw it!" I decided to "empty my cup" and learn again. I decided to learn everything I could about neural networks, and machine learning in general. So this article is about some of the lessons I learned.

Learning The Jargons

If you are an absolute beginner, the best way to start is to take a good online class. For example, Andrew Ng's machine learning class (my review) would be a very good place to start, because Ng's class is generally known to be gentle to beginners.

Ideally you want to finish the whole course; from there you will have some basic understanding of what you are doing. For example, you want to know: "Oh, if I want to make a classifier, I need a training set and a test set, and it's absolutely wrong for them to be the same." Now, this is a rather deep thought, and there are actually people I know who take the shortcut of using the training set as the test set. (Bear in mind, they or their loved ones suffer eventually. 🙂 ) If you don't know anything about machine learning, learning how to set up data sets is the absolute minimum you want to learn.
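
Here is a minimal illustration of the point, a sketch of my own using scikit-learn (the dataset and parameters are arbitrary choices, not anything from Ng's course):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out data the model never sees during training; scoring on the
# training set itself would give a misleadingly optimistic number.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy :", clf.score(X_test, y_test))
```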

You also want to know some basic machine learning methods such as linear regression, logistic regression and decision trees. Most methods you will use in practice require these techniques as building blocks. E.g. if you don't really know logistic regression, understanding neural networks will be much tougher. If you don't understand linear classifiers, understanding support vector machines will be tough too. And if you have no idea what a decision tree is, no doubt you will be confused by random forests.

Learning basic classifiers also equips you with an intuitive understanding of core algorithms; e.g. you will need to know stochastic gradient descent (SGD) for many things you do with DNN.
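
For instance, the core of SGD for logistic regression fits in a few lines of NumPy. This is my own toy sketch (random synthetic data, arbitrary learning rate), just to show the shape of the loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points, 2 features, labels from a simple linear rule
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for i in rng.permutation(len(X)):              # visit one example at a time
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # predicted probability
        grad = p - y[i]                            # gradient of cross-entropy wrt the logit
        w -= lr * grad * X[i]                      # stochastic gradient step
        b -= lr * grad

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print("training accuracy:", np.mean(pred == y))
```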

Once you have gone through the first class, there are two things you want to do: one is to actually work on a machine learning problem; the other is to learn more about certain techniques. So let me split them into two sections:

How To Work On Actual Machine Learning Problems

Where Are The Problems?

If you are still in school and specialize in machine learning, chances are you are funded by an agency, so more than likely you already have a task. My suggestion is to learn your own problem as deeply as you can, and make sure you master all the latest techniques first, because that will help your daily job and career.

On the other hand, what if you did not major in machine learning? For example, what if you were an experienced programmer in the first place, and have now shifted your attention to machine learning? The simple answer to that is Kaggle. Kaggle is a multi-purpose venue where you can learn and compete in machine learning. You can also start from basic tasks such as MNIST or CIFAR-10 to first hone your skills.

Another good source of basic machine learning tasks is the tutorials of machine learning toolkits. For example, Theano's deeplearning.net tutorial was my first taste of MNIST; from there I also followed the tutorial to train the IMDB sentiment classifier as well as the polyphonic music generator.

My only criticism of Kaggle is that it lacks the most challenging problems you can find in the field. E.g. at the time when ImageNet was not yet solved, I would have hoped that a large-scale computer vision task would be held at Kaggle. And now, when machine reading is the most acute problem, I would hope there were tasks which everyone in the world would try to tackle.

If you share my concerns, then consider other evaluation sources. In your field, there is bound to be a competition or two held every year. Join them, and make sure you gain experience from those competitions. By far, I think it is the fastest way to learn.

Practical Matter 1 – Linux Skills

For the most part, what I have found tripping up many beginners is Linux skills, especially software installation. For that I recommend you use Ubuntu. Much machine learning software can be installed with a simple apt-get. If you are into Python, try out Anaconda Python, because it will save you a lot of time on software installation.

Also remember that Google is your friend. Before you get frustrated about a certain glitch and give up, always turn to Google and paste your error message to see if you can find an answer. Ask forums if you still can't resolve your issue. Remember, working on machine learning requires a certain problem-solving skill, so don't feel deterred by small things.

Oh, you ask, what if you are using Windows? Nah, switch to Linux; a majority of machine learning tools run on Linux anyway. Many people also recommend Docker. So far I have heard both good and bad things about it, so I can't say whether I like it or not.

Practical Matter 2 – Machines

Another showstopper for many people is compute. I will say, though, that if you are a learner, the computational requirement can be just a simple dual-core desktop with no GPU card. Remember, a lot of powerful machine learning tools were developed before GPU cards became trendy. E.g. libsvm is mostly CPU-based software, and all of Theano's tutorials can be completed within a week on a decent CPU-only machine. (I know because I have done it.)

On the other hand, if you have to do a moderately sized task, then you should buy a decent GPU card: a GTX 980 would be a good choice among consumer cards; for a better-supported workstation-grade card, the Quadro series would be nice. Of course, if you can come up with 5k, then go for a Tesla K40 or K80. The GPU card you use directly affects your productivity. If you know how to build a computer, consider DIYing one. Tim Dettmers has a couple of articles (e.g. here) on how to build a decent machine for deep learning. Though you might never reach the performance of an 8-GPU monster, you will be able to test with pleasure all the standard techniques, including DNN, CNN and LSTM.

Once You Have a Taste

For the most part, your first few tasks will teach you quite a lot of machine learning.   The next problem you will encounter is how to progressively improve your classifier's performance.  I will address that next.

How To Learn Different Machine Learning Methods

As you might already know, there are many ways of learning machine learning.  Some approach it mathematically and try to come up with an analysis of how a machine learning technique works.  That's what you get from formal schooling, say a 2-3 year master's program or the first 3-4 years of a PhD program.

There is nothing wrong with that type of learning.  But machine learning is also a discipline that requires real-life experimental data to confirm your theoretical knowledge, and an overly theoretical approach can sometimes hurt your learning.   That said, you will need both practical and theoretical understanding to work well in practice.

So what should you do?  I would say machine learning should be learned through three aspects:

  1. Running the Program,
  2. Hacking the Source Code,
  3. Learning the Math (i.e. Theory).

Running the Program – A Thinking Man's Guide

In my view, by far the most important skill in machine learning is being able to run a given technique.    Why?  Isn't the theory important too?  Why not first derive an algorithm from first principles and then write our own program?

In practice, I have found that a top-down approach, i.e. going from theory to implementation, can work.   But most of the time you will easily pigeonhole yourself into a particular technique and fail to see the big picture of the field.

Another flaw of the top-down approach is that it assumes you will understand more from principles alone.   In practice, you might need to deal with multiple types of classifiers at work, and it's hard to absorb their principles in a timely manner.    Besides, practical experience of running a technique teaches you aspects of it that theory does not.   For example, have you run libsvm on a million data points, each vector a thousand dimensions?   Then you will notice that the type of algorithm used to find the support vectors makes a huge difference.   You will also appreciate why many practitioners at big companies suggest beginners learn random forests early on: in practice, random forests are the faster and more scalable solution.
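To make the scalability point concrete, here is a minimal sketch (assuming scikit-learn and NumPy are installed; the data is synthetic and the sizes are only illustrative, not a benchmark) that times a kernel SVM against a random forest on the same data:

```python
# Rough timing comparison: kernel SVM vs. random forest on synthetic data.
# Sizes are kept modest so this runs in minutes; increase n_samples to feel
# the scaling difference described above.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=100, random_state=0)

for name, clf in [("kernel SVM", SVC(kernel="rbf")),
                  ("random forest", RandomForestClassifier(n_estimators=100, n_jobs=-1))]:
    start = time.time()
    clf.fit(X, y)
    print(f"{name}: trained in {time.time() - start:.1f}s")
```

Kernel SVM training time grows super-linearly with the number of samples, while the forest trains its trees independently and parallelizes across cores; that is exactly the difference the paragraph above is getting at.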

Let me bite my tongue a little: while this stage is meant to be about practice, you should still try very hard to get a feel for and understand each technique.    If you are new, this is also the stage where you should ask whether general principles such as the bias-variance tradeoff hold in your domain.

What mistakes can a beginner make while using a technique?    I think the biggest is deciding to run something without thinking about why, which is detrimental to your career.    For example, many people read a paper, pick up every technique the authors used, then rush to rerun all those experiments themselves.    While this is often what people do in evaluations and competitions, it is a big mistake in a real industrial scenario.   You should always ask whether a technique will work for you: "Is it accurate but too slow?",  "Is its performance good but its memory footprint too large?",  "Is there a good integration route that fits our existing codebase?"   Those are the tough questions you must answer in practice.

I hope I have given you the impression that being practical in machine learning requires a lot of thinking too.   Only when you master this aspect are you ready to take on the more difficult parts of our work: changing the code, the algorithm, and even the theory itself.

Hacking the Source Code

I believe the more difficult task, after you successfully run an experiment, is to change the algorithm itself.   Mastery of using a program ties mostly to your general Linux skills, whereas mastery of the source code ties to your coding skills in lower-level languages such as C/C++/Java.

Making changes to source code requires the ability to read and understand a code base, a valuable skill in practice.     Reading a code base requires a more specialized type of reading: keep notes on each source file, and make sure you understand each function call, which could go many levels deep.   gdb is your friend, and your reading sessions should combine gdb with eye-balling the source code; setting conditional breakpoints and displaying important variables are the tricks.   At the end, make sure you can spell out the big picture of the program: What does it do?  What algorithm does it implement?  Where are the important source files?   And, more importantly, if you were the one who wrote the program, how would you write it?

What I have said so far applies to all types of programs.  For machine learning, this is the stage where you should focus on just the algorithm; e.g. you can implement SGD for linear regression without understanding the math.    So why would you want to decouple the math from the process?    The reason is that there are always multiple implementations of the same technique, and each implementation can be based on slightly different theory.    Once again, chasing down the theory would take too much time.
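For instance, here is a minimal sketch of that SGD loop (plain NumPy, synthetic data, arbitrary learning rate and epoch count); you can write and run this without ever touching the probabilistic interpretation:

```python
# SGD for linear regression: repeatedly pick one example and nudge the
# weights against the gradient of its squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(len(X)):
        err = X[i] @ w - y[i]       # prediction error on one example
        w -= lr * err * X[i]        # gradient of 0.5 * err**2 w.r.t. w

print(w)  # should end up close to true_w
```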

And do not underestimate the work required to learn the math behind even the simplest technique in the field.   Consider just linear regression, and consider how people have thought of it as 1) optimizing the squared loss, and 2) a maximum likelihood problem [2]; then you will notice it is not as simple a topic as it seemed in Ng's class.   While I love the math, would not knowing it affect your daily work? Not in most circumstances.    On the other hand, there will be situations where you want to focus only on the implementation.    That's why decoupling theory and practice is good thinking.
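For the curious, the link between those two views is a standard one-line derivation: assume Gaussian noise and the log-likelihood collapses into the squared loss.

```latex
% If y_i = w^T x_i + \epsilon_i with \epsilon_i ~ N(0, \sigma^2), then
\log p(y \mid X, w)
  = \sum_{i=1}^{N} \log \mathcal{N}\!\left(y_i \mid w^{\top} x_i, \sigma^{2}\right)
  = -\frac{1}{2\sigma^{2}} \sum_{i=1}^{N} \left(y_i - w^{\top} x_i\right)^{2} + \mathrm{const},
% so maximizing the likelihood in w is exactly minimizing the squared loss.
```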

Learning The Math and The Theory

That brings us to our final stage of learning: the theory of machine learning.  Man, this is such a tough thing to learn, and I don't really do it well myself.   But I can share some of my experience.

First things first: since I am an advocate of bottom-up learning in machine learning, why would we want to learn any theory at all?

In my view, there are several uses of theory:

  1. Simplify your practice: e.g. knowing the direct method for linear regression saves you a lot of typing compared to implementing it with SGD (see the sketch below).
  2. Identify BS: e.g. you have a data set with two classes with priors 0.999:0.001, and your colleague creates a classifier with 99.8% accuracy and decides he has done his job. (Always predicting the majority class already gets 99.9%.)
  3. Identify redundant ideas: e.g. someone in marketing or sales asks why we can't create more features by squaring every element of the data points. You should know how to answer: "That is just polynomial regression."
  4. Have fun with the theory and the underlying mathematics.
  5. Think of new ideas.
  6. Brag to your colleagues and show how smart you are.

(There is no 6.  Don’t try to understand theory because you want to brag.  And for that matter, stop bragging.)
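As an illustration of point 1, here is a minimal sketch (NumPy only, synthetic data): the direct method is a single least-squares solve, compared with the SGD loop sketched earlier.

```python
# Direct (closed-form) least-squares solution for linear regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=1000)

# lstsq solves the least-squares problem for us; it is preferred over
# forming the inverse of X^T X explicitly, for numerical stability.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)
```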

So now we have established that theory can be useful.  How do you learn it?   By far the most important means, I think, are listening to good lectures, reading papers, and actually doing the math.

With lectures, your goal is to gather insight from experienced people.  I would recommend Ng's class first, then Hinton's Neural Networks for Machine Learning.  I have also heard that Koller's class on graphical models is good.  If you understand Mandarin,  H. T. Lin's classes on support vector machines are perhaps the best.

As for papers, subscribe to arxiv.org today, set up an RSS feed for yourself, and read at least the headlines daily to learn what's new.   That's where I first learned many of the important concepts of the last few years: LSTM, LSTM with attention, highway networks, etc.

If you are new, check out the "Awesome" resource lists, like Awesome Deep Learning; that's where you will find all the basic papers to read.

And eventually you will find that just listening to lectures and reading papers doesn't explain enough; this is the moment you should go to the "Bible".   When I say Bible, we are really talking about the 7-8 textbooks known to be good in the field.

If you have to start with one book, consider either Pattern Classification by Duda and Hart or Pattern Recognition and Machine Learning (PRML) by C. M. Bishop.   (Those are the only two I have read deeply.) In my view, the former is suitable for a third-year undergraduate or a graduate student to tackle; there are many computer exercises, so you will enjoy both the math problem solving and the programming.  PRML is more for advanced graduate students, such as PhD candidates.   PRML is known to be more Bayesian and, in a way, more modern.

And do the math, especially in the first few chapters, where you may be frustrated by more advanced calculus problems.   Note, though, that both Duda and Hart's and PRML's exercises are guided.  Try to spread this kind of math exercise over time; for example, I try to spend 20-30 minutes a day tackling one problem in PRML.  Write down all of your solutions and attempts in a notebook.  You will benefit greatly from this effort, and you will gain valuable insight into different techniques: their theory, their motivations, their implementations as well as their notable variants.

Finally, if you have a tough time with the math, don't stay stuck on the same problem forever.   If you can't solve a problem after a week, look it up on Google, or go to a standard text such as Solved Problems in Analysis.  There is no shame in looking up the answer if you have tried.

Conclusion

No one can hit the ground running and train Google's "convolutional LSTM" on 80,000 hours of data in one day.   Nor can anyone instantly come up with the very smart ideas of using multiplicative gates in an RNN (i.e. the LSTM), using attention for sequence-to-sequence learning, or reformulating a neural network so that a very deep one is trainable.  It is hard enough to understand the fundamentals of concepts such as LSTM or CNN, let alone innovate on them.

But you have to start somewhere, and in this article I have told you my story of how I started and restarted this learning process.   I hope you will join me in learning.   Just like all of you, I am looking forward to seeing what deep learning will bring to humanity.   And rest assured, you and I will enjoy that future more because we understand more of what goes on behind the scenes.

You might also like Learning Deep Learning – My Top Five List.

Arthur

 

[1]  As Fred Jelinek said, "Every time I fire a linguist, the performance of our speech recognition system goes up." (https://en.wikiquote.org/wiki/Fred_Jelinek)

Categories
Dan Jurafsky Dependency Parsing Dragomir Radev HMM Language Modeling Machine Learning Natural Language Processing Parsing POS tagging Programming Python SMT Word Sense Disambiguation

Radev’s Coursera Introduction to Natural Language Processing – A Review

As I promised earlier, here is my review of Prof. Dragomir Radev's introductory class on natural language processing.   A few words about Prof. Radev: according to his Wikipedia entry, he is an award-winning professor who co-founded the North American Computational Linguistics Olympiad (NACLO), the equivalent of the USAMO in computational linguistics. He was also the coach of the U.S. team at the International Linguistics Olympiad 2011 and helped the team win several medals [1].    I think these are great contributions to the speech and language community.  In the late 90s, when I was still an undergraduate, there was not much recognition of computational language processing as an important computational skill.    With competitions at the high-school and college level, there will be a generation of young minds who aspire to build intelligent conversational agents, robust speech recognizers and versatile question-answering machines.   (Or else everyone would think the Linux kernel is the only cool hack in town. 🙂 )

The Class

So how about the class?  I have to say I am more than pleasantly surprised and happy with it.   I was searching for an intro NLP class, and the two natural choices were Prof. Jurafsky and Manning's class and Prof. Collins' Natural Language Processing.   Both classes received great praise, and a few of my friends recommended taking both.   Unfortunately, neither has been offered recently, so I could only watch the material offline.

Then there came Radev's class, which is, as Prof. Radev explains, "more introductory" than Collins' class and "more focused on linguistics and resources" than Jurafsky and Manning's.   So it is good for two types of learners:

  1. Those who are just starting out in NLP.
  2. Those who want to gather useful resources and start projects in NLP.

I belong to both types: my job requires me to have a more comprehensive knowledge of language and speech processing.

The Syllabus and The Lectures

The class itself is a brief survey of many important topics in NLP.   There are the basics: parsing, tagging, language modeling.  There are also advanced topics such as summarization, statistical machine translation (SMT), semantic analysis and dialogue modeling.   The lectures, apart from occasional mistakes, are quite well done and filled with interesting examples.

My only criticism is perhaps the length of the videos; I would prefer most videos to be less than 10 minutes, which would make them easier to rotate with my other daily tasks.

The material is not too difficult for newcomers to absorb.   For a start, advanced topics such as SMT are not covered in much mathematical detail.  (So there is no need to derive EM for the IBM models.)  I think that's quite appropriate for first-time learners like me.

One more unique feature of the lectures: they are filled with interesting NACLO problems.    While NACLO is more of a high-school-level competition, many of the problems are challenging even for experienced practitioners, and I found them quite stimulating.

The Prerequisites and The Homework Assignments

To me, the fun part is the homework.   There were three assignments, focusing on:

  1. Nivre’s Dependency Parser,
  2. Language Modeling and POS Tagging,
  3. Word Sense Disambiguation

All the homework is based on Python.   If you know what you are doing, it is not that difficult; I spent around 12-14 hours on each assignment (usually on weekends).  Just like Ng's Machine Learning class, you need to match your numbers against a golden reference.   I think that's the right approach when learning any machine learning task for the first time; blindly putting together a system and hoping it works never gets you anywhere.

The homework does point to one issue with the class: you need to know the basics of machine learning.  Also, if you have never had any programming experience, you will find the homework very difficult.   This probably describes many linguistics students who have never taken any computer science classes.  [3]    You can still "power through" and pass, but it can be unnecessarily hard.

So I recommend you first take Ng's class, or perhaps the latest Machine Learning specialization from Guestrin and Fox.   Those classes will give you some basics of programming as well as the basic concepts of machine learning.

If you haven't taken any machine learning class, one way to get through more difficult classes like this is to read the forum messages.   There are many nice people in the course answering various questions.   To be frank, if the forum didn't exist, it would have taken me around three times as long to finish all the assignments.

Final Word

All in all, I highly recommend Prof. Radev's class to anyone who is interested in NLP.    As I mentioned, though, the class does require prerequisites such as the basics of programming and machine learning, so I would recommend learners first take Ng's class before taking this one.

In any case, I want to thank Prof. Radev and all the teaching staff who prepared this wonderful course.   I also thank the many classmates who helped me through the homework.

Arthur

Postscript, April 2017

After I wrote this review, Coursera upgraded to its new format.  It's a pity that none of the NLP classes, including Prof. Radev's, survived.   Too bad for NLP lovers!

There has also been a seismic shift in the field of NLP toward deep learning. While deep learning does not yet dominate evaluations the way it does in computer vision or speech recognition, it is perhaps the most actively researched direction right now.  So if you are curious about what's new, consider taking the latest Stanford cs224n (2017) or Oxford's Deep Learning for NLP.

[1] http://www.eecs.umich.edu/eecs/about/articles/2010/Radev-Linguistics.html

[2] Week 1 Lecture 1 Introduction

[3] One anecdote:  in the forum, a student was asking why you can't just sum all the data points of a class together and pour them into scikit-learn's fit().    I don't blame the student, because she started late and lacked the prerequisites.   She later finished all the assignments, and I really admire her determination.

Categories
Connectionist Duda and Hart Machine Learning PRML The Element of Statistical Learning Tom Mitchell

One Algorithm to rule them all – Reading “The Master Algorithm”

I read a lot of non-fiction science books, and I have always wondered why there was no popular account of machine learning, especially given how prevalent it is in our time.   Machine learners are everywhere: when you use Siri, when we search the web, when we translate using services such as Google Translate, when we use email and the spam filter removes most of the junk.

Prof. Pedro Domingos' book, The Master Algorithm (TMA), is perhaps the first popular account of machine learning (that I know of).   I greatly enjoyed it.   The book is most suitable for high-school or first-year college students who want to learn more about machine learning, but experienced practitioners (like me) will also enjoy many of its ideas.

Target Audience

Let's ignore forgettable books with titles such as "Would MACHINES become SKYNET?" or "Are ROBOTS TAKING OVER THE WORLD?", the fluffiest of fluff. Most introductory machine learning books I know of specialize in one type of technique, with titles such as "Machine Learning Technique X Without Pain"; they are more like user manuals.  In my view, they also lean too much on the practical side of things.  They are good for getting a taste of machine learning, but they seldom give you a deeper understanding of what you are doing.

On the other hand, comprehensive textbooks in the field, such as Mitchell's Machine Learning, Bishop's Pattern Recognition and Machine Learning (also known as PRML), Hastie's The Elements of Statistical Learning and of course Duda and Hart's Pattern Classification (2nd ed.), are more for practitioners who want to deepen their understanding [1].    Of the four books I just mentioned, Machine Learning is perhaps the most readable, but it still requires prerequisite knowledge such as multivariate calculus and familiarity with Bayes' rule.   PRML will challenge you with more advanced calculus tricks, such as how to work with tricky integrals like $latex \int_{-\infty}^{\infty} e^{-x^2} dx$ or the gamma function.   These books are hardly for the general reader without much sophistication in math.
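For the record, the integral quoted above yields to a standard trick: square it and switch to polar coordinates.

```latex
\left( \int_{-\infty}^{\infty} e^{-x^{2}} \, dx \right)^{2}
  = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^{2} + y^{2})} \, dx \, dy
  = \int_{0}^{2\pi} \int_{0}^{\infty} e^{-r^{2}} \, r \, dr \, d\theta
  = 2\pi \cdot \tfrac{1}{2} = \pi,
\qquad \text{so} \qquad
\int_{-\infty}^{\infty} e^{-x^{2}} \, dx = \sqrt{\pi}.
```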

I think TMA fills the gap between a user manual and a comprehensive textbook.  Most explanations are in words, or at most college-level math, yet the coverage is very similar to Machine Learning.  It is still somewhat dumbed down, but it touches on many goodies (and toughies) in machine learning, such as the No Free Lunch theorem.

5 Schools of Machine Learning

In TMA,  Prof. Domingos divides existing machine learning techniques into 5 schools:

  1. Symbolist: such as logic-based representations and rule-based approaches;
  2. Connectionist: such as neural networks;
  3. Evolutionist: such as genetic algorithms;
  4. Bayesian: Bayesian networks;
  5. Analogizer: nearest neighbors, linear separators, SVM.

To be frank, the scheme can be hard to use in practice.  Most modern textbooks, such as PRML or Duda and Hart, discuss the Bayesian approach but mix it with techniques from the other four categories.   I guess the reason is that you can always give a Bayesian interpretation to a parameter estimation technique.

Artificial neural networks (ANN) are another example that can fall into multiple categories in TMA's scheme.  It's true that ANNs were motivated by biological neural networks, but an ANN's formulation is quite different from computational models of the brain.   So one way to think about an ANN is as "a stack of logistic regressors". [2]
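To make that phrase concrete, here is a toy sketch (plain NumPy, forward pass only, random untrained weights): each layer is an affine map followed by the logistic sigmoid, i.e. exactly the form of a logistic regressor, stacked twice.

```python
# A two-layer "stack of logistic regressors": each layer applies
# weights, a bias, and the logistic sigmoid.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                            # one input vector

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)     # layer 1: 8 logistic units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)     # layer 2: one logistic unit on top

h = sigmoid(W1 @ x + b1)      # hidden activations
p = sigmoid(W2 @ h + b2)      # output "probability"
print(p)
```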

Even though I find TMA's scheme for dividing up algorithms a bit odd, for a popular book I think the treatment is fair.   In a way, it is simply hard to find a consistent scheme for classifying machine learning algorithms.   If you asked me, I would say "Whoa, you should totally learn linear regression, logistic regression..." and come up with eight techniques.   But that, just like many textbooks, would not be easy for general readers to comprehend.

More importantly, I guess, you should ask whether the coverage is good. Most ML practitioners are specialists like me (in ASR) or in other particular subfields, and it's easy to get tunnel vision about what can be done and researched.

“The Master Algorithm”

So what is the eponymous "Master Algorithm", then?   The professor explains on p. 24, in what he calls the central thesis of the book:

“All knowledge – past, present, and future – can be derived from data by a single universal learning algorithm.”

Prof. Domingos then motivates why he holds this belief, and I think it also highlights the thinking behind the latest research in machine learning.    What do I mean by that?

Well, you can think of real-life machine learning work as just testing different techniques and seeing whether they work well.   For example, my recent word sense disambiguation homework recommended trying out both SVM and kNN.   (We then saw the miraculous power of SVM...)   That's the deal: most of the time, you pick the best technique through evaluation.

But in the last 5-6 years, we have witnessed deep neural networks (and their friends, RNN, CNN, etc.) become the winners of competitions in many fields.   Not only in ASR [3] and computer vision [4]; we are also seeing NLP records beaten by neural networks [5].   That makes you think: could one algorithm rule them all?

Another frequently talked-about discovery concerns the human neocortex.  In the popular view of neuroscience, most of the brain's functions are localized: for vision there is a region called the visual cortex, and for sound there is a region called the auditory cortex.

Then you might have heard of the amazing experiment in which researchers rewired the connections from the eyes to the auditory cortex, and the auditory cortex learned how to see.  That is a sign that neocortical circuitry can be reused [6] for many different purposes.

I think that's what Prof. Domingos is driving at.   In the book he also motivates the thesis from other perspectives, but I think the neuroscientific one probably resonates with our time the most.

No Deep Learning

While I like TMA's general coverage, I would have hoped for some description of deep learning, which, as I said above, has been beating records here and there.

But then we should also be cautious.   Yes, right now deep learning is showing superior results, but so did SVM (and still does) and GMM before it.   It just means that our search for the best algorithm is still ongoing and might never end.   That is perhaps why the good professor does not focus too much on deep learning.

Conclusion

While I have reservations about the applicability of the "5 schools" categorization scheme,  I love the book's comprehensive coverage and its central thesis.   The book also suits a wide audience: for high-school and college students hearing about machine learning for the first time, it is a good introduction, while for specialists like me it can inspire new ideas and help consolidate old ones.  For example, this was the first time I read that the nearest-neighbor rule is only about twice as error-prone as the best imaginable classifier (p. 185).
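The result being referenced is, presumably, the classic Cover and Hart (1967) bound: asymptotically, with unlimited training data, the error of the 1-nearest-neighbor rule is sandwiched between the Bayes (best achievable) error and twice the Bayes error; stated here for two classes:

```latex
R^{*} \;\le\; R_{\mathrm{1NN}} \;\le\; 2 R^{*}\left(1 - R^{*}\right) \;\le\; 2 R^{*}
```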

So I highly recommend this book to anyone who is interested in machine learning.   Of course, feel free to tell me what you think in the comments section.

References:

[1] The other classic I missed here is Prof. Kevin Murphy's Machine Learning: A Probabilistic Perspective.

[2] I heard this from Dr. Richard Socher’s DNN+NLP class.

[3] G. Hinton et al.   Deep Neural Networks for Acoustic Modeling in Speech Recognition.

[4] AlexNet: paper.

[5] I. Sutskever et al., Sequence to Sequence Learning with Neural Networks. I am thinking more along the lines of SMT.   The latest I have heard: with an attention model and some tuning, NN-based SMT beats the traditional IBM Models-based approach.

[6] In TMA, this paper on ferrets is cited.