Some Notes on OpenMined.org

Yesterday I watched the introduction video of OpenMined by Andrew Liam Trask. Wow, it's so interesting. Some notes:

* First of all, unlike similar projects, there is no ICO. It's a genuine open source project with a vibrant community.

* The idea is easy to explain. It's a marketplace between data scientists and miners who want to provide data. The key question Trask tried to figure out is how to protect the privacy of users' data while, at the same time, allowing data scientists to train models securely.

* At first I thought it was just federated learning combined with deep learning. But Trask has augmented the idea with homomorphic encryption and smart contracts. I am still in the process of learning why the last two concepts matter, but briefly: homomorphic encryption allows a model to be sent securely to miners without getting stolen, whereas smart contracts make a genuinely open marketplace possible.

* What if a miner tries to come up with fake data? This is actually an FAQ on OpenMined.org. As it turns out, a data scientist can also specify a test set on the smart contract. This ensures that data uploaded by a miner actually improves the model. (A minimal sketch of this idea follows after these notes.)

* Another question I asked in their Slack community is how everyone gets paid without a coin. Currently the project relies on USD, ETH and BTC. Fair enough.
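To make the test-set idea above concrete, here is a minimal, hypothetical sketch in Python. The function names (`train_fn`, `eval_fn`, `should_pay_miner`) and the payout rule are my own illustration of the concept, not OpenMined's actual API or contract logic:

```python
# Hypothetical sketch: release payment to a miner only if their data
# improves the model on the data scientist's held-out test set.
# This illustrates the idea, not OpenMined's real implementation.

def should_pay_miner(model, train_fn, eval_fn, miner_data, test_set, min_gain=1e-4):
    """Return True if retraining with miner_data improves test accuracy by min_gain."""
    baseline_acc = eval_fn(model, test_set)      # accuracy before the new data
    updated_model = train_fn(model, miner_data)  # model retrained with the miner's data
    new_acc = eval_fn(updated_model, test_set)   # accuracy after the new data
    return (new_acc - baseline_acc) >= min_gain  # pay only for a real improvement
```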

Quick Impression of Princeton's Coursera "Bitcoin and Cryptocurrency Technologies"

Recently I went through all the video lectures of Princeton's Coursera course "Bitcoin and Cryptocurrency Technologies" (BCT). Just like many of the courses I took in AI/DL, I want to write some notes on the class and what it is about. But since I won't call myself deep in the subject yet and I haven't looked at the homework (no time...), I decided to just write a quick impression.

- If you recall, Greg Dubela, Waikit Lau and I started B&C as a satellite of AIDL. Greg Dubela is more of a practicing programmer, and Waikit Lau has quite a bit of experience with bitcoin, but I learned about cryptos mostly for fun. When we first started the group, part of the goal was to direct off-topic (OoT) traffic from AIDL to a proper place. We now have 5k members.

- And of course, the most frequently asked question at B&C is "What really is bitcoin?". (*)

- You may think of bitcoin as just a currency. If you have a bit of money, you can just trade it. That's pretty much what 99% of people need to know about bitcoin.

- But then you can also see bitcoin as a kind of mineral. It just happens that your computer is the equipment to mine it. Again, depending on whether you like the idea of mining, that's also how many people understand bitcoin.

- To really look inside bitcoin though, you need to understand that it is a decentralized system of currency, as opposed to the central banking systems we use today.

- If you want to understand bitcoin at this level, then BCT is your class. You will first learn how the basic structure of the bitcoin blockchain works.

- Building on the idea of the blockchain, you will also learn how bitcoin transactions really work, what the role of miners is, how mining works, whether the blockchain really provides anonymity, what the role of the community is in bitcoin, and who really controls bitcoin.

- And finally it covers altcoins, which I believe are more relevant to our time, as the combined altcoin market value is on par with bitcoin's. So what are they? Why are they there? The course will not cover all the famous altcoins we know today, but it will give you the basics.

- I found the content of the class illuminating, and I highly recommend that anyone, especially those with a general CS background, take BCT before doing anything related to cryptos. You will find yourself better educated than your peers on many issues, even on basic operations such as trading and mining.

- My only criticism of the course is that it dates back to 2015. Of course, things have changed dramatically in the last two years: events such as the bitcoin hard forks and the rise of ethereum were unforeseen by the lecturers. That doesn't devalue the class at all. For me, it just gives a historical perspective of the technology.

- If you enjoy the course, go ahead and read the book by Arvind Narayanan as well; once again, it is a great intro to the subject.

That's what I have, hope my impression helps.

Note:

* In fact, you may say "Should I buy bitcoin at $?" is the most frequently asked question. But of course those types of questions are banned by our strict rules.

Review of Ng's deeplearning.ai Course 3: Structuring Machine Learning Projects

(Also see my review of Course 1 and Course 2.)

As you might know, the deeplearning.ai courses were released in two batches. The first batch contains Courses 1 to 3. Only recently (as of November 15) was Course 4, "Convolutional Neural Networks", released, and Course 5 is supposed to be released in late November. So Course 3, "Structuring Machine Learning Projects", was the "final" course of the first batch. It also marks a good pause between the first and second halves of the specialization: the first half is more about the foundations of deep learning, whereas the second half is more about applications of deep learning.

So here you are, learning something new in deep learning; isn't it time to apply this newfound knowledge? Course 3 says "Hold on!" It turns out that before you start doing machine learning, you need to slow down and think about how to plan the task.

In practice, Course 3 is perhaps the most important course in the whole specialization. The math in Course 1 may be tougher, and Course 5 has difficult concepts such as RNNs and LSTMs which are hard to grok. They are also longer than Course 3 (which only lasts two weeks). But in the grand scheme of things, they are not as important as Course 3. I am going to discuss why.

What do you actually do as an ML Engineer?

Let me digress a bit: I know many of my readers are young college students who are looking for careers in data science or machine learning.  But what do people actually do in the business of machine learning or AI?   I think this is a legit question because I was very confused when I first started out.

Oh well, it really depends on whether you sit on the development side or the research side of your team. Terms like "research" and "development" can have various meanings depending on the title. But you can think of "researchers" as the people who try to get a new technique working - usually the criterion is whether it beats the status quo on some measure such as accuracy. "Developers", on the other hand, are people who come up with a production implementation. Many ML jobs really lie somewhere on the spectrum between "developer" and "researcher". E.g. I am usually known for my skill as an architect, which usually means I have knowledge on both sides. My usual quote on my skills is "50% development and 50% research". There are also people who are highly specialized on either side. But I will focus more on the research side in this article.

So, What do you actually do as an ML Researcher then?

Now I can see a lot of you jump up and say "OH I WANT TO BE A RESEARCHER!"  Yes, because doing research is fun, right?   You just need to train some models and beat the baseline and write a paper.  BOOM! In fact, if you are good, you just need to ask people to do your research.  Woohoo, you are happy and done, right?

Oh well, in reality, good researchers are usually fairly good coders themselves. Especially in an applied field such as machine learning, my guess is that out of 100 researchers in an institute, perhaps only one is really "thinking staff", i.e. does nothing other than coming up with new theory or writing proposals. Just like you, I admire the life of a pure academician. But in our time, you usually have to be both very smart and very lucky to be one of them. (There is a tl;dr explanation here, but it is out of scope for this article.)

"Okay, okay, got it..... so can we start to have some fun now?   We just need to do some coding, right? " Not really, the first step before you can work on fun stuffs such as modeling, or implement new algorithm, is to clean-up data.   So say if you work on a fraudulent transaction detection, the first is to load a giant table somewhere so that you can query it and get the training data.  Then you want to clean the data, and massage the data so that it can be an input of ML engine.   Notice that by themselves these tasks can be non-trivial as well.

Course 3: Structuring Machine Learning Projects

So there you are: after you code and clean up your data, you finally have some time to do machine learning. Notice that your time after all these "chores" is actually quite limited. That makes using your time effectively a very important topic. And this is why you want to take Course 3: Andrew teaches you the basics of how to allocate time and resources in a deep learning task. E.g. How large should your train/validation/test sets be? When should you stop development? What is human-level performance? What if there is a mismatch between your train set and test set? If you are stuck, should you tune your hyperparameters more, or should you regularize?
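To give one concrete example of such a meta-decision: with very large datasets, the traditional 60/20/20 split no longer makes sense, and the dev/test sets can be a much smaller fraction of the data. Below is a rough numpy sketch of such a split; the exact ratios and the function name are my own illustration, not the course's code:

```python
import numpy as np

def split_dataset(X, y, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle and split into train/dev/test sets.
    With, say, 1M examples, 1% dev and 1% test (10k each) is often plenty."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    n_dev, n_test = int(len(X) * dev_frac), int(len(X) * test_frac)
    dev_idx = idx[:n_dev]
    test_idx = idx[n_dev:n_dev + n_test]
    train_idx = idx[n_dev + n_test:]
    return (X[train_idx], y[train_idx]), (X[dev_idx], y[dev_idx]), (X[test_idx], y[test_idx])
```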

In a way, Course 3 is reminiscent of "Machine Learning"'s Week 6 and Week 11: basically, what you try to learn is how to make good "meta-decisions" for all the projects you will work on in your lifetime. I also think it's the right stuff for your ML career.

One final note: as you might notice in my last two reviews, I usually try to compare deeplearning.ai with other classes. But Course 3 is quite unique, so you might only find similar material in machine learning courses which focus on theory. Ng's treatment is unique: first, what he gives is practical and easy-to-understand advice. Then his advice is focused on deep learning, even while we are talking about similar principles. Working on deep learning usually implies special circumstances, such as being close to human-level performance, or having low performance on both the train and test sets. Those scenarios did appear in the past, but only in cutting-edge ML evaluations involving the best ML teams. So you don't normally hear about them in a course, but now Andrew tells you all about them. Isn't that worth the price of $49? 🙂

Conclusion

So here you have it. This is my review of Course 3 of deeplearning.ai. Surprisingly, even to me, I wrote more than I expected for this two-week course. Perhaps the main reason is that I really wish this course had been there, say, three years ago. It would have changed the course of some projects I developed.

Maybe it's too late for me..... but if you are early in deep learning, do recognize the importance of Course 3, or any advice you hear similar to what Course 3 teaches. It will save you much time - not just on one ML task but on the many ML tasks you will work on in your career.

Arthur Chan

If you like this message, subscribe to the Grand Janitor Blog's RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.

Resources for Implementing Linear Regression Based On Normal Equation

Not about understanding - about implementing.

Implementation in C

(Less related but equally interesting: a description of non-linear regression.)
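For reference, here is a minimal numpy sketch of the normal equation, theta = (XᵀX)⁻¹ Xᵀy, using a pseudo-inverse for numerical robustness. The variable names and the toy data are my own illustration:

```python
import numpy as np

def normal_equation(X, y):
    """Linear regression via the normal equation: theta = (X^T X)^{-1} X^T y.
    A column of ones is prepended for the intercept; pinv handles singular X^T X."""
    X_b = np.hstack([np.ones((X.shape[0], 1)), X])   # add the intercept term
    return np.linalg.pinv(X_b.T @ X_b) @ X_b.T @ y

# Toy example: fit y = 1 + 2x
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(normal_equation(X, y))   # approximately [1. 2.]
```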

Books:

  • Regression by Bingham and Fry
  • Matrix Algebra by Abadir and Magnus

 

Comparing deeplearning.ai and Udacity's nanodegree

I would think of it this way:

"For the most part MOOC certificates don't mean too much in real life. It is whether you can actually solve problem matters. So the meaning of MOOC is really there to stimulate you to learn. And certificate serves as a motivation tool.

As for OP's question: I never took the Udacity nanodegree. From what I heard, though, I would say the nanodegree requires the effort of taking one to two of Ng's deeplearning.ai specializations. It's also tougher if you need to finish a course in a specified period of time. But the upside is that there are human graders who give you feedback.

As for which path to take, I think it solely depends on your finances. Let's push it to an extreme: if you think purely of credentials and opportunities, maybe an actual PhD/Master's degree will give you the most, but the downside is that it can cost you multiple years of salary. One tier down would be the online ML degree from Georgia Tech, but it will still cost you up to $5k. Then there is taking cs231n or cs224d from Stanford online; again, that will cost you $4k per class. So that's why you would consider taking a MOOC. And as I said, which price tag you choose depends on how motivated you are and how much feedback you want to get."

Quick Impression of Course 4 of deeplearning.ai

* Fellows, I just got the certificate. As you know, I really want to review the whole specialization, so I decided to take the class myself. Since I haven't watched all the lectures yet, this is my usual "Quick Impression".

* I am a slow learner (getting stubborn?), so I was more like the 100th guy to finish the class in our Coursera deeplearning.ai forum. Go check it out. There are many experienced DLers there: https://www.facebook.com/groups/DeepLearningAISpecialization/

* Many of you have noted that it is a very good class for computer vision. My assessment so far is that this is a very good *first* class if you want to get into DL-based computer vision. But there is still much great material that you can only learn from cs231n. So I would suggest you go through it as well after you finish Course 4.

* As far as I can see, it has perhaps one of the best explanations of the basics of convnets, as well as of what YOLO is. The pity is perhaps that the short length of the course doesn't allow going in depth on topics such as RCNN, segmentation, and GAN-based synthesis. On all of these, cs231n has much better coverage.

* At this point, the course is still a bit unstable. There are assignments which require an implementation that doesn't match the notebook. But the staff is working on it now.

That's what I have. Hope this is helpful. I am going to write up full reviews of Courses 3 and 4 soon too.

On Whether AI Is Stealing Our Jobs

This is a question that comes up from time to time, so I will just give you a couple of perspectives.
 
First things first: you should ask what "A.I." we are talking about. A lot of the time, when people talk about machines taking jobs away from people, they are really talking about "automation" taking away the jobs. Now "automation" just means that something can be done without human intervention; it might or might not have anything to do with A.I. For example, I can write a for-loop to repeat something a million times. No one would call that "A.I."; "programming", maybe. So in that sense automobiles and large-scale industrial equipment are all "automation".
 
This is an important note because, to determine whether jobs are really being taken away from humans, we need to look at economic data, and so far there are not many reports that really focus on A.I.'s impact. There are many studies which look at the impact of "automation" and then say "Oh, AI in recent years has been helping automation", but they usually don't quantify that impact.
 
Then there is the idea that some supreme AI being (or AGI) will control everything in the world. Valeria Iegorova said it well: we are just far, far away from that scenario. Just imagine you want to automate the manual work of a construction worker - creating a bipedal robot with the capability to walk around a construction site is a billion-dollar project, let alone asking it to actually do the work.
 
So why do so many people fear that A.I. will steal their jobs? Well, many of those who complain are really unemployed for various reasons. Maybe their industry is just no longer suited to the times.
 
Now, are there any cases where A.I. is replacing humans? Yes. But economists will tell you a long-known fact: with proper re-training, humans can quite easily get back into the workforce. So you might hear that a lot of more senior citizens these days are learning programming and getting hired. These programs are usually incentivized by governments. In other words, if you live in a place where the government senses unemployment issues early, it will usually come up with reasonable solutions to resolve the problem.
 
Hope this is not too long. But I think it covers the topic. Other than that, good luck!

Review of Ng's deeplearning.ai Course 2: Improving Deep Neural Networks

(Also see my reviews of Course 1 and Course 3.)

In your life, there are times when you think you know something, yet genuine understanding seems to elude you. It's always frustrating, isn't it? For example, why do seemingly simple concepts such as gradients or regularization still throw us off, even though we have been learning them since Day 1 of our machine learning education?

In programming, there's a term called "grok": grokking something usually means that not only do you know the term, but you also have an intuitive understanding of the concept. Or, as in "Zen and the Art of Motorcycle Maintenance" [1], you just try to dive deep into a concept, as if it were a journey...... For example, if you really think about speech recognition, you would realize the frame independence assumption [2] is very important, because it simplifies the problem in both search and parameter estimation. Yet it certainly introduces a modeling error. These small things, which are not mentioned in classes or lectures, are things you need to grok.

That brings us to Course 2 of deeplearning.ai. What do you grok in this course? After you take Course 1, should you take Course 2? My answer is yes, and here is my reasoning.

Really, What is Gradient Descent?

Gradient descent is a seemingly simple subject: say you want to find the minimum of a convex function, so you follow the gradient downhill, and after many iterations you eventually hit the minimum. Sounds simple, right?

Of course, then you start to realize that functions are normally not convex, that they are n-dimensional, and that there can be plateaus. Or you follow the gradient, but it happens to point in the wrong direction, so you zigzag as you try to descend. It's a little bit like descending from a real mountain, except that you can't really see an n-dimensional space!
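As a quick refresher, here is a minimal sketch of plain gradient descent on a toy convex function (my own toy example, not from the course):

```python
import numpy as np

def gradient_descent(grad_fn, x0, lr=0.1, n_steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x -= lr * grad_fn(x)
    return x

# Toy convex bowl: f(x) = ||x - [3, -1]||^2, so grad f(x) = 2 * (x - [3, -1])
grad = lambda x: 2.0 * (x - np.array([3.0, -1.0]))
print(gradient_descent(grad, x0=[0.0, 0.0]))   # converges near [3, -1]
```

On a non-convex, high-dimensional loss, this same loop zigzags, stalls on plateaus, and is sensitive to the learning rate - which is exactly what the course digs into.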

That explains the early difficulty of deep learning development: stochastic gradient descent (SGD) was just too slow for DNNs back in 2000. That led to the very interesting research on restricted Boltzmann machines (RBMs), which were stacked and used to initialize DNNs - a prominent subject of Hinton's NNML after Lecture 8 - i.e. pretraining, which is still being used in some recipes in speech recognition as well as financial prediction.

But we are not doing RBMs any more! In fact, research on RBMs is not as fervent as it was in 2008. [3] Why? It has to do with people simply understanding SGD better and being able to run it better: it has to do with initialization, e.g. Glorot's and He's initialization, and it has to do with how gradient descent is done - ADAM is our current best.
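For concreteness, here is a rough numpy sketch of He and Glorot (Xavier) initialization for a single weight matrix. The scaling factors follow the usual published formulas, but treat the code itself as my illustration rather than any framework's exact implementation:

```python
import numpy as np

def he_init(fan_in, fan_out):
    """He initialization: Gaussian weights with variance 2 / fan_in (good with ReLU)."""
    return np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / fan_in)

def glorot_init(fan_in, fan_out):
    """Glorot/Xavier initialization: variance 2 / (fan_in + fan_out)."""
    return np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / (fan_in + fan_out))

W1 = he_init(784, 256)   # e.g. the first layer of an MNIST-sized network
```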

So how do you learn these things? Before Ng's deeplearning.ai class, I would say knowledge like this was spread out over courses such as cs231n or cs224n. But as I mentioned in my Course 1 review, those are really courses with specific applications in mind. Or you can go read Michael Nielsen's Neural Networks and Deep Learning. Of course, Nielsen's work is a book, so it really depends on whether you have the patience to work through the details while reading. (Also see my review of the book.)

Now you don't have to. The one-stop shop is Course 2. Course 2 actually covers the material I just mentioned, such as initialization and gradient descent, as well as deeper concepts such as regularization and batch normalization. That's why I recommend you keep taking the courses after you finish Course 1. If you take the class, and are also willing to read Sebastian Ruder's review of SGD or Gabriel Goh's "Why Momentum Really Works", you will be much ahead of the game.

As a note, I also like how Andrew breaks down many of the SGD variants as smoothing algorithms. That was a new insight for me, even after having used SGD many times.
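The "smoothing" view is easy to see in code: momentum keeps an exponentially weighted moving average of past gradients and steps along that average instead of the raw gradient. A minimal sketch (my own illustration):

```python
import numpy as np

def sgd_momentum(grad_fn, x0, lr=0.05, beta=0.9, n_steps=200):
    """Gradient descent with momentum: v is an exponential moving average
    of gradients, which damps zigzagging and accelerates consistent directions."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = beta * v + (1.0 - beta) * grad_fn(x)   # smooth the gradient
        x -= lr * v                                # step along the smoothed direction
    return x
```

RMSProp and ADAM apply the same kind of exponential smoothing to the squared gradients as well.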

Is it hard?

Nope. As far as math goes, Course 1 is probably the toughest. Of course, even in Course 1, you will finish the coursework faster if you don't overthink the problems. Most notebooks have the derived results for you. On the other hand, if you do want to derive the formulae yourself, you need decent skill in matrix calculus.

Is it Necessary to Understand These Details? Also, Top-Down vs Bottom-Up Learning: Which is Better?

A legitimate question here is: well, in the current state of deep learning we have so many toolkits which already implement techniques such as ADAM. Do I really need to dig so deep?

I do think there are always two views on learning. One is top-down, which in deep learning perhaps means reading a bunch of papers, learning the concepts and seeing if you can wrap your head around them. The fast.ai class is one example, and 95% of current AI enthusiasts are following such a path.

What's the problem with the top-down approach? Let me go back to my first paragraph: do you really grok something when you learn it top-down? I frequently can't. In my work life, I have also heard senior people say that top-down is the way to go. Yet when I went ahead to check whether they truly understood an implementation, they frequently couldn't give a satisfactory answer. That happens to a lot of senior technical people who later turn to management. Literally, they have lost their touch.

On the other hand, every time I open an editor and write out an algorithm, I gain tremendous understanding! For example, I was once asked to write forward inference in C - you'd better know what you are doing when you write in C! In fact, these days I have come to the opinion that you have to implement an algorithm once before you can claim you understand it.

So how come there are two sides to this opinion then? One of my speculations is that back in the 80s/90s, students were often taught to write a program correctly in the first draft. That creates the mindset that you have to think up a perfect program before you start to write one. Of course, in ML such a mindset is highly impractical, because the ML development process is really experimental. You can't always assume you have perfected the settings before you try something.

Another equally dangerous mindset is to say "if you are too focused on details, then you'll miss the big picture and won't come up with something new!" I heard this a lot when I first did research, and it's close to the most BS-y thing I've heard. If you want to come up with something new, the first thing you should learn is all the details of existing work. The so-called "big picture" and the "details" are always interconnected. That's why in the AIDL forum we never see the young kids who say "Oh, I have this brand new idea, which is completely different from all previous work!" go anywhere. That's because you always learn to walk before you run. And knowing the details has no downsides.

Perhaps this is my long-winded reason why Ng's class is useful for me, even after I have read much of the literature. I distrust people who only talk about theory but don't show any implementation.

Conclusion

This concludes my review of Course 2. Many people, after taking Course 1, just decide to take Course 2. I don't blame them, but you always want to ask whether your time is well spent.

To me, though, taking Course 2 is not just about understanding more of deep learning. It is also my hope to grok some of the seemingly simple concepts in the field. I hope my review is useful, and I will keep you all posted when my Course 3 review is done.

Arthur

Footnotes:
[1] As Pirsig said - it's really not about motorcycle maintenance.

[2] Strictly speaking, it is the conditional frame independence assumption. But practitioners in ASR frequently just call it the frame independence assumption.

[3] Also see HODL's interview with Ruslan Salakhutdinov; his account of the rise and fall of RBMs is first-hand.

Review of Ng's deeplearning.ai Course 1: Neural Networks and Deep Learning

Credit: Damien Kühn, CC

(See my reviews on Course 2 and Course 3.)

As you all know, Prof. Ng has a new specialization on deep learning. I have written about the course extensively yet informally, including two "Quick Impressions" before and after I finished Courses 1 to 3 of the specialization. I also wrote three posts just on Heroes of Deep Learning, covering Prof. Geoffrey Hinton, Prof. Yoshua Bengio, Prof. Pieter Abbeel and Dr. Yuanqing Lin. And Waikit and I started a study group, Coursera deeplearning.ai (C. dl-ai), focused on just the specialization. This is my full review of Course 1 after finishing all the videos. I will give a description of what the course is about, and why you might want to take it. There are already a few very good reviews (from Arvind and Gautam). I will write based on my experience as the admin of AIDL, as well as a deep learning learner.

The Most Frequently Asked Question in AIDL

If you don't know, AIDL is one of the most active Facebook groups on the subject of A.I. and deep learning. So what is the most frequently asked question (FAQ) in our group? Well, nothing fancy:

How do I start deep learning?

In fact, we get asked that question daily, and I have personally answered it more than 500 times. Eventually I decided to create an FAQ, which basically points back to "My Top-5 List", a list of resources for beginners.

The Second Most Important Class

That brings us to the question: what is the most important class to take? Oh well, for 90% of learners these days, I would first recommend Andrew Ng's "Machine Learning", which is good both for beginners and for more experienced practitioners (like me). Lucky for me, I took it around two years ago and have benefited from the class ever since.

But what's next? What would be a good second class? That's always the question on my mind. Karpathy's cs231n comes to mind, or maybe Socher's cs224[dn] is another choice. But they are too specialized in their subfields. E.g. if you view them from the standpoint of general deep learning, the material in both classes on model architecture is incomplete.

Or you can consider a general class such as Hinton's NNML. But that class confuses even PhD friends I know. Indeed, asking beginners to learn restricted Boltzmann machines is just too much. The same can be said for Koller's PGM. Hinton's and Koller's classes, to be frank, are quite advanced. It's better to take them once you already know the basics of ML.

That narrows us down to several choices which you might already be considering: the first is fast.ai by Jeremy Howard, the second is the deep learning nanodegree from Udacity. But in my view, those classes also seem to miss something essential. E.g. fast.ai adopts a top-down approach, but that's not how I learn. I always love to approach a technical subject from the ground up. E.g. if I want to study string search, I would want to rewrite some classic algorithms such as KMP. And for deep learning, I always think you should start with a good implementation of back-propagation.

That's why, for a long time, my Top-5 List picked cs231n and cs224d as the second and third classes. They are the best I could think of after researching ~20 DL classes. Of course, deeplearning.ai has changed my belief that either cs231n or cs224d should be the best second class.

Learning Deep Learning by Program Verification

So what's so special about deeplearning.ai? Just like Andrew's Machine Learning class, deeplearning.ai follows an approach I would call program verification. What that means is that instead of guessing whether your algorithm is right just by staring at the code, deeplearning.ai gives you the opportunity to come up with an implementation of your own, provided that it matches the official one.

Why is this important? First off, let me say that not everyone believes this is the right approach. E.g. back when I started, many well-intentioned senior scientists told me that such a matching approach is not really good experimentally: if your experiment involves randomness, you should simply run it N times and calculate the variance. Matching would remove this experimental aspect of your work.

So I certainly understand the scientists' point. But in practice it is a huge pain in the neck to verify whether your program is correct. That's why in most of my work I adopt the matching approach. You need to learn a lot about the numerical properties of an algorithm this way, but once you follow this approach, you will also get ML tasks done efficiently.

But can you learn another way? Nope, you've got to have some practical experience in implementation. Many people advocate learning just by reading papers, or just by running pre-prepared programs. I always think that misses the point: you lose a lot of understanding if you skip the implementation.

What do you Learn in Course 1?

For the most part, implementing the feed-forward (FF) and back-propagation (BP) algorithms from scratch. Since most of us just use frameworks such as TF or Keras, this from-scratch implementation experience is invaluable. The nice thing about the class is that the mathematical formulation of BP is fine-tuned so that it is suitable for implementation in Python numpy, the course's designated language.
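To give a flavor of what "from scratch" means here, below is a rough numpy sketch of one forward and backward pass for a tiny one-hidden-layer binary classifier (tanh hidden layer, sigmoid output, cross-entropy loss). It is my own simplification of the standard formulation, not the course's official notebook code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(X, y, W1, b1, W2, b2):
    """One forward and backward pass for a 1-hidden-layer binary classifier.
    X: (n_features, m), y: (1, m). Returns the loss and the parameter gradients."""
    m = X.shape[1]
    # Forward pass
    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)
    loss = -np.mean(y * np.log(A2) + (1 - y) * np.log(1 - A2))
    # Backward pass (gradients of the cross-entropy loss)
    dZ2 = A2 - y
    dW2 = (dZ2 @ A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)       # tanh'(Z1) = 1 - tanh(Z1)^2
    dW1 = (dZ1 @ X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return loss, (dW1, db1, dW2, db2)
```

A gradient-descent update is then just `W1 -= lr * dW1`, and likewise for the other parameters.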

Wow, Implementing Back Propagation from scratch?  Wouldn't it be very difficult?

Not really. In fact, many members finished the class in less than a week. So the key here: when many of us call it a from-scratch implementation, it is in fact highly guided. All the tough matrix differentiation is done for you, and there are strong hints on which numpy functions you should use. At least for me, the homework was very simple. (Also see Footnote [1].)

Do you need to take Ng's "Machine Learning" before you take this class?

That's preferable but not mandatory, although without knowing the more classical view of ML you won't be able to understand some of the ideas in the class, e.g. the difference in how bias and variance are viewed. In general, all the good old machine learning (GOML) techniques are still used in practice, and learning them doesn't seem to have any downsides.

You may also notice that both "Machine Learning" and deeplearning.ai cover neural networks. So is the material duplicated? Not really. deeplearning.ai guides you through the implementation of multi-layer deep neural networks, which IMO requires a more careful and consistent formulation than a simple network with one hidden layer. So doing both won't hurt, and in fact it's likely that you will have to implement a certain method multiple times in your life anyway.

Wouldn't this class be too Simple for Me?

So another question you might ask: if the class is so simple, does it even make sense to take it? The answer is a resounding yes. I am quite experienced in deep learning (~4 years by now) and have been learning machine learning since college. I still found the course very useful, because it offers many insights which only industry experts know. And of course, when a luminary such as Andrew speaks, you do want to listen.

In my case, I also wanted to take the course so that I could write reviews about it and my colleagues at Voci could ask me questions. But even with that in mind, I still learned several new things from listening to Andrew.

Conclusion

That's what I have so far. Follow us on Facebook at AIDL; I will post reviews of the later courses in the future.

Arthur

[1] So what is a true from-scratch implementation? Perhaps writing everything in C, even the matrix manipulation part?

If you like this message, subscribe to the Grand Janitor Blog's RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.

History:
Nov 29, 2017: revised the text once. Mostly rewriting the clunky parts.
Oct 16, 2017: fixed typos and misc. changes.
Oct 14, 2017: first published

 

(Repost) Quick Impression on "Deep Learning" by Goodfellow et al.

(I wrote this back on Feb 14, 2017.)

I had some leisure time lately to browse "Deep Learning" by Goodfellow for the first time. Since it is known as the bible of deep learning, I decided to write a short afterthought post; my notes are in point form and not too structured.

* If you want to learn the zen of deep learning, "Deep Learning" is the book. In a nutshell, "Deep Learning" is an introductory-style textbook on nearly every contemporary field in deep learning. It has a thorough chapter covering backprop, and perhaps the best introductory material on SGD, computational graphs and convnets. So the book is very suitable for those who want to further their knowledge after going through 4-5 introductory DL classes.

* Chapter 2 is supposed to go through the basic math, but it's unlikely to cover everything the book requires. PRML Chapter 6 seems to be a good preliminary read before you start the book. If you don't feel comfortable with matrix calculus, perhaps you want to read "Matrix Algebra" by Abadir as well.

* There are three parts to the book. Part 1 is all about the basics: math, basic ML, backprop, SGD and such. Part 2 is about how DL is used in real-life applications. Part 3 is about research topics such as E.M. and graphical models in deep learning, or generative models. All three parts deserve your time. The math and general ML in Part 1 may be better replaced by a more technical text such as PRML, but the rest of the material goes deeper than the popular DL classes. You will also find relevant citations easily.

* I enjoyed Parts 1 and 2 a lot, mostly because they are deeper and filled with interesting details. What about Part 3? While I don't quite grok all the math, Part 3 is strangely inspiring. For example, I noticed a comparison of graphical models and NNs. There is also a discussion of how E.M. is used in latent models. Of course, there is an extensive survey of generative models. It covers difficult models such as the deep Boltzmann machine, the spike-and-slab RBM, and many variations. Reading Part 3 makes me want to learn classical machine learning techniques, such as mixture models and graphical models, better.

* So I will say you will enjoy Part 3 if you are:
- a DL researcher in unsupervised learning and generative models, or
- someone who wants to squeeze out the last bit of performance through pre-training, or
- someone who wants to compare NNs with other methods such as mixture models or graphical models.

Anyway, that's what I have for now. Maybe I will summarize it in a blog post later on, but enjoy these random thoughts for now.

Arthur