Category Archives: AIDL

AIDL - The Biggest Facebook Group on Deep Learning

Just a moment I want to share:

Some of you told me that AIDL is the largest Facebook group on deep learning. The truth is, in terms of members, *not until tonight*. As of this moment, we are the biggest group with the keyword "deep learning" in its name. Of course, join rates change, and no one can stop "Get Rich Quick! Connect 100k Members!" from renaming itself..... But as of this moment, we are indeed the biggest deep learning group on Facebook. How about that? 🙂

I want to thank Waikit Lau, "da man"! And of course, all of you who participate in this forum. If you find that you learn something useful in the group, share it with your friends, subscribe to our tie-in newsletter at www.aidl.io, and check out our YouTube channel (see the pinned post).

Most importantly, spread the technology so that more people are aware of advances in AI/ML/DL.

In any case, good luck, and may Hinton bless you all!

If you like this message, subscribe to the Grand Janitor Blog's RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm.  Together with Waikit Lau, I maintain the Deep Learning Facebook forum.  Also check out my awesome employer: Voci.

AIDL Pinned Post V2

(Just want to keep a record for myself.)

Welcome! Welcome! We are the most active Facebook group for Artificial Intelligence/Deep Learning, or AIDL. Many of our members are knowledgeable, so feel free to ask questions.

We have a tie-in newsletter: https://aidlweekly.curated.co/ and

a YouTube channel with a (kinda) weekly show, "AIDL Office Hour":
https://www.youtube.com/channel/UC3YM5TEbSqIpFGH85d6gjKg

Posting rules at AIDL are strict: your post has to be relevant, accurate, and non-commercial (see FAQ Q12). Commercial posts are only allowed on Saturday. If you don't follow this rule, you might be banned.

FAQ:

Q1: How do I start AI/ML/DL?
A: Step 1: Learn some math and programming.
Step 2: Take some beginner classes, e.g., try out Ng's Machine Learning.
Step 3: Find some problems to play with. Kaggle has tons of such tasks.
Iterate over these 3 steps until you become bored. From time to time, share what you learn.

Q2: What is your recommended first class for ML?
A: Ng's Coursera class; the Caltech edX class and the UW Coursera class are also pretty good.

Q3: What are your recommended classes for DL?
A: Go through at least 1 or 2 ML classes, then go for Hinton's, Karpathy's, Socher's, Larochelle's, and de Freitas's. For deep reinforcement learning, go with Silver's and Schulman's lectures. Also see Q4.

Q4: How do you compare different resources on machine learning/deep learning?
A: (Shameless self-promoting plug) Here is an article, "Learning Deep Learning - Top-5 Resources," written by me (Arthur), on different resources and their prerequisites. I refer to it a couple of times at AIDL, and you might find it useful: http://thegrandjanitor.com/…/learning-deep-learning-my-top…/ . Reddit's machine learning FAQ has another list of great resources as well.

Q5: How do I use machine learning technique X with language L?
A: Google is your friend. You might also see many of us referring you to Google from time to time. That's because your question is best answered by Google.

Q6: Explain concept Y. List 3 properties of concept Y.
A: Google it. Also, we don't do your homework. If you couldn't find the term with Google, though, it's fair to ask questions.

Q7: What are the most recommended resources on deep learning for computer vision?
A: cs231n; the 2016 edition is the one I recommend. Most other resources you will find are derivative in nature or have glaring problems.

Q8: What are the prerequisites of Machine Learning/Deep Learning?
A: Mostly Linear Algebra and Calculus I-III. In linear algebra, you should be good at eigenvectors and matrix operations. In calculus, you should be quite comfortable with differentiation. You might also want a primer on matrix differentiation before you start, because it is a topic seldom touched in an undergraduate curriculum.
Some people also argue that topology is important and that a physics or biology background could help. But these are not crucial to start.
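To see why matrix differentiation matters early on, here is a small sketch (mine, not from any class mentioned above) that checks a standard matrix-calculus identity numerically: for f(W) = ||Wx - y||², the gradient with respect to W is 2(Wx - y)xᵀ. The shapes and values are arbitrary, purely for illustration.

```python
import numpy as np

# Analytic gradient of f(W) = ||Wx - y||^2 is dW = 2 (Wx - y) x^T.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(3)

analytic = 2.0 * np.outer(W @ x - y, x)

# Finite-difference check of each entry of the gradient.
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        numeric[i, j] = (np.sum((Wp @ x - y) ** 2)
                         - np.sum((Wm @ x - y) ** 2)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```

This kind of finite-difference check is exactly what "gradient checking" means when you meet it later in backpropagation.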

Q9: What are the cool research papers to read in Deep Learning?
A: We think songrotek's list is pretty good: https://github.com/son…/Deep-Learning-Papers-Reading-Roadmap. Another classic is deeplearning.net's reading list: http://deeplearning.net/reading-list/.

Q10: What is the best/most recommended language in Deep Learning/AI?
A: Python is usually cited as a good language because it has the best library support. Most Python ML libraries link to C/C++ code, so you get the best of both flexibility and speed.
Others also cite Java (deeplearning4j), Lua (Torch), Lisp, Golang, and R. It really depends on your purpose. Practical concerns, such as code integration and your familiarity with a language, usually dictate your choice. R deserves special mention because it is widely used in sister fields such as data science and is gaining popularity.
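To make the "flexibility plus speed" point concrete, here is a toy illustration (not a rigorous benchmark): the same dot product written as a pure Python loop versus a single NumPy call, which dispatches to compiled C/BLAS code.

```python
import time

import numpy as np

n = 1_000_000
a = np.ones(n)
b = np.ones(n)

t0 = time.perf_counter()
s_py = sum(ai * bi for ai, bi in zip(a, b))  # interpreted, element by element
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
s_np = float(a @ b)  # a single call into compiled code
t_np = time.perf_counter() - t0

print(s_py == s_np)  # same answer either way
print(f"pure python: {t_py:.4f}s  numpy: {t_np:.5f}s")
```

On a typical machine, the NumPy version is orders of magnitude faster, which is the whole argument for Python-plus-C libraries.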

Q11: I am bad at Math/Programming. Can I still learn AI/DL?
A: Mostly you can tag along, but at a certain point, if you don't understand the underlying math, you won't be able to understand what you are doing. The same goes for programming: if you never implement an algorithm, or trace through one yourself, you will never truly understand why it behaves a certain way.
So what if you feel you are bad at math? Don't beat yourself up too much. Take Barbara Oakley's class "Learning How to Learn"; it will help you approach tough subjects such as mathematics, physics, and programming.

Q12: Would you explain more about AIDL's posting requirement?
A: This is a frustrating topic for many posters, despite their good intentions. I suggest you read through this blog post before posting: http://thegrandjanitor.com/2017/01/26/posting-on-aidl/

Thoughts From Your Humble Administrators - Feb 5, 2017

Last week:

Libratus is the biggest news item this week.  In retrospect, it's probably as huge as AlphaGo.  The surprising part is that it has nothing to do with deep learning.  So it's worth our time to look at it closely.

  • We learned that Libratus crushed human professional players in heads-up no-limit holdem (NLH).  How does it work?  Perhaps the Wired and the Spectrum articles tell us the most.
    • First of all, NLH is not as commonly played as Go, but it is interesting because people play it for real money.  And we are talking about big money: the World Series of Poker holds a yearly tournament in which all top-10 finishers become instant millionaires.  Among pros, holdem is known as the "Cadillac of Poker," a phrase coined by Doyle Brunson, implying that mastering holdem is the key skill in poker.
    • Limit Holdem, which pros generally regard as a "chess"-like game, fell first: Polaris from the University of Alberta bested humans back in 2008.
    • NLH had not fallen until now, so let's think about how you would model NLH in general.  NLH has roughly 10^165 game states, close to Go.  Since a hand has only a handful of streets, you quickly get into what players of other games call the end-game.  The trouble is that, given the large number of possible bet sizes, the game states blow up very easily.
    • So at run-time you can only evaluate a portion of the game tree.  Since betting is continuous, bet sizes are usually discretized so that evaluation is tractable with your compute; this is known as "action abstraction."  An actual bet size outside the abstraction is called an "off-tree" bet, and off-tree bets are mapped to in-tree actions at run-time, a step known as "action translation."  Of course, there are different types of tree evaluation.
    • Now, what is the merit of Libratus; why does it win?  There seem to be three distinct factors; the first two are about the end-game.
      1. There is a new end-game solver (http://www.cs.cmu.edu/~noamb/papers/17-AAAI-Refinement.pdf) which features a new criterion for evaluating the game tree, called Reach-MaxMargin.
      2. Also in the paper, the authors suggest a way to solve an end-game given the player's actual bet size.  So they no longer use action translation to map an off-tree bet into the game abstraction.  This reduces regret considerably.
    • What is the third factor?  As it turns out, in past human-computer matches, humans were able to exploit the machine by noticing its betting patterns.  So the CMU team used an interesting strategy: every night, the team would manually tune the system so that repeated betting patterns were removed.  That confused the human pros.  Dong Kim, the player who fared best against the machine, felt like he was dealing with a different machine every day.
    • These seem to be the reasons the pros were crushed.  Notice that this was a rematch: the pros won by a small margin back in 2015, but the result this time shows a 99.8% chance that the machine is beating humans.  (I am handwaving here because you need to talk about big-blind sizes to talk about winnings.  Unfortunately, I couldn't look them up.)
    • To me, this Libratus win comes very close to saying a computer can beat the best heads-up tournament players.  But poker players will tell you the best players are cash-game players, and heads-up play is not representative because the bread-and-butter games are 6- to 10-player games.  So we will probably hear more about pokerbots in the future.
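As an illustration only (this is not Libratus's actual code, and the bet sizes are made up), a toy sketch of "action abstraction" and "action translation" might look like this: discretize the continuous bet space into a few pot-relative sizes, then map an off-tree bet to the nearest in-tree action.

```python
def build_abstraction(pot):
    """Hypothetical action abstraction: check plus a few pot-relative bets."""
    return [0.0, 0.5 * pot, 1.0 * pot, 2.0 * pot]  # bet sizes in chips

def translate(off_tree_bet, abstraction):
    """Action translation: map an actual ("off-tree") bet to the
    closest abstract action by absolute distance."""
    return min(abstraction, key=lambda b: abs(b - off_tree_bet))

pot = 100
actions = build_abstraction(pot)
print(translate(130, actions))  # 130 maps to the pot-size bet: 100.0
```

The second factor above is precisely about avoiding this lossy `translate` step for the end-game: re-solving the subgame with the opponent's actual bet size removes the distortion (and hence regret) that nearest-action mapping introduces.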

Anyway, that's what I have this week.  We will resume our office hour next week.  Waikit will tell you more in the next couple of days.

Thoughts From Your Humble Administrators - Jan 29, 2017

This week at AIDL:

Must-read:  I would read Stanford's article and the Deep Patient paper in tandem.
