
Should You Learn Lisp?

(Redacted from a post on AIDL.)

Learning programming languages, like learning human languages, or different skills in general, is a way to enlighten yourself. Lisp is a cool language because it does things differently. So sure, in that sense, Lisp is worth your time.

On the other hand, if you want to learn modern-day A.I., perhaps probability and statistics are the first “language” you want to learn well. As one member, Ernest Szeto, said, nowadays A.I. usually uses at least some probability-based logic. And if you think of probability and statistics as a language, they are fairly difficult to learn on their own.
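
To make “probability as a language” concrete, here is a minimal sketch of the kind of probability-based logic Ernest refers to: Bayes’ rule in plain Python. All the numbers are made up for illustration.

    # Bayes' rule: P(disease | positive test), with made-up numbers.
    p_disease = 0.01            # prior: 1% of the population has the disease
    p_pos_given_disease = 0.95  # test sensitivity
    p_pos_given_healthy = 0.05  # false-positive rate

    # Total probability of a positive test.
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))

    # Posterior: despite a "95% accurate" test, one positive result only
    # implies about a 16% chance of actually having the disease.
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(p_disease_given_pos)  # ~0.161

This counter-intuitive result is exactly the kind of reasoning you need to be fluent in before the probabilistic parts of A.I. make sense.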

And yes, at AIDL, we recommend Python as the first language, because it lets you use several stacks in deep learning. You can also use R or Java, but note that there will be a gap between your work and what many people are doing.

Arthur


An AIDL member’s question on 2017-03-29.

(Redacted from an AIDL discussion.)

Q (first asked by Shezad Rayhan; redacted): “I bought some books on ML, Deep Learning, and RL after seeing the reviews on Amazon and Quora.
[Arthur: the OP then listed ~10 books on different subjects such as DL, ML, and RL.]” …..

“I saw a few lectures of Geoffrey Hinton’s Neural Networks course, but I am not sure which textbook matches his lectures.”

A: “Good question. Thanks for writing it up. My 2cts:

You bought too many books. Here are the few books to focus on:

  • Python Machine Learning (Sebastian Raschka)
  • The Elements of Statistical Learning (Trevor Hastie, Robert Tibshirani, Jerome Friedman)
  • Machine Learning: A Probabilistic Perspective (Kevin Murphy)
  • Pattern Recognition and Machine Learning (Christopher Bishop)
  • Deep Learning (Ian Goodfellow, Yoshua Bengio, Aaron Courville)
  • Neural Networks and Learning Machines (Simon Haykin)

I never read Raschka’s or Murphy’s books, but there are many good comments about them. Raschka’s book is more about the practical use of machine learning, so that’s probably the best place to start. As for the other five, if you can read through one of them with ease, you should already be able to get a job or do research somewhere.

To your question about Hinton’s: not every lecture series comes with a textbook. Hinton’s class is unique in the sense that *he* represents a point of view, so you have to delve into his or his students’ papers.

Arthur”


Test Pet Groups I Started for Fun

As you might know, I have been administering AIDL for a while. AIDL has rather strict (self-inflicted? :)) posting guidelines, so many posts on topics such as general neuroscience, AGI and self-driving cars are usually deleted.

Since I am also quite interested in those topics, I decided to start two more test groups as my own link dumps. They are much less elaborate than AIDL, and frankly my knowledge of those topics is amateurish, but they are fun places to hang out and talk about fancier topics. Here are their links:

Computational Neuroscience and Artificial General Intelligence:
https://www.facebook.com/groups/cnagi/

Self Driving Car with Deep Learning: https://www.facebook.com/groups/dlsdc/

Arthur

Categories: AIDL

AIDL – The Biggest Facebook Group on Deep Learning

Just a moment I want to share:

Some of you told me that AIDL is the largest Facebook group on deep learning. The truth is, in terms of members, it was *not until tonight*. As of this moment, we are the biggest group with the keyword “deep learning” in its name. Of course, join rates change, and no one can forbid “Get Rich Quick! Connect 100k Members!” from changing its name….. But as of this moment, we are indeed the biggest deep learning group on Facebook. How about that? 🙂

I want to thank Waikit Lau, “da man”! And of course, all of you who participate in this forum. If you find that you learn something useful in the group, share it with your friends, and subscribe to our tied-in newsletter at www.aidl.io and our YouTube channel (see the pinned post).

Most importantly, spread the technology so that more people are aware of advances in AI/ML/DL.

In any case, good luck, and may Hinton bless you all!

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.


Some Tips on Studying “Deep Learning” by Ian Goodfellow et al.

(Redacted from an answer I gave at AIDL.)

It depends on which chapters you are at. The first two parts are best used as supplementary material for lectures/courses. For example, if you read “Deep Learning” while watching all the videos from Karpathy’s and Socher’s classes, you will learn much more than other students. I think the best lecture series to go with the book is Hinton’s “Neural Networks”.

Part 1 tries to power you through the necessary Math. If you have never taken at least one class on machine learning, that material is woefully inadequate. Consider studying matrix algebra, or more importantly matrix differentiation, first. (Abadir’s Matrix Algebra is perhaps the most relevant.) Then you will make it through the Math more easily. That said, Chapter 4’s example on PCA is quite cute, so read it if you are comfortable with the Math.
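
To give a flavor of that Chapter 4 material, here is a minimal numpy sketch of PCA via eigendecomposition of the covariance matrix. This is my own toy illustration, not code from the book.

    import numpy as np

    # Toy data: 200 points in 2-D with correlated coordinates.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

    # Center the data, then eigendecompose the covariance matrix.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

    # Project onto the top principal component (largest eigenvalue).
    top_pc = eigvecs[:, -1]
    projected = Xc @ top_pc
    print(eigvals, projected.shape)

If you can derive why the top eigenvector of the covariance matrix maximizes the projected variance, you are ready for the book’s Math.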

Part 3 is tough, and for the most part it is a reading for researchers in *unsupervised learning*, which many people believe is the holy grail of the field. You will need to be comfortable with energy-based models, for which I suggest you go through Lectures 11 to 15 of Hinton’s deep learning class first. Methinks: if you don’t like unsupervised learning, you can skip Part 3 for now. Reading Part 3 is more about knowing what other people are talking about in unsupervised learning.
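
For reference, the canonical energy-based model in those lectures is the restricted Boltzmann machine, whose energy and joint distribution are

    E(v, h) = -bᵀv - cᵀh - vᵀWh,        p(v, h) = exp(-E(v, h)) / Z

where v and h are the visible and hidden units, b, c and W are the parameters, and Z is the partition function. If this kind of formula reads naturally to you, Part 3 will be much less intimidating.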

Finally, on how you should see this book: while deep learning is a hot field, make sure you don’t abandon other ideas in machine learning. E.g., I found reinforcement learning and genetic algorithms very useful (and fun). Learning theory is deep and can explain certain things we experience in machine learning. IMO, those topics are at least as interesting as Part 3 of “Deep Learning”. (Thanks to Richard Green at AIDL for his opinion.)

Further reading: Some Quick Impression of Browsing “Deep Learning”.

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.


About The Penrose Chess Puzzle

Fellows, by now you may have seen the Telegraph piece on a chess position purportedly composed by Sir Roger Penrose (first brought up by Dušan Jovanović). On behalf of AIDL, I emailed Mr. James Tagg to verify whether the position was actually created by Sir Roger. Mr. Tagg confirmed that this is indeed the case.

Here is my question in email, and Mr. Tagg’s quotable answer.

* * * * * * * * begin exchange
My email:
“Hey Mr. Tagg,

I administer a Facebook group on Artificial Intelligence and Deep Learning. Recently a member posted an article from the Telegraph, claiming that scientists from the Penrose Institute (PI) contacted the Telegraph and shared an interesting chess position with the public.

Since the Telegraph is not a scientific journal, I am curious whether the whole story is true. To the best of my judgment, the position is legit – humans would easily find it to be a draw, whereas engines such as Stockfish or Fritz would get very confused. Perhaps only deep-learning-type techniques would solve this problem more satisfactorily.

All that said, would you kindly verify that you did share a position with the Telegraph? Of course, I also hope to have a write-up from PI as well. Then I can confirm to my group members that the story is real.

Regards,
Arthur Chan”

Mr. Tagg’s reply:
“The chess position was worked out by Sir Roger Penrose over the course of a few months, and James Tagg tested it on Fritz. The Telegraph newspaper published the story as they have a long history of setting puzzles. Indeed, they set the crossword puzzle that features in the story of codebreakers being recruited for Bletchley Park to work with Alan Turing in World War II.”

* * * * * * * * end exchange

Of course, my apologies to the Telegraph’s staff who specialize in puzzles – I was fact-checking, and it is a good idea to be cautious.

So one bit we know for real – this is indeed a puzzle composed by Sir Roger Penrose, and the Telegraph piece is legitimate. I don’t want to comment on the answer to the puzzle yet, nor do I want to speculate on whether it is indeed a “key to consciousness”. But one thing is for sure: Waikit Lau and I will comment on the position in a future issue of “AIDL Weekly”.

Arthur

Categories: deep learning, deep neural network

Some Quick Impression of Browsing “Deep Learning”

(Redacted from a post I wrote back on Feb 14 at AIDL.)

I have had some leisure lately to browse “Deep Learning” by Goodfellow for the first time. Since it is known as the bible of deep learning, I decided to write a short afterthought post. The notes below are in point form and not too structured.

  • If you want to learn the zen of deep learning, “Deep Learning” is the book. In a nutshell, “Deep Learning” is an introductory-style textbook on nearly every contemporary field in deep learning. It has a thorough chapter covering backprop, and perhaps the best introductory material on SGD, computational graphs and ConvNets (a minimal SGD sketch appears after this list). So the book is very suitable for those who want to further their knowledge after going through 4-5 introductory DL classes.
  • Chapter 2 is supposed to go through the basic Math, but it is unlikely to cover everything the book requires. PRML Chapter 6 seems to be a good preliminary before you start reading the book. If you don’t feel comfortable with matrix calculus, perhaps you want to read “Matrix Algebra” by Abadir as well.
  • There are three parts to the book. Part 1 is all about the basics: math, basic ML, backprop, SGD and such. Part 2 is about how DL is used in real-life applications. Part 3 is about research topics such as E.M. and graphical models in deep learning, and generative models. All three parts deserve your time. The Math and general ML in Part 1 may be better replaced by a more technical text such as PRML, but the rest of the material goes deeper than the popular DL classes. You will also find relevant citations easily.
  • I enjoyed Parts 1 and 2 a lot, mostly because they are deeper and full of interesting details. What about Part 3? While I don’t quite grok all the Math, Part 3 is strangely inspiring. For example, I noticed a comparison of graphical models and NNs. There is also a discussion of how E.M. is used in latent variable models. Of course, there is an extensive survey of generative models, covering difficult models such as the deep Boltzmann machine, the spike-and-slab RBM and many variations. Reading Part 3 makes me want to learn classical machine learning techniques, such as mixture models and graphical models, better.
  • So I will say you will enjoy Part 3 if you are:
    1. a DL researcher in unsupervised learning and generative models, or
    2. someone who wants to squeeze out the last bit of performance through pre-training, or
    3. someone who wants to compare methods such as mixture models or graphical models with NNs.
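
As promised above, here is a minimal sketch of SGD, my own toy illustration of the optimizer the book introduces, fitting a linear regression one random example at a time:

    import numpy as np

    # Toy data: y = X @ true_w plus a little noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=1000)

    # Stochastic gradient descent on squared error.
    w = np.zeros(3)
    lr = 0.01
    for epoch in range(30):
        for i in rng.permutation(len(X)):          # one random example at a time
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]  # gradient of (x.w - y)^2
            w -= lr * grad
    print(w)  # close to true_w

The same loop, with the gradient computed by backprop over a computational graph instead of by hand, is essentially how deep networks are trained.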

Anyway, that’s all I have for now. Maybe I will write a fuller summary in a blog post later on, but enjoy these random thoughts for now.

Arthur

You might also like the resource page and my top-five list. Also check out “Learning machine learning – some personal experience”.
If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.

Categories: AIDL, deep learning, Machine Learning, Thought

AIDL Pinned Post V2

(Just want to keep a record for myself.)

Welcome! Welcome! We are the most active FB group for Artificial Intelligence/Deep Learning, or AIDL. Many of our members are knowledgeable so feel free to ask questions.

We have a tied-in newsletter, https://aidlweekly.curated.co/, and a YouTube channel with a (kinda) weekly show, “AIDL Office Hour”:
https://www.youtube.com/channel/UC3YM5TEbSqIpFGH85d6gjKg

Posting rules are strict at AIDL: your post has to be relevant, accurate and non-commercial (FAQ Q12). Commercial posts are only allowed on Saturdays. If you don’t follow these rules, you might be banned.

FAQ:

Q1: How do I start AI/ML/DL?
A: Step 1: Learn some Math and Programming,
Step 2: Take some beginner classes. e.g. Try out Ng’s Machine Learning.
Step 3: Find some problem to play with. Kaggle has tons of such tasks.
Iterate the above 3 steps until you become bored. From time to time you can share what you learn.

Q2: What is your recommended first class for ML?
A: Ng’s Coursera class; the Caltech edX class and the UW Coursera class are also pretty good.

Q3: What are your recommended classes for DL?
A: Go through at least 1 or 2 ML classes, then go for Hinton’s, Karpathy’s, Socher’s, Larochelle’s and de Freitas’ classes. For deep reinforcement learning, go with Silver’s and Schulman’s lectures. Also see Q4.

Q4: How do you compare different resources on machine learning/deep learning?
A: (Shameless self-promoting plug) Here is an article, “Learning Deep Learning – Top-5 Resources”, written by me (Arthur) on different resources and their prerequisites. I refer to it a couple of times at AIDL, and you might find it useful: http://thegrandjanitor.com/…/learning-deep-learning-my-top…/ . Reddit’s machine learning FAQ has another list of great resources as well.

Q5: How do I use machine learning technique X with language L?
A: Google is your friend. You might also see a lot of us referring you to Google from time to time. That’s because your question is best to be solved by Google.

Q6: Explain concept Y. List 3 properties of concept Y.
A: Google. Also we don’t do your homework. If you couldn’t Google the term though, it’s fair to ask questions.

Q7: What is the most recommended resources on deep learning on computer vision?
A: cs231n; the 2016 edition is the one I recommend. Most other resources you will find are derivative in nature or have glaring problems.

Q8: What are the prerequisites of Machine Learning/Deep Learning?
A: Mostly Linear Algebra and Calculus I-III. In Linear Algebra, you should be good at eigenvectors and matrix operations. In Calculus, you should be quite comfortable with differentiation. You might also want a primer on matrix differentiation before you start, because it is a topic seldom touched in an undergraduate curriculum.
Some people will also argue that Topology is important, and that having a Physics or Biology background could help. But they are not crucial to start.
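
As a taste of matrix differentiation, here is a small numpy check (my own illustration) of the standard identity that the gradient of f(x) = xᵀAx is (A + Aᵀ)x, verified against finite differences:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    x = rng.normal(size=4)

    # Analytic gradient of f(x) = x^T A x.
    analytic = (A + A.T) @ x

    # Central finite differences, one coordinate at a time.
    eps = 1e-6
    numeric = np.zeros(4)
    for i in range(4):
        e = np.zeros(4)
        e[i] = eps
        numeric[i] = ((x + e) @ A @ (x + e) - (x - e) @ A @ (x - e)) / (2 * eps)

    print(np.allclose(analytic, numeric, atol=1e-4))  # True

Being able to derive and sanity-check identities like this one is most of what “comfortable with matrix differentiation” means in practice.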

Q9: What are the cool research papers to read in Deep Learning?
A: We think songrotek’s list is pretty good: https://github.com/son…/Deep-Learning-Papers-Reading-Roadmap. Another classic is deeplearning.net‘s reading list: http://deeplearning.net/reading-list/.

Q10: What is the best/most recommended language in Deep Learning/AI?
A: Python is usually cited as a good language because it has the best library support. Most ML libraries for Python link against C/C++, so you get the best of both flexibility and speed.
Others also cite Java (deeplearning4j), Lua (Torch), Lisp, Golang and R. It really depends on your purpose. Practical concerns, such as code integration and your familiarity with a language, usually dictate your choice. R deserves a special mention because it is widely used in sister fields such as data science, and it is gaining popularity.

Q11: I am bad at Math/Programming. Can I still learn A.I/D.L?
A: Mostly you can tag along, but at a certain point, if you don’t understand the underlying Math, you won’t be able to understand what you are doing. The same goes for programming: if you never implement an algorithm, or trace through one yourself, you will never truly understand why it behaves a certain way.
So what if you feel you are bad at Math? Don’t beat yourself up too much. Take Barbara Oakley’s class “Learning How to Learn”, and you will learn how to study tough subjects such as Mathematics, Physics and Programming.

Q12: Would you explain more about AIDL’s posting requirement?
A: This is a frustrating topic for many posters, despite their good intentions. I suggest you read through this blog post before you post anything: http://thegrandjanitor.com/2017/01/26/posting-on-aidl/

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.

Categories: AIDL, deep learning, Machine Learning, Thought

Thoughts From Your Humble Administrators – Feb 5, 2017

Last week:

Libratus is the biggest news item this week. In retrospect, it’s probably as huge as AlphaGo. The surprising part is that it has nothing to do with deep learning. So it’s worth our time to look at it closely.

  • We learned that Libratus crushed human professional players in heads-up no-limit hold’em (NLH). How does it work? Perhaps the Wired and the Spectrum articles tell us the most.
    • First of all, NLH is not as commonly studied as Go, but it is interesting because people play it for real money. And we are talking about big money: at the World Series of Poker’s yearly tournament, all top-10 finishers become instant millionaires. Among pros, hold’em is known as the “Cadillac of Poker”, a name coined by Doyle Brunson, which implies that mastering hold’em is the key skill in poker.
    • Limit hold’em is a game pros generally think of as “chess”-like. Polaris from the University of Alberta bested humans in it, with three wins, back in 2008.
    • NLH had not fallen until now. So let’s think about how you would model NLH in general. In NLH, the number of game states is 10^165, close to Go’s. Since the game has only a few streets, you easily get into what game players call the end-game. It’s just that, given the large number of possible bet sizes, the game states blow up very easily.
    • So at run-time you can only evaluate a portion of the game tree. Since betting is continuous, bets are usually discretized so that evaluation is tractable with your compute; this is known as “action abstraction”. Actual bet sizes that fall outside the abstraction are called “off-tree” bets, and at run-time they are translated into in-tree abstract actions, a step known as “action translation” (a minimal sketch appears after this list). Of course, there are different types of tree evaluation.
    • Now, what is the merit of Libratus? Why does it win? There seem to be three distinct factors; the first two are about the end-game.
      1. There is a new end-game solver (http://www.cs.cmu.edu/~noamb/papers/17-AAAI-Refinement.pdf) which features a new criterion for evaluating the game tree, called Reach-MaxMargin.
      2. Also in the paper, the authors suggest a way to solve an end-game given the player’s actual bet size, so they no longer use action translation to map an off-tree bet into the game abstraction. This considerably reduces “regret”.
    • What is the third factor? As it turns out, in past human-computer games, humans were able to exploit the machine easily by noticing its betting patterns. So the CMU team used an interesting strategy: every night, the team would manually tune the system so that repeated betting patterns were removed. That confused the human pros, and Dong Kim, the best player against the machine, felt like he was dealing with a different machine every day.
    • These seem to be the reasons why the pros were crushed. Notice that this was a rematch: the pros won by a small margin back in 2015, but the result this time shows a 99.8% chance that the machine is beating humans. (I am handwaving here because you need to talk about big-blind sizes to talk about winnings; unfortunately I couldn’t look them up.)
    • To me, this Libratus win comes very close to saying that computers can beat the best heads-up tournament players. But poker players will tell you the best players are cash-game players, and heads-up play is not representative because the bread-and-butter games are 6-to-10-player games. So we will probably hear more about pokerbots in the future.
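
As referenced above, here is a minimal sketch of action abstraction and action translation. This is my own simplification for illustration, not Libratus’s actual algorithm (real systems use more sophisticated, often randomized, translation):

    # Action abstraction: the game tree only contains a few bet sizes,
    # expressed here relative to a 100-chip pot.
    POT = 100.0
    ABSTRACT_BETS = [0.5 * POT, 1.0 * POT, 2.0 * POT, 4.0 * POT]

    def translate(off_tree_bet: float) -> float:
        """Action translation: map an actual ("off-tree") bet to the
        nearest in-tree bet size."""
        return min(ABSTRACT_BETS, key=lambda b: abs(b - off_tree_bet))

    # An opponent bet of 130 into a 100 pot is treated as a pot-sized bet.
    print(translate(130.0))  # 100.0

The second end-game improvement above amounts to skipping this translation step entirely: the solver re-solves the end-game with the opponent’s actual bet size in the tree.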

Anyway, that’s what I have this week.  We will resume our office hour next week.  Waikit will tell you more in the next couple of days.

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.

Categories: AIDL, deep learning, Machine Learning, Thought

Thoughts From Your Humble Administrators – Jan 29, 2017

This week at AIDL:

Must-read: I would read the Stanford article and the Deep Patient paper in tandem.

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm. Together with Waikit Lau, I maintain the Deep Learning Facebook forum. Also check out my awesome employer: Voci.