Categories
Uncategorized

My Social Network Policy

For years, my social networks haven't followed a single theme.  I have a variety of interests and never felt like pushing any particular news ……. until deep learning came along.   Since Waikit and I started AIDL on Facebook, with a newsletter and a YouTube channel, I have also started to see more traffic coming to thegrandjanitor. Of course, there are also more legitimate followers on Twitter.

In any case, here are a couple of ways you can find me:
Facebook: I am private on Facebook. I don’t PM, but you can always find me at the AIDL group.
LinkedIn:  On the other hand, I am very public on LinkedIn. So feel free to contact me at https://www.linkedin.com/in/arthchan2003/ .
Twitter: I am quite public on Twitter https://twitter.com/arthchan2003.
Plus: Not too active, but yeah I am there https://plus.google.com/u/0/+ArthurChan.

Talk to you~
Arthur

 

Categories
Uncategorized

Good Old AI vs DNN – A Question from an AIDL Member

Redacted from this discussion at AIDL.

From Ardian Umam (shortened, rephrased):
“Now I’m taking an AI course at my university using Peter Norvig and Stuart J. Russell’s textbook. At the same time, I’m learning about DNNs (Deep Neural Networks) for visual recognition by watching Stanford’s lectures on CNNs (Convolutional Neural Networks), and seeing how powerful a DNN is at learning from a dataset. Meanwhile, in the AI class, I’m learning about KBs (Knowledge Bases), including topics such as logical agents and first-order logic, which in short is about inferring a certain x from the KB, for example using “propositional resolution”.

My question: “Are techniques like the ones I describe from my AI class above good for solving real AI problems?” I still don’t have a strong intuition about how what I study in the AI class applies to real AI problems.”

Our exchange:

My answer: “We usually call the kind of techniques you describe GOAI (Good Old Artificial Intelligence). So your question is whether GOAI is still relevant.

Yes, it is. Let’s take search as an example. More complicated systems usually have some component that is a search. E.g. many speech recognizers these days still use the Viterbi algorithm, which is a large-scale search. NMT-type techniques still require some kind of stack decoding. (Edit: this originally said beam search, but I am not quite sure.)
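(To make the search idea concrete: below is a minimal Python sketch of the Viterbi algorithm over a toy HMM. The states, probabilities and observation symbols are made up for illustration; a real recognizer runs the same dynamic-programming search over a huge decoding graph.)

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, obs):
    """Find the most likely hidden state path for an observation sequence.

    log_init:  (S,)   log initial state probabilities
    log_trans: (S, S) log transition probabilities, row = from-state
    log_emit:  (S, V) log emission probabilities
    obs:       list of observation symbol indices
    """
    S, T = len(log_init), len(obs)
    score = np.full((T, S), -np.inf)    # best log-prob of a path ending in state s at time t
    back = np.zeros((T, S), dtype=int)  # backpointers to recover the best path
    score[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            cand = score[t - 1] + log_trans[:, s]  # extend every path by one transition
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + log_emit[s, obs[t]]
    # Trace back from the best final state.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path))

# Toy example: 2 states, 2 observation symbols.
log_init = np.log([0.6, 0.4])
log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])
log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(log_init, log_trans, log_emit, [0, 1, 1]))
```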

More importantly, you can see many things as a search. E.g. you can optimize a function with calculus, but in practice you often use a search algorithm to find the best solution. Of course, in real life you rarely apply beam search to optimization, but the idea of search gives you a better feel for what many ML algorithms are like.”
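(And here is the “optimization as search” idea as a toy sketch: a naive hill-climbing search that minimizes a function by probing random neighbours, with no calculus involved. The function and step size are arbitrary choices for illustration.)

```python
import random

def hill_climb(f, x0, step=0.5, iters=2000):
    """Minimize f by local search: repeatedly try a random neighbour
    and move there if it is better. No derivatives required."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)  # propose a neighbouring point
        fc = f(cand)
        if fc < fx:                             # greedy: keep only improvements
            x, fx = cand, fc
    return x, fx

# Example: the minimum of (x - 3)^2 is at x = 3; the search finds it numerically.
best_x, best_f = hill_climb(lambda x: (x - 3.0) ** 2, x0=0.0)
print(best_x, best_f)
```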

AU: “Ah, I see. Thank you Arthur Chan for your reply. Yes, search is still relevant: many real problems are still solved with search approaches. As for “knowledge and reasoning” (covered in the Norvig book), for example using “propositional resolution” to do inference from a KB (Knowledge Base), is it still relevant?”

My answer: “I think the answer is both yes and no. Here is a tl;dr answer:

It is not: many practical systems these days are probabilistic, which makes Part V of Norvig’s book *feel* more relevant now. Most people in this forum are ML/DL fans, and that is probably the first impression you should have of the field these days.

But then, it is also relevant. In what sense? There are perhaps three reasons. The first is that it allows you to talk with people who learned A.I. in the last generation: people in their 50s and 60s (a.k.a. your boss) learned to solve AI problems with logic, so if you want to talk with them, learning logic/knowledge-based systems helps. Also, in AI no one knows which topics will revive. E.g. fractals are now among the least talked-about topics in our community, but you never know what will happen in the next 10-20 years, so keeping breadth is a good thing.

Then there is how you learn to think about search: in Norvig and Russell’s book, the first few search problems are logic problems such as first-order logic and chess. While these appear in fewer systems than searches involving probabilities, they are much easier to understand. E.g. you may have heard of people writing their first chess engine in their teens, but I have never heard of anyone writing a (good) speech recognizer or machine translator before grad school.

The final reason is perhaps more theoretical: many of the DL/ML systems you use are powerful, but not all of them make decisions that humans can understand. They are not *interpretable*, and that is a big problem. How to link these systems to GOAI-type work is still an open research question.”
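(For readers who want to see what propositional resolution actually looks like, here is a minimal toy sketch, my own illustration rather than anything from the textbook: clauses are sets of integer literals, with a negative integer standing for a negated proposition, and a query is proved by refutation.)

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of integer literals,
    where -p is the negation of proposition p)."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def entails(kb, query):
    """Resolution by refutation: KB entails query iff KB plus the
    negated query derives the empty clause."""
    clauses = set(kb) | {frozenset([-query])}
    while True:
        new = set()
        for a in list(clauses):
            for b in list(clauses):
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: query is not entailed
            return False
        clauses |= new

# Toy KB with propositions 1 = "rains", 2 = "wet":
# clauses are (~rains OR wet) and (rains).
kb = [frozenset([-1, 2]), frozenset([1])]
print(entails(kb, 2))  # True: the KB entails "wet"
```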

Categories
Uncategorized

Should You Learn Lisp?

(Redacted from a post on AIDL.)

Learning programming languages, like learning human languages or any new skill, is a way to broaden your mind. Lisp is a cool language because it does things differently. So sure, in that sense, Lisp is worth your time.

On the other hand, if you do want to learn modern-day A.I., perhaps probability and statistics are the first “language” you want to learn well. As one member, Ernest Szeto, said, nowadays A.I. usually uses at least some probability-based logic. And if you think of probability and statistics as a language, it is a fairly difficult one to learn on its own.

And yes, at AIDL we recommend Python as the first language, because it gives you access to several deep learning stacks. You can also use R or Java, but be aware that there will be a gap between your work and what most people are doing.

Arthur

Categories
Uncategorized

An AIDL Member’s Question, 2017-03-29

(Redacted from an AIDL discussion.)

Q (First asked by Shezad Rayhan, redacted): “I bought some books on ML, Deep Learning, and RL based on the reviews on Amazon and Quora.
[Arthur: the OP then listed out ~10 books on different subjects such as DL, ML, RL.]” …..

“I watched a few lectures of Geoffrey Hinton’s Neural Networks course, but I am not sure which textbook matches his lectures.”

A: “Good question. Thanks for writing it up. My 2 cents:

You bought too many books. Here are the few books to focus on:

  • Python Machine Learning (Sebastian Raschka)
  • The Elements of Statistical Learning (Trevor Hastie, Robert Tibshirani, Jerome Friedman)
  • Machine Learning: A Probabilistic Perspective (Kevin Murphy)
  • Pattern Recognition and Machine Learning (Christopher Bishop)
  • Deep Learning (Ian Goodfellow, Yoshua Bengio, Aaron Courville)
  • Neural Networks and Learning Machines (Simon Haykin)

I have never read Raschka’s or Murphy’s books, but they get many good comments. Raschka’s book is more about the practical use of machine learning, so that’s probably the best place to start. As for the other five, if you can read through any one of them with ease, you should already be able to get a job or do research somewhere.

To your question about Hinton’s course: not every lecture series comes with a textbook. Hinton’s class is unique in the sense that *he* represents a point of view, so you have to delve into his and his students’ papers.

Arthur”

Categories
Uncategorized

Test Pet Groups I Started for Fun

As you might know, I have been administering AIDL for a while. AIDL has rather strict (self-inflicted? :)) posting guidelines, so posts on topics such as general neuroscience, AGI and self-driving cars are usually deleted.

Since I am also quite interested in those topics, I decided to start two more test groups as my own link dumps.  They are much less elaborate than AIDL, and frankly my knowledge of those topics is amateurish, but they are fun places to hang out and talk about fancier topics.   Here are their links:

Computational Neuroscience and Artificial General Intelligence:
https://www.facebook.com/groups/cnagi/

Self Driving Car with Deep Learning: https://www.facebook.com/groups/dlsdc/

Arthur

Categories
Uncategorized

Some Tips on Studying “Deep Learning” by Ian Goodfellow et al.

(Redacted from an answer I gave at AIDL.)

It depends on which chapters you are at. The first two parts work better as supplementary material for lectures/courses. For example, if you read Deep Learning while watching all the videos from Karpathy’s and Socher’s classes, you will learn much more than other students. I think the best lecture series to go with it is Hinton’s “Neural Networks”.

Part 1 tries to power you through the necessary math. If you have never taken at least one class in machine learning, that material is woefully inadequate. Consider studying matrix algebra, or more importantly matrix differentiation, first. (Abadir’s Matrix Algebra is perhaps the most relevant.) Then you will make it through the math more easily. That said, Chapter 4’s example on PCA is quite cute, so read it if you are comfortable with the math.
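(If you want to sanity-check your linear algebra against that example, here is a minimal numpy sketch of PCA via the singular value decomposition; the random data is only for illustration.)

```python
import numpy as np

def pca(X, k):
    """Project X (n samples x d features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                     # top-k right singular vectors = principal directions
    return Xc @ W, W                                 # projected data and the projection matrix

X = np.random.randn(200, 5)
Z, W = pca(X, k=2)
print(Z.shape)  # (200, 2)
```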

Part 3 is tough, and for the most part it is reading for researchers in *unsupervised learning*, which many people believe is the holy grail of the field. You will need to be comfortable with energy-based models, for which I suggest you first go through Lectures 11 to 15 of Hinton’s course. My take: if you don’t like unsupervised learning, you can skip Part 3 for now. Reading Part 3 is more about knowing what other people in unsupervised learning are talking about.
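(For a first taste of what “energy-based model” means before diving into those lectures, here is a tiny sketch, my own illustration rather than anything from the book, of the energy function of a restricted Boltzmann machine, E(v, h) = -b·v - c·h - vᵀWh. Lower energy means the model considers the configuration more probable. All dimensions and parameters below are made up.)

```python
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Energy of a visible/hidden configuration in a restricted Boltzmann machine.
    Low energy = a configuration the model considers likely."""
    return -b @ v - c @ h - v @ W @ h

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))        # visible-hidden weights
b = rng.normal(size=6)             # visible biases
c = rng.normal(size=4)             # hidden biases
v = rng.integers(0, 2, size=6)     # a binary visible vector
h = rng.integers(0, 2, size=4)     # a binary hidden vector
print(rbm_energy(v, h, W, b, c))
```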

Finally, it is about how you see this book: while deep learning is a hot field, make sure you don’t abandon other ideas in machine learning. E.g. I find reinforcement learning and genetic algorithms very useful (and fun). Learning theory is deep and can explain certain things we experience in machine learning. IMO, those topics are at least as interesting as Part 3 of Deep Learning. (Thanks to Richard Green at AIDL for his opinion.)

Further reading: Some Quick Impression of Browsing “Deep Learning”.

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm.  Together with Waikit Lau, I maintain the Deep Learning Facebook forum.  Also check out my awesome employer: Voci.

Categories
Uncategorized

About The Penrose Chess Puzzle

Fellows, by now you may have seen the Telegraph piece on a chess position purportedly composed by Sir Roger Penrose (first brought up by Dušan Jovanović). On behalf of AIDL, I emailed Mr. James Tagg to verify whether the position was actually created by Sir Roger. Mr. Tagg confirmed that this is indeed the case.

Here are my question by email and Mr. Tagg’s quotable answer.

* * * * * * * * begin exchange
My email:
“Hey Mr. Tagg,

I administer a Facebook group on Artificial Intelligence and Deep Learning. Recently a member posted an article from the Telegraph, claiming that scientists from the Penrose Institute (PI) contacted the Telegraph and shared an interesting chess position with the public.

Since the Telegraph is not a scientific journal, I am curious whether the whole story is true. To the best of my judgment, the position is legit – humans would easily find it to be a draw, whereas engines such as Stockfish or Fritz would get very confused. Perhaps only a deep learning type of technique would solve this problem more satisfactorily.

All that said, would you kindly verify that you did share the position with the Telegraph? I also hope to have a write-up from PI as well; then I can confirm to my group members that the story is real.

Regards,
Arthur Chan”

Mr. Tagg’s reply:
“The chess position was worked out by Sir Roger Penrose over the course of a few months, and James Tagg tested it on Fritz. The Telegraph newspaper published the story as they have a long history of setting puzzles. Indeed, they set the crossword puzzle that features in the story of codebreakers being recruited for Bletchley Park to work with Alan Turing in World War II.”

* * * * * * * * end exchange

Of course, my apologies to the Telegraph staff who specialize in puzzles – I was fact-checking, and it is a good idea to be cautious.

So one bit we know for real: this is indeed a puzzle composed by Sir Roger, and the Telegraph piece is legitimate. I don’t want to comment on the answer to the puzzle yet, nor do I want to speculate on whether it is indeed a “key to consciousness”. But one thing is for sure: Waikit Lau and I will comment on the position in a future issue of “AIDL Weekly”.

Arthur

Categories
Uncategorized

Posting on AIDL

(This is adapted from my post on AIDL; I decided to turn it into a blog post so that I can refer to it easily.)

In this post, I want to address the issue of posting at AIDL, and more particularly why your posts are sometimes deleted or have their comments turned off, and why you sometimes see sad faces from me.

To set the premise: we are now a fairly big group (6500+), and both Waikit Lau and I want to keep the group public.  We want you to have a certain freedom to post your thoughts first without going through us, the administrators.

Of course, “with great power comes great responsibility”. Such freedom of posting brings a lot of abuse: AIDL is constantly spammed, or at the very least over-posted. So our counter-measure is to check all posts for whether they are 1) relevant, 2) non-commercial, and 3) correct. All three criteria require some judgment calls, but this is how I interpret them:

  1. Relevancy: Your post has to be related to AI or DL. Since ML is a fundamental skill for DL and can be a part of AI, ML posts are welcome. This rules out many posts, even if they are “interesting” by other standards. Note that we are a *niche group*: even if your post contains a very interesting theory about general relativity, we can’t quite keep it.
  2. Non-commercial: If you post anything related to money, we can only allow it on Saturdays. Solicitations for surveys or conferences, and links to sites that require signups, all implicitly involve money. We have very strict rules on such commercial posts, so be advised that you should only post them on Saturday.
  3. Correctness: This criterion is meant to tackle fake news, which is rampant on Facebook. It is perhaps the part that frustrates people the most, because if you are not careful, the web can easily trick you into believing falsehoods. Just consider the recent news that “AI is causing job losses”. Once you go to the sources, most of the reputable ones were actually arguing that “automation/computerisation is causing job losses”. “Automation” and “AI” are very different concepts, and I can’t quite see how a discussion of “automation” is relevant to us (back to Point 1).

Failing these three criteria usually explains why some posts are gone. If a post has many likes/shares, I might consider just giving it a “sad face” comment. Most of the time, though, I just delete it. Sorry if this appears rude, but it’s a necessity given our group’s volume.

So how do you avoid these issues? My suggestion is to check your post carefully. Is it the original source? If not, does the text distort the original? Does the post have any clickbait? Remember, if you post something at AIDL, you will be asked to give us the source. From time to time, I will also fact-check sensational statements.

One last point: admittedly such curatorship requires subjective judgment, and frankly I could be wrong. So if you feel strongly about your post, do PM me; I will take the time to review your post with you.

Thanks,
Arthur Chan

Categories
Uncategorized

Thoughts from Your Humble Administrators – Jan 22 2017

This week at AIDL:

Must-Read (not really): Kristen Stewart’s involvement in AI (a style transfer paper). It makes people wonder whether she can get the lowest Erdős–Bacon number. 🙂

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm.  Together with Waikit Lau, I maintain the Deep Learning Facebook forum.  Also check out my awesome employer: Voci.

Categories
Uncategorized

Thoughts from Your Humble Administrators – Jan 15

I (Arthur) have been traveling, but there are several newsworthy events:

  • How should we view Microsoft’s acquisition of Maluuba?  (First brought up by Zubair Ahmed.) My speculation is that MS is trying to tackle unresolved issues in both QA and reading comprehension.  Maluuba recently released their NewsQA dataset.  The goal-oriented dialogue dataset also sounds fairly interesting.
  • For the most part though, Maluuba, just like DeepMind and MetaMind, is a “research startup”.   It generates no bookings. Thus you may think big companies are trying to snatch 1) the research and 2) the researchers in this kind of startup.   The rest, perhaps, is really hype……
  • There are two pieces of news on deep-learning-based poker bots, one from the University of Alberta (DeepStack) and one from CMU (CMU news, The Verge).  CMU’s Libratus is cleaning up against the pros, with a crushing difference in earnings: $81,716 to the humans’ $7,228.    Libratus’ details are out of reach, but DeepStack’s method is quite similar to AlphaGo’s: a DNN is built as the approximation function.
  • Several interesting resources this week:
  • Finally, perhaps the most interesting event within our group: Waikit Lau and I are going to hold an online session.  So far, AIDLers seem to be very interested.  (The response is overwhelming. 🙂)  We will disclose more details in the next week or so.

Must read: Times interview with Andrew Ng

If you like this message, subscribe to the Grand Janitor Blog’s RSS feed. You can also find me (Arthur) on Twitter, LinkedIn, Plus, and Clarity.fm.  Together with Waikit Lau, I maintain the Deep Learning Facebook forum.  Also check out my awesome employer: Voci.