AIDL Weekly #40 – Special Issue on NIPS 2017

Editorial

Thoughts From Your Humble Curators

Last week was the week of NIPS 2017, and we chose five links from the conference for this issue.

The news this week is all about hardware: we point you to the new Titan V, the successor in the Titan series. Elon Musk is also teasing us with what he calls the best AI hardware in the world. Let’s take a closer look.

And as you might have read elsewhere: is Google building an AI that can build another AI? Our fact-checking section will tell you more.

Finally, we cover two papers this week:

  • The new DeepMind paper describing how AlphaZero became a master of chess and shogi as well as Go,
  • Fixing weight decay regularization in Adam.

Join our community for real-time discussions via this iOS app: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe/forward it to your colleagues.

News


Fact-checking

On Google’s “AI Built an AI That Outperforms Any Made by Humans”

For those who are new to AIDL: AIDL has what we call “The Three Pillars of Posting”, i.e. we require members to post articles that are relevant, non-commercial, and non-sensational. When a sensationalized piece of news starts to spread, an admin of AIDL (in this case, Arthur) fact-checks the relevant literature and source material and decides whether it should be rejected. This time we are going to fact-check a popular yet misleading piece, “AI Built an AI That Outperforms Any Made by Humans”.

  • The first thing to notice: “AI Built an AI That Outperforms Any Made by Humans” was published by a site that has historically sensationalized news. The same site was involved in sensationalizing the early version of AutoML, as well as the notorious “AI learns language” fake-news wave.
  • So what is it this time? It all starts with Google’s AutoML, first published in May 2017. If you read the post carefully, you will notice that it is basically an architecture-tuning technique driven by reinforcement learning. At the time, the research only worked on CIFAR-10 and Penn Treebank.
  • But then Google released another version of AutoML in November. The gist is that Google beat state-of-the-art results on COCO and ImageNet. Of course, if you are a researcher, you will simply interpret this as “automatic tuning has become a thing; it could be a staple of future evaluations.” The resulting model is now distributed as NASNet.
  • Unfortunately, this is not how popular outlets interpreted it. Sites were claiming “AI Built an AI That Outperforms Any Made by Humans”; even more outrageously, some claimed “AI is creating its own ‘AI child’”. Both claims are false. Why?
  • As we just said, Google’s program is an RL-based controller that proposes child architectures, so the parent program is still built by humans. That refutes the first claim: someone wrote a tuning program, a sophisticated one, but still a tuning program. (See the sketch after this list for what such a loop looks like.)
  • And if you are imagining “AI is building itself!” with the imagery of AI now self-replicating, you could not be more wrong. Remember that the child architectures are used for other tasks such as image classification; these “children” do not create yet another generation of descendants.
  • A much less confusing way to put it: “Google’s RL-based AI can now tune models better than humans on some tasks.” Don’t get us wrong, this is still an exciting result, but it gives no sense of a machine procreating.
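
To make the point concrete, here is a minimal sketch of what such a controller/child loop looks like. This is our illustration, not Google’s implementation: the search space, the proxy reward, and every name below are hypothetical stand-ins, and the real controller is a recurrent network trained on full child-network training runs.

    # Schematic controller/child loop in the style of AutoML-like neural
    # architecture search. NOT Google's code: the search space, the proxy
    # reward, and all names here are hypothetical stand-ins.
    import math
    import random

    SEARCH_SPACE = {
        "num_layers": [2, 4, 8],
        "filter_size": [3, 5, 7],
        "activation": ["relu", "tanh"],
    }

    # The "parent": human-written code keeping a preference score per choice.
    prefs = {k: {v: 0.0 for v in vals} for k, vals in SEARCH_SPACE.items()}

    def sample_architecture():
        """Propose a child architecture, sampling softmax-weighted choices."""
        arch = {}
        for key, vals in SEARCH_SPACE.items():
            weights = [math.exp(prefs[key][v]) for v in vals]
            arch[key] = random.choices(vals, weights=weights)[0]
        return arch

    def train_and_evaluate(arch):
        """Stand-in for training the child and measuring validation accuracy.
        The real system trains a full network on e.g. CIFAR-10 here; we use a
        hand-coded proxy so the sketch runs instantly."""
        return 0.5 + 0.05 * arch["num_layers"] / 8 - 0.01 * abs(arch["filter_size"] - 5)

    for step in range(200):
        child = sample_architecture()
        reward = train_and_evaluate(child)
        # Crude preference update standing in for the policy-gradient
        # (REINFORCE) training of the real controller: sampled choices are
        # bumped in proportion to the reward they produced.
        for key, choice in child.items():
            prefs[key][choice] += 0.5 * reward

    best = {k: max(v, key=v.get) for k, v in prefs.items()}
    print("Controller's preferred child architecture:", best)

Nothing in this loop replicates itself: the controller is ordinary human-written code, and the only thing “built by AI” is the child configuration it outputs.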

We hope this article clears up the matter. We rate the claim “AI Built an AI That Outperforms Any Made by Humans” false.

Here is Google’s original post.

NIPS 2017

Blog Posts

Open Source

Member’s Question

How do you read Duda and Hart’s “Pattern Classification”?

Question (rephrased): I was reading the book “Pattern Classification” by Duda and Hart, but I find it difficult to follow the mathematics. What should I do?

Answer: (by Arthur) You are reading a good book – Duda and Hart is known as one of the Bibles of the field – but perhaps it is slightly beyond your skill at this point.

My suggestion is to first make sure you understand basic derivations such as linear regression and the perceptron; see the sketch below. Also, if you stay stuck on the book for a long time, try going through Andrew Ng’s Machine Learning course. Granted, the course is much easier than Duda and Hart, but it gives you an outline of what you are trying to prove.
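
For example, here is a minimal sketch of the perceptron learning rule on a toy problem. This is our illustration rather than an excerpt from the book:

    # Minimal perceptron learning rule on a toy problem (learning AND).
    # Update: w <- w + lr * (y - y_hat) * x, with labels y in {0, 1}.
    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(20):
        for x, y in data:
            err = y - predict(w, b, x)  # 0 when correct, +1/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err

    print(w, b)  # weights and bias of a separating hyperplane for AND

If you can work out why this update converges for linearly separable data (the perceptron convergence theorem), you are in good shape for the harder derivations in Duda and Hart.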

One specific piece of advice on the derivation of neural networks: I recommend reading Chapter 2 of Michael Nielsen’s book first, because he is very good at defining clear notation. For example, the meaning of the letter z changes across textbooks, but knowing exactly what it means is crucial for following a derivation.
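
For reference, Nielsen fixes z to mean the weighted input to a layer:

    z^l = w^l a^{l−1} + b^l,    a^l = σ(z^l)

where a^{l−1} is the previous layer’s activation. Other texts use the same letter for different quantities, which is why pinning the notation down first makes the backpropagation derivation much easier to follow.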

Paper/Thesis Review