
AIDL Weekly #41 – How Does Everyone See Deep Learning in 2017?


AIDL Weekly #40 – Special Issue on NIPS 2017

Editorial

Thoughts From Your Humble Curators

Last week was the week of NIPS 2017, and we chose five links from the conference for this issue.

The news this week is all about hardware: we point you to the new Titan V, the successor to the Titan series, and Elon Musk is also teasing us with what he claims is the world’s best AI hardware. Let’s take a closer look.

And as you might have read elsewhere: is Google building an AI that can build another AI? Our fact-checking section will tell you more.

Finally, we cover two papers this week:

  • The new DeepMind paper describing how AlphaZero also became a master of chess and shogi,
  • Fixing weight decay regularization in Adam (a short sketch of the idea follows this list).
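To make the second item concrete: the paper’s core observation is that the common way of adding weight decay to Adam (folding an L2 penalty into the gradient) interacts badly with Adam’s adaptive scaling, and the proposed fix is to apply the decay as a separate, decoupled step. Below is a minimal, illustrative NumPy sketch of one such decoupled update; the naming and defaults are ours, not the authors’ implementation.

    import numpy as np

    def adamw_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, weight_decay=1e-2):
        """One Adam update with decoupled weight decay (illustrative only)."""
        # Moment estimates use the raw gradient -- no L2 term is added to it.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # The decay is applied directly to the weights, outside the adaptive part.
        w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w)
        return w, m, v

The only difference from Adam-with-L2 is that the weight_decay * w term appears in the parameter update itself instead of being added to grad.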

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe/forward it to your colleagues.

Artificial Intelligence and Deep Learning Weekly

News


Factchecking

On Google’s “AI Built an AI That Outperforms Any Made by Humans”

For those who are new to AIDL: AIDL has what we call “The Three Pillars of Posting”, i.e. we require members to post articles that are relevant, non-commercial and non-sensational. When a sensationalized piece of news starts to spread, an AIDL admin (in this case, Arthur) fact-checks the relevant literature and source material and decides whether the piece should be rejected. This time we are going to fact-check a popular yet misleading piece, “AI Built an AI That Outperforms Any Made by Humans”.

  • The first thing to notice: “AI Built an AI That Outperforms Any Made by Humans” was published by a site that has historically sensationalized news. The same site was involved in sensationalizing the early version of AutoML, as well as the notorious “AI learns its own language” fake-news wave.
  • So what is it this time? Well, it all starts with Google’s AutoML, published in May 2017. If you look at the page carefully, you will notice that it is basically just a tuning technique that uses reinforcement learning. At the time, the research only worked on CIFAR-10 and Penn Treebank.
  • But then Google released another version of AutoML in November. The gist is that Google beat state-of-the-art (SOTA) results on COCO and ImageNet. Of course, if you are a researcher, you will simply interpret it as “Oh, automatic tuning has now become a thing; it could be a staple of future evaluations!” The model is now distributed as NASNet.
  • Unfortunately, this is not how popular outlets interpreted it. For example, sites were claiming “AI Built an AI That Outperforms Any Made by Humans”. Even more outrageous, some sites claimed that “AI is creating its own ‘AI child’”. Both claims are false. Why?
  • As we just said, Google’s system is an RL-based parent program that proposes the child architecture; isn’t that parent program still built by humans? So the first claim does not hold. Someone wrote a tuning program; however sophisticated it is, it is still a tuning program.
  • And if you are imagining “Oh, AI is building itself!” and picturing AI that is now self-replicating, you could not be more wrong. Again, remember that the child architecture is used for other tasks such as image classification. These “children” do not create yet another generation of descendants.
  • A much less confusing way to put it: “Google’s RL-based system can now tune models better than humans on some tasks.” Don’t get us wrong, this is still an exciting result, but it gives no sense of “machines procreating” (see the sketch after this list for how the parent/child loop actually works).
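To make the parent/child relationship concrete, here is a heavily simplified, self-contained toy sketch of the kind of loop described above: a “controller” policy samples a child configuration from a small search space, the child is trained and scored, and the score is fed back to the controller as a reinforcement-learning reward. Everything here (the toy task, the search space, the function names) is our own illustration, not Google’s AutoML code.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_and_evaluate(hidden_units, lr):
        """Toy 'child': train a tiny one-hidden-layer net on synthetic data
        and return its accuracy. The real children are full image models."""
        X = rng.normal(size=(400, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)
        W1 = rng.normal(scale=0.1, size=(10, hidden_units))
        w2 = rng.normal(scale=0.1, size=hidden_units)
        for _ in range(200):
            h = np.tanh(X @ W1)
            p = 1.0 / (1.0 + np.exp(-(h @ w2)))
            g = (p - y) / len(y)                               # d(loss)/d(logit)
            w2 -= lr * (h.T @ g)
            W1 -= lr * (X.T @ (np.outer(g, w2) * (1.0 - h ** 2)))
        p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ w2)))
        return float(((p > 0.5) == y).mean())                  # controller's reward

    # Toy 'controller': a softmax policy over a tiny search space of child
    # configurations, updated with REINFORCE using the child's score as reward.
    search_space = [(8, 0.1), (8, 0.5), (32, 0.1), (32, 0.5)]  # (hidden units, lr)
    logits, baseline = np.zeros(len(search_space)), 0.0
    for step in range(30):
        probs = np.exp(logits) / np.exp(logits).sum()
        k = rng.choice(len(search_space), p=probs)             # propose a child
        reward = train_and_evaluate(*search_space[k])          # train and score it
        baseline = 0.9 * baseline + 0.1 * reward               # variance reduction
        grad_log_pi = -probs
        grad_log_pi[k] += 1.0
        logits += 0.5 * (reward - baseline) * grad_log_pi      # policy-gradient step
    print("controller's preferred child config:", search_space[int(np.argmax(logits))])

The parent here is obviously human-written code, and the “child” it proposes is just a configuration that gets trained and evaluated, which is exactly why the “AI building AI” framing is misleading.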

We hope this article clears up the matter. We rate the claim “AI Built an AI That Outperforms Any Made by Humans” false.

Here is Google’s original post.

Artificial Intelligence and Deep Learning Weekly

NIPS 2017





Blog Posts



Open Source


Member’s Question

How do you read Duda and Hart’s “Pattern Classification”?

Question (rephrased): I am reading the book “Pattern Classification” by Duda and Hart, but I find it difficult to follow the mathematics. What should I do?

Answer (by Arthur): You are reading a good book – Duda and Hart is known as one of the Bibles of the field – but perhaps it is slightly beyond your skill at this point.

My suggestion is to make sure you understand basic derivations such as linear regression and the perceptron. Also, if you stay stuck on the book for a long time, try going through Andrew Ng’s Machine Learning course. Granted, the course is much easier than Duda and Hart, but it gives you an outline of what you are trying to prove.

One specific piece of advice on the derivation of neural networks: I recommend reading Chapter 2 of Michael Nielsen’s book first, because he is very good at defining clear notation. For example, the meaning of the letter z changes between textbooks, but knowing exactly what it means is crucial to following a derivation (see the short sketch below).
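As a concrete illustration of the notation point, here is a tiny sketch in Nielsen’s convention, where z denotes a layer’s weighted input and a its activation; the numbers are our own toy example, not code from any of the books mentioned.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One layer in Nielsen's notation (Chapter 2):
    #   z^l = W^l a^{l-1} + b^l   (the "weighted input" to layer l)
    #   a^l = sigma(z^l)          (the activation, i.e. the layer's output)
    # Some other texts use z for the output itself, which is why pinning the
    # notation down first makes the backpropagation derivation much easier.
    a_prev = np.array([0.5, -1.0, 2.0])    # a^{l-1}: previous layer's output
    W = 0.1 * np.ones((2, 3))              # W^l: weights (2 units, 3 inputs)
    b = np.zeros(2)                        # b^l: biases
    z = W @ a_prev + b                     # z^l
    a = sigmoid(z)                         # a^l
    print("z^l =", z, " a^l =", a)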

Artificial Intelligence and Deep Learning Weekly

Paper/Thesis Review


 


AIDL Weekly #39 – Amazon The AI Powerhouse

Editorial

Thoughts From Your Humble Curators

It’s AWS re:Invent week – the big annual AWS conference. Amazon made several announcements about its AI offerings, so we take a closer look in this issue.

In our Blog Posts section, we have a line-up of interesting blog posts and paper reviews this issue, including:

  • Google’s AIY Vision Kit,
  • Stephen Merity on Understanding the Mixture of Softmaxes (MoS),
  • One LEGO at a time: Explaining the Math of How Neural Networks Learn,
  • Arthur’s Review of Course 3 of deeplearning.ai

We also present our read on the CheXNet paper, which allegedly beat human radiologists. We take a closer look at the results.


Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts






Open Source


Member’s Question

Are MOOC Certificates Important?

Our thoughts (from Arthur): For the most part, MOOC certificates don’t mean much in real life; what matters is whether you can actually solve problems. The real purpose of a MOOC is to stimulate you to learn, and the certificate serves as a motivational tool.

As for the OP’s question: I never took the Udacity nanodegree. From what I have heard, though, the nanodegree requires roughly the effort of one to two of Ng’s deeplearning.ai specializations. It is also tougher if you need to finish a course within a specified period of time, but the upside is that there are human graders who give you feedback.

As for which path to take, I think it depends mostly on your finances. Let’s push to an extreme: if you think purely about credentials and opportunities, an actual Master’s or PhD degree will probably give you the most, but the downside is multi-year salary opportunity costs. One tier down is the online ML degree from Georgia Tech, which will still cost you up to $5k. Then there is taking cs231n or cs224d from Stanford online, which again costs about $4k per class. That is why you might consider a MOOC instead. And as I said, which price tag you choose depends on how motivated you are and how much feedback you want to get.

Artificial Intelligence and Deep Learning Weekly

Paper/Thesis Review


 


AIDL Weekly #38 – The FaceID Hack

Editorial

Thoughts From Your Humble Curators

Our main story this week is the FaceID hack. Is it a valid hack? How much should we care? We will take a closer look.

In other sections, we cover Karpathy’s “Software 2.0”, CheXNet and other topics.


Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe and forward it to friends!

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts




Open Source


Video


 


AIDL Weekly Issue 37 – First Level 4 SDC, Pieter Abbeel and Raquel Urtasun

Editorial

Thoughts From Your Humble Curators

The biggest news last week: Waymo put the first Level 4 self-driving car (SDC) on the road. We also learned that Pieter Abbeel has left OpenAI and started his own robotics startup, Embodied Intelligence. Wired profiled the new head of Uber’s Toronto team, Raquel Urtasun. We cover all these pieces in our News section.

The rest of this issue should be very interesting as well. First is the new Distill article by (Chris) Olah, Mordvintsev and Schubert, which is an excellent review of feature visualization. Then Google PhD Fellow Anirban Santara gives us his take on how to build a career in ML, and AIDL-LD member Ben Davis gives us a nice summary of a paper on image-fusion schemes. You may also be interested in the two papers published by Salesforce’s Einstein lab last week, both on neural machine translation (NMT), which Arthur reviewed this week.


Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe/share the letter to your colleagues.

We will host our own show at the AI World event: “Attack of the AI Startups”. If you live in the New England area, feel free to join!

Artificial Intelligence and Deep Learning Weekly

News




Blog Posts



Open Source

Paper/Thesis Review



Contemporary Classic

 


AIDL Weekly Issue 36 – Capsules, Capsules and Capsules

Editorial

Thoughts From Your Humble Curators

We were out last week. The hottest news this week is all about Prof. Hinton’s capsule models!

Prof. Hinton and his students just released an arXiv paper on how the idea of capsules can be used, and specifically how it can outperform existing approaches on MNIST as well as on its more difficult cousins, affNIST and MultiMNIST, which distort MNIST with affine transforms and heavy overlapping. So we dedicate this issue to capsules: we provide our own analysis in the thesis/paper review section, highlight a popular link from Wired, and cover the latest developments. So far, we already know of two implementations that attempt to reproduce the results.
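For readers who want a first taste of the mechanics, here is a minimal NumPy sketch of the “squashing” nonlinearity from the capsule paper, which shrinks short capsule output vectors toward zero while keeping long ones just below unit length (the vector’s length is then read as the probability that the entity the capsule represents is present). The code itself is our own illustration, not the authors’.

    import numpy as np

    def squash(s, eps=1e-9):
        """v = (|s|^2 / (1 + |s|^2)) * (s / |s|), applied per capsule."""
        norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

    s = np.array([[0.1, 0.0],      # a short capsule output stays short
                  [3.0, 4.0]])     # a long one is squashed to length ~0.96
    print(squash(s))
    print(np.linalg.norm(squash(s), axis=-1))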

Other than that, check out other interesting links, such as ex-Google Brain Resident David Ha’s work on evolution strategies, and our piece on “Unsupervised Machine Translation Using Monolingual Corpora”.


Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe/share the letter to your colleagues.

We will host our own show at the AI World event: “Attack of the AI Startups”. If you live in the New England area, feel free to join!

Artificial Intelligence and Deep Learning Weekly

News



Blog Posts




Open Source


Paper/Thesis Review



 


AIDL Weekly Issue 35 –

Editorial

Thoughts From Your Humble Curators

Big announcement – last week, we launched our own topic-based messaging app called Expertify, to help you connect with other AI and DL professionals in our 45,000-member community. More details below on why we rolled our own and specific AI / DL features we want to add to it over time…

Download Expertify iOS app

We’d love for you to try it and give us some feedback if you are on iOS. We’re working on a web app and, a bit down the road, an Android version.

We also heard the stunning news that AlphaGo has beaten itself yet again, creating the first Go player with an Elo rating over 5000.

In technical news, Google created a new activation function that works even better than ReLU (a quick sketch follows below). And we wrote a full review of Course 1 of Coursera’s deeplearning.ai specialization, which was quite well received across different networks.
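The activation in question is presumably the one Google’s paper calls Swish, f(x) = x * sigmoid(x); the identification is our assumption, since the item above does not name it. Here is a minimal NumPy sketch for readers who want to compare its shape with ReLU; the code is our own illustration.

    import numpy as np

    def swish(x, beta=1.0):
        """Swish: x * sigmoid(beta * x); beta = 1 gives the default form."""
        return x / (1.0 + np.exp(-beta * x))

    def relu(x):
        return np.maximum(0.0, x)

    x = np.linspace(-4.0, 4.0, 9)
    print(np.round(swish(x), 3))   # smooth, slightly negative for small negative x
    print(relu(x))                 # hard zero for all negative x

Unlike ReLU, Swish is smooth and lets small negative values pass through, which is the property usually highlighted when the two are compared.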


As always, if you like our newsletter, feel free to subscribe/forward the letter to your colleagues.

Artificial Intelligence and Deep Learning Weekly

News



Blog Posts




Member’s Question

Paper/Thesis Review

 


AIDL Weekly Issue 34 – Interviews, Partnership, SDC Testing in California 2018

Editorial

Thoughts From Your Humble Curators

Perhaps the biggest news to us last week is that California will allow car companies to test fully autonomous vehicles on public streets as soon as 2018.

The deal of the week was the Amazon/Microsoft joint release of Gluon.

Then there are the interviews. You may be interested to hear Google CEO Sundar Pichai’s view on what being AI-first means.


As always, if you like our newsletter, feel free to subscribe/forward it to your colleagues.

Artificial Intelligence and Deep Learning Weekly

News




Blog Posts

Open Source


Video

Paper/Thesis Review

 


AIDL Weekly Issue 33 – All About DeepMind – Its Cost, Its Ethics & Society Unit and Its Recent WaveNet Launch

Editorial

Thoughts From Your Humble Curators

We learnt more about DeepMind last week: the hefty price of running it, and its struggle to repair its image since the DeepMind Health–Royal Free debacle. But we also learnt that our beloved WaveNet is now in production within Google Assistant, and it is a whopping 1000 times faster. This week, we cover DeepMind more in depth.

We also have an issue filled with content: “Confession of AI researchers” is our favorite link. We answer a member’s question on how to come up with new AI ideas in “Questions from Members”. And we dive into an interesting paper, “Unsupervised Hypernym Detection by Distributional Inclusion Vector Embedding” by Haw-Shiuan Chang et al.


As always, if you like our newsletter, feel free to subscribe and forward it to friends!

Artificial Intelligence and Deep Learning Weekly

News


Blog Posts





Open Source

Video

Member’s Question

Paper/Thesis Review