All posts by grandjanitor

AIDL Weekly #48 - Tensor Comprehensions and TPU Beta

Editorial

Thoughts From Your Humble Curators

Perhaps the most interesting news last week is Facebook's Tensor Comprehensions. Google's TPU has also entered beta. The price is roughly 4x that of a P100 pool, but we are still very excited.


As always, if you like our newsletter, feel free to forward it to your friends/colleagues!

This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts





Paper/Thesis Review

Contemporary Classic

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 100,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65. Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

Artificial Intelligence and Deep Learning Weekly

 

AIDL Weekly #47 - About Prof. Geoffrey Hinton

Editorial

Thoughts From Your Humble Curators

It's also the beginning of a new semester, so we have three new sets of videos to share with you: one from CMU's Deep Learning course, and two from MIT: Introduction to Deep Learning, and Prof. Lex Fridman's Artificial General Intelligence class.

Oh, did you know AIDL just reached 100,000 members?


As always, if you like our newsletter, feel free to forward it to your friends/colleagues!

This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Artificial Intelligence and Deep Learning Weekly


News

Blog Posts



Open Source

Video



Paper/Thesis Review

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 100,000+ members and host an occasional "office hour" on YouTube.

To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

 

AIDL Weekly #46 - Ng's AI Fund, deeplearning.ai Course 5, ICLR 2017

Editorial

Thoughts From Your Humble Curators

Interesting week for AI and deep learning: the ICLR 2017 results are out, and deeplearning.ai's Course 5 is out too. We have two items on ICLR 2017, and you will also see Arthur's "Quick Impression" of Course 5.

We also learned about the third act of Andrew Ng: a $175M AI Fund. We cover the Fund in our News section.


As always, if you like our newsletter, feel free to forward it to your friends/colleagues!

This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts






Paper/Thesis Review

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 96,000+ members and host an occasional "office hour" on YouTube.

To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

Artificial Intelligence and Deep Learning Weekly

 

AIDL Weekly #45 - Prof. LeCun is Stepping Down as the Chief of FAIR

Editorial

Thoughts From Your Humble Curators

We are back! The big news this week is perhaps that Prof. LeCun is stepping down as the chief of Facebook A.I. Research (FAIR). More in the News section.

We also have a bunch of interesting content in our blog section, e.g. Arthur's review of Course 4 of deeplearning.ai. Course 4 focuses on image classification as an application of deep learning, and Arthur walks through how it compares with an existing class such as cs231n.

Then, in our paper section, we present a read of the classic paper "Deep Neural Networks for Acoustic Modeling in Speech Recognition".


As always, if you like our newsletter, feel free to forward it to your friends/colleagues!

This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Artificial Intelligence and Deep Learning Weekly


News

Blog Posts



Open Source


Video

Paper/Thesis Review

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 93,000+ members and host an occasional "office hour" on YouTube.

To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

Artificial Intelligence and Deep Learning Weekly

 

AIDL Weekly #44 - Prof. LeCun vs. Dr. Goertzel

Editorial

Thoughts From Your Humble Curators

The interesting news last week is perhaps Prof. LeCun's harsh criticism of Sophia, the "First Robot Citizen", and his back-and-forth with Dr. Ben Goertzel, the Chief Scientist of Hanson Robotics. We'll take a closer look in this issue.

Other than Sophia, we also included several interesting blog posts this week. Perhaps the most interesting one is Google Brain's review of deep learning in 2017, which deserves your time.


As always, if you like our newsletter, feel free to subscribe and forward it to your friends/colleagues!

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts




Open Source

Video

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 84,000+ members and host a weekly "office hour" on YouTube.

You may donate to this link to help our operations.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

Artificial Intelligence and Deep Learning Weekly

AIDL Weekly #43 - China's AI Park

Editorial

Thoughts From Your Humble Curators

Happy 2018, everyone! We start off this year by learning that China is building a $2.1 billion AI park. Will China's AI development catch up with the U.S.'s? Let's find out in our News section.

We also have interesting content in our Blogs section. Two gems we recommend are Denny Britz's "AI and Deep Learning in 2017 – A Year in Review" and Prof. Rodney Brooks's dated predictions. The former is a comprehensive review of AI/DL research developments in 2017; the latter is a thoughtful set of predictions on the future of self-driving cars and deep learning by Prof. Brooks.


As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues/friends!

Artificial Intelligence and Deep Learning Weekly


News

Blog Posts






Open Source

About Us

This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 78,000+ members and host a weekly "office hour" on YouTube.

You may donate to this link to help our operations.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

Artificial Intelligence and Deep Learning Weekly

AIDL Weekly #42 - Throwback 2017

Editorial

Thoughts From Your Humble Curators - 2017 Year End Edition

In this issue, we re-publish several memorable stories from 2017. For news, that includes Uber vs. Waymo and Andrew Ng leaving Baidu to start deeplearning.ai. For papers, we include classics such as Sara Sabour and Prof. Hinton's work on capsule theory, as well as Prof. Bengio's consciousness prior. And finally, for fact-checking, you guessed it: both "Facebook makes a kill switch of AI" and "Google AI builds an AI" are here.


Hope you enjoy this throwback issue. As always, if you like our newsletter, feel free to subscribe or forward it to your colleagues. AIDL accepts donations; you can use the link at https://paypal.me/aidlio/12/ to donate. Your donation is used to cover the monthly payment for our ever-growing AIDL Weekly subscription, as well as other operating costs.

Artificial Intelligence and Deep Learning Weekly

News







Factchecking

A Closer Look at The Claim "Facebook kills Agents which Create its Own Language"

From Issue 24:

As we fact-checked in Issues 18 and 23, we rated the claim

Facebook kills Agents which Create Its Own Language.

as false. And as you might know, the fake news spread to 30+ outlets and stirred the community.

Since the Weekly had been tracking this issue much earlier than other outlets (Gizmodo was the first popular outlet to call the fake news out), we believe it's a good idea to give you our take on the issue, given all the information we have. You can think of this piece as fact-checking from a technical perspective, and use it as a supplement to Snopes's piece.

Let's separate the issue into a few aspects:

1. Did Facebook kill an A.I. agent at all?

Of course, this is the most ridiculous part of the claim. For starters, most of these "agents" are really just Linux processes. So..... you can just stop them by using the Linux command kill. Worst case, kill -9. (See Footnote [1])
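To make the point concrete, here is a small sketch. The "agent" here is just a hypothetical child process that sleeps; stopping it is ordinary process management, exactly what kill does at the shell:

```python
import signal
import subprocess
import sys

# A stand-in "agent": an ordinary OS process (a child that sleeps for 10 minutes).
agent = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(600)"])

# Stopping it is plain process management, the equivalent of `kill <pid>`:
agent.terminate()            # sends SIGTERM on POSIX
agent.wait(timeout=10)

# If a process ignored SIGTERM, the blunt instrument is `kill -9`:
#   agent.send_signal(signal.SIGKILL)

# On POSIX, a negative return code is the number of the signal that ended it.
if agent.returncode is not None and agent.returncode < 0:
    print("ended by signal", signal.Signals(-agent.returncode).name)
```

Nothing about the process running a dialogue model changes this picture; "killing the AI" is a one-line command.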

2. What Language Did the AI Agents Generate, and The Source

All outlets seem to point to a couple of sources, or the original articles. As far as we know, none of these sources quoted the academic work that is directly the subject matter. For convenience, let's call these source articles "The Source" (also see Footnote [2]). The Source apparently conducted original research and interviews with Facebook researchers. Perhaps the more stunning part is that it includes printouts of the machine dialogue. For example, some of the machine dialogue looks like:

Bob: "i can i i everything else"

Alice: "balls have zero to me to me to me to me ....."

That does explain why many outlets sensationalized this piece: while the dialogue is still English (as we explained in Issue #18), it does look like codewords rather than English.

Where does the printout come from? It's not difficult to guess - it comes from the open-source code of the "End-to-End Negotiator". But the example we can find on its GitHub looks much more benign:

Human : hi i want the hats and the balls

Alice : i will take the balls and book <eos>

Human : no i need the balls

Alice : i will take the balls and book <eos>

So one plausible explanation here is that someone played with the open-source code and happened to create a scary-looking dialogue. The question, of course, is: were these dialogues generated by FB researchers, or did FB researchers provide The Source with the dialogue? Here is the part we are not sure about. Because The Source does quote words from a Facebook researcher (see Footnote [3]), it's possible.

3. What is Facebook's take?

Since the event, Prof. Dhruv Batra posted a status on July 31 in which he simply asked everyone to read the piece "Deal or No Deal" as the official reference for the research. He also called the fake news "clickbaity and irresponsible". Prof. Yann LeCun also came out and slammed the fake newsmakers.

Both of them declined to comment on individual pieces, including The Source. We also tried to contact both Prof. Dhruv Batra and Dr. Mike Lewis about the validity of The Source. Unfortunately, both were unavailable for comment.

4. Our Take

Since we don't know whether any of The Source is real, we can only speculate about what happened here. What we can do is make that speculation as technically plausible as possible.

The key question here: is it possible that FB researchers really created some codeword-like dialogue and passed it to The Source? It's certainly possible, but unlikely. Popular outlets have a generally bad reputation for misinforming the public on A.I.; it is hard to imagine that FB's P.R. department wouldn't stop this kind of potential bad press in the first place.

Rather, it's more likely that the FB researchers only published a paper, and somebody else misused the code the researchers open-sourced (as we mentioned in Pt. 2). In fact, reexamine the dialogue released by The Source:

Bob: "i can i i everything else"

Alice: "balls have zero to me to me to me to me ....."

It looks like the dialogue was generated by models which are not well-trained; this is especially true if you compare the printout with the one published on Facebook's GitHub.

If our hypothesis is true, we side with the FB researchers, and believe that someone wrote an over-sensational post in the first place, causing a stir among the public. Generally, everyone who spreads news should take responsibility for checking their sources and ensuring the integrity of their piece. We certainly didn't see such responsible behavior in the 30+ outlets that reported the fake news. It also doesn't look likely that The Source was written in a way that is faithful to the original FB research. Kudos to the Gizmodo and Snopes authors who did the right thing. [4]

Given that the agents are more likely to behave like what we found on Facebook's GitHub, we maintain our verdict from Issues 18 and 23: it is still very unlikely that FB agents are creating any new language. But we add the qualifier "very unlikely" because, as you can see in Point 3, we still couldn't get the Facebook researchers' verification as of this writing.

So let us reiterate our verdict:
We rate the claim "Facebook killed its agents" false.
We rate the claim "the agents created their own language" very likely false.

AIDL Editorial Team

Footnote:

[1] So, immediately after the event, a couple of members were joking about how ignorant the public was about what so-called AI agents are.

[2] We avoid naming The Source. There seem to be multiple candidates, and we are not sure which one is the true origin.

[3] The author of The Source seems to have communicated with Facebook researcher Prof. Dhruv Batra and quotes the Professor's words, e.g.

There was no reward to sticking to English language,

as well as quotes from talking with researcher Mike Lewis:

Agents will drift off understandable language and invent codewords for themselves,

[4] What if we are wrong? Suppose The Source is real and the agents did generate codeword-like dialogue. Is that a new language?

That's a more debatable issue. Again, just as we said in Issue 18, if you start by training a model on an English database, the language you get will still be English. But can you characterize an English-like language as a new language? That's a harder question. E.g., a creole is usually seen as another language, but a pidgin is usually seen as just a grammatical simplification of a language. So how should we view the codewords generated by the purported "rogue" agents? Only a professional linguist should judge.

It is worthwhile to bring up one thing: while you can view the codeword language as just another machine protocol such as TCP/IP, The Source implies that Facebook researchers consciously made sure the language adhered to English. Again, this depends on whether The Source is real, and whether the author mixed his/her own research into the article.

Artificial Intelligence and Deep Learning Weekly


On Google's "AI Built an AI That Outperforms Any Made by Humans"

For those who are new to AIDL: AIDL has what we call "The Three Pillars of Posting", i.e., we require members to post articles which are relevant, non-commercial, and non-sensational. When a sensationalized piece of news starts to spread, an admin of AIDL (in this case, Arthur) fact-checks the relevant literature and source material and decides whether certain pieces should be rejected. This time we are fact-checking a popular yet misleading piece, "AI Built an AI That Outperforms Any Made by Humans".

  • The first thing to notice: "AI Built an AI That Outperforms Any Made by Humans" comes from a site which has historically sensationalized news. The same site was involved in sensationalizing the early version of AutoML, as well as the notorious "AI learns language" fake-news wave.
  • So what is it this time? Well, it all starts from Google's AutoML, published in May 2017. If you look at the page carefully, you will notice that it is basically just a tuning technique using reinforcement learning. At the time, the research only worked on CIFAR-10 and Penn Treebank.
  • But then Google released another version of AutoML in November. The gist is that Google beat SOTA results on COCO and ImageNet. Of course, if you are a researcher, you will simply interpret it as "Oh, automatic tuning is now a thing; it could become a staple of future evaluations!" The model is now distributed as NASNet.
  • Unfortunately, this is not how the popular outlets interpreted it, e.g. sites were claiming "AI Built an AI That Outperforms Any Made by Humans". Even more outrageously, some sites claimed "AI is creating its own 'AI child'". Both claims are false. Why?
  • As we just said, Google's program is an RL-based program which proposes the child architecture; isn't this parent program still built by humans? So the first statement is refuted. Someone wrote a tuning program, a sophisticated one, but still a tuning program.
  • And if you are imagining "Oh, AI is building itself!!" with the imagery that AI is now self-replicating, you cannot be more wrong. Again, remember that the child architectures are used for other tasks such as image classification. These "children" don't create yet another generation of descendants.
  • A much less confusing way to put it: "Google's RL-based AI is now able to tune results better than humans on some tasks." Don't get us wrong, this is still an exciting result, but it doesn't give any sense that "the machine is procreating itself".
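The "parent proposes a child, the child is trained and scored" loop described above can be caricatured in a few lines. This is emphatically not Google's AutoML code: the search space, the fake scoring function, and random sampling (standing in for the RL controller) are all made up for illustration. The point is that the parent is an ordinary, human-written tuning program:

```python
import random

# Hypothetical search space of "child" architectures.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "units": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def propose_child():
    """Parent step: sample a candidate architecture. (Google's controller
    used reinforcement learning; random search keeps the sketch short.)"""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(child):
    """Stand-in for training the child and measuring validation accuracy."""
    return child["layers"] * child["units"]  # fake score, for illustration only

best, best_score = None, float("-inf")
for _ in range(20):
    child = propose_child()
    score = evaluate(child)
    if score > best_score:
        best, best_score = child, score

# The best child is then used for a task like image classification.
# Children never propose grandchildren: nothing here "self-replicates".
print(best)
```

However sophisticated the real controller is, it has this same shape: a human-written outer loop searching a human-defined space.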

We hope this article clears up the matter. We rate the claim "AI Built an AI That Outperforms Any Made by Humans" false.

Here is the original Google post.

Artificial Intelligence and Deep Learning Weekly

Member's Question

Paper/Thesis Review



 

AIDL Weekly #41 - How Everyone Sees Deep Learning in 2017

 

AIDL Weekly #40 - Special Issue on NIPS 2017

Editorial

Thoughts From Your Humble Curators

Last week was the week of NIPS 2017. We chose 5 links from the conference for this issue.

The news this week is all about hardware: we point you to the new Titan V, the successor in the Titan series. Elon Musk is also teasing us with "the best AI hardware in the world". Let's take a closer look.

And as you might have read elsewhere: is Google building an AI which can build another AI? Our fact-checking section will tell you more.

Finally, we cover two papers this week:

  • The new DeepMind paper, which describes how AlphaZero became a master of chess and shogi as well,
  • Fixing weight decay regularization in Adam.

Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe/forward it to your colleagues.

Artificial Intelligence and Deep Learning Weekly

News


Factchecking

On Google's "AI Built an AI That Outperforms Any Made by Humans"

For those who are new to AIDL: AIDL has what we call "The Three Pillars of Posting", i.e., we require members to post articles which are relevant, non-commercial, and non-sensational. When a sensationalized piece of news starts to spread, an admin of AIDL (in this case, Arthur) fact-checks the relevant literature and source material and decides whether certain pieces should be rejected. This time we are fact-checking a popular yet misleading piece, "AI Built an AI That Outperforms Any Made by Humans".

  • The first thing to notice: "AI Built an AI That Outperforms Any Made by Humans" comes from a site which has historically sensationalized news. The same site was involved in sensationalizing the early version of AutoML, as well as the notorious "AI learns language" fake-news wave.
  • So what is it this time? Well, it all starts from Google's AutoML, published in May 2017. If you look at the page carefully, you will notice that it is basically just a tuning technique using reinforcement learning. At the time, the research only worked on CIFAR-10 and Penn Treebank.
  • But then Google released another version of AutoML in November. The gist is that Google beat SOTA results on COCO and ImageNet. Of course, if you are a researcher, you will simply interpret it as "Oh, automatic tuning is now a thing; it could become a staple of future evaluations!" The model is now distributed as NASNet.
  • Unfortunately, this is not how the popular outlets interpreted it, e.g. sites were claiming "AI Built an AI That Outperforms Any Made by Humans". Even more outrageously, some sites claimed "AI is creating its own 'AI child'". Both claims are false. Why?
  • As we just said, Google's program is an RL-based program which proposes the child architecture; isn't this parent program still built by humans? So the first statement is refuted. Someone wrote a tuning program, a sophisticated one, but still a tuning program.
  • And if you are imagining "Oh, AI is building itself!!" with the imagery that AI is now self-replicating, you cannot be more wrong. Again, remember that the child architectures are used for other tasks such as image classification. These "children" don't create yet another generation of descendants.
  • A much less confusing way to put it: "Google's RL-based AI is now able to tune results better than humans on some tasks." Don't get us wrong, this is still an exciting result, but it doesn't give any sense that "the machine is procreating itself".

We hope this article clears up the matter. We rate the claim "AI Built an AI That Outperforms Any Made by Humans" false.

Here is the original Google post.

Artificial Intelligence and Deep Learning Weekly

NIPS 2017





Blog Posts



Open Source


Member's Question

How do you read Duda and Hart's "Pattern Classification"?

Question (rephrased): I was reading the book "Pattern Classification" by Duda and Hart, but I find it difficult to follow the mathematics. What should I do?

Answer (by Arthur): You are reading a good book - Duda and Hart is known as one of the Bibles of the field. But perhaps it is slightly beyond your skill at this point.

My suggestion is to make sure you understand basic derivations such as linear regression and the perceptron. Also, if you get stuck with the book for a long time, try going through Andrew Ng's Machine Learning. Granted, the course is much easier than Duda and Hart, but it will give you an outline of what you are trying to prove.

One specific piece of advice on the derivation of NNs: I recommend you read Chapter 2 of Michael Nielsen's book first, because he is very good at defining clear notation. E.g., the meaning of the letter z changes across textbooks, but knowing exactly what it means is crucial to following a derivation.

Artificial Intelligence and Deep Learning Weekly

Paper/Thesis Review


 

AIDL Weekly #39 - Amazon The AI Powerhouse

Editorial

Thoughts From Your Humble Curators

It's AWS re:Invent week - this is the big annual AWS conference. Amazon had several announcements about its AI offerings, so we take a closer look in this issue.

In our Blog Posts section, we have a line-up of interesting blog posts and paper reviews, including:

  • Google Vision AIY Kit,
  • Stephen Merity on Understanding the Mixture of Softmaxes (MoS),
  • One LEGO at a time: Explaining the Math of How Neural Networks Learn,
  • Arthur's Review of Course 3 of deeplearning.ai

We also present our read of the CheXNet paper, which allegedly beats human radiologists. We take a closer look at the results.


Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760

As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!

Artificial Intelligence and Deep Learning Weekly

News

Blog Posts






Open Source


Member's Question

Are MOOC Certificates Important?

Our thoughts (from Arthur): For the most part, MOOC certificates don't mean too much in real life. What matters is whether you can actually solve problems. So the real purpose of a MOOC is to stimulate you to learn, and the certificate serves as a motivational tool.

As for the OP's question: I never got the Udacity nanodegree. From what I heard, though, the nanodegree requires effort comparable to 1 to 2 courses of Ng's deeplearning.ai specialization. It's also tougher if you need to complete a course within a specified period of time. But the upside is that there are human graders who give you feedback.

As for which path to take, I think it solely depends on your finances. Let's push to an extreme: if you think purely of credentials and opportunities, perhaps an actual PhD/Master's degree will give you the most, but the downside is multiple years of salary in opportunity costs. One tier down would be the online ML degree from Georgia Tech, but it will still cost you up to $5k. Then there is taking cs231n or cs224d from Stanford online; again, that will cost you around $4k per class. That's why you would consider a MOOC. And as I said, which price tag you choose depends on how motivated you are and how much feedback you want to get.

Artificial Intelligence and Deep Learning Weekly

Paper/Thesis Review