The Verge ran a piece on Amazon DeepLens, a $250 AI-enabled camera, as well as SageMaker and Amazon's automatic transcription and translation tools. You may also have heard of Rekognition, which now gives developers access to powerful ready-made features such as real-time video analysis. In a nutshell, you don't have to implement YOLO yourself.
After the Voice AIY kit, Google will release a new Vision AIY kit. You will need your own Raspberry Pi Zero and Raspberry Pi Camera, but the kit will come with the VisionBonnet, which carries an Intel Movidius MA2450 vision processing unit.
Price: only $45. It sounds like we can all have some fun this Christmas.
In this piece, our favorite writer Stephen Merity explains the latest idea in language modeling, the mixture of softmaxes, by Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. As always, Merity is a quality writer, and he also provides source code for you to play with.
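The core idea is simple to sketch: instead of a single softmax over the vocabulary, the model outputs a weighted mixture of K softmaxes, which raises the rank of the log-probability matrix. Below is a minimal NumPy sketch of that computation; the function and parameter names (`W_pi`, `Ws`, `Wo`) are our own illustration, not the authors' code.

```python
import numpy as np

def mixture_of_softmaxes(h, W_pi, Ws, Wo):
    """Illustrative sketch of a mixture-of-softmaxes output layer.

    h:    (d,) context vector from the language model
    W_pi: (K, d) projects h to K mixture logits
    Ws:   (K, d, d) per-component projections of h (hypothetical shapes)
    Wo:   (V, d) shared output embedding matrix
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    pi = softmax(W_pi @ h)  # mixture weights over the K components, sum to 1
    # each component k computes its own softmax over the vocabulary
    comp = np.stack([softmax(Wo @ np.tanh(Ws[k] @ h)) for k in range(len(pi))])
    return pi @ comp        # weighted average of K softmaxes

rng = np.random.default_rng(0)
d, V, K = 8, 20, 3
p = mixture_of_softmaxes(rng.normal(size=d),
                         rng.normal(size=(K, d)),
                         rng.normal(size=(K, d, d)),
                         rng.normal(size=(V, d)))
# p is a valid probability distribution over the V vocabulary items
```

Note that because each component is itself a full softmax, the mixture cannot in general be written as a single softmax of any linear function of h, which is the point of the paper.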
In this interesting conversation started by Denny Britz, he quoted Eric Jang's speculation on Quora about how Boston Dynamics' backflipping robot actually works, in which Jang suggests that BD is not using ML in the process.
That led to a discussion of how and when ML would really be able to learn this behavior automatically, and Prof. Pieter Abbeel chimed in.
Our thoughts (from Arthur): For the most part, MOOC certificates don't mean too much in real life. What matters is whether you can actually solve problems. So the real value of a MOOC is to stimulate you to learn, and the certificate serves as a motivational tool.
As for the OP's question: I never took the Udacity nanodegree. From what I heard, though, I would say the nanodegree requires about the effort of taking one to two of Ng's deeplearning.ai specializations. It's also tougher if you have to finish a course within a specified period of time. The upside is that there are human graders who give you feedback.
As for which path to take, I think it depends solely on your finances. Let's push to an extreme: if you think purely of credentials and opportunities, an actual PhD/Master's degree will probably give you the most, but the downside is multiple years of salary in opportunity cost. One tier down would be the online ML degree from Georgia Tech, which will still cost you up to $5k. Then there is taking cs231n or cs224d from Stanford online, at around $4k per class. That's why you would consider taking a MOOC instead. And, as I said, which price tag you choose depends on how motivated you are and how much feedback you want to get.
It's online, but will it be free? We will see. Regardless, it says a lot about the importance of ML in scientific research. Some in the community do raise the concern of whether Nature is the best host for the new journal; some point to JMLR as an example that an academic journal can be organized and edited by academicians themselves.
This is a note on the CheXNet paper. As you know, it is the widely circulated paper from Stanford that purportedly outperforms humans on chest X-ray diagnosis.
But after we read it in detail, our impression is slightly different from what you would get just reading the popular news, including the description on GitHub.
Since the ML part is not very interesting, we will just go through it briefly: it's a 121-layer DenseNet, which basically means every layer receives feed-forward connections from all previous layers. Given the data size, it's likely a full training rather than fine-tuning.
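To make "connections from all previous layers" concrete, here is a toy sketch of DenseNet-style connectivity (this is our illustration, not the CheXNet code, and real DenseNets use convolutions rather than the plain linear maps shown here):

```python
import numpy as np

def dense_block(x, weights):
    """Toy sketch of DenseNet-style dense connectivity.

    Each 'layer' is just a linear map + ReLU; the key point is that
    layer k sees the concatenation of the block input and the outputs
    of ALL previous layers, not only the layer right before it.
    """
    features = [x]
    for W in weights:
        inp = np.concatenate(features)             # everything produced so far
        features.append(np.maximum(W @ inp, 0.0))  # new features from this layer
    return np.concatenate(features)

rng = np.random.default_rng(0)
growth, d = 4, 6
# layer k maps (d + k * growth) features to `growth` new features
ws = [rng.normal(size=(growth, d + k * growth)) for k in range(3)]
out = dense_block(rng.normal(size=d), ws)
print(out.shape)  # (18,): 6 input features + 3 layers x growth rate 4
```

The growing input width per layer is why DenseNets talk about a "growth rate": each layer only adds a small number of new feature maps, but every later layer can reuse them.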
There was not much justification of why this architecture was chosen. Our guess: the team first tried transfer learning, but decided to move on to full training to get better performance, and a DenseNet is a manageable setup for that.
Then there was a fairly standard experimental comparison using AUC. In a nutshell, CheXNet performed better than humans on every one of the 14 classes of ChestX-ray14, which is known to be the largest database of its kind.
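As a refresher, per-class AUC here is the probability that a randomly chosen positive X-ray gets a higher model score than a randomly chosen negative one. A minimal rank-based implementation (equivalent to the Mann-Whitney U statistic; the numbers below are toy values, not the paper's):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability a random positive outscores a
    random negative. Ties are handled by averaging ranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):        # average ranks for tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # 1.0: perfect separation
print(auc([0.9, 0.3, 0.8, 0.4], [1, 1, 0, 0]))  # 0.5: half the pairs ranked correctly
```

A nice property for this kind of human-vs-model comparison is that AUC is threshold-free, so it sidesteps the question of where each radiologist would set their decision cutoff.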
Now here are the caveats the popular news hasn't mentioned:
1. First of all, the human radiologists weren't allowed to access a patient's previous medical records.
2. Only frontal images were shown to the human doctors, but prior work has shown that performance improves when the lateral view is also shown.
That's why, on p. 3 of the article, the authors note:
"We thus expect that this setup provides a conservative estimate of human radiologist performance."
Reading this far should make you realize that it may still take a while for deep learning to "replace radiologists".