AIDL Weekly Issue #76 - fast.ai PyTorch Library, Microsoft's Infer.NET (Oct 8, 2018)

Editorial

Thoughts From Your Humble Curators

This week we cover two new open source frameworks: the fast.ai PyTorch library and Microsoft's Infer.NET.

As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 176,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify

Artificial Intelligence and Deep Learning Weekly

News

Deals



Blog Posts



Open Source


 

AIDL Weekly #75 - Inside Google Dataset Search

Editorial

Thoughts From Your Humble Curators

This issue, we cover Paige.ai and what's inside Google Dataset Search.

As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 175,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News


Deals


Blog Posts



Video

Member's Question


 

AIDL Weekly Issue #74 - Facebook vs Fake News (or AI vs Fake News?)

Editorial

Thoughts From Your Humble Curators

This week we look into the details of Facebook's use of machine learning to detect misinformation. What is the purpose of their latest Rosetta system? And how prepared is Facebook against fake news?

As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 173,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News


Deals and Acquisitions

Microsoft Acquires Lobe

Also take a look at the trends in AI healthcare funding.


Blog Posts





 

AIDL Weekly Issue 70 - Nvidia Turing, TF 2.0

Editorial

Thoughts From Your Humble Curators

Hey hey! We are back. This issue, we bring you two interesting stories:

  • The Nvidia Turing architecture - how much will it affect deep learning?
  • TensorFlow 2.0 - what are the major changes, and how will they affect you?

As always, if you like our newsletter, share with your friends/colleagues!


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 168,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News

Deals

Also, this from TechCrunch: Artificial Intelligence Continues Its Fundraising Tear In 2018

For a counter view, check out this piece on Quanergy Systems and how it lost its way.



Open Source


Video

Member's Question

"You know more than Silicon Valley Engineers"

(Original Link) Question: (Excerpted and rewritten) At the end of Lecture 3 of Andrew Ng's Machine Learning Coursera course, he says that "if you understood what you have done so far in the course, you know much more than many of the Silicon Valley engineers that are having a lot of success." Is that actually true?

Answer: (By Arthur) That might have been true around five years ago, when machine learning was still an esoteric topic. Back then, general programmers and engineers often lacked a basic understanding of ML concepts such as under-/over-fitting and metric-driven development.

Fast-forward to now, though: machine learning has become a mainstream topic, and the typical CS major knows quite well how ML works. You are now competing with many bright young minds on your knowledge of machine learning.

So I would probably say that, given what you know up to Lecture 3, "you have a good basic understanding of machine learning." That is a good start, but my guess is you still have things to learn.



 

AIDL Weekly Issue 69 - A Dexterous Robotic Hand

Editorial

Thoughts From Your Humble Curators

This week we take a deep look at OpenAI's latest work on a dexterous robotic hand, and ask whether feed-forward networks are just as good as recurrent ones.

As always, if you like our newsletter, feel free to share it with your friends and colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 165,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News

Deals



Blog Posts



Member's Question

AIDL Admin's Feedback on the New Pre-approval System.

An answer from Arthur: Zubair Ahmed posted a thread to collect everyone's feedback on our now 2-month-old pre-approval system. So far, most of the feedback we have gotten is positive. I just want to give you my take on the new system.

First off, our old system (think of it as post-approval) was meant to give members the freedom to post what they like.

Unfortunately, self-policing by members couldn't filter out all the malicious postings, such as porn and religious or political messages that have nothing to do with AI. Plus, there were too many complaints about basic questions such as "How do I learn AI?" being asked repeatedly.

So here comes our new pre-approval system. How does it work in practice? Let me give you a sample of my day: how we process different posts and decide whether they should appear in the feed. I am not the only approver, but we have a fairly consistent standard across admins/mods, so this should give you a good feel for our work.

Each day, we receive 50-70 posts awaiting approval. In my timezone, I process around 40-50 of them. Here is a rough breakdown:

  • 10%: ads for irrelevant products such as Rolexes or web hosting. What I do: delete the post.
  • 20%: technology-related, but nothing to do with AI. What I do: delete the post.
  • 30%: AI-related news from unreliable sources or from Pages that just repost a piece, or sensational opinions about AI-related technology. What I do: I usually delete the post unless it reflects a certain zeitgeist in AI development.
  • 10%: member questions that are unclear, usually poorly formatted and not proofread. These posts tend to draw angry responses from impatient AIDL members. What I do: sometimes I let them in but comment on the quality of the question; if they are "How do I learn AI?", I just delete them.

The rest is what you see in the feed, which accounts for ~10-15 posts. If they are articles, they are original work from their authors; if they are code, they are shared by the programmers themselves; if they are questions, they are usually non-trivial, and their answers are good for everybody to know.

One question members often ask is how pre-approval affects our workload as admins. I'd say that at the moment it lightens our load: we used much the same curation criteria before the pre-approval system, but now we see fewer group-wide outrages over poor-quality posts. Spam such as porn, while infrequent, is disruptive to our members' lives, and thus to ours.

Some members just completely disagree with any pre-approval system. I'll say this: if you look at the breakdown of posts, you will quickly notice that 60% of pending posts are inappropriate for the group, so we had always been removing them, even before the system. We really tried to make the old system work, but it was too hard.

I'll also say that we admins realize we are just humans who can be biased and make mistakes. So let's keep an open mind, and feel free to give us feedback.

On a lighter note: Zubair Ahmed and I found that there is always someone suggesting that ML should be used to replace us admins/mods. Of course, we have repeatedly pointed out that this is a cliché. But let's see how often it comes up. 🙂



 

AIDL Weekly Issue #61 - Project Maven

Editorial

Thoughts From Your Humble Curators

This week, we take a closer look at Google's involvement in Project Maven, and how it ended abruptly after complaints from Google's employees.

As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 145,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65. Join our community for real-time discussions with this iOS app here: https://itunes.apple.com/us/app/expertify/id969850760


News


Blog Posts

Open Source

Member's Question

I realized ML is just Statistics, should I feel demotivated?

Question: Has anyone had the feeling where you are very motivated and eager to learn machine learning, and then, once you actually start, you realize it's just stats, which is something you really don't like, and you become completely demotivated?

Answer: (By Arthur) There are two parts to your frustration: first, that you equate machine learning with statistics; second, that you find statistics boring. Let me address the second part first, then come back to the first.

Is statistics boring? I guess many people who learn statistics learn math first. If that's your route, then perhaps one reason statistics seems boring is that it is empirical and deals with the imperfect phenomena of the world. Unlike in Euclidean geometry, or when solving a quadratic or cubic, you can't quite come up with an exact solution.

To many people's dissatisfaction, though, the world is better described as uncertain rather than certain. Only statistics can teach us about the realm of uncertainty, so statistics is actually a rescue, and I personally feel grateful for the subject.

Can statistics be fun, you ask? It all depends on how you look at it. For example, it took me a while to find a good proof of how a full covariance matrix can be estimated through maximum likelihood. In matrix form, the math is quite interesting. I ended up buying a book by Abadir and Magnus called "Matrix Algebra" and browse it from time to time. I am sure it is boring to some, but it's a lot of fun to me. By the way, matrix algebra can be quite mathematical too, though you might call it a more technical type of math.
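To make that covariance example concrete, here is a minimal sketch (ours, not from the original answer): the maximum-likelihood estimate of a full covariance matrix is just the average outer product of the centered samples, normalizing by N rather than the unbiased estimator's N - 1.

```python
import numpy as np

def mle_gaussian(X):
    """Maximum-likelihood estimates of the mean and the full covariance
    matrix of a Gaussian, from samples X with shape (N, d)."""
    mu = X.mean(axis=0)
    centered = X - mu
    # The MLE normalizes by N; the unbiased estimator would use N - 1.
    sigma = centered.T @ centered / X.shape[0]
    return mu, sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
mu, sigma = mle_gaussian(X)
# np.cov with bias=True applies the same 1/N normalization
assert np.allclose(sigma, np.cov(X.T, bias=True))
```

The matrix-form derivation Arthur mentions is exactly the proof that this estimator maximizes the Gaussian log-likelihood.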

My conclusion on the second part is this: are you sure you have seen everything in statistics and machine learning? There are many deep topics in both subjects, and you might miss the nuances at first glance. Of course, your frustration might come from your personal philosophy. Perhaps you don't like uncertainty? Perhaps you don't like the time-consuming process of collecting data? No one can blame you for that; you just have to be honest with yourself.

Let's go back to the first part, on whether machine learning is just statistics. This is slightly controversial, so let me quote one prominent person: Prof. Nils Nilsson once said that machine learning is just the subject of getting machines to learn, whereas statistics is clearly more focused on data and its observation. So if the fancy image of an intelligent robot doing things is what attracted you, then yes, ML is the subject to learn. It's just that the modern theory of ML has found statistics to be very important.

So why is that the case? It's not that people love to be statistical; it's that nature is better described by uncertain rules. Take speech recognition: people would have loved to write down a few rules of phonetics and recognize speech that way. In fact, that was tried by PhD students in the 1960s, and by the 1970s people started to realize it was not the way to go. In came the HMM. Boring it may be, but it was the basis of most previous-generation ASR systems, before seq2seq neural models, and there are still many HMM-based systems in use.
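For readers who have never seen one, here is a toy sketch of the computation at the heart of those classic HMM-based recognizers (the numbers are our illustrative assumptions, not anything from the newsletter): the forward algorithm, which scores how likely an observation sequence is under the model.

```python
import numpy as np

# Toy HMM: 2 hidden states, 3 observation symbols (illustrative numbers only).
pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.7, 0.3],                 # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],            # emission probabilities per state
              [0.1, 0.3, 0.6]])

def forward(obs):
    """Likelihood P(obs) via the forward algorithm, O(T * S^2)."""
    alpha = pi * B[:, obs[0]]             # joint prob. of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then weight by emission
    return alpha.sum()

print(forward([0, 1, 2]))  # likelihood of the toy sequence
```

A real recognizer adds log-space arithmetic and Gaussian (or neural) emissions, but the recursion is the same.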

So, to summarize: if you are disappointed by ML, ask whether reality is better described by certainty or uncertainty. It will help you reach closure.


Paper/Thesis Review


 

AIDL Weekly #73 - Google Dataset Search Engine

Editorial

Thoughts From Your Humble Curators

This week we cover Google Dataset Search Engine and other topics.

As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 172,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News


Deals



Blog Posts


Open Source

Member's Question

How come my post gets so few clicks?

Lately we have been asked this question often:

"I have this super-duper posts with super-fine details on why deep neural network is going to revolutionize humanity. My post talks about loss function and back-propagation. There is NO MATH! So I think it is very good for the community. And I think it is the first time we see this kind of blog post on AIDL!"

"But How come I only get 4 likes? My post should go VIIIIRAAALLL!"

We rephrased it, of course. But we think viewership has a lot to do with the quality of the content, so the question deserves a good answer.

The first answer is perhaps just, "Well, your post is not that interesting..." But why is that the case? And in particular, why is it the case at AIDL?

You should understand by now that knowledge about deep learning is not entirely new, and AIDL has been around for almost 3 years. So most beginner topics you might try to cover have been covered more than once. For example (again, we rephrase):

  • "Write your own neural network in less than 5 characters"
  • "The detailed, simple, easy, beginner's, expert's, mathematical-but-tons-of-mistakes, mathematical-but-the-notation-was-inconsistent, mathematical-and-correct-but-boring guide of back propagation"
  • "The very detailed guide of convolutional networks with a wrong notion of convolutions"
  • "How do you start machine/deep learning and AI if you don't only know how to add and subtract."
  • "Why AI is/isn't an imminent danger for humanity (Disclaimer: I have no other statistics in other human calamities to compare. But I feel like writing one.)"

So my take is that you should ask yourself, "Am I really writing something new?" That's a tough requirement. Writing something new means you have already read a body of literature yourself. And after studying the nuances of the important literature, you then write something people have never read before. Yet this new writing also has to be convincing and interesting...

In a nutshell, technical blogging is not easy. Some statistics for you: out of 50 blog posts submitted to AIDL, we might approve around 20-30% of them. And of the published ones, a majority will not gather more than 10 likes. That has nothing to do with whether you use hashtags or come up with a catchy title. It has to do with whether you are creating genuinely interesting content for AIDL members.

On the other side of the coin, when your content is valuable, you will get attention. Just check out Adrian Rosebrock's posts on computer vision, or Raymond de Lacaze's regular posts on different papers. Rosebrock's posts usually include non-trivial implementations with working Python code, and de Lacaze's posts are explanations of recent papers. Both of them post useful links that we check out from time to time. That's why their posts get clicks, not because they use any kind of gimmick.

One last thing: don't give up. Writing about machine learning, just like learning machine learning, takes a long time to master. As Adrian later commented, writing a post is really not about getting likes; writing is itself a learning process. Once you master the knowledge, other things will come naturally.



 

AIDL Weekly Issue #72 - Google Dopamine


Editorial

Thoughts From Your Humble Curators

We include links to several blog posts this week, including:

  • Google's new reinforcement learning framework - Dopamine,
  • The very cool tutorial from Adrian Rosebrock on neural style transfer.

As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 171,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


Blog Posts





Video

Member's Question


 

AIDL Weekly Issue 71 - Humans vs OpenAI Five


Editorial

Thoughts From Your Humble Curators

This week we bring you the story of the first match between OpenAI and humans in a 5v5 battle. Unlike 1v1, 5v5 gameplay features collaboration between human experts, and it is currently unknown whether computer agents can learn such complex interaction and strategy by themselves.

As always, if you like our newsletter, feel free to share it with your friends/colleagues.


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 169,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News


Deals


Blog Posts






 

AIDL Weekly Issue #68 - Dr. Rachel Thomas on AutoML and What ML Practitioners Actually do.

Editorial

Thoughts From Your Humble Curators

This week, we share with you the following:

  • Google's Edge TPU
  • Dr Rachel Thomas' series on AutoML and what ML practitioners actually do
  • A deeper look at Google's new connectomics work

Enjoy!


This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook's most active A.I. group with 162,000+ members and host an occasional "office hour" on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.

Join our community for real-time discussions here - Expertify


News

Blog Posts

Dr. Rachel Thomas' Series on AutoML (and HumanML)

In AIDL Weekly, we usually share links from a range of sources. But this time we chose to share a series of links all written by Dr. Rachel Thomas at fast.ai. The reasons: First, Dr. Thomas covers what we ML practitioners care about: what do we really do? What are our daily activities? We found that this is an important yet under-discussed topic. We are often asked how we spend our time, and why activities such as data cleaning and experimental design are part of our daily jobs. Part I of Dr. Thomas' series, "What do machine learning practitioners actually do?", answers all these questions clearly, and it is suitable for all ML practitioners to read.

Second, Dr. Thomas dives into what is called neural architecture search; a notable example of such search is Google's AutoML. As you know, outlets have been sensationalizing how much AI can replace humans now. Dr. Thomas delves deep into neural architecture search in "An Opinionated Introduction to AutoML and Neural Architecture Search" and comments on AutoML in "Google's AutoML: Cutting Through the Hype". We found both pieces illuminating.

You can find the links here:






Paper/Thesis Review
