The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Humble Curators
This past week, Elon Musk stepped down as chair of OpenAI’s board, and Ian Goodfellow was aptly dubbed the “GANfather”.
But perhaps the most interesting news is Google’s latest claim of using retinal fundus photographs to determine patients’ cardiovascular risk. The results look promising – we take a closer look in our Paper/Thesis Review section.
We also had a video AMA with Ocean Protocol, which is putting training data on the blockchain.
As always, if you like our newsletter, feel free to forward it to your friends/colleagues!
This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending Eth to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
News
Elon Musk steps down from OpenAI
Elon Musk has stepped down from the chair of OpenAI’s board. According to OpenAI’s blog post, the apparent reason is that Musk is involved in both Tesla and OpenAI; both organizations work on AI, so conflicts of interest are easy to imagine. The move is probably good for OpenAI: we have already seen high-profile researchers such as Andrej Karpathy lured away to Tesla.
The GANfather – Ian Goodfellow
Technology Review runs a piece on GANs and their inventor, Ian Goodfellow. As you know, GAN is perhaps the most investigated technique in unsupervised learning. This piece summarizes the pros and cons of the technique in a readable way.
Blog Posts
Feedback on Coursera deeplearning.ai: Is the Course Good For Everybody?
Long-time AIDL member Sergey Zelvenskiy has mixed feelings about the Coursera deeplearning.ai class. This thread consists of members giving constructive feedback on the class.
JupyterLab
Project Jupyter announced JupyterLab last week, and it became an instant sensation. JupyterLab lets users work with code interactively and collaboratively: you get an environment that combines code with equations, visualizations, and other rich outputs. That’s why many ML users are excited about the release.
Malicious AI Report
Coauthored by 26 AI experts, the “Malicious AI Report” is a 100-page document resulting from a two-day workshop held at Oxford. We will just quote the four high-level recommendations from the report:
- “Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
- Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
- Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
- Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.”
Video
Office Hour with Ocean Protocol
We were joined by Trent McConaghy, Co-founder and CTO of Ocean Protocol, to talk about how they are enabling the democratization of training data by putting it on the blockchain and building a protocol that allows different organizations to share, sell and buy data.
Member’s Question
Sentient AI discussion.
From time to time, AIDL has deep and interesting discussions on more futuristic topics in AI. For example, in this thread, members ask what the biggest problem in building a sentient AI would be, and there are many meaningful exchanges between members.
Paper/Thesis Review
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning
This is a read of the Nature paper version of Google’s widely circulated result.
(Despite the images, the link actually points to the Nature paper.)
We just checked out the widely circulated result by Google last week on predicting cardiovascular (CV) risk factors from retinal fundus images. Here are some notes:
- We are no doctors, but through some Googling, we learned that a retinal fundus photograph is an image of the interior surface of the eye – the part visible through the pupil during an eye examination.
- Retinal fundus photographs have many applications. For example, when someone has diabetes, high blood sugar levels damage the blood vessels in the retina, a condition called diabetic retinopathy. By examining the eye, you can tell whether someone has the disease.
- Of course, this is the type of task where deep learning can be useful. The authors, from the Alphabet subsidiaries Google and Verily Life Sciences, produced a serious study.
- The amount of data is sufficient: two databases, UK Biobank and EyePACS, totaling ~280k training images.
- The gist of the paper is to use image data to predict several predictors of CV risk, including age, gender, smoking status, and body mass index (BMI). The model gives very good results on age and gender (>0.95 AUC), and weaker results on smoking status (~0.7 AUC) and the other predictors.
- The authors also try to predict major adverse cardiovascular events (MACE) within 5 years. This number was reported less but is perhaps more interesting: nobody really cares whether you can predict the predictors; what we want to know is whether something bad will happen. Viewed this way, the result improves AUC from 0.66 using a single factor to 0.73 with the deep learning model. That still sounds like solid scientific progress.
- One interesting point about the analysis: the databases also contain many diabetic retinopathy (DR) patients, so the authors examine whether DR affects the performance of the deep learning model, which we think shows caution on the researchers’ part.
- To look at what the neural network is actually doing, the authors examine where the attention weights of the convnet concentrate. They conclude that the networks correctly focus on the retinal vessels.
- Google’s tone in reporting the research is quite sober, and it has not led to overly sensationalized coverage in the press. The results are solid, with improvements over past methods. All in all, we found this an interesting paper.
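As a side note on the metric used throughout the paper: AUC is simply the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. Here is a minimal pure-Python sketch on made-up toy data (not the paper’s data or code) to make that concrete:

```python
# Toy illustration of AUC: the fraction of (positive, negative) pairs
# the model ranks correctly, counting ties as half-correct.
def auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels (1 = smoker) and model scores for six patients.
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(round(auc(y_true, y_score), 3))  # 8 of 9 pairs ranked correctly -> 0.889
```

This pairwise-counting view also explains why 0.5 means a random guesser and 1.0 a perfect ranker, which puts the paper’s 0.66 → 0.73 MACE improvement in context.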
About Us