What a week! Apple just started a new machine learning blog; Audi is going to sell the first self-driving car with L3 autonomy; Elon Musk is telling us (again) that AI is an imminent danger. Will Apple's new blog clash with its culture of secrecy? How real is Audi's L3 autonomy? Should we care about Musk's alarmist view? We will find out more in this issue. We also cover results from the last ImageNet, The Verge's interview with Demis Hassabis, François Chollet's view on deep learning, and more.
Apple has started a new ML blog! It is surprising because Apple is known for its defense-level secrecy. We just couldn't help feeling: "Ah, Apple really wants to jump on the A.I. bandwagon."
But hints of that secrecy are still there. For example, whereas Google is very open about core technologies such as speech recognition and machine translation, Apple chose research subjects such as GAN-based synthetic images. That way, nothing relevant to its core IP is disclosed.
The technical content so far is mostly about GANs and how to use them well. No doubt Apple has its own captain of deep learning, just as Facebook has Soumith Chintala. But who is he or she? No one knows! Once again, it is Apple's secrecy at work - all the posts are attributed simply to "Apple engineers". That way, Apple leaves its competitors no way to poach any of its very competent employees.
All in all, we learn that Apple is taking steps in A.I. and deep learning, but we can't help wondering: will Apple's culture of secrecy work well with the more open culture of today's AI/DL community? Would this clash impede AI development within Apple?
ImageNet 2017 results came out last week. In classification, the top error rate dropped from 2.94% last year to 2.25% this year. Speaking of the stars of the last two ImageNet competitions, we have to talk about teams from China: the winner this year is a team from Beijing's Momenta.ai together with a postdoc researcher from the University of Oxford. While we no longer see the dramatic improvements of the first few ImageNet competitions, the gain is still fairly impressive (*).
At first we thought it was just another ensemble-type entry. Yet the Momenta.ai+Oxford team designed a new learning block called "Squeeze-and-Excitation", and they also improved the GPU memory pipeline. The team promises to release a report later on.
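We haven't seen the team's report yet, so the following is only our own minimal sketch of the squeeze-and-excitation idea in numpy: globally average-pool each channel ("squeeze"), pass the result through a small bottleneck with a sigmoid ("excite"), and use the outputs as per-channel gates to reweight the feature map. All function names, shapes, and weights here are illustrative assumptions, not the team's actual architecture.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Illustrative squeeze-and-excitation block (our own sketch).

    feature_map: (C, H, W) activations
    w1: (C // r, C) bottleneck weights (r = reduction ratio)
    w2: (C, C // r) expansion weights
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid
    s = np.maximum(w1 @ z, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # per-channel weights in (0, 1)
    # Rescale: reweight each channel of the original feature map
    return feature_map * gates[:, None, None]

# Toy example: 8 channels, 4x4 spatial, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

The intuition: cheap global context (one scalar per channel) decides how much each channel should contribute, at almost no extra compute.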
It's unfortunate that this is the last ImageNet competition, and as we know the major players haven't participated since last year. But we are always grateful for the pioneering effort of collecting the database, as it has led to many breakthroughs in deep learning and computer vision.
Footnote: Normally we measure the relative improvement when working on a task. So with a 0.69% absolute improvement starting from 2.94%, you can think of the improvement as roughly 23% relative.
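The footnote's arithmetic can be checked in two lines; this is just the standard relative-error-reduction calculation on the numbers above:

```python
# Top-5 classification error: last year's vs. this year's ImageNet winner
prev_err, new_err = 2.94, 2.25

abs_gain = prev_err - new_err          # absolute drop, in percentage points
rel_gain = abs_gain / prev_err * 100   # relative error reduction, in percent

print(round(abs_gain, 2))  # 0.69
print(round(rel_gain, 1))  # 23.5
```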
For a long time, the SDCs we know have been at L2 autonomy, the so-called "hands-off" level, in which the driver can take their hands off the wheel but must still monitor and engage in driving if necessary. As we know, Google has been the champion at L2 - it requires engagement only about once every 5,000 miles. (Uber? Every mile.) But L2 was largely the status quo.
Then Audi came in and claimed L3 on its new A8, i.e. "eyes-off": drivers may take their eyes off the road and are only required to attend to driving in special, limited situations. More surprising is that Audi is a large automaker with a reputation for strong quality control.
One thing is for sure: German auto companies were among the first to research SDCs. Of course, the field became much more popular after luminaries such as Prof. Sebastian Thrun competed in the DARPA Grand Challenge, which eventually brought SDC technology to many research institutions in the States. So we shouldn't be too surprised that Audi has also been researching SDCs, and apparently the result is now for sale.
The car costs 98k euros, so it's certainly for die-hard fans of SDC; it will first go on sale in selected areas in Europe.
Here is an interview from The Verge with DeepMind founder Demis Hassabis on his views about the current separation between two fields: artificial intelligence and neuroscience. Indeed, while a neuroscience class these days would still concern itself with how computation can emerge from human biology, AI nowadays usually focuses on practical applications. Should it be that way? We share Hassabis' view - it shouldn't. There are many directions of AI research that can be inspired by neuroscience. And modern AI techniques, which process large sets of data, can also shed light on unsolved problems in neuroscience. We see much of this fusion in both our main FB group AIDL and our satellite group Computational Neuroscience and Neurobiology.
Hassabis's paper was published in the prestigious journal Cell. Here is the link.
This is TC's interview with Rodney Brooks, the famed researcher who directed MIT's Computer Science and Artificial Intelligence Lab and co-founded iRobot. By itself, it's an interesting interview, but perhaps the part that deserves your attention is how Brooks looks at Elon Musk's warning about AI's purported dangers.
Let's back up: earlier this week, Musk warned of the imminent danger of AI, and his remarks were widely reported by major outlets. In fact, there are also state-level discussions on how to properly regulate A.I.
Our view is not too different from Rodney's. Let's quote him:
So you’re going to regulate now. If you’re going to have a regulation now, either it applies to something and changes something in the world, or it doesn’t apply to anything. If it doesn’t apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let’s talk about regulation on self-driving Teslas, because that’s a real issue.
Here are two articles written by François Chollet, of Keras fame, on the limitations and the [future] of deep learning. As an important captain of the deep learning industry, his words deserve your time. We have seen much hype about deep learning lately. Many beginners see deep learning as just the hot new thing, and they expect career opportunities from that skill set alone. Yet real-life problems require extensive knowledge of other fields of machine learning, so Chollet's essay reminds us that such ignorance is dangerous.
He also gives us an outlook on possible future research directions in DL and on other ideas such as program synthesis and lifelong learning.