As we all learned last week, the iPhone X's Face ID was purportedly hacked by Bkav, a Vietnamese security company. But the more mainstream reports all suggest that we shouldn't be too worried about the hack. So let's take a closer look at the matter.
This YouTube video shows a mask spoofing Face ID. The mask combines a 3D-printed base with 2D printouts of the eyes and the mouth. Later, Bkav also did a live demo with the BBC.
However, once you ask how one might use this as an exploit, it looks much tougher. First of all, Bkav refused BBC reporters' request to create a new mask. And according to the engineers who worked on it, the whole process requires 9 hours, and the user has to be present so that the 2D+3D mask can be adjusted.
Another part that doesn't add up is that Apple seems to have tested Face ID extensively with masks as well. So, as long as Bkav refuses to replicate the process, it's hard to say definitively whether Face ID is hackable.
Andrej Karpathy wrote a new piece on why neural networks are actually the new software. Some question whether we have really reached the point where DNNs can simply replace programming. That's a valid question, yet if you read the article closely, Karpathy is really arguing for using neural networks as a skill to build ML components such as ASR, computer vision, and translation, which traditionally required a huge amount of programming effort but can now be built with significantly less thanks to deep learning.
We think that Karpathy is playing the role of futurist here, much like in some of his past articles, e.g. Short Story on AI, in which he speculates on what AI will look like when we scale up supervised learning.
This is a new article by Prof. Rachel Thomas, who discusses how to create a good validation set. She delves deeper than the usual "train-validation-test split" discussion and asks when a random validation set might not work. We found it a thought-provoking piece.
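To make her point concrete: for time-ordered data, a random split leaks "future" rows into training, while a time-based split mimics how the model will actually be used. A minimal sketch (the toy dataset and variable names are our own, purely for illustration):

```python
import random

# Toy time-ordered dataset: (day, value) pairs.
data = [(day, day * 0.5) for day in range(100)]

# Random split: future rows end up in training, which can
# inflate validation scores for time-dependent problems.
shuffled = data[:]
random.seed(0)
random.shuffle(shuffled)
random_train, random_valid = shuffled[:80], shuffled[80:]

# Time-based split: hold out the most recent 20% instead.
time_train, time_valid = data[:80], data[80:]

# Every validation row is strictly "in the future"
# relative to all training rows.
assert max(t for t, _ in time_train) < min(t for t, _ in time_valid)
```

The same idea applies whenever rows are not independent, e.g. multiple records per patient: the validation set should separate along the dimension the model will face at deployment time.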
This is a project from Stanford showing that deep learning can detect pneumonia at the level of radiologists.
The model is trained on the recently released ChestX-ray14 dataset, which has 14 types of disease annotated for each of its 110k images. The architecture is a 121-layer DenseNet. The authors show that CheXNet exceeds human radiologists on both specificity and sensitivity. The original paper can be found here.
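For readers less familiar with the two metrics: sensitivity is the fraction of actual positives the model catches, and specificity is the fraction of actual negatives it correctly rules out. A small sketch (the counts below are made up for illustration, not from the paper):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Compute sensitivity (true positive rate) and
    specificity (true negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # caught positives / all positives
    specificity = tn / (tn + fp)  # ruled-out negatives / all negatives
    return sensitivity, specificity

# Hypothetical counts for a binary pneumonia classifier.
sens, spec = sensitivity_specificity(tp=85, fp=10, tn=90, fn=15)
print(sens, spec)  # 0.85 0.9
```

Exceeding radiologists on both at once is the notable claim, since the two metrics normally trade off against each other as you move the decision threshold.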
Many deep learning scripts depend on NumPy, and as many of you know, Python library compatibility issues are really difficult to resolve. That's why NumPy dropping Python 2.7 support is potentially a big issue for many projects, and you deserve to know.
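If you maintain a Python 2.7 project, one defensive habit is to gate NumPy upgrades on a version check rather than discovering breakage at install time. A sketch along those lines (the helper name and the 1.17 cutoff are our own assumptions for illustration; check NumPy's announced schedule for the actual cutoff):

```python
def numpy_supports_py27(numpy_version):
    """Return True if this NumPy release line is assumed to
    still support Python 2.7 (cutoff chosen for illustration)."""
    # Parse "major.minor" from a version string like "1.16.6".
    major, minor = (int(part) for part in numpy_version.split(".")[:2])
    # Assumption: treat 1.17 as the first Python-3-only release line.
    return (major, minor) < (1, 17)

# Usage: decide whether an upgrade is safe for a Python 2.7 project.
print(numpy_supports_py27("1.16.6"))  # True under our assumed cutoff
print(numpy_supports_py27("1.17.0"))  # False
```

Pinning the dependency in requirements.txt to the last supporting release line achieves the same thing declaratively.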