Woohoo! Google Brain is organizing a new Kaggle challenge on adversarial images, called the Non-targeted Adversarial Attack Challenges (NAAC). Of the three tasks, two are about creating images that can attack an existing system; the other is about defending against adversarial attacks. The challenge is part of the NIPS 2017 competition track and will last for three months.
Why would Brain want to organize such a task? Perhaps the elephant in the room is that adversarial attacks are a sore point of all known deep learning systems, and some adversarial images can be embarrassingly different from the recognized class. In the past, such images were usually generated by researchers, but at Google's scale, it's possible that they are observing some of these images in the wild.
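For readers curious how such images are typically crafted, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest non-targeted attacks. This is an illustrative sketch in PyTorch, not the challenge's baseline; the `model`, `image`, and `label` below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Perturb `image` slightly so the model is nudged away from `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true class
    loss.backward()                              # gradient of loss w.r.t. pixels
    # Step *up* the loss surface: add epsilon times the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

A non-targeted attack like this simply pushes the image uphill on the loss; targeted variants instead descend toward a chosen wrong class.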
Here's one interesting article describing various efforts in brain-inspired chip development, by Prof. Karlheinz Meier, who is also co-director of the Human Brain Project (HBP).
Let's back up a little bit: why would people want to simulate the brain in the first place? For the most part, to understand the brain.
Our understanding of the brain is still fairly primitive. That's why a fair portion of neuroscientists believe that to further understand the brain, a full simulation of it is necessary. That's what the Human Brain Project (HBP) is supposed to do. But unlike other big science projects such as the Human Genome Project, HBP is more controversial because the resources required are huge. As you can read in the article, a couple of years ago it took a supercomputer to simulate 1.73 billion nerve cells connected by 10.4 trillion synapses. That is only ~2% of all the neurons in the brain (the latest estimate is 86 billion), and the model is crude. Is it worthwhile to dedicate such resources? That's a great question many people are pondering.
Regardless, perhaps one important benefit of learning about brain structure is that we might be able to emulate it with circuits. In this piece, Prof. Meier discusses three neuromorphic systems: SpiNNaker, TrueNorth, and BrainScaleS. I think it is very educational for anyone trying to learn about the "next big thing" in the ANN world.
One of our active Forum members, Keith Aumiller, once remarked that all our "Humble" posts, like "Humble Administrators" and the current "Humble Curators" (aka AIDL Weekly Editorial), are not that humble at all. That's true and not true. While we have a lot of experience working on ML problems and starting new AI companies, we are nowhere on par with the professors and researchers who have been pushing the envelope in ML/AI. So calling ourselves "Humble" always feels appropriate to us.
That brings us to this post: Prof. Tizhoosh's warning about AI Imposters is real. You might have seen a lot of posts lately from so-called "Deep Learning Consultants" or "Influencers". They might be bright in their own fields, but their "success" in AI seems to have happened overnight, and their "deep understanding" comes from nowhere. You know the type: press them on the details with questions like "How do SMT BLEU scores improve with deep learning?", "How does a GAN actually work? Do we really know?", "What is depth in a ConvNet?", or "What do you do when you can't use deep learning?", and they are tongue-tied and try to switch topics. Or, online, they just come up with controversial posts/comments to hype up their like counts.
We agree with Prof. Tizhoosh's post in principle only. As some AIDL members opine, if "expert" were confined to PhDs with 10 years of experience, the bar would just be too high to cross. Of course, we also feel that the whole "Turing Imposter test" is weird.
But to AIDL Members: beware of AI Imposters. In particular, when you bring up a technical subject in AI, make sure you know it well enough to opine. And for others' sake, may you be humble too. You want to help other people, for sure. You want to build up your own reputation, for sure. But if you don't know something, just say you don't know. The world will become a better place if we all present ourselves truthfully.
In this angry post from a redditor, the source code of Facebook's recent research "Deal or no deal? Training AI bots to negotiate" was harshly criticized. It resulted in a storm of discussion on Reddit, with ~500 comments.
Here is one insightful comment from pdehaan:
I've struggled with some of these issues myself (I'm a programmer first, with an interest in ML). Some general thoughts I've encountered:
- Academic papers are by their nature often the wrong place to look if you're trying to grok ideas. Space is at a premium in many publications, so authors are incentivized to write papers that are information dense.
- A lot of researchers aren't "programmers first". By that I mean they often approach code as a one-off means to an end, not something they're sticking into a real system and responsible for maintaining indefinitely.
- Related to the above, the audience they're used to communicating with often has similar experience. What's obvious to them (and thus not elaborated on) isn't always going to align with what's obvious to you.
...that's not to say things shouldn't be improved. Some of the ideas coming out are immensely useful, and improving usability is a valuable activity. This is an area where developers shine - code is what they deal with every day. If you spend time working through shitty uncommented code, improve it.
Worst case you have better code to work from, but the feedback can also be useful for helping authors to write better code in the future. If they're publishing code, there's at least a decent chance they'll take feedback to heart. Most people don't want to put shitty code out there, but that's not necessarily their area of expertise.
Well said! In our experience, part of your job as a developer is to decode cryptic machine learning source code. Sadly, that skill is part of your value proposition as an ML programmer. 🙂
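To make the point concrete, here is a hypothetical (and mild) example of the kind of terse research code you might encounter, followed by the commented refactor pdehaan's advice suggests. Both functions and names are invented for illustration.

```python
import numpy as np

# The kind of one-liner you often find in a research repo:
def f(x, w, b):
    return 1 / (1 + np.exp(-(x @ w + b)))

# The same computation, named and commented so the next reader
# (possibly you, six months later) doesn't have to reverse-engineer it:
def logistic_forward(features, weights, bias):
    """Predicted probabilities of a logistic regression model:
    sigmoid(features @ weights + bias)."""
    logits = features @ weights + bias    # linear score per example
    return 1.0 / (1.0 + np.exp(-logits))  # squash scores into (0, 1)
```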
Not only is Prof. Richard Sutton the new director; DeepMind has also hired six researchers who published the DeepStack paper. It does sound fairly exciting. Maybe DeepMind can keep its ongoing streak of beating various games at championship level.