AIDL Weekly Issue 4: K for Kaggle, Jetson TX2 and DeepStack

Thoughts From Your Humble Curators

Three big pieces of news from last week:

  1. Google acquired Kaggle.
  2. NVIDIA released the Jetson TX2.
  3. Just like its rival Libratus, DeepStack made headlines for beating human poker pros.

In this Editorial, though, we want to bring to your attention a little paper titled “Stopping GAN Violence: Generative Unadversarial Networks”. After a minute of reading, you would quickly notice that it is a fake paper. But to our dismay, some newsletters treated it as a serious one. It’s obvious that those “editors” hadn’t really read the paper.

It is another proof point that the current deep learning space is over-hyped. Something similar happened with Rocket AI. You can get a chuckle out of it, but if the hype is overdone, expectations could also over-correct when they aren’t met.

Perhaps more importantly, as a community we should spend more conscious effort fact-checking and researching a source before we share it. We at AIDL Weekly follow this philosophy religiously, and every source we include is carefully checked – that’s why our newsletter stands out in the crowd of AI/ML/DL newsletters.

If you like what we are doing, check out our FB group and our YouTube channel.

And of course, please share this newsletter with friends so they can subscribe too.

Artificial Intelligence and Deep Learning Weekly



Blog Posts



Open Source


Video

Member’s Question

Question from an AIDL Member

Q. (Rephrased from a question asked by Flávio Schuindt) I’ve been studying classification problems with deep learning, and now I understand them quite well: activation functions, regularizers, cost functions, etc. Now I think it’s time to step forward. What I really want to do next is enter the world of deep learning image segmentation. It’s a more complicated problem than classification (object occlusion, lighting variations, etc.). My first question is: how can I approach this kind of problem? […]

A. You have hit one of the toughest (but hottest) problems in deep-learning-based image processing. Many people confuse problems such as image detection/segmentation with image classification. Here are some useful notes.

  1. First of all, have you watched lectures 8 and 13 of Karpathy’s 2016 cs231n? Those lectures should be your starting point for working on segmentation. Notice that image localization, detection, and segmentation are three different things. Localization and detection find bounding boxes, and their techniques/concepts can be helpful for “instance segmentation”. “Semantic segmentation” requires a downsampling/upsampling architecture (see below).
  2. Is your problem more a “semantic segmentation” problem or an “instance segmentation” problem? (See cs231n lecture 13.) The former comes up with regions of different meaning; the latter comes up with individual instances.
  3. Are you identifying something that always appears? If so, you don’t need fancy detection techniques: treat it as a localization problem, which you can solve by backpropagating a simple regression loss (as described in cs231n lecture 8). If the object may or may not appear, then a detection-type pipeline might be necessary.
  4. If you do need a detection-type pipeline, do standard segment-proposal techniques work for your domain? This is crucial, because at least at the beginning of your segmentation research you will have to rely on segment proposals.
  5. Lastly, if you decide this really is a semantic segmentation problem, then most likely your major task is to adapt an existing pre-trained network; very likely your goal is transfer learning (see the sketch after this list). Of course, check point 2 again to confirm this is really the case.
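
To make point 5 concrete, here is a minimal sketch (our own illustration, not taken from any particular paper) of a semantic segmentation network in PyTorch. It reuses a pre-trained ResNet-18 as the downsampling encoder and adds a small transposed-convolution decoder as the upsampling path, trained with per-pixel cross-entropy. The class name, NUM_CLASSES, and the input size are placeholders; in practice you would likely start from an established architecture such as FCN or U-Net.

```python
# Minimal semantic-segmentation sketch (assumes PyTorch + torchvision >= 0.13;
# on older torchvision, use resnet18(pretrained=True) instead).
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 21  # placeholder: number of semantic classes in your dataset


class SimpleSegNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to the last conv stage (drops avgpool + fc).
        # For a 224x224 input this yields a 7x7 map with 512 channels.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Decoder: five stride-2 transposed convs upsample 32x back to
        # the input resolution, ending in per-pixel class scores.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (N, num_classes, H, W)


if __name__ == "__main__":
    model = SimpleSegNet()
    images = torch.randn(2, 3, 224, 224)                    # dummy batch
    labels = torch.randint(0, NUM_CLASSES, (2, 224, 224))    # per-pixel labels
    logits = model(images)
    # Standard per-pixel cross-entropy loss for semantic segmentation.
    loss = nn.CrossEntropyLoss()(logits, labels)
    loss.backward()
    print(logits.shape, loss.item())
```

For transfer learning, you would typically freeze most of the encoder at first and fine-tune only the decoder (and later the top encoder layers) on your own labelled masks.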

Artificial Intelligence and Deep Learning Weekly
