Categories
Uncategorized

AIDL Weekly #23 – Musk vs Zuckerberg

Editorial

Thoughts From Your Humble Curators

This week we look at the argument between Elon Musk and Mark Zuckerberg. Which one of the two billionaires is more right? (Or less wrong?) We will find out.

We also heard some disturbing news from popular outlets again – this time, that Facebook killed its A.I. agents which created their own language! Is it true? In our fact-checking section, we look at the claim closely.


As always, if you like our newsletter, feel free to forward it to your friends/colleagues!

Artificial Intelligence and Deep Learning Weekly

News


Factchecking

Fact-Checking: “Facebook Kills AI That Invented Its Own Language…”

You know that news can develop, but did you know fake news can also “develop”? Interesting, huh?

It happens. For example, in Issue 19 we fact-checked the rumor “Facebook AI Created Its Own Unique Language”. We rated it False. To recap: the FAIR researchers were evolving two agents that were first trained on English, so the resulting language is still English. If you really think the AI invented its own language, you are equating a colloquialism with a language.

That’s what we said in Issue 19 – so what has happened since? Oh well, fake news has legs of its own. Some good (perhaps slightly gullible) people from popular outlets once again think that Facebook created an AI which invented its own language. This time, imagination has them convinced that Facebook “killed” the A.I.

So this is a simple one: we rate this piece False. There was no AI creating a language, so saying Facebook “killed” such an AI is nonsensical.

(Follow-up at 20170729: We are contacting several researchers from Facebook to understand the matter better. We will keep you all posted on the latest.)


Blog Posts




Video

 


AIDL Weekly #22 – Apple New ML Blog, Elon Musk The Alarmist, and Hassabis’ View on AI+Neuroscience

Editorial

Thoughts From Your Humble Curators

What a week! Apple just started a new machine learning blog; Audi is going to sell the first self-driving car with L3 autonomy; and Elon Musk is telling us (again) that AI is an imminent danger. Will Apple’s new blog clash with its culture of secrecy? How real is Audi’s L3 autonomy? Should we care about Musk’s alarmist view? We find out more in this issue. We also cover results from the last ImageNet competition, The Verge’s interview with Hassabis, François Chollet’s view on deep learning, and more.

Oh! Did we mention AIDL just hit 30,000 members?


As always, if you like our newsletter, subscribe and forward it to your friends/colleagues!


News




Blog Posts




List of Interesting Blog Posts From Major Research Sites

Many interesting blog posts were published last week by major research sites. Here are some for your enjoyment:

Google:

BAIR

OpenAI


Open Source


AIDL Weekly Issue #21 – Google’s Gradient Ventures, Microsoft Research AI and the Baidu-Nvidia Alliance


Issue 21  

Editorial

Thoughts From Your Humble Curators

This week, we learn about Gradient Ventures, Microsoft’s new research initiative (Microsoft Research AI), and the Nvidia-Baidu alliance.

On the technical side, we discuss Google’s paper revisiting the “Unreasonable Effectiveness of Data”, and learn how DeepMind’s reinforcement learning teaches the funny stick-man MuJoCo humanoids to walk. Finally, there are two articles from Arthur: one on the differences between deep learning and machine learning, and one fact-checking whether “post-quantum computing” has anything to do with consciousness.


As always, if you like our newsletter, subscribe and forward it to your friends/colleagues!


News



Factchecking

Blog Posts

Open Source

Jobs

Video

Member’s Question

 


AIDL Weekly #20: Did DeepMind Fail to Comply with Data Protection Law?

Issue 20  

Editorial

Patient Privacy and DeepMind – Thoughts from Your Humble Curators

It’s summer! One of us is on vacation, so we have a shorter issue in terms of items (only 8). Yet there are two long articles. The first is our investigation into whether DeepMind and the Royal Free have violated patient privacy. We examine both the report from the Information Commissioner and that of DeepMind’s Independent Review Panel.

The second is a closer look at the growth of Skills on Amazon Alexa. Why are they growing so quickly? Are they real? We dig into Amazon’s current promotion strategy for July to understand it more.

Other than the two pieces on DeepMind and Amazon, we also cover 5 blog posts with topics ranging from Kaggle’s Adversarial Attack Challenges to Spectrum’s piece on the “Brain as a Computer”.

As always, if you like our newsletter, please subscribe and forward it to your colleagues!


News


Blog Posts






AIDL Weekly Issue #19 – Fact-Checking – Does Facebook AI Create Its Own Language?

Editorial

Thoughts From Your Humble Curators

This weekend is the Fourth of July long weekend, so the news is lighter. Yet development in AI and deep learning never stops – we just learned that Prof. Bengio has become an “Officer of the Order of Canada”, Prof. Ng is joining the board of Drive.ai, and Frank Chen from a16z is telling us VCs will not care about AI startups in a few years. Of course, we all learned that “Not Hotdog” from Silicon Valley is an actual deep learning app. All of these events are covered in this issue.

We also have two new segments for you. The first is Fact-Checking: routinely, we look at news posted on AIDL and decide whether it is fake. This time we fact-check the claim “Facebook AI Created Its Own Unique Language”, which was widely circulated by popular tech outlets last week.

Another interesting feature is “Member’s Submission”. This time we have Neel Shah, an active member of AIDL, telling us more about research trends in India.

As always if you like our newsletter, feel free to subscribe and forward it to your colleagues/friends!


News




Factchecking

Fact-Checking : “Facebook AI Created Its Own Unique Language”

This time we check the claim “Facebook AI Created Its Own Unique Language”. Is it true? Did Facebook really create an AIese?

When you hear such an extraordinary claim from popular outlets, you should be alert and suspicious. The next thing to do is look at the sources. In this case, the paper referred to is “Deal or No Deal? End-to-End Learning for Negotiation Dialogue” by FAIR.

So what did the Facebook researchers actually do? Their goal was to create bots that can negotiate. They first trained a seq2seq model on an English dialogue dataset. Naturally, such bots speak English, rather than any made-up language such as Esperanto or Toki Pona.

What, then, is this machine language everyone refers to? As it turns out, the researchers found

… that models trained to maximize the likelihood of human utterances can generate fluent language, but make comparatively poor negotiators, which are overly willing to compromise.

So they used different strategies to evolve the bots so that they negotiate better. Of course, this evolution results in a language slightly different from English, but it is more appropriate to call it a speaking mode than a unique language. Calling it a unique language implies a difference like that between English and French. Yet the difference we see here is more like the English we use in chatting versus in a business setting.
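To make “evolving the bots with a reward” concrete, here is a toy, purely illustrative sketch (not FAIR’s actual code, and every number in it is made up): a one-step policy chooses between two utterance styles, and a REINFORCE-style update nudges it toward the style that earns the higher negotiation reward.

```python
import math
import random

random.seed(0)

ACTIONS = ["compromise", "hold_firm"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical reward: holding firm yields a better deal on average.
def reward(action):
    return 1.0 if action == "hold_firm" else 0.3

logits = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = 0 if random.random() < probs[0] else 1
    r = reward(ACTIONS[i])
    # REINFORCE-style update: raise the log-probability of the sampled
    # action in proportion to the reward it earned.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad

probs = softmax(logits)
print(probs)  # the policy drifts toward "hold_firm"
```

The point of the toy: nothing in this loop invents a language. It only reweights which (English) utterance style the bot prefers, which is exactly why “speaking mode” is a better description than “unique language”.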

AIDL Weekly rates this claim as False.

(20170728: The original link to the paper, hosted on Amazon S3, is gone, so we have replaced it with the more up-to-date version retrieved from arXiv as of today.)


Blog Posts

Open Source


Video

Member’s Submission

 


AIDL Weekly Issue #18 – Andrej Karpathy, Prof. Bengio’s MS Advisory and The Goldberg Debate


Issue 18  

Editorial

Thoughts from Your Humble Curators

We took a short break last week, but so much happened in our little world of AI/DL! Dr. K being hired by Tesla is certainly huge to us. So is Prof. Bengio advising Microsoft. We also followed closely the Yoav Goldberg debate, which happened around 10 days ago. Why the interest? Because the Goldberg debate makes you ask deep questions about our field(s): Is NLP really solved by deep learning? Should we use arXiv as the publication platform? Does deep learning really live up to its hype in every field? We present a fairly extensive list of viewpoints in our blurb.

On technical news, we are loving “One Model To Learn Them All” (branded as “MultiModel”), which is perhaps the first model that can learn from a variety of modalities including speech, image and text. If you are into computer vision, the latest TensorFlow object detection API should also keep you entertained.

As always, if you find our newsletter useful, make sure you subscribe and forward to your colleagues/friends!


News




Blog Posts


Open Source


Jobs

Video

 


AIDL Weekly Issue #17 – Andrew Ng’s First Interview After Baidu, and More

Editorial

The Organic Growth of AI – Thoughts From Your Humble Curators

The last two months have been very eventful – GTC, F8 and Google I/O, plus AlphaGo vs the best human player. This week is a bit slower. We see Apple come up with Core ML – this is Apple playing catch-up to Google, but it could be very impactful long-term given Apple’s culture of tight integration and iOS’s market size and position.

Some multi-threaded happenings this past week:

  • On celebrities in the field: We saw the first interview of Prof. Andrew Ng after he left Baidu – yet another interesting and inspiring interview, with Forbes.
  • On infrastructure: Kaggle hit 1 million developers, and Coursera is getting its Series D. Both are training grounds for budding machine learning researchers/engineers.
  • On new techniques: Facebook is able to train a ResNet-50 in one hour (Wow!), and DeepMind has come up with a technique for relationship modeling that improves accuracy by around 30% absolute. The common thread of both pieces of research is that the techniques themselves are surprisingly simple.
  • On our AIDL Facebook group: We have seen record sign-ups in recent weeks. Just last week we added ~1800 members, and it’s continuing apace. We should hit 30K members in the not-too-distant future.
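As an aside on why the one-hour ResNet-50 technique counts as “surprisingly simple”: the accompanying Facebook paper attributes much of it to a linear learning-rate scaling rule with gradual warmup. A minimal sketch of that schedule, using the paper’s reference values as illustrative defaults (treat the exact function and numbers as a sketch, not Facebook’s code):

```python
# Linear-scaling rule with gradual warmup, written as a schedule
# function. base_lr=0.1 and base_batch=256 are the commonly cited
# reference settings for ResNet-50; the numbers are illustrative.

def learning_rate(epoch, batch_size, base_lr=0.1, base_batch=256,
                  warmup_epochs=5):
    # Scale the learning rate linearly with the minibatch size.
    target = base_lr * batch_size / base_batch
    if epoch < warmup_epochs:
        # Ramp linearly from base_lr up to the scaled target.
        return base_lr + (target - base_lr) * epoch / warmup_epochs
    return target

print(learning_rate(0, 8192))  # 0.1 at the start of warmup
print(learning_rate(5, 8192))  # 3.2, i.e. 0.1 * 8192 / 256
```

The whole trick fits in a dozen lines, which is the sense in which the technique is simple even though the engineering around it (large-batch distributed SGD) is not.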

As always, if you like our newsletter, subscribe and forward it to your colleagues/friends!


An announcement here – one of us (Arthur) is on vacation, so there will be no AIDL Weekly next Friday; we will resume with Issue #18 on Jun 23.


News







Blog Posts





Jobs

Member’s Question

Udacity DL class vs Hinton’s ?

AIDL member Fru Nck asked (rephrased): Can you tell me more about Hinton’s Coursera class vs Google’s Udacity class on deep learning?

Here are a couple of great answers from other members:

By Karan Desai: “Former focuses more on theory while latter gets you through Tensorflow implementations better, keeping the concepts a little superficial. In my opinion both courses were made to serve different purposes so there isn’t a direct comparison. Meanwhile you can refer to Arthur’s blog.”

By Aras Dar: “I took them both. For the beginners, I highly recommend Udacity and after Udacity, you understand Hinton’s course much better. Hinton’s course is more advanced and Udacity course is built for beginners. Hope this helps!”

An afterthought from me: I didn’t take the Udacity class, but by reputation it is a more practical class with many examples. If you have only taken Ng’s, Hinton’s class is going to confuse you; in fact it confuses many PhDs I know. So go for the Udacity class and a few others first before you dive into Hinton’s.


 


AIDL Weekly #16 – Apple Neural Engine, Neuromorphics and AlphaGo’s Retirement

Editorial


Last week we learned that AlphaGo retired after her glorious career: beating the best human Go player and a team of 5 top players in the world. We include 4 pieces in this issue just to wrap up the event.

None of these should be surprising to AIDL readers though. As we have predicted in Issue 15:

Go joins the pantheon of games like Chess, where computers have proven to be better than humans. As with Chess, research funding will move from Go to some other more complex A.I.-vs-human research projects.

It still leaves us with a question: what is the meaning of AlphaGo research? We believe Dr. Andrej Karpathy gives a very good answer – he believes AlphaGo’s research solves only a very narrow sense of AI. Unless we are talking about yet another computer game, it is rather hard for AlphaGo’s research to transfer. Take a look at the Blog section for Dr. K’s article.


We also heard all about new chips last week: the Apple Neural Engine and ARM’s new processor. That’s why we include the Bloomberg piece on the Apple Neural Engine. In the same vein, you may have heard of neuromorphic chips, which purportedly use spiking neural networks. Are they real? And are they better than vanilla deep learning chips such as the TPU? We include two pieces to analyze the matter.


As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!


News






Blog Posts





Jobs

About Us

 


AIDL Weekly #15 – AlphaGo vs The World

Editorial

AlphaGo vs The World – Thoughts From Your Humble Curators

Ten years ago, no one would believe a computer Go program could ever beat humans. Many experts estimated it would take 25-50 years for a Go program to compete at 9-dan, let alone become world champion.

This is exactly what happened yesterday – AlphaGo defeated Ke Jie, the strongest human Go player according to Go Ratings. Go joins the pantheon of games like Chess, where computers have proven to be better than humans. As with Chess, research funding will move from Go to some other more complex A.I.-vs-human research projects.

The big question is: what’s the next big game for A.I. to challenge humans at? Our guess is Starcraft/Warcraft. No doubt it would require another set of technical breakthroughs to defeat humans at such a complex game with so many game states.

But before that, deep learning made some real history today.


Other than our coverage of AlphaGo (4 items), we also cover SoftBank and statistics from NIPS. As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!


News






Blog Posts


Jobs

Video

Videos of AlphaGo vs Ke Jie First Two Matches

Match 1

Match 2



Member’s Question

Question by Nishanth Gandhidoss‎: Are the following three areas what a data scientist needs to learn for AI?
Computer vision
Natural language processing
Reinforcement learning
A: (By Arthur) On the terms – “reinforcement learning” (RL) is usually one sub-branch of machine learning, in parallel with “supervised learning” (SL) and “unsupervised learning” (UL). Briefly, RL usually means you don’t have the correct output in your training data (unlike SL); what you have is just a reward, and the reward can be delayed. That makes RL very different from SL, and it usually has its own class of techniques.

“Computer vision” (CV) and “natural language processing” (NLP), on the other hand, are application areas for machine learning. You can use techniques from SL, UL and RL in any of these fields. That’s why the terms CV, NLP and RL are seldom compared with each other.
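To make the SL-vs-RL distinction concrete, here is a toy sketch (everything in it is illustrative, not from any library): a learner guesses a hidden bit. The supervised learner is told the correct answer on every example; the reinforcement learner only sees a reward for the action it actually took.

```python
import random

random.seed(1)

# Supervised learning: every example comes with the correct answer,
# so the update can move the guess directly toward the target.
def supervised_step(p, correct_answer, lr=0.2):
    target = 1.0 if correct_answer == 1 else 0.0
    return p + lr * (target - p)

# Reinforcement learning: we only observe a reward for the action we
# took; the correct answer itself is never revealed.
def rl_step(p, action, reward, lr=0.2):
    if action == 1:
        return p + lr * reward * (1.0 - p)
    return p - lr * reward * p

p_sl = p_rl = 0.5  # probability of guessing "1"
for _ in range(50):
    p_sl = supervised_step(p_sl, correct_answer=1)
    action = 1 if random.random() < p_rl else 0
    reward = 1.0 if action == 1 else 0.0  # a reward, not a label
    p_rl = rl_step(p_rl, action, reward)

print(round(p_sl, 3), round(p_rl, 3))
```

The supervised learner converges quickly because it is corrected on every step; the reinforcement learner has to discover the good action through its own, possibly unrewarded, tries. That is why RL has its own class of techniques.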

On a data scientist learning all these subjects – it depends on the job. More conventionally (say 5 years ago), being a data scientist usually meant processing data in table form (R dataframes, SQL, etc.). But nowadays, due to deep learning, a data scientist can also be asked to work on high-dimensional data such as computer vision and NLP (through word vectors). So I am not surprised that some job descriptions include CV and NLP. If you have some knowledge of them, you do have an edge.


 


AIDL Weekly Issue 14 – Google I/O 2017, AMD Vega and a Review of Five Basic Deep Learning Classes

Editorial

The Two Battlefronts of Nvidia

This week, we cover Google I/O: TPU v2 is a monster, equivalent to 32 P100s. Then there are new free TPU research clusters, TensorFlow Lite (a package for easy TF development on embedded devices), and automatic search of network architectures. All amazing features!

Google could have taken quite a bit of attention away from Nvidia GTC by announcing TPU v2 last week, as The Next Platform’s Nicole Hemsoth nicely put it:

[……] we have to harken back to Google’s motto from so many years ago… “Don’t be evil.” Because let’s be honest, going public with this beast during the Volta unveil would have been…yes, evil.

In other news, AMD also announced a competing product, the Radeon RX Vega. However, AMD has been having a hard time against Nvidia due to software issues (more details in this issue) even when its specs are slightly better and the card is cheaper. This is the power of the software moat. Hardware commoditizes fast but software makes things sticky.


Other than Google I/O and AMD’s new GPU card, we also include several nice resources and links this week, including:

  • Arthur’s review of five basic deep learning classes
  • Adit Deshpande’s GitHub on using TensorFlow.

As always, if you like our newsletter, feel free to subscribe and forward it to your colleagues!
We also just reached 20,000 members in our AIDL forum, so come join us!



A Correction on Issue #13 Editorial

About the Editorial of Issue 13: Peter Morgan was kind enough to correct us – both Nervana and the TPU are based on ASICs, rather than FPGAs. We have corrected the web version.


News




Blog Posts


Open Source


Jobs

Video

Member’s Question

Thought/Anecdote from a participant of GTC 2017

Ajay Juneja shared this on AIDL:
“Thoughts from the admin of the Self-Driving Car group this week (I attended the Nvidia Conference):

  1. Bi-LSTMs (Bi directional LSTMs) are everywhere, and working quite well. If you aren’t using them yet, you really should. For those of us from the mechanical engineering world, think of them a bit like making closed-loop feedback control systems.
  2. The convergence of AI, VR, AR, Simulation, and Autonomous Driving. It’s happening. Need to generate good data for your neural nets, quickly? Build a realistic simulator using Unreal Engine or Unity, and work with gaming developers to do so. Want to make your VR and AR worlds more engaging? Add characters with personality and a witty voice assistant with emotion to them, while using cameras and audio to determine the emotional state of the players. Want to prototype a new car or building or surgery room? Build it in VR, create a simulator out of it. We need to cross pollinate these communities and have everyone working together 🙂
  3. Toyota signed with Nvidia. That’s 8 of 14 car companies… and they have signed the 2 largest ones (VW and Toyota). I hear rumblings from the AI community saying “If you want to build a self driving car TODAY, your choices are Nvidia and… nothing. What can I actually buy from Intel and Mobileye? Where are the engineers to support it? Qualcomm may have something for the 845 but they are drunk on mobile profits and again, no one knows their tools.”

500K Nvidia Developers vs. next to nothing for Intel and Qualcomm’s solutions.

I believe Nvidia has its moat now.”
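Since point 1 of the quote may be new to some readers, here is a minimal numpy sketch of what a bidirectional LSTM computes: one LSTM reads the sequence left-to-right, another reads it right-to-left, and the two hidden states are concatenated at each timestep. The weights are random, so this only illustrates the wiring, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, hidden):
    """Run a single LSTM over a list of input vectors."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for x in xs:
        z = W @ x + U @ h + b          # all four gates in one matmul
        i, f, o, g = np.split(z, 4)    # input, forget, output, cell
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        outputs.append(h)
    return outputs

def bilstm(xs, params_fwd, params_bwd, hidden):
    fwd = lstm_forward(xs, *params_fwd, hidden)
    bwd = lstm_forward(xs[::-1], *params_bwd, hidden)[::-1]
    # each timestep sees context from both the past and the future
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

d_in, hidden, T = 3, 4, 5

def init_params():
    return (rng.normal(size=(4 * hidden, d_in)),
            rng.normal(size=(4 * hidden, hidden)),
            np.zeros(4 * hidden))

xs = [rng.normal(size=d_in) for _ in range(T)]
ys = bilstm(xs, init_params(), init_params(), hidden)
print(len(ys), ys[0].shape)  # 5 (8,)
```

The closed-loop-feedback analogy in the quote maps onto the recurrence `c = f * c + i * tanh(g)`: the forget gate decides, on each step, how much of the running state to keep.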


About Us