
Issue 42 – Throwback 2017

Editorial

Thoughts From Your Humble Curators – 2017 Year End Edition

In this issue, we republish several memorable stories from 2017. For news, that includes Uber vs. Waymo, and Andrew Ng leaving Baidu to start deeplearning.ai. For papers, we include classics such as Sara Sabour and Prof. Hinton’s work on capsule theory, as well as Prof. Bengio’s consciousness prior. And finally, for fact-checking, you guessed it: both “Facebook makes a kill switch for AI” and “Google AI builds an AI” are here.


Hope you enjoy this throwback issue. As always, if you like our newsletter, feel free to subscribe or forward it to your colleagues. AIDL accepts donations; you can use the link at https://paypal.me/aidlio/12/ to donate. Your donation is used to cover the monthly payments for our ever-growing subscriptions for AIDL Weekly, as well as other operating costs.

Artificial Intelligence and Deep Learning Weekly

Factchecking

A Closer Look at The Claim “Facebook kills Agents which Create its Own Language”

From Issue 24:

As we fact-checked in Issue 18 and 23, we rated the claim

Facebook kills Agents which Create Its Own Language.

as false. As you might know, the fake news spread to 30+ outlets and stirred up the community.

Since the Weekly has been tracking this issue much earlier than other outlets (Gizmodo was the first popular outlet to call out the fake news), we believe it’s a good idea to give you our take on the issue, given all the information we know. You can think of this piece as fact-checking from a technical perspective, and use it as a supplement to the Snopes piece.

Let’s separate the issue into a few aspects:

1, Did Facebook Kill an A.I. Agent at All?

Of course, this is the most ridiculous part of the claim. For starters, most of these “agents” are really just Linux processes. So you can just stop them using the Linux command kill. In the worst case, kill -9, or pkill -9 to match by name. (See Footnote [1])
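To make the point concrete, here is a toy sketch in Python (not anything from Facebook’s code): a long-running process stands in for an “agent”, and stopping it is ordinary process management — SIGTERM first, SIGKILL if it refuses to die.

```python
import subprocess

# A so-called "AI agent" is, operationally, just a process. We launch a
# dummy long-running process as a stand-in "agent", then terminate it:
# first politely (SIGTERM), then forcefully (SIGKILL) if needed.
agent = subprocess.Popen(["sleep", "300"])

agent.terminate()                 # equivalent of `kill <pid>` (SIGTERM)
try:
    agent.wait(timeout=2)
except subprocess.TimeoutExpired:
    agent.kill()                  # equivalent of `kill -9 <pid>` (SIGKILL)
    agent.wait()

print("agent stopped; return code:", agent.returncode)
```

No kill switches, no dramatic shutdowns — just signals.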

2, What Language Did the Agents Generate, and What Is The Source?

All outlets seem to point to a couple of sources, or the original articles. As far as we know, none of these sources quoted academic work directly on the subject matter. For convenience, let’s call these source articles “The Source” (also see Footnote [2]). The Source apparently conducted original research and interviewed Facebook researchers. Perhaps the more stunning part is that it includes printouts of what the machine dialogue looks like. For example, some of the machine dialogue looks like:

Bob: “i can i i everything else”

Alice: “balls have zero to me to me to me to me …..”

That does explain why many outlets sensationalized this piece: while the dialogue is still English (as we explained in Issue #18), it does look like codewords rather than English.

Where does the printout come from? It’s not difficult to guess – it comes from the open-source code of “End-to-End Negotiator”. But the example we can find on GitHub looks much more benign:

Human : hi i want the hats and the balls

Alice : i will take the balls and book <eos>

Human : no i need the balls

Alice : i will take the balls and book <eos>

So one plausible explanation is that someone played with the open-source code and happened to create a scary-looking dialogue. The question, of course, is: was this dialogue generated by FB researchers, or did FB researchers simply provide The Source with the dialogue? This is the part we are not sure about. Since The Source does quote Facebook researchers (see Footnote [3]), it’s possible.

3, What Is Facebook’s Take?

Since the event, Prof. Dhruv Batra has posted a status on July 31 in which he simply asks everyone to read the piece “Deal or No Deal?” as the official reference for the research. He also called the fake news “clickbaity and irresponsible”. Prof. Yann LeCun also came out and slammed the fake newsmakers.

Both of them declined to comment on individual pieces, including The Source. We also tried to contact both Prof. Dhruv Batra and Dr. Mike Lewis about the validity of The Source. Unfortunately, both were unavailable for comment.

4, Our Take

Since we don’t know whether any of The Source is real, we can only speculate about what happened here. What we can do is make our speculation as technically plausible as possible.

The key question here: is it possible that FB researchers really created some codeword-like dialogue and passed it to The Source? It’s certainly possible, but unlikely. Popular outlets have a generally bad reputation for misinforming the public on A.I.; it is hard to imagine FB’s P.R. department not stopping this kind of potential bad press in the first place.

Rather, it’s more likely that the FB researchers only published the paper, and somebody else misused the code the researchers open-sourced (as we mentioned in Pt. 2). In fact, if you re-examine the dialogue released by The Source:

Bob: “i can i i everything else”

Alice: “balls have zero to me to me to me to me …..”

It looks like the dialogue was generated by models which are not well trained, especially if you compare the printout with the one published on Facebook’s GitHub.

If our hypothesis is true, we side with the FB researchers, and believe that someone simply wrote an over-sensational post in the first place, causing a public stir. Generally, everyone who spreads news should take responsibility for checking their sources and ensuring the integrity of their piece. We certainly don’t see such responsible behavior in the 30+ outlets that reported the fake news. It also doesn’t look likely that The Source was written in a way faithful to the original FB research. Kudos to the Gizmodo and Snopes authors, who did the right thing. [4]

Given that the agents more likely behave like what we found on Facebook’s GitHub, we maintain our verdict from Issues 18 and 23: it is still very unlikely that FB agents are creating any new language. But we add the qualifier “very unlikely” because, as you can see in Point 3, we still couldn’t get the Facebook researchers’ verification as of this writing.

So let us reiterate our verdict:
We rate the claim “Facebook killed the agents” false.
We rate the claim “Agents created their own language” very likely false.

AIDL Editorial Team

Footnote:

[1] Immediately after the event, a couple of members joked about how ignorant the public is of what so-called AI agents actually are.

[2] We avoid naming The Source. There seem to be multiple of them, and we are not sure which one is the true origin.

[3] The author of The Source seems to have communicated with Facebook researcher Prof. Dhruv Batra and quotes the Professor’s words, e.g.

There was no reward to sticking to English language,

as well as those of researcher Mike Lewis:

Agents will drift off understandable language and invent codewords for themselves,

[4] What if we are wrong? Suppose The Source is real, and the agents did generate codeword-like dialogue. Is it a new language?

That’s a more debatable issue. Again, as we said in Issue 18, if you start by training a model on an English database, the language you get will still be English. But can you characterize an English-like language as a new language? That’s a harder question. E.g., a creole is usually seen as another language, but a pidgin is usually seen as just a grammatical simplification of a language. So how should we see the codewords generated by a purported “rogue” agent? Only professional linguists should judge.

It is worthwhile to bring up one thing: while you can see the codeword language as just another machine protocol, like TCP/IP, The Source implies that the Facebook researchers consciously made sure the language adheres to English. Again, this depends on whether The Source is real, and whether the author mixed his/her own research into the article.



On Google’s “AI Built an AI That Outperforms Any Made by Humans”

For those who are new to AIDL: AIDL has what we call “The Three Pillars of Posting”, i.e. we require members to post articles which are relevant, non-commercial and non-sensational. When a sensationalized piece of news starts to spread, an admin of AIDL (in this case, Arthur) fact-checks the relevant literature and source material and decides whether certain pieces should be rejected. This time we are going to fact-check a popular yet misleading piece: “AI Built an AI That Outperforms Any Made by Humans”.

  • The first thing to notice: “AI Built an AI That Outperforms Any Made by Humans” comes from a site which has historically sensationalized news. The same site was involved in sensationalizing the early version of AutoML, as well as the notorious “AI learns language” fake news wave.
  • So what is it this time? It all starts from Google’s AutoML, published in May 2017. If you look at the page carefully, you will notice that it is basically just a tuning technique using reinforcement learning. At the time, the research only worked on CIFAR-10 and Penn Treebank.
  • But then Google released another version of AutoML in November. The gist is that Google beat SOTA results on COCO and ImageNet. Of course, if you are a researcher, you will simply interpret it as “Oh, automatic tuning has now become a thing; it could be a staple of the latest evaluations!” The model is now distributed as NASNet.
  • Unfortunately, this is not how the popular outlets interpreted it. E.g., sites were claiming “AI Built an AI That Outperforms Any Made by Humans”. Even more outrageous, some sites claimed “AI is creating its own ‘AI child'”. Both claims are false. Why?
  • As we just said, Google’s program is an RL-based program which proposes the child architecture. Isn’t this parent program still built by humans? So the first statement is refuted: someone wrote a tuning program. A sophisticated one, yes, but still a tuning program.
  • And if you are imagining “Oh, AI is building itself!” and have this imagery of AI now self-replicating, you could not be more wrong. Remember that the child architecture is used for other tasks such as image classification. These “children” don’t create yet another generation of descendants.
  • A much less confusing way to put it: “Google’s RL-based AI is now able to tune models better than humans on some tasks.” Don’t get us wrong, this is still an exciting result, but it doesn’t give any sense of “machines procreating”.
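The parent/child relationship above can be sketched in a few lines. This is a toy, bandit-style stand-in for Google’s actual controller (which is an RNN trained with reinforcement learning); the search space, scoring function, and update rule here are all invented for illustration. The point it demonstrates is the structure of the claim: a human-written parent loop samples child configurations, evaluates each one, and reinforces the configurations that score well.

```python
import random

# Toy search space: a "child architecture" is just a (layers, width) pair.
SEARCH_SPACE = [(layers, width) for layers in (2, 4, 8) for width in (16, 32, 64)]

def evaluate_child(arch):
    """Stand-in for training a child model and measuring validation
    accuracy. Here: a fixed toy score, so the example runs instantly."""
    layers, width = arch
    return 0.5 + 0.01 * layers + 0.001 * width  # pretend deeper/wider is better

# "Controller": a preference score per architecture, sampled proportionally.
prefs = {arch: 1.0 for arch in SEARCH_SPACE}

random.seed(0)
for step in range(50):
    # Sample a child architecture proportionally to current preferences.
    r = random.uniform(0, sum(prefs.values()))
    acc = 0.0
    for arch, p in prefs.items():
        acc += p
        if r <= acc:
            chosen = arch
            break
    reward = evaluate_child(chosen)
    prefs[chosen] *= (1.0 + reward)   # reinforce well-performing children

best = max(prefs, key=prefs.get)
print("best architecture found:", best)
```

Note that the loop, the search space, and the update rule are all written by a human; the “AI building an AI” is this parent program picking among human-defined options — which is exactly why “tuning program” is the honest description.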

We hope this article clears up the matter. We rate the claim “AI Built an AI That Outperforms Any Made by Humans” false.

Here is Google’s original post.


