
AIDL Weekly Issue 7 – The Last ImageNet, OpenAI’s Evolution Strategies and the AI Misinformation Epidemic

Thoughts From Your Humble Curators

One of us (Waikit) is teaching a class for MIT in Brisbane, Australia. That’s why we have a lighter issue.

An interesting observation: in the MIT entrepreneurship classes I’m teaching, there are 120 entrepreneurs from 34 countries, from the U.S. to Vietnam to Kazakhstan. One of the top topics of interest and discussion was A.I. and deep learning. Surprising or not, some of the students were already applying fairly advanced DL techniques in emerging economies, in areas such as agriculture. It is clear that as A.I. democratizes beyond the ivory towers of Montreal, Stanford, CMU, FB, Google, Microsoft, etc., there will be some very long-tail positive implications for various economies over time. Is A.I. over-hyped? Sure. But people always overestimate the short term and underestimate the long term.

This week, we cover:

  • The last ImageNet
  • OpenAI’s new results on Evolution Strategies
  • A new and popular GitHub repository for photo style transfer

We also feature an article from Zachary Lipton, in which he calls out AI hype and the misinformation spread by popular outlets.

If you like our newsletter, remember to forward it to your friends and colleagues! Enjoy!

News

Blog Posts

The Bandwagon (in the words of Claude Shannon, 1956)

This is an essay adapted from Claude Shannon’s “The Bandwagon,” with machine learning substituted for information theory. I saw it shared by Cheng Soon Ong.

“Machine Learning has, in the last few years, become something of a scientific bandwagon. Starting as a technical tool for the computer scientist, it has received an extraordinary amount of publicity in the popular as well as the scientific press. In part, this has been due to connections with such fashionable fields as computing machines, cybernetics, and automation; and in part, to the novelty of the subject matter. As a consequence, it has perhaps been ballooned to an importance beyond its actual accomplishments. Our fellow scientists in many different fields, attracted by the fanfare and by the new avenues opened to scientific analysis, are using these ideas in their own problems. Applications are being made to biology, psychology, linguistics, fundamental physics, economics, the theory of organisation, and many others. In short, machine learning is currently partaking of a somewhat heady draught of general popularity.

Although this wave of popularity is certainly pleasant and exciting for those of us working in the field, it carries at the same time an element of danger. While we feel that machine learning is indeed a valuable tool in providing fundamental insights into the nature of computing problems and will continue to grow in importance, it is certainly no panacea for the computer scientist or, a fortiori, for anyone else. Seldom do more than a few of nature’s secrets give way at one time. It will be all too easy for our somewhat artificial prosperity to collapse overnight when it is realised that a few exciting words like deep learning, artificial intelligence, and data science do not solve all our problems.

What can be done to inject a note of moderation in this situation? In the first place, workers in other fields should realise that the basic results of the subject are aimed in a very specific direction, a direction that is not necessarily relevant to such fields as psychology, economics, and other social sciences. Indeed, the hard core of machine learning is, essentially, a branch of mathematics and statistics, a strictly deductive system. A thorough understanding of the mathematical foundation and its computing application is surely a prerequisite to other applications. I personally believe that many of the concepts of machine learning will prove useful in these other fields — and, indeed, some results are already quite promising — but the establishing of such applications is not a trivial matter of translating words to a new domain, but rather the slow tedious process of hypothesis and experimental verification. If, for example, the human being acts in some situations like an ideal predictor, this is an experimental and not a mathematical fact, and as such must be tested under a wide variety of experimental situations.

Secondly, we must keep our own house in first class order. The subject of machine learning has certainly been sold, if not oversold. We should now turn our attention to the business of research and development at the highest scientific plane we can maintain. Research rather than exposition is the keynote, and our critical thresholds should be raised. Authors should submit only their best efforts, and these only after careful criticism by themselves and their colleagues. A few first rate research papers are preferable to a large number that are poorly conceived or half-finished. The latter are no credit to their writers and a waste of time to their readers. Only by maintaining a thoroughly scientific attitude can we achieve real progress in machine learning and consolidate our present position.”

Shannon’s original can be found here.

Open Source

Member’s Question

Some Tips on Reading “Deep Learning” by Goodfellow et al.

Q: How do you read the book “Deep Learning” by Ian Goodfellow?

It depends on which part you are in. The first two parts work better as supplementary material to lectures/courses. For example, if you read the book while watching all the videos from Karpathy’s and Socher’s classes, you will learn much more than other students. We think the best lecture series to pair with it is Hinton’s “Neural Networks for Machine Learning.”

Part 1 tries to power you through the necessary math. If you have never taken at least one class in machine learning, that material is woefully inadequate on its own. Consider studying matrix algebra, and more importantly matrix differentiation, first (Abadir’s “Matrix Algebra” is perhaps the most relevant text); then you will make it through the math more easily. That said, Chapter 4’s example on PCA is quite cute, so read it if you are comfortable with the math.
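
As a quick taste of the matrix algebra involved, here is a minimal PCA sketch in NumPy. It is our own illustration (the function name pca and the toy data are ours, not the book’s): center the data, eigendecompose the covariance matrix, and project onto the leading eigenvectors.

    import numpy as np

    def pca(X, k):
        """Project X (n_samples x n_features) onto its top-k principal components."""
        Xc = X - X.mean(axis=0)                 # center: PCA assumes zero-mean features
        cov = np.cov(Xc, rowvar=False)          # feature-by-feature covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigh suits symmetric matrices; ascending order
        order = np.argsort(eigvals)[::-1][:k]   # indices of the k largest eigenvalues
        return Xc @ eigvecs[:, order]           # project onto the top-k eigenvectors

    # Toy usage: reduce 100 five-dimensional points to 2 dimensions.
    X = np.random.randn(100, 5)
    print(pca(X, k=2).shape)  # (100, 2)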

Part 3 is tough, and for the most part it is reading for researchers in unsupervised learning, which many people believe is the holy grail of the field. You will need to be comfortable with energy-based models; for that, we suggest going through Lectures 11 to 15 of Hinton’s course first. If unsupervised learning doesn’t interest you, you can skip Part 3 for now. Reading it is mostly about knowing what other people in unsupervised learning are talking about.
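
If “energy-based model” is unfamiliar, a small sketch may help. Below is the energy function of a restricted Boltzmann machine, the canonical energy-based model in Hinton’s lectures; this is our own illustration (the helper rbm_energy and the random parameters are ours, not code from the book). Lower energy means a more probable configuration, since p(v, h) is proportional to exp(-E(v, h)).

    import numpy as np

    def rbm_energy(v, h, W, a, b):
        """Energy of a binary RBM: E(v, h) = -a.v - b.h - v.W.h"""
        return -(a @ v) - (b @ h) - (v @ W @ h)

    # Toy usage with random parameters: 4 visible units, 3 hidden units.
    rng = np.random.default_rng(0)
    v = rng.integers(0, 2, size=4)                 # binary visible units
    h = rng.integers(0, 2, size=3)                 # binary hidden units
    W = rng.normal(size=(4, 3))                    # visible-to-hidden weights
    a, b = rng.normal(size=4), rng.normal(size=3)  # visible and hidden biases
    print(rbm_energy(v, h, W, a, b))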

While deep learning is a hot field, make sure you don’t abandon other ideas in machine learning. For example, we find reinforcement learning and genetic algorithms very useful (and fun). Learning theory is deep and can explain certain things we experience in machine learning. In our opinion, those topics are at least as interesting as Part 3 of “Deep Learning.” (Thanks to Richard Green at AIDL for his opinion.)
