The definitive weekly newsletter on A.I. and Deep Learning, published by Waikit Lau and Arthur Chan. Our background spans MIT, CMU, Bessemer Venture Partners, Nuance, BBN, etc. Every week, we curate and analyze the most relevant and impactful developments in A.I.
We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube.
Editorial
Thoughts From Your Humble Curators
We bring you “Edmond de Belamy, from La Famille de Belamy”, the AI-generated artwork that sold for roughly 40 times its estimated price. We also point you to several technical blog posts, including Ruder’s “Neural History of NLP” and Google AI’s very entertaining post on curiosity vs. procrastination in robotic agents.
As always, if you find this newsletter useful, feel free to share it with your friends/colleagues.
This newsletter is published by Waikit Lau and Arthur Chan. We also run Facebook’s most active A.I. group with 191,000+ members and host a weekly “office hour” on YouTube. To help defray our publishing costs, you may donate via link. Or you can donate by sending ETH to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
Join our community for real-time discussions here: Expertify
News
Edmond de Belamy, from La Famille de Belamy
Last week, the AI-generated portrait “Edmond de Belamy, from La Famille de Belamy” sold for roughly 40 times Christie’s initial estimate. We are surprised, but AI artists should have their day too.
Some have argued that back in 2015, artists had already generated portraits of similar quality to “Edmond de Belamy”. For example, take a look at Robbie Barrat’s website; you may see that a similar impressionistic style was already there.
An important issue here: do humans deserve any credit for a piece of machine-generated art? Some will argue they do, because humans still serve as the final judges of a piece’s aesthetic value, as well as the tuners of generative AI algorithms such as GANs.
Perhaps one day machine-generated art will sell for more than a Picasso. But before that, maybe we should first really understand how GANs work.
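For readers who want a starting point on how GANs work, the original formulation (Goodfellow et al., 2014) frames generation as a two-player minimax game between a generator G and a discriminator D over a value function:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

D is trained to tell real samples x from generated samples G(z), while G is trained to fool D; at the equilibrium of this game, the generator’s distribution matches the data distribution.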
Waymo’s Commercial Pilot
This is perhaps the first commercial pilot of a self-driving car service, and Waymo is now ahead of competitors such as Uber and Ford. ARK Invest, a research group, estimates the cost of an autonomous taxi at $0.35 per mile. No one knows for sure what Waymo’s pricing model is. If we did, we could decide whether Waymo is really worth the tens of billions of dollars suggested by analysts.
Blog Posts
A Review of the Neural History of NLP
This article was published about a month ago, but we found it relevant, readable, and illuminating. It traces language modeling from the pre-deep-learning era, when smoothing techniques such as Kneser-Ney were the state of the art, to the present day, when large-scale pre-trained models are the norm. Also notable is its non-neural section, which mentions BLEU, LDA, OntoNotes, and many significant NLP innovations that have nothing to do with deep learning. This should be refreshing for those of us who believe deep learning is the only solution to all our problems.
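For the curious, the Kneser-Ney smoothing mentioned above fits in a few lines of code. Here is a minimal sketch of the interpolated bigram case; the toy corpus and discount value are illustrative, not from the article:

```python
from collections import Counter

def kneser_ney_bigram(tokens, discount=0.75):
    """Return a function prob(prev, w) = P(w | prev) under
    interpolated Kneser-Ney smoothing (bigram case)."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context = Counter(tokens[:-1])                    # c(prev ·)
    followers = Counter(prev for prev, _ in bigrams)  # distinct words after prev
    endings = Counter(w for _, w in bigrams)          # distinct contexts before w
    total_types = len(bigrams)                        # total distinct bigram types

    def prob(prev, w):
        p_cont = endings[w] / total_types             # continuation probability
        if context[prev] == 0:
            return p_cont                             # unseen context: back off
        discounted = max(bigrams[(prev, w)] - discount, 0) / context[prev]
        lam = discount * followers[prev] / context[prev]
        return discounted + lam * p_cont

    return prob

corpus = "the cat sat on the mat the cat ate".split()
vocab = set(corpus)
p = kneser_ney_bigram(corpus)
```

The discounted term steals a little mass from every observed bigram, and the continuation term redistributes it in proportion to how many distinct contexts a word appears in, rather than its raw frequency.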
Nathaniel Popper on Blockchain in AI
Blockchain and AI are perhaps the two trendiest technologies right now, so it is no wonder some are trying to tie them together. One project we have mentioned is OpenMined, led by Andrew Trask, which combines model training with cryptography.
But the company you may have heard more about is perhaps SingularityNET, founded by Dr. Ben Goertzel, which attempts to use blockchain to link AI services together. The author of this piece, Nathaniel Popper, has covered blockchain for years; he penned the book Digital Gold, a very interesting historical account of the rise of blockchain. So we think you might be interested in his view of how blockchain affects AI.
Object Detection Using dlib by Adrian Rosebrock
Here is another article from our great teacher, Adrian Rosebrock. This time he teaches you how to write a fast object tracker using dlib. Beyond the dlib tutorial itself, perhaps the more interesting part is his presentation of dlib’s correlation tracker. We really enjoyed it and don’t want to spoil it for you. Go check it out!
Curiosity vs Procrastination
Here is an interesting post from Google on how to deal with the sparse-reward problem in RL, i.e., how to use curiosity to drive an RL agent’s exploration. There are two interesting highlights:
- First, when the Google authors tried to create rewards based on prediction failure, i.e. surprise, they found that agents could fall into a state close to human procrastination, leading the agent to “explore” the same spot forever. For example, when an agent saw a (virtual) TV in a 3-D maze, it would get stuck indefinitely because the sensory experience is always a surprise.
- So the researchers figured out that you can instead store recent experience in a memory bank and reward exploration only when a new observation is dissimilar to the stored ones. Sounds interesting. That comparison is done by a deep neural network.
We don’t pretend we fully understand the technical details, but Google’s demo is fairly intuitive, and we are fascinated.
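The memory-bank idea above can be sketched in a few lines. This is a simplified illustration, not the paper’s method: the post describes a trained comparator network judging similarity between observations, while here plain cosine similarity on embedding vectors stands in for it, and the threshold and bonus values are made up:

```python
import numpy as np

def novelty_bonus(embedding, memory, threshold=0.9, bonus=1.0):
    """Reward an observation only if it is dissimilar to everything in memory.

    `memory` is a list of past observation embeddings; cosine similarity
    stands in for the learned comparator network of the Google post.
    """
    if len(memory) == 0:
        memory.append(embedding)
        return bonus
    bank = np.stack(memory)
    sims = bank @ embedding / (
        np.linalg.norm(bank, axis=1) * np.linalg.norm(embedding) + 1e-8)
    if sims.max() < threshold:   # nothing in memory looks like this: novel
        memory.append(embedding)
        return bonus
    return 0.0                   # familiar (e.g. staring at the TV): no reward

memory = []
r1 = novelty_bonus(np.array([1.0, 0.0]), memory)   # first sight: rewarded
r2 = novelty_bonus(np.array([1.0, 0.01]), memory)  # near-duplicate: no reward
r3 = novelty_bonus(np.array([0.0, 1.0]), memory)   # genuinely new: rewarded
```

The key property is that a looping stimulus like the TV stops paying out after its first appearance, because later frames land close to something already in memory.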
Open Source
Facebook’s Mask R-CNN in PyTorch
Here is an implementation of Mask R-CNN in PyTorch, released by Facebook.