This is an interview with Fei-Fei Li on her view that AI today is not human-centric enough. And indeed, all we are building is A(S)I, with S for "special". Everyone is looking for the next paradigm shift that takes us beyond today's restrictive pattern-recognition approach.
We covered Gluon in Issue 23. Technically, we found it an interesting hybrid between the imperative (like PyTorch) and declarative (like TensorFlow) styles of programming. We learn from the announcement that it is now backed by both AWS and Microsoft.
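To make the imperative/declarative distinction concrete, here is a toy plain-Python sketch (our own illustration, not actual Gluon code): in the imperative style each operation runs the moment it is written, while in the declarative style you first describe a graph of operations and execute it later, which is what allows whole-graph optimization.

```python
# Imperative (define-by-run, PyTorch-style): ops execute immediately.
def imperative_net(x):
    h = x * 2          # runs now
    return h + 1       # runs now

# Declarative (define-then-run, TensorFlow-1.x-style): describe the
# computation first as data, then execute it in a separate step.
graph = [("mul", 2), ("add", 1)]

def run_graph(graph, x):
    for op, arg in graph:
        x = x * arg if op == "mul" else x + arg
    return x

assert imperative_net(3) == run_graph(graph, 3) == 7
```

Gluon's appeal is that its hybridize() call lets you write in the first style while debugging, then compile the same network into the second style for speed.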
For us the most interesting part is perhaps the partnership between Microsoft and Amazon. This is the second time we have heard of them working together this year: back in September, we learned that they had partnered to integrate their voice assistants.
The ONNX format, introduced by Facebook and Microsoft, is starting to get more partners. This sounds good: for the most part, having an open format allows results to spread more quickly across different sites/frameworks.
When we first looked at this piece, we thought that Lattice was yet another cool brand name. As it turns out, Lattice is a release of an interesting mathematical model called the deep lattice model (DLM).
So what is a DLM then? It's all about lattice regression: ordinary regression analysis usually doesn't impose any order relationship between your inputs and outputs. But what if you want your output to increase monotonically with an input? That's the point of a lattice.
Of course, things are more complicated when the output should be monotonic in multiple inputs. Then stacking multiple lattice layers makes sense, and that is what Google's original paper is about. Perhaps more importantly, they showed strong results on several ML tasks.
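To give a feel for the idea, here is a minimal 1-D sketch (our own toy numpy code, not the TensorFlow Lattice API): function values live at fixed keypoints, monotonicity is guaranteed by building those values from a base plus non-negative increments, and predictions come from linear interpolation between keypoints.

```python
import numpy as np

def monotonic_lattice(deltas):
    """Build non-decreasing lattice values from a base value and
    non-negative increments, so monotonicity holds by construction."""
    base, increments = deltas[0], np.maximum(deltas[1:], 0.0)
    return base + np.concatenate([[0.0], np.cumsum(increments)])

def predict(x, keypoints, values):
    """Piecewise-linear interpolation between lattice keypoints."""
    return np.interp(x, keypoints, values)

keypoints = np.array([0.0, 1.0, 2.0, 3.0])
deltas = np.array([0.5, 2.0, -1.0, 0.3])   # the negative increment is clipped to 0
values = monotonic_lattice(deltas)          # [0.5, 2.5, 2.5, 2.8]

ys = predict(np.array([0.5, 1.5, 2.5]), keypoints, values)
assert np.all(np.diff(ys) >= 0)             # output is monotone in the input
```

In a real DLM the lattice is multi-dimensional and its parameters are learned under these monotonicity constraints, but the interpolate-between-constrained-keypoints idea is the same.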
We were trying to read D. P. Kingma's thesis on VAEs. As you might know, he wrote the paper that introduced the reparametrization trick for VAEs. But that is not our topic this time. We just want to talk about a simple tutorial paper by Carl Doersch, which we found to be better for beginners. Here are some notes:
A VAE is really quite different from a sparse AE or a denoising AE, except that you can think of all of them as having an encoder structure.
The tutorial guides you through the setup of a VAE. E.g., you probably know that a VAE is based on latent variables, but the setup is special in that the latent variable is randomly sampled from a Gaussian.
Then there is a detailed, readable section on how the common optimization objective, the evidence lower bound (ELBO), is formulated. What bugs us a bit is that it doesn't quite use the term ELBO.
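For reference, the objective being derived is the standard ELBO (written here in our notation, which may differ from the tutorial's): the log-evidence is bounded below by a reconstruction term minus a KL penalty that keeps the approximate posterior close to the prior.

```latex
\log p(x) \;\ge\; \mathbb{E}_{z \sim q(z \mid x)}\big[\log p(x \mid z)\big]
\;-\; D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big)
```

Training a VAE means maximizing this bound jointly over the encoder q and the decoder p.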
Lastly, there is the reparametrization trick. We don't fully grok it yet, but that's why we still need to read Kingma.
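A quick sketch of the trick itself (our own toy numpy code, under the usual diagonal-Gaussian assumption): instead of sampling z directly from N(mu, sigma^2), which blocks gradients at the sampling step, you sample noise eps from N(0, I) and compute z deterministically from mu and sigma, so gradients can flow through mu and sigma.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparametrize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps,
    eps ~ N(0, I). All randomness is isolated in eps, so z is a
    deterministic (hence differentiable) function of mu and log_var."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

# toy encoder outputs: batch of 4 examples, latent dimension 2
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))   # sigma = 1, i.e. a standard normal
z = reparametrize(mu, log_var, rng)
print(z.shape)  # (4, 2)
```

In a real VAE, mu and log_var are produced by the encoder network and z is fed to the decoder; the trick is what lets backpropagation reach the encoder.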
All in all, this seems to be a great tutorial on VAEs and a fine first read on the topic.