Shared by David Ha: We couldn't help thinking... Ah, maybe Prof. Schmidhuber has a point when he "schmidhubers" other people.
Google Research released a protocol buffer implementation for FHIR. This sounds dry, yet once you learn more about both FHIR and protocol buffers, you realize it is a rather significant event.
For starters, FHIR is a standard for transferring electronic health records (EHR) containing patient information. Its use cases include real-time patient information gathering and epidemic tracking.
So what are protocol buffers, then? Protocol buffers are Google's language-independent data serialization format, known for being space-efficient. Google has been pushing it as a replacement for older standards such as XML.
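To make the format concrete, here is a minimal protobuf round-trip sketch in Python. It uses the generic Struct message that ships with the protobuf package rather than the actual FHIR protos from Google's release, and the FHIR-flavored field names are purely illustrative:

```python
# Minimal protobuf round-trip sketch. Struct is a generic message type
# bundled with the protobuf library; Google's FHIR release defines real,
# strongly-typed messages (Patient, Observation, ...) instead.
from google.protobuf import struct_pb2, json_format

patient = struct_pb2.Struct()
patient.update({
    "resourceType": "Patient",  # FHIR-flavored fields, illustrative only
    "id": "example-001",
    "active": True,
})

binary = patient.SerializeToString()          # compact binary wire format
as_json = json_format.MessageToJson(patient)  # human-readable equivalent
print(len(binary), "bytes as protobuf,", len(as_json), "bytes as JSON")

# Any language with protobuf bindings can parse the same bytes back.
restored = struct_pb2.Struct()
restored.ParseFromString(binary)
assert restored == patient
```

The strongly-typed FHIR protos go further than this sketch: the schema itself encodes the structure of each FHIR resource, so validation and cross-language tooling come largely for free.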
When you combine FHIR and protocol buffers, you end up with an efficient representation of FHIR data, which allows large-scale ML to happen in healthcare applications. As healthcare becomes more of a focus for AI/DL, this is an important move by Google.
As a side note, the post seems to be teasing more integration between Google's applications and TensorFlow. That is certainly an exciting direction for many DLers.
Sebastian Ruder's "Requests for Research" is a good survey of potential research directions in deep learning. It is similar to OpenAI's Requests for Research from earlier this year. Ruder's post covers the following six topics:
- Task-independent data augmentation for NLP (see the toy sketch after this list)
- Few-shot learning for NLP
- Transfer learning for NLP
- Multi-task learning
- Cross-lingual learning
- Task-independent architecture improvements
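To give a flavor of the first topic, here is a toy sketch of the simplest kind of task-independent augmentation, random word dropout. This is our own illustration, not code from Ruder's post:

```python
import random

def word_dropout(sentence: str, p: float = 0.1, seed: int = 0) -> str:
    """Toy task-independent NLP augmentation: randomly drop words.

    This is only the most basic variant; Ruder's post discusses far
    richer, still-open approaches (e.g. ones inspired by image-style
    augmentation or back-translation).
    """
    rng = random.Random(seed)
    words = sentence.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else sentence

print(word_dropout("the quick brown fox jumps over the lazy dog", p=0.2))
```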
Chris Olah did it again. In this new Distill article, Olah et al. look at three different lines of visualization research: feature visualization, attribution, and matrix factorization. Much like their other work, Olah and his coauthors don't just recreate others' techniques; they creatively combine different ideas and come up with something new and interesting. Their idea of using semantic dictionaries to decompose activations is one such example.
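To hint at what the matrix-factorization piece does, here is a rough sketch under our own assumptions: random numbers stand in for real network activations, and scikit-learn's NMF stands in for the article's exact method.

```python
# Sketch of the idea: flatten a (spatial positions x channels) activation
# grid and factor it into a handful of non-negative "neuron groups".
# The numbers here are random stand-ins, not real activations.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
activations = rng.random((14 * 14, 512))  # e.g. one conv layer, flattened

nmf = NMF(n_components=6, init="nndsvd", random_state=0, max_iter=500)
spatial_factors = nmf.fit_transform(activations)  # (196, 6): where each group fires
channel_factors = nmf.components_                 # (6, 512): which channels form each group

print(spatial_factors.shape, channel_factors.shape)
```

Grouping hundreds of channels into a few factors is what makes the article's visualizations digestible at a glance.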
Perhaps the more interesting part is the discussion of the potential design space of visualization techniques. You would intuitively think this space is deep, because interpretability seems to be a never-ending line of research. Just like Olah, we wonder if we can one day come up with tools that make DNNs truly interpretable.
Oh, don't forget to play with the examples! They give you a lot of surprising insights into convnets too! The post was also cross-posted on both the Google Research Blog and the Google Open Source Blog. Google also released an open-source visualization library called Lucid.
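For the curious, Lucid's quickstart looked roughly like the sketch below. Treat the exact calls (InceptionV1, load_graphdef, render_vis) as our recollection of the API, and check the repo's notebooks for the authoritative version:

```python
# Sketch of Lucid's feature-visualization quickstart (TF1-era API).
# "mixed4a_pre_relu:476" asks for an image that maximally activates
# channel 476 of InceptionV1's mixed4a_pre_relu layer.
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()
model.load_graphdef()

images = render.render_vis(model, "mixed4a_pre_relu:476")
```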