Editorial
Thoughts From Your Humble Curators
We cover Windows ML this week. What is the platform? Is it too late for Microsoft to come up with a unified AI platform similar to CoreML? We discuss both questions in our News section.
You will also find articles from Chris Olah and Sebastian Ruder in the Blog section; both are thought-provoking. On resources, we share links from OpenMined’s “Awesome AI Privacy” and AIDL’s own AIDL_KB.
As always, if you like our newsletter, feel free to forward it to your friends/colleagues!
Also, do check out our sponsor, ODSC, in our Sponsor section!
This newsletter is a labor of love from us. All publishing costs and operating expenses are paid out of our pockets. If you like what we do, you can help defray our costs by sending a donation via link. For crypto enthusiasts, you can donate by sending ETH to this address: 0xEB44F762c58Da2200957b5cc2C04473F609eAA65.
Sponsor
ODSC: New schedule, new speakers + 55% off ends March 16
Open Data Science Conference just released 80% of their schedule and the first round of speakers for ODSC East 2018.
See the dates, times and topics ODSC’s incredible speakers will be covering.
Learn, train, and engage with 200+ world-class experts on machine learning, deep learning, data visualization, data management, predictive analytics, artificial intelligence, and more.
- Andreas Mueller, PhD – Core contributor to scikit-learn
- Arun Verma, PhD – Head of Quant Research at Bloomberg
- Maya Gupta, PhD – Machine Learning R & D Lead at Google
- Joshua Bloom, PhD – VP of Data Analytics at GE
- Mike Tamir, PhD – Head of Data Science at Uber
- Drew Conway, PhD – Renowned researcher and Data Science Venn Diagram creator
- See more speakers
Save 55% with code AIDL55 before prices increase substantially at midnight on Friday.
See you there!
News
Deals
- S&P Global plans to acquire Kensho for $550M
- UiPath raised a $153M Series B and is now valued at $1.1B – finally confirmed by TechCrunch this week.
- Starsky Robotics raised $16.5 million
Windows ML
Microsoft is releasing a new AI platform, Windows ML, which lets developers run existing ML models from various MS developer suites on multiple hardware devices. According to this page, the platform will focus on three things:
- Hardware acceleration – built into DirectX 12.
- Local evaluation – this probably implies built-in model-compression technology.
- Image processing – this makes sense, because existing MS AI applications seem to focus on image recognition anyway, e.g. facial recognition.
A fair question to ask: is it too late for Microsoft to create a unified platform, especially when it is stacked against Apple’s CoreML and Google’s colossal AI/DL infrastructure? Windows still dominates user-space operating systems. Yet developing AI applications on Windows is also known to be slightly harder than on Mac and Linux, so perhaps developers would appreciate any help from Microsoft. As with Apple’s CoreML, if the feature serves existing developers well, then Windows ML is doing its job.
Blog Posts
Hand-drawn neural networks by Prof. Schmidhuber
Shared by David Ha. We couldn’t help thinking… ah, maybe Prof. Schmidhuber has a point when he “schmidhubers” other people.
Google Released a Protocol Buffer Implementation for FHIR
Google Research released a protocol buffer implementation for FHIR. This sounds dry, yet once you learn more about both FHIR and protocol buffers, it turns out to be a rather significant event.
For starters, FHIR is a standard for exchanging electronic health records (EHR) containing patient information. Use cases of FHIR include real-time patient-information gathering and epidemic tracking.
So what are protocol buffers? Protocol buffers are a language-independent serialization format that is also known to be space-efficient. Google has been pushing the format as a replacement for older standards such as XML.
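To make the space-efficiency claim concrete, here is a minimal sketch of the varint scheme protocol buffers use to encode integers on the wire: seven bits per byte, least-significant group first, with the high bit marking continuation. The function names here are our own illustration, not part of any Google library.

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # take the lowest 7 bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # set high bit: more bytes follow
        else:
            out.append(byte)         # high bit clear: last byte
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a protobuf-style varint back into an integer."""
    result = shift = 0
    for b in data:
        result |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):       # high bit clear: done
            break
    return result

# Small values take 1 byte; 300 takes 2 bytes instead of a
# fixed-width 4 or 8, which is where the space savings come from.
assert encode_varint(1) == b'\x01'
assert encode_varint(300) == b'\xac\x02'
assert decode_varint(encode_varint(300)) == 300
```

Since most fields in a record like an EHR entry hold small numbers, this kind of variable-length encoding is a large part of why protobuf payloads are compact compared with XML.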
When you combine FHIR and protocol buffers, you end up with an efficient representation of FHIR data, which allows large-scale ML to happen in healthcare applications. As healthcare becomes more of a focus of AI/DL, this is an important move by Google.
As a side note, the post seems to be teasing more integration between Google’s applications and TensorFlow. That is certainly an exciting direction for many DLers.
Sebastian Ruder’s “Requests for Research”
Sebastian Ruder’s “Requests for Research” is a good survey of potential research directions in deep learning. It is similar to OpenAI’s request for research from earlier this year. Ruder’s post covers the following six topics:
- Task-independent data augmentation for NLP
- Few-shot learning for NLP
- Transfer learning for NLP
- Multi-task learning
- Cross-lingual learning
- Task-independent architecture improvements
Building Blocks of Interpretability
Chris Olah did it again. In this new Distill article, Olah et al. bring together three different lines of visualization research: feature visualization, attribution, and matrix factorization. As in their other work, Olah and his coauthors don’t just recreate others’ techniques; they creatively combine different ideas and come up with something new and interesting. Their idea of using a semantic dictionary to decompose activations is one example.
Perhaps the more interesting part is the discussion of the potential design space of visualization techniques. You would intuitively think visualization is a deep topic, because interpretability seems to be a never-ending line of research. Like Olah, we wonder if we can one day come up with tools that make DNNs truly interpretable.
Oh, don’t forget to play with the examples! They give you a lot of surprising insights into ConvNets too! The post was also cross-posted at both the Google Research Blog and the Google Open Source Blog. Google released an open-source visualization library called Lucid as well.
(Fixed typos at 3:14 p.m. 20180309.)
Open Source
Awesome AI Privacy
Prepared by the OpenMined project, “Awesome AI Privacy” archives important resources for understanding privacy issues in AI. It should be a must-read for developers who want to get into AI applications that require an understanding of privacy/security issues, such as blockchain-based implementations of AI.
AIDL_KB
This is AIDL’s own knowledge base: AIDL_KB. AIDL_KB is a community-driven project to curate useful resources for AI/DL. The first version was written by AIDLer Jan Šablatura and reviewed by Arthur and Prof. Amitā Kapoor. Other contributors are Danielius Visockas and Adam Milton-Barker.
We have received warm reactions from members, and we have already merged 5+ pull requests in the last few days. So come by and check it out!
Hand-written Notes on deeplearning.ai
Here are some well-drawn notes on deeplearning.ai by Tess Ferrandez.
About Us