This is an impression post on the Coursera class “Computational Neuroscience” by Rao and Fairhall (crossposted in both AIDL and CNAGI):
- Usually I write an impression post when I audit a class, but a full blog post when I complete all the homework.
- In this case, while I actually finished “Computational Neuroscience”, I am not qualified enough to comment on some neuroscientific concepts such as Hodgkin-Huxley models, cable theory, or brain plasticity, so I will stay at the “impression” level.
- Strictly speaking, CN is more of an OoT for AIDL, but we are all curious about the brain, aren’t we?
- It’s a great class if you know ML but want to learn more about the brain. It’s also great if you know something about the brain but want to know how it relates to modern-day ML.
- You learn nifty concepts such as spike-triggered averages and neural coding/decoding, and of course the main dishes: HH models and cable theory (see the sketch after this list for a taste of the spike-triggered average).
- I only learned these topics amateurishly, and there are around 3-4 classes I might take to further my knowledge. But it is absolutely interesting. For example, this class is very helpful if you want to understand the difference between biological and artificial neural networks. You will also get insight into why deep learning people don’t just use more biologically realistic models in ML problems.
- My take: while this is not a core class for us ML/DLers, it is an interesting class to take, especially if you want to sound smarter than “we just ran a CNN on our CIFAR-10 data”. In my case, it humbled me a lot, because I now know that we humans just don’t really understand our brains that well (yet).
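To give a flavor of the kind of exercise you meet in the class, here is a minimal sketch of a spike-triggered average (STA) in Python: average the stimulus segments that precede each spike to estimate the neuron’s preferred feature. This is my own toy example, not course code; the stimulus, the “true” filter, and all parameter values are made up for illustration.

```python
import numpy as np

# Minimal STA sketch, assuming a white-noise stimulus sampled on the
# same time grid as the spike train. All names/parameters are illustrative.
rng = np.random.default_rng(0)

dt = 0.002                    # time step in seconds (2 ms)
n_steps = 200_000
stimulus = rng.normal(size=n_steps)          # Gaussian white-noise stimulus

# Toy neuron: spike more often when the stimulus, filtered by a "true"
# exponential kernel, is high.
kernel = np.exp(-np.arange(0, 0.1, dt) / 0.02)
drive = np.convolve(stimulus, kernel, mode="full")[:n_steps]
rate = 20 * np.clip(drive, 0, None)          # firing rate in Hz
spikes = rng.random(n_steps) < rate * dt     # Bernoulli spikes per bin

window = int(0.15 / dt)                      # look 150 ms back from each spike
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= window]   # drop spikes too early

# STA = mean stimulus segment preceding each spike
segments = np.stack([stimulus[t - window:t] for t in spike_times])
sta = segments.mean(axis=0)

print("number of spikes:", len(spike_times))
print("STA peak, seconds before spike:", (window - np.argmax(sta)) * dt)
```

With enough spikes, the STA recovers (a time-reversed, noisy version of) the toy kernel, which is the basic idea behind the neural-coding part of the course.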
Hope you enjoy this “impression”!
Arthur