Recent posts

Divergence in Deep Q-Learning: Two Tricks Are Better Than (N)one

By Emil Dudev, Aman Hussain, Omar Elbaghdadi, and Ivan Bardarov. Deep Q-Networks (DQN) revolutionized the Reinforcement Learning world. It was the first algorithm able to learn a successful strategy in a complex environment directly from high-dimensional image inputs. In this blog post, we investigate how some of the techniques...

Self-Explaining Neural Networks: A Review

For many applications, understanding why a predictive model makes a certain prediction can be of crucial importance. In the paper “Towards Robust Interpretability with Self-Explaining Neural Networks”, David Alvarez-Melis and Tommi Jaakkola propose a neural network model that takes the interpretability of its predictions into account by design. In this post, we...

Variational Bayesian Inference: A Fast Bayesian Take on Big Data

Compared to the frequentist paradigm, Bayesian inference deals with and interprets uncertainty more readily, and makes it easier to incorporate prior beliefs. A big problem with traditional Bayesian inference methods, however, is that they are computationally expensive: in many cases, computation takes too much time to be used reasonably...

Hello World!

Hi there! This is my first blog post ever!