I am a fourth-year PhD student working with Arthur Gretton at the Gatsby Computational Neuroscience Unit, UCL. I expect to finish my PhD at the end of 2017. I received an M.Eng. from the Tokyo Institute of Technology, where I worked with Masashi Sugiyama on supervised feature selection using squared-loss mutual information. Before that, I was a research assistant working with Thanaruk Theeramunkong on a Thai news relation discovery project. I received a B.Sc. in Computer Science from SIIT, Thammasat University, Thailand.
My publications are listed on this page. I occasionally update my blog with summaries of what I learn. Some photos I have taken are on Flickr. I also maintain a website for Gatsby's machine learning journal club.
Contact: Wittawat Jitkrittum (วิทวัส จิตกฤตธรรม) ( )
Interpretable Two-Sample Test – The goal of this project is to learn a set of features that best distinguish two distributions P and Q, observed only through samples drawn from each. The task is formulated as a two-sample testing problem. (05/2016)
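As background for the two-sample testing setup, here is a minimal sketch of the classical quadratic-time Maximum Mean Discrepancy (MMD) statistic with a Gaussian kernel. This is illustrative only; it is not the linear-time interpretable test developed in this project, and all sample sizes and kernel parameters below are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased quadratic-time estimate of squared MMD between samples X ~ P, Y ~ Q.
    # The statistic is near zero when P = Q and large when they differ.
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 2))  # sample from P
Y = rng.normal(1, 1, size=(200, 2))  # sample from Q, mean shifted by 1
print(mmd2_biased(X, Y))
```

A permutation test on this statistic gives a p-value, but the statistic alone says nothing about *where* P and Q differ; the interpretable test above addresses exactly that gap by returning local features.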
Locally Linear Latent Variable Model (LL-LVM) – LL-LVM is a probabilistic model for non-linear manifold discovery. It describes a joint distribution over observations, their manifold coordinates, and locally linear maps, conditioned on a set of neighbourhood relationships. (09/2015)
Learning to Pass EP Messages – In this project, we propose learning a kernel-based message operator that replaces the multivariate integral required in classical EP to compute an outgoing message given incoming messages. The operator allows fast computation of outgoing messages and can be cheaply updated online during EP inference. (03/2015)
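As a toy illustration of the general idea (not the operator from the paper), a kernel ridge regression can be trained offline to map incoming-message parameters to an outgoing-message parameter, so that at inference time the expensive integral is replaced by a cheap kernel evaluation. The training data and target function below are entirely hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row holds parameters of incoming messages;
# the target simulates the (expensive) integral producing the outgoing message.
X_train = rng.uniform(-2, 2, size=(300, 3))
y_train = np.tanh(X_train.sum(axis=1))  # stand-in for the true message update

def gauss_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y
lam = 1e-3
K = gauss_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

# At inference time, an outgoing message is predicted by kernel evaluations
# against the training set, with no integral computed.
X_test = rng.uniform(-2, 2, size=(5, 3))
pred = gauss_kernel(X_test, X_train) @ alpha
err = np.max(np.abs(pred - np.tanh(X_test.sum(axis=1))))
print(err)  # error of the learned operator on held-out inputs
```

The actual project goes further than this sketch, e.g. by updating the operator online during inference, but the regression view captures why message passing becomes fast.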
$\ell_1$-LSMI – A supervised feature selection algorithm based on a squared-loss variant of mutual information. A Matlab implementation is available. (03/2013)
Classifier-based Thai Word Tokenizer – A decision-tree-based Java library for tokenizing Thai text. The project was completed in two months for a competition. Warning: not for production use. Details are in this presentation file. (02/2010)
Nov 2016. A new paper, Cognitive Bias in Ambiguity Judgements: Using Computational Models to Dissect the Effects of Mild Mood Manipulation in Humans, is published in PLOS ONE.
Oct 2016. An Adaptive Test of Independence with Analytic Kernel Embeddings, a fast nonparametric independence test. Python code here.
Aug 2016. Interpretable Distribution Features with Maximum Testing Power is accepted to NIPS 2016 as a full oral presentation. See our 2-minute introduction video here.
May 2016. Interpretable Distribution Features with Maximum Testing Power: a linear-time nonparametric two-sample test which returns a set of local features indicating why the null hypothesis is rejected. Python code available on Github.
Dec 2015. K2-ABC: Approximate Bayesian Computation with Infinite Dimensional Summary Statistics via Kernel Embeddings: a summary-statistic-free approximate Bayesian computation method based on kernel embeddings. Accepted to AISTATS 2016.
Nov 2015. We released the code for the Locally Linear Latent Variable Model (LL-LVM), which was accepted to NIPS 2015. Our Matlab code is available on Github.
Jun 2015. Bayesian Manifold Learning: Locally Linear Latent Variable Model (LL-LVM): a probabilistic model for non-linear manifold discovery.
Mar 2015. Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages: a fast, online algorithm for nonparametric learning of EP message updates. Source code available here.
Based on Hyde Theme.