I am a fourth-year PhD student working with Arthur Gretton at the Gatsby Computational Neuroscience Unit, UCL. I recently passed my thesis defense (viva) and will officially complete my PhD in December 2017. I received an M.Eng. from the Tokyo Institute of Technology, where I worked with Masashi Sugiyama on supervised feature selection using squared-loss mutual information. Before that, I was a research assistant working with Thanaruk Theeramunkong on a Thai news relation discovery project. I received a B.Sc. in Computer Science from SIIT, Thammasat University, Thailand.
Research interests My current research topics include nonparametric two-sample testing, independence testing, goodness-of-fit testing, kernel methods, and approximate Bayesian inference. I have a much broader interest in many other topics, however. These include, but are not limited to, natural language processing (with a special interest in Thai language processing), deep neural networks, dimensionality reduction, and reinforcement learning. If there is a chance that I can contribute to your project, please get in touch to discuss. I am also open to a discussion even if your project idea is not yet consolidated.
Contact: Wittawat Jitkrittum (วิทวัส จิตกฤตธรรม)
I gave an oral presentation at ICML 2017 on our new linear-time independence test. The slides are here.
A new fast nonparametric goodness-of-fit test. See A Linear-Time Kernel Goodness-of-Fit Test.
Our paper, An Adaptive Test of Independence with Analytic Kernel Embeddings, was accepted to ICML 2017.
An Adaptive Test of Independence with Analytic Kernel Embeddings, a fast nonparametric independence test. Python code here.
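The test in this paper (FSIC) runs in linear time with adaptively chosen features; as background, the same kind of kernel independence testing can be illustrated with a plain quadratic-time HSIC permutation test. This is a minimal sketch, not the method from the paper; the Gaussian kernel, fixed bandwidth, and function names are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, bandwidth=1.0):
    """Gaussian kernel matrix of the rows of A."""
    sq = np.sum(A ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic_perm_test(X, Y, n_perm=200, seed=0):
    """Quadratic-time HSIC with a permutation null; returns (statistic, p-value)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = H @ rbf_kernel(X) @ H               # doubly centered kernel on X
    L = rbf_kernel(Y)
    stat = np.sum(Kc * L) / n ** 2           # (1/n^2) tr(K H L H)
    null = np.empty(n_perm)
    for i in range(n_perm):
        p = rng.permutation(n)               # break the pairing of (x_i, y_i)
        null[i] = np.sum(Kc * L[np.ix_(p, p)]) / n ** 2
    return stat, np.mean(null >= stat)
```

Permuting Y relative to X simulates the null hypothesis of independence, so a small p-value indicates dependence.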
Interpretable Distribution Features with Maximum Testing Power was accepted to NIPS 2016 as a full oral presentation. See our 2-minute introduction video here.
Interpretable Distribution Features with Maximum Testing Power: a linear-time nonparametric two-sample test which returns a set of local features indicating why the null hypothesis is rejected. Python code available on GitHub.
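The test in this paper is linear-time and learns interpretable features; for context, the classical kernel two-sample test it builds on can be sketched as a quadratic-time MMD statistic with a permutation null. This is background only, not the paper's method; the bandwidth and names are illustrative.

```python
import numpy as np

def gauss_kernel(X, Y, bandwidth=1.0):
    """Gaussian kernel matrix between the rows of X and the rows of Y."""
    d2 = (np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared MMD between two samples."""
    return (gauss_kernel(X, X, bandwidth).mean()
            + gauss_kernel(Y, Y, bandwidth).mean()
            - 2.0 * gauss_kernel(X, Y, bandwidth).mean())

def mmd_perm_test(X, Y, n_perm=200, bandwidth=1.0, seed=0):
    """Two-sample test: re-split the pooled sample to simulate the null."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    m = len(X)
    stat = mmd2(X, Y, bandwidth)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(Z))
        null[i] = mmd2(Z[idx[:m]], Z[idx[m:]], bandwidth)
    return stat, np.mean(null >= stat)
```

Under the null, both samples come from the same distribution, so any pooled re-split is exchangeable with the observed split; the p-value is the fraction of re-splits with an MMD at least as large.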
K2-ABC: Approximate Bayesian Computation with Infinite Dimensional Summary Statistics via Kernel Embeddings: summary statistic free approximate Bayesian computation with kernel embeddings. Accepted to AISTATS 2016.
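K2-ABC compares observed and simulated datasets directly through the MMD between their kernel embeddings, so no hand-crafted summary statistic is needed; each simulated parameter is given a soft weight decaying in that discrepancy. A minimal sketch of this idea on a toy 1-D model follows; the toy model, ε, bandwidth, and sample sizes are illustrative choices, not from the paper.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased squared MMD between two 1-D samples with a Gaussian kernel."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2
                            / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def k2_abc(y_obs, simulate, prior_sample, n=500, eps=0.1, seed=0):
    """Soft-ABC posterior-mean estimate with MMD^2 as the data discrepancy."""
    rng = np.random.default_rng(seed)
    thetas = np.array([prior_sample(rng) for _ in range(n)])
    # weight each parameter by how well its simulated data matches y_obs
    w = np.array([np.exp(-mmd2(y_obs, simulate(t, rng)) / eps) for t in thetas])
    w /= w.sum()
    return np.sum(w * thetas)

# toy example (hypothetical setup): infer the mean of a unit-variance Gaussian
rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, 200)
post_mean = k2_abc(
    y_obs,
    simulate=lambda t, r: r.normal(t, 1.0, 200),
    prior_sample=lambda r: r.uniform(-5, 5),
)
```

Parameters whose simulated data resemble the observed data receive weight near one, while mismatched parameters are exponentially down-weighted, so the weighted average concentrates near the true mean.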
We released the code for the Locally Linear Latent Variable Model (LL-LVM), which was accepted to NIPS 2015. Our MATLAB code is available here on GitHub.
Bayesian Manifold Learning: Locally Linear Latent Variable Model (LL-LVM): a probabilistic model for non-linear manifold discovery.
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages: a fast, online algorithm for nonparametric learning of EP message updates. Source code available here.
Based on Hyde Theme.