I am a postdoctoral researcher in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems, working with Bernhard Schölkopf. I work in machine learning, at the intersection of computer science and statistics.
My research topics include (but are not limited to) kernel methods, nonparametric hypothesis testing, and approximate Bayesian inference.
I completed my PhD at the Gatsby Unit, UCL, where I worked with Arthur Gretton on various topics related to kernel methods and Bayesian inference. My CV is here.
Contact: Wittawat Jitkrittum (วิทวัส จิตกฤตธรรม)
Address:
Wittawat Jitkrittum
Max Planck Institute for Intelligent Systems
Max-Planck-Ring 4
72076 Tuebingen
Germany
Phone: +49 7071 601
I spoke at the Symposium on frontier research in information science and technology at VISTEC, Thailand. Topic: Recent Advances in Kernel Methods for Model Criticism. Slides.
We have released Python code for “Informative Features for Model Comparison”.
New preprint: Large sample analysis of the median heuristic. In this work, we theoretically study the commonly used median heuristic for setting the bandwidth of a Gaussian kernel.
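The heuristic itself is simple to state: set the Gaussian kernel bandwidth to the median of the pairwise Euclidean distances in the sample. A minimal NumPy sketch of this rule (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median heuristic: set the Gaussian kernel bandwidth to the
    median pairwise Euclidean distance among the sample points."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared pairwise distances
    d2 = np.maximum(d2, 0.0)                        # guard against rounding error
    iu = np.triu_indices_from(d2, k=1)              # distinct pairs only
    return float(np.median(np.sqrt(d2[iu])))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# bandwidth for k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
sigma = median_heuristic_bandwidth(X)
```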
Our NIPS paper “Informative Features for Model Comparison” is now online on Arxiv. Python code.
Our paper “Informative Features for Model Comparison” was accepted to NIPS 2018. Given samples from two candidate models and a reference sample (the observed data), the goal is to design a statistical test that determines which of the two models produces samples closer to the reference. One application is comparing two GAN models.
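The basic idea of comparing two model samples against a reference can be illustrated with the maximum mean discrepancy (MMD), though the paper's actual test statistics (based on informative features) are different. A rough NumPy sketch, using a biased MMD estimate and synthetic data of my own choosing:

```python
import numpy as np

def gauss_kernel(X, Y, sigma):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples X and Y."""
    return float(gauss_kernel(X, X, sigma).mean()
                 + gauss_kernel(Y, Y, sigma).mean()
                 - 2.0 * gauss_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
R = rng.normal(size=(300, 1))            # reference sample (observed data)
P = rng.normal(size=(300, 1))            # sample from candidate model P (matches R)
Q = rng.normal(2.0, 1.0, size=(300, 1))  # sample from candidate model Q (shifted)
mmd_p = mmd2_biased(P, R)
mmd_q = mmd2_biased(Q, R)                # expect mmd_p < mmd_q: P is closer to R
```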
I was invited to speak at the Department of Mathematics and Computer Science, Chulalongkorn University. Topic: A Technical Introduction to Kernel Goodness-of-Fit Testing.
I was invited to speak at VISTEC, Wangchan Valley, Rayong, Thailand. Topic: Machine Learning Fundamentals. Event details. Slides.
I was invited to give a talk at the BKK Machine Learning Meetup event. Topic: Introduction to kernel methods for comparing distributions. Slides.
I was invited to give a talk at the Workshop on Functional Inference and Machine Intelligence (19-21 Feb 2018), Tokyo. Slides.
The podcast episode we recorded with This Week in Machine Learning & AI on our NIPS 2017 best paper was published here.
I started working remotely as a postdoc at Max Planck Institute for Intelligent Systems with Bernhard Schölkopf.
I gave a 50-minute presentation on A Linear-Time Kernel Goodness-of-Fit Test (NIPS 2017 best paper). Download slides.
I gave an oral presentation at NIPS 2017 on A Linear-Time Kernel Goodness-of-Fit Test, which received one of the three best paper awards at NIPS 2017. Python code here. Presentation slides here. Talk video here.
Our paper A Linear-Time Kernel Goodness-of-Fit Test has been accepted for oral presentation at NIPS 2017. Python code.
I gave an oral presentation at ICML 2017 on our new linear-time independence test. The slides are here.
Uploaded slides for the linear-time nonparametric goodness-of-fit test.
We have released Python code for our linear-time nonparametric goodness-of-fit test.
I gave a talk on a new linear-time dependence measure at the UCL workshop on the theory of big data. See the slides here.
A new fast nonparametric goodness-of-fit test. See A Linear-Time Kernel Goodness-of-Fit Test.
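The paper's test (FSSD) runs in linear time; as a rough illustration of the Stein-based goodness-of-fit idea behind it, here is a quadratic-time kernel Stein discrepancy estimate for a standard normal target in one dimension. This is my own simplified sketch, not the paper's statistic, and the function name and samples are illustrative:

```python
import numpy as np

def ksd2_vstat(x, sigma=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy between
    a 1-d sample x and a standard normal target (score s(x) = -x), using a
    Gaussian kernel of bandwidth sigma. Quadratic time in len(x)."""
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]              # pairwise differences
    k = np.exp(-d**2 / (2.0 * sigma**2))     # Gaussian kernel matrix
    xy = x[:, None] * x[None, :]
    # Closed-form Stein kernel u_p(x, y) for the N(0, 1) target:
    # u_p = k * (xy - d^2/sigma^2 + 1/sigma^2 - d^2/sigma^4)
    u = k * (xy - d**2 / sigma**2 + 1.0 / sigma**2 - d**2 / sigma**4)
    return float(u.mean())

rng = np.random.default_rng(0)
ksd_null = ksd2_vstat(rng.normal(size=500))             # from the target: near 0
ksd_shift = ksd2_vstat(rng.normal(1.0, 1.0, size=500))  # shifted: clearly positive
```

Note that, unlike a two-sample test, only a sample from the data and the model's score function are needed; no sample from the model is drawn.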
Our paper: An Adaptive Test of Independence with Analytic Kernel Embeddings is accepted to ICML 2017.
New paper: Cognitive Bias in Ambiguity Judgements: Using Computational Models to Dissect the Effects of Mild Mood Manipulation in Humans, in PLOS ONE.
An Adaptive Test of Independence with Analytic Kernel Embeddings, a fast nonparametric independence test. Python code here.
Interpretable Distribution Features with Maximum Testing Power is accepted to NIPS 2016 as a full oral presentation. See our 2-minute introduction video here.
Interpretable Distribution Features with Maximum Testing Power: a linear-time nonparametric two-sample test which returns a set of local features indicating why the null hypothesis is rejected. Python code available on GitHub.
K2-ABC: Approximate Bayesian Computation with Infinite Dimensional Summary Statistics via Kernel Embeddings: summary-statistic-free approximate Bayesian computation with kernel embeddings. Accepted to AISTATS 2016.
We released the code for the Locally Linear Latent Variable Model (LL-LVM), accepted to NIPS 2015. Matlab code is available here on GitHub.
Bayesian Manifold Learning: Locally Linear Latent Variable Model (LL-LVM): a probabilistic model for non-linear manifold discovery.
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages: a fast, online algorithm for nonparametric learning of EP message updates. Source code available here.
Updated: 29-Dec-18
Based on Hyde Theme.