I am a postdoctoral researcher at the Empirical Inference Department of the Max Planck Institute for Intelligent Systems, working with Bernhard Schölkopf. I work in machine learning, at the intersection of computer science and statistics.
My research topics include (but are not limited to):
Contact: Wittawat Jitkrittum (วิทวัส จิตกฤตธรรม)
Max Planck Institute for Intelligent Systems
Phone: +49 7071 601
I will speak at the Symposium on frontier research in information science and technology at VISTEC, Thailand.
We have released Python code for “Informative Features for Model Comparison”.
New preprint: Large sample analysis of the median heuristic. In this work, we theoretically study the commonly used median heuristic for setting the bandwidth of a Gaussian kernel.
Our paper titled “Informative Features for Model Comparison” was accepted to NIPS 2018. Given two samples from two candidate models, and a reference sample (observed data), the goal is to design a statistical test to determine which of the two samples is closer to the reference sample. One application is comparing two GAN models.
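The model-comparison setting can be illustrated with a simple quadratic-time MMD comparison: estimate the squared MMD from each candidate sample to the reference sample and compare the two. This is only a toy sketch of the idea; the paper's actual statistic is different (linear-time, UME-based, and yields informative features), and all names below are my own:

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    # Gram matrix of k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma):
    # Biased estimate of the squared MMD between samples X and Y
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())

rng = np.random.default_rng(1)
R = rng.normal(0.0, 1.0, size=(300, 1))   # reference (observed) sample
P = rng.normal(0.1, 1.0, size=(300, 1))   # sample from model P: close to R
Q = rng.normal(2.0, 1.0, size=(300, 1))   # sample from model Q: far from R
# A negative difference suggests P is closer to the reference than Q.
diff = mmd2_biased(P, R, 1.0) - mmd2_biased(Q, R, 1.0)
```

A proper test would also calibrate the threshold on `diff` to control the false rejection rate, which is what the paper's test statistic provides.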
I was invited to speak at Department of Mathematics and Computer Science, Chulalongkorn University. Topic: A Technical Introduction to Kernel Goodness-of-Fit Testing.
I was invited to give a talk at the Workshop on Functional Inference and Machine Intelligence (19-21 Feb 2018), Tokyo. Slides.
I started working remotely as a postdoc at Max Planck Institute for Intelligent Systems with Bernhard Schölkopf.
I gave an oral presentation at NIPS 2017 on A Linear-Time Kernel Goodness-of-Fit Test. The paper received one of the three best paper awards at NIPS 2017. Python code here. Presentation slides here. Talk video here.
I gave an oral presentation at ICML 2017 on our new linear-time independence test. The slides are here.
A new fast nonparametric goodness-of-fit test. See A Linear-Time Kernel Goodness-of-Fit Test.
Our paper: An Adaptive Test of Independence with Analytic Kernel Embeddings was accepted to ICML 2017.
An Adaptive Test of Independence with Analytic Kernel Embeddings, a fast nonparametric independence test. Python code here.
Interpretable Distribution Features with Maximum Testing Power was accepted to NIPS 2016 as a full oral presentation. See our 2-minute introduction video here.
Interpretable Distribution Features with Maximum Testing Power: a linear-time nonparametric two-sample test which returns a set of local features indicating why the null hypothesis is rejected. Python code available on Github.
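The local features can be understood through the witness function: the difference of the two kernel mean embeddings, evaluated at a set of test locations. Locations where this difference is large in magnitude indicate where the two samples differ. Below is an illustrative sketch of that evaluation only (my own simplification; the paper's ME statistic additionally normalizes by the covariance and optimizes the locations for test power):

```python
import numpy as np

def mean_embedding_diff(X, Y, locations, sigma):
    """Evaluate the (unnormalized) witness function mu_P(v) - mu_Q(v)
    at each row v of `locations`, using a Gaussian kernel with
    bandwidth sigma. Large |values| point to regions where the two
    samples X and Y differ."""
    def kernel_mean(A, V):
        sq = (np.sum(A**2, 1)[:, None] + np.sum(V**2, 1)[None, :]
              - 2.0 * A @ V.T)
        # Mean of k(a, v) over the sample A, for each location v
        return np.exp(-sq / (2.0 * sigma**2)).mean(axis=0)
    return kernel_mean(X, locations) - kernel_mean(Y, locations)

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(500, 1))   # sample P, mass near 0
Y = rng.normal(3.0, 1.0, size=(500, 1))   # sample Q, mass near 3
V = np.array([[0.0], [3.0]])              # candidate test locations
w = mean_embedding_diff(X, Y, V, 1.0)
# Expect w[0] > 0 (X has more mass near 0) and w[1] < 0 (Y near 3).
```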
K2-ABC: Approximate Bayesian Computation with Infinite Dimensional Summary Statistics via Kernel Embeddings: summary-statistic-free approximate Bayesian computation with kernel embeddings. Accepted to AISTATS 2016.
We released the code for Locally Linear Latent Variable Model (LL-LVM), which was accepted to NIPS 2015. Check our Matlab code here on Github.
Bayesian Manifold Learning: Locally Linear Latent Variable Model (LL-LVM): a probabilistic model for non-linear manifold discovery.
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages: a fast, online algorithm for nonparametric learning of EP message updates. Source code available here.
Based on Hyde Theme.