I am a research scientist at Google Research. Previously, I was a postdoctoral researcher in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems, working with Bernhard Schölkopf (from 2018 to 2020). I work in the field of machine learning, at the intersection of computer science and statistics. My research topics include (but are not limited to)

  • Fast (linear runtime) non-parametric statistical tests
  • Kernel-based representation of data
  • Deep generative modelling of images
  • Approximate Bayesian inference
I completed my PhD in 2017 at the Gatsby Unit, UCL, where I worked with Arthur Gretton on various topics related to kernel-based statistical tests and approximate Bayesian inference. Please feel free to check out my publications and software, and to contact me for a research discussion.

Contact: Wittawat Jitkrittum (วิทวัส จิตกฤตธรรม)

 ArXiv  CV  DBLP  Github  Google Scholar  LinkedIn  Orcid ID  ResearchGate  Semantic Scholar  Twitter


20 Jul 2022

:memo: Our latest work “A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch” has been accepted to ECCV 2022. In this work, we propose a simple framework for constructing an image retrieval model that takes both text and sketch as inputs. We show that using a sketch as input gives the user more flexibility in expressing the target image to retrieve. The paper will be uploaded soon. Stay tuned!

23 Jun 2022

:memo: “Discussion of Multiscale Fisher’s Independence Test for Multivariate Dependence” has been accepted and will appear in Biometrika.

27 Apr 2022

:memo: A new preprint “ELM: Embedding and Logit Margins for Long-Tail Learning”. We address the problem of learning under skewed label distributions. We propose a new loss function called ELM (Embedding and Logit Margins) where two sets of margins are enforced: one defined in the logit space, and the other in the embedding space. We theoretically show that minimising the proposed ELM objective helps reduce the generalisation gap, and empirically show that it performs well in practice on long-tail learning benchmarks such as ImageNet-LT (1000 classes) and iNaturalist 2018 (8000+ classes).
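To illustrate the logit-space half of this idea, here is a minimal stdlib-only sketch of a per-class margin subtracted from the true-class logit before the softmax cross-entropy, with larger margins for rarer classes. This is an illustrative toy, not the paper's ELM objective: the specific margin schedule (counts to the power -1/4) and the omission of the embedding-space margin are simplifying assumptions.

```python
import math

def logit_margin_loss(logits, label, margins):
    """Softmax cross-entropy where the true-class logit is reduced by a
    per-class margin, forcing the model to win by a larger gap on that class.
    Illustrative sketch only; not the authors' implementation."""
    z = list(logits)
    z[label] -= margins[label]                      # enforce the margin
    m = max(z)                                      # numerical stability
    log_sum = m + math.log(sum(math.exp(v - m) for v in z))
    return log_sum - z[label]                       # -log softmax(true class)

# Toy long-tail setup: rarer classes get larger margins (assumed schedule).
counts = [1000, 100, 10]                            # per-class training counts
raw = [c ** -0.25 for c in counts]
margins = [r / max(raw) for r in raw]               # normalise to max margin 1

loss = logit_margin_loss([2.0, 0.5, -1.0], label=2, margins=margins)
```

Because the margin lowers the true-class logit, this loss is always at least as large as the plain cross-entropy, which is what pushes the decision boundary away from the rare class.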

27 Jul 2021

:memo: Our paper “ABCDP: Approximate Bayesian Computation with Differential Privacy” has been accepted to Entropy. This is one of the first works to study differential privacy in the context of approximate Bayesian computation.

14 May 2021

:memo: A paper accepted at ICML 2021: “Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces”, with colleagues at Google Research.

8 May 2021

I gave an invited virtual talk at the Deep Learning and Artificial Intelligence Summer School 2021 (DLAI5) on comparing generative models.

11 Feb 2021

:memo: A new preprint “An Optimal Witness Function for Two-Sample Testing”. In this work, we explore a new way of constructing a witness function for two-sample testing in a data-driven manner. We show experimentally that one can take a kernel selected by an existing approach, and simply plug it into the new framework to get higher test power.
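As background for what a witness function is (this is the classical, unoptimised MMD witness, not the data-driven construction of the preprint): given samples X ~ P and Y ~ Q and a kernel k, the witness at a location t is the mean kernel similarity to X minus the mean kernel similarity to Y, so it is positive where P has more mass than Q. A stdlib-only sketch:

```python
import math
import random

def gauss_kernel(a, b, bandwidth=1.0):
    """Gaussian kernel between two scalars."""
    return math.exp(-(a - b) ** 2 / (2.0 * bandwidth ** 2))

def mmd_witness(t, X, Y, bandwidth=1.0):
    """Classical MMD witness: mean similarity to sample X minus mean
    similarity to sample Y, evaluated at location t."""
    return (sum(gauss_kernel(t, x, bandwidth) for x in X) / len(X)
            - sum(gauss_kernel(t, y, bandwidth) for y in Y) / len(Y))

random.seed(0)
X = [random.gauss(0.0, 1.0) for _ in range(200)]    # sample from P = N(0, 1)
Y = [random.gauss(3.0, 1.0) for _ in range(200)]    # sample from Q = N(3, 1)
# The witness is positive near 0 (P-heavy) and negative near 3 (Q-heavy).
```

A two-sample test statistic can then be built from such a witness, e.g. by evaluating it at test locations and measuring how far it is from zero; how to choose the witness to maximise test power is exactly the question the preprint studies.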

1 Feb 2021

:memo: An accepted paper at AISTATS 2021: Kernel Distributionally Robust Optimization: Generalized Duality Theorem and Stochastic Approximation.

3 Dec 2020

I gave a virtual seminar at IBM Research on kernel-based model comparison. Thanks to Youssef Mroueh for inviting me.

5 Nov 2020

I gave a virtual seminar at EURECOM on kernel testing for comparing generative models. Thanks to Motonobu Kanagawa for inviting me.

1 Aug 2020

I will serve as a workflow chair for AISTATS 2021.

27 Jul 2020

:computer: We have released Python source code for our work “Testing Goodness of Fit of Conditional Density Models with Kernels” (in UAI 2020). Please check it out!

10 Jul 2020

:couple: Today is the last day of the virtual Machine Learning Summer School 2020. On behalf of the organizing team, I would like to thank everyone involved, including our valued sponsors, our speakers, volunteers, and the staff at the MPI-IS. All lectures are online; please feel free to check them out!

15 May 2020

:v: Two papers accepted to UAI 2020: “Testing Goodness of Fit of Conditional Density Models with Kernels” and “Kernel Conditional Moment Test via Maximum Moment Restriction”.

1 May 2020

:man_scientist: I joined Google Research as a research scientist. I would like to thank everyone who has supported me.

2 Apr 2020

:memo: A new preprint: “Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem”, with J. J. Zhu and colleagues.

25 Feb 2020

:memo: :v: Two new preprints, both tackling the problem of testing the goodness of fit of a conditional model. In the first work, the model is specified as an explicit conditional density function up to the normalizing constant. In the second work, the conditional model is specified implicitly in terms of a conditional moment function.

2 Jan 2020

:school: I am co-organizing the Machine Learning Summer School (MLSS) 2020 at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. Application is now open! Please apply. Deadline: 11 Feb 2020.

Last update: 25-Jul-22
Based on al-folio theme.