About me

I'm a computational cognitive scientist and a postdoctoral researcher at the University of Wisconsin-Madison, where I study how we leverage language to learn about word meaning.

Most of my day-to-day work involves coming up with (statistical) models that describe and explain human behavior, in particular language use. My expertise is in computational methods, with a focus on embedding models of language, large datasets, and (Bayesian) generalized linear mixed models. I have worked with behavioral and fMRI data, as well as data from large online language corpora such as Wikipedia and OpenSubtitles.

If you want to read my published work, my Google Scholar page has links to publicly accessible versions of every journal article and preprint. My GitHub has public repositories for some of the projects I've worked on; feel free to get in touch with me at vanparidon@wisc.edu if you have questions about any of it.

Education

I hold a PhD from the Max Planck Institute for Psycholinguistics and Radboud University in the Netherlands. My alma mater is Leiden University, where I graduated with a BSc and an MSc in Psychology, specializing in Cognitive Neuroscience.

Research

For an overview of the topics I work on and my ongoing projects (in various stages of completion), see my research page.

Teaching

In the fall semesters of 2018 and 2019, I taught an Introduction to Python Programming course to a class of graduate students at the International Max Planck Research School for the Language Sciences. In addition to teaching programming, I have lectured on probabilistic methods for handling missing and censored data in statistical models.

Skills

  • Programming in Python / JavaScript / R / whatever else
  • Translating hypotheses into online or in-person experiment designs
  • Modeling behavior using multilevel models (both Bayesian and frequentist)
  • Power analysis for experiments, including for sequential testing designs
  • Accounting for missing/censored data using probabilistic imputation
  • Analyzing fMRI data using ML algorithms (e.g. searchlight SVM, RSA)
  • Parallel/cluster computing to accelerate analysis (e.g. using SGE)
  • General problem-solving and lateral thinking