“Little strokes fell great oaks.”
- Benjamin Franklin

Most of my research at Penn has focused on analyzing electroencephalographic (EEG) recordings - measurements of the changing electrical potential in people’s brains caused by neurons firing.

A topic of particular interest for the lab I work in is how we can use machine learning to actively predict a person’s behavior (like whether or not they will remember a studied item) based on their brain activity. We’re still a ways off from mind-reading, but it’s cool stuff.

More broadly, I’m deeply interested in machine learning theory, methodology, and applications. There are really two main uses for data: inference (how/why something happened in the past) and prediction (what will happen in the future). We have good tools for both of these, but they rarely work together.

Deep learning models, for example, achieve high prediction accuracy but are often criticized for being “black box” models without interpretable parameters. Bayesian approaches explicitly model a data-generating process and are therefore highly interpretable, but they require making lots of structural assumptions about probability distributions in real-world data that might not be justified - this makes them biased and potentially less robust. Across the board, lots of high-performing models have a tendency to overfit training data and consequently fail to make robust predictions out in the wild.

A long-term interest of mine is developing machine learning methods that are effective tools for both inference and prediction. These methods need to be both interpretable and robust - no easy task! I hope to devote my time and attention in graduate school and beyond to studying these challenges in data science, along with applications to neuroscience and other fields.

EEG analysis and machine learning applications

Decoding brain states and improving memory

Poster: Decoding and optimizing episodic memory

  • In this project we asked whether using machine learning to optimize the timing of item presentations during learning could improve memory performance. We show that, indeed, training classifiers on spectral features of scalp EEG allows us to positively modulate recall in a hybrid spatial navigation and episodic memory task.
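
To give a flavor of the approach (this is a toy sketch with synthetic signals, not the actual analysis pipeline), the core idea is to extract spectral band-power features from each trial’s EEG and train a classifier to predict whether the item will be recalled:

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power(signal, fs, band):
    """Average periodogram power of `signal` within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic "EEG": recalled trials carry extra theta-band (4-8 Hz) power.
fs, n_trials, n_samples = 256, 200, 512
t = np.arange(n_samples) / fs
labels = rng.integers(0, 2, n_trials)                     # 1 = later recalled
trials = rng.normal(size=(n_trials, n_samples))
trials[labels == 1] += 2.0 * np.sin(2 * np.pi * 6 * t)    # inject a 6 Hz rhythm

# One theta-power feature per trial, standardized.
X = np.array([[band_power(tr, fs, (4, 8))] for tr in trials])
X = (X - X.mean()) / X.std()

# Minimal logistic regression, fit by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X[:, 0] * w + b)))
    grad = p - labels
    w -= 0.1 * (grad * X[:, 0]).mean()
    b -= 0.1 * grad.mean()

preds = (1 / (1 + np.exp(-(X[:, 0] * w + b))) > 0.5).astype(int)
acc = (preds == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

In the closed-loop version of this idea, a classifier like this runs in real time and item presentations are timed to moments when the predicted probability of encoding success is high.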

Oscillatory biomarkers of memory

Paper: Hippocampal theta and episodic memory

  • I investigate how a method of distinguishing pink noise in brain recordings from true brain rhythms helps us understand what patterns of brain activity actually relate to successful memory encoding and retrieval. Presented at the Context and Episodic Memory Symposium in August 2021 and Computational and Systems Neuroscience (COSYNE) in March 2022.
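
The core trick can be sketched in a few lines: a 1/f “pink noise” background is a straight line in log-log space, so anything rising above a linear fit to the spectrum is a candidate true oscillation. This toy example (synthetic spectrum; a drastically simplified version of what tools like FOOOF do, not the paper’s actual method) recovers an injected 6 Hz theta peak:

```python
import numpy as np

# Synthetic power spectrum: 1/f background plus a narrowband theta bump at 6 Hz.
freqs = np.arange(1.0, 50.0, 0.5)
background = 10.0 / freqs
oscillation = 2.0 * np.exp(-0.5 * ((freqs - 6.0) / 1.0) ** 2)
psd = background + oscillation

# A 1/f process is linear in log-log coordinates: fit a line there and
# treat it as the aperiodic component.
slope, intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
aperiodic_fit = 10 ** (intercept + slope * np.log10(freqs))

# Whatever sticks out above the aperiodic fit is a candidate brain rhythm.
residual = np.log10(psd) - np.log10(aperiodic_fit)
peak_freq = freqs[np.argmax(residual)]
print(f"fitted 1/f slope: {slope:.2f}, detected peak: {peak_freq} Hz")
```

Separating the aperiodic background from narrowband peaks this way matters because raw band power conflates the two, which can make broadband shifts masquerade as memory-related “theta” effects.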

Changing Parameters

EEG pre-processing methods

Undergraduate Research Project: Optimal EEG Referencing Schemes for Brain State Classification

  • Analyzing changing electrical potential requires choosing a reference point for the measurement. When we have a set of electrodes recording brain activity at distinct spatial locations, should they all be referenced the same way? To a common electrode? To their nearest neighboring electrode? To a weighted sum of other electrodes? I discuss a number of approaches, explain how they act as variable “spatial filters”, and compare their utility for classifying brain state and memory success.
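
The “spatial filter” view can be made concrete: every referencing scheme is just a linear transformation of the channel-by-time data matrix. A small sketch (random data standing in for EEG, hypothetical 8-channel montage) for two common schemes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 100
eeg = rng.normal(size=(n_channels, n_samples))   # stand-in for recorded EEG

# Each referencing scheme is a matrix R applied to the data: referenced = R @ eeg.

# Common average reference (CAR): subtract the mean of all channels
# from every channel.
R_car = np.eye(n_channels) - np.ones((n_channels, n_channels)) / n_channels
car = R_car @ eeg

# Bipolar / nearest-neighbor reference: each channel minus the next one.
# The last channel has no neighbor below it, so the montage loses a channel.
R_bipolar = (np.eye(n_channels) - np.eye(n_channels, k=1))[:-1]
bipolar = R_bipolar @ eeg
```

Framed this way, comparing referencing schemes amounts to comparing the rows of R as spatial filters - CAR passes deviations from the global mean (and forces the channels to sum to zero at every time point), while bipolar montages emphasize local gradients between neighboring sites.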

Sports Analytics

In my free time I like to dabble in sports analytics a bit. I (along with a few other Penn grad students) was named a finalist for the 2022 NFL Big Data Bowl! You can check out our Kaggle notebook as well as the NFL’s press release announcing the finalists and my team’s video presentation of our project.

Our submission showed how high-resolution player-tracking data allows us to train a model that predicts the outcome of a kick return, and we developed a framework for using that model to compute optimal return paths and evaluate player decision-making.