Notice: This website is mostly outdated as of 2024. A new website is coming soon. Proceed with caution regarding earlier posts.

About

Intro

I am interested in intelligence from both artificial and biological perspectives. In looking for fundamental principles of information processing, I draw inspiration from the brain, our proof-of-concept for general intelligence.

I am currently a PhD student researching multimodal interpretability and alignment at Blake Richards’s lab at Mila Quebec.

Previously, I researched deep learning models on neural data at the Stringer Lab at Janelia and collaborated with FOR.ai, a multi-disciplinary AI research group. I was also an ML/NLP research engineer at the search engine startup NLMatics and a consultant for various ML startups in the Bay Area.

Background

I studied computer science, computational neuroscience, and literature at Princeton University (Class of 2019), and wrote my thesis on natural language processing models for semantic representation in the human brain. Before that, I held positions at the Princeton Neuroscience Institute, the Harvard Data Science Initiative, and Ruane, Cunniff, and Goldfarb.

I live in Palo Alto, CA, near Stanford's campus. My favorite hobbies are dog training and writing science fiction.

About this website

This website contains literature reviews, essays, projects, and coding experiments. Not all the work here is polished, but I would rather my website be a living laboratory that reflects my latest thoughts. Some representative posts include my introduction to Hopfield networks and coding experiments on the semantic similarity of phrases.

Sample past work

Janelia Research Campus

Du F, Joseph S, Pachitariu M, Stringer C. (2020). Invariant Texture Recognition in the Mouse Visual Cortex. Cosyne 2021.

Open-Source Receptive Field Explorer, 2020. Advised by Carsen Stringer.

Work supervised at NLMatics

Research Project, 2020. Advised Batya Stein. Optimizing Transformer Q&A Models for Naturalistic Search.

Research Project, 2020. Advised Nicholas Greenspan. SQuAD 2.0 and Google Natural Questions: A Comparison and Investigation into Model Performance.

Technical Project, 2020. Advised Batya Stein & Nicholas Greenspan. A Comprehensive Guide to Training a Machine Learning Model for Question Answering.

Undergraduate work at PNI

Coursework, 2019, Advised by Dr. Uri Hasson. "Neural Encoding Models with Word Embeddings of Semantic Representation in the Human Cortex." Thesis on accurate and interpretable neural encoding models for the semantic representation of auditory data in the human cortex, using machine learning and natural language processing models such as GloVe on a large ECoG dataset of subjects listening to a podcast.

Coursework, 2018, Advised by Dr. David Tank. “Refining Goal-Driven Models through Behavioral Signatures to Understand the Sensory Cortex.” Proposal to compare the behavioral signatures of state-of-the-art neural networks (RNNs, HCNNs, ConvRNNs, ResNets) with the behavioral signature of human performance on an image recognition task with the CIFAR10 dataset to assess the biological plausibility of artificial neural nets.

Independent Work, 2018, Advised by Dr. Jonathan Pillow. "Using Goal-Driven Convolutional Neural Networks to Understand the Ventrolateral Prefrontal Cortex During Object Categorization." Proposal to apply goal-driven CNNs to predict neural responses during object recognition in the ventrolateral prefrontal cortex of macaques, using the CIFAR10 dataset.

Coursework, 2018, Advised by Dr. Jonathan Pillow. "Comparing the Performance of Generalized Linear Models (GLMs) and Neural Nets as Single-Neuron Encoding Models." Compares highly interpretable generalized linear models (GLMs) with other, less interpretable machine learning models, including vanilla CNNs, ResNets, and autoencoders, on a dataset of a single neuron's responses to light.