I am a fifth-year PhD student in a joint program between Brain and Cognitive Sciences and Computer Science. I research probabilistic inference in visual perception. Essentially, this means treating the brain as a data-processing machine, where the data are the images falling on your eyes, and the processing consists of working out the most likely entities that "caused" those images: lights, objects, textures, and so on.

My research spans theory (how neural circuits could, in principle, implement an inference algorithm), data analysis (neural recordings from non-human primates performing visual discrimination tasks), and psychophysics (experiments with human primates to test the theories).
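
As a toy illustration of what this kind of inference means (with made-up numbers, not a model from any of my papers), Bayes' rule combines a prior over possible causes of an image with how well each cause explains it:

```python
import numpy as np

# Two hypothetical causes of the same ambiguous patch of pixels.
# All numbers here are invented purely for illustration.
hypotheses = ["gray paper in bright light", "white paper in shadow"]
prior = np.array([0.5, 0.5])        # how plausible each cause is a priori
likelihood = np.array([0.6, 0.4])   # how well each cause explains the image

posterior = prior * likelihood
posterior /= posterior.sum()        # Bayes' rule: P(cause | image)

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | image) = {p:.2f}")
```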

Contact

<first initial><last name>@ur.rochester.edu

PhD Projects

Projects I've worked on during my PhD include:

  • The Perceptual Confirmation Bias: this is our term for a feedback loop the brain may fall into when expectations and vision reinforce each other ("you see what you expect to see, and you expect to see what you've seen"). We found that this effect is an unavoidable consequence of certain kinds of representations used for approximate inference, and that it explains some apparently discrepant results in the psychophysics literature.
  • Characterizing and interpreting the influence of internal variables on sensory activity. In this short opinion paper, we argue that simple linear models are sufficient to make sense of quite a few "top-down" effects on populations of sensory neurons. Intuitively, if a set of neurons all have "tuning curves" to some internal state, taking a linear approximation to that tuning gives simple expressions for how the population will covary (see the toy sketch after this list).
  • A probabilistic population code based on neural samples. This work was selected for an oral presentation at NeurIPS 2018. The paper is a step towards resolving the 15+ year debate over how the brain represents probability distributions (a necessary ingredient of probabilistic inference). Part of the debate has focused on whether the brain represents distributions parametrically or with samples. In this work, we construct an example of a system that is simultaneously both types of code, depending on how it is read out. We see this as a first step towards more precise characterizations of what it means for the brain to be an inference machine.
  • The "posterior coding" hypothesis. This work distinguishes between pure "feedforward" models of inference in the brain, and "posterior-coding" models in which we hypothesize even early sensory cortex represents the full posterior over senory features. This hypothesis suggests a computational role of feedback as priors, and (as we argue here) provides a link between tuning curves (varying likelihoods) and noise correlations (varying priors).
  • Neural Signatures Of Variable Beliefs Increase With Task Learning In V1. This is a large data analysis project in collaboration with Rick Born's lab at Harvard Med. Populations of V1 neurons were recorded in two macaque monkeys while they were trained on two orientation-discrimination tasks. Our early results suggest that so-called "differential correlations" increase with learning, which is consistent with the theory that beliefs or expectations about the stimulus are fed back as far as V1 and become stronger as the task is learned. Presented as a poster at AREADNE 2018 (the conference abstract is linked here).
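
To make the "linear approximation to tuning" intuition above concrete, and to show where the low-rank structure behind "differential correlations" comes from, here is a toy numpy sketch (not analysis code from any of these projects; every tuning curve and number below is made up). If each neuron responds approximately linearly to a shared internal variable that fluctuates from trial to trial, the population covariance picks up a component proportional to the outer product of the tuning slopes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 20000

# Hypothetical tuning of each neuron to a shared internal variable s
# (e.g. a belief or expectation about the stimulus), linearized around s = 0.
f0 = rng.uniform(5.0, 15.0, size=n_neurons)     # baseline firing rates
slopes = rng.normal(0.0, 1.0, size=n_neurons)   # df/ds for each neuron at s = 0

sigma_s = 0.5                                   # trial-to-trial spread of s
s = rng.normal(0.0, sigma_s, size=n_trials)

# First-order model: r = f(s) + independent noise ~= f0 + slopes * s + noise
noise = rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
rates = f0 + np.outer(s, slopes) + noise

# The covariance should be close to slopes * slopes^T * var(s) + I * var(noise).
empirical_cov = np.cov(rates, rowvar=False)
predicted_cov = np.outer(slopes, slopes) * sigma_s**2 + np.eye(n_neurons)
print("largest discrepancy:", np.abs(empirical_cov - predicted_cov).max())
```

When the fluctuating internal variable is a belief about the stimulus itself, its slopes align with the stimulus tuning slopes, and this low-rank component is roughly the structure referred to as "differential correlations" in the last project above.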

Other Projects

I do a lot of things on the side that don't get published. Here are some highlights:

  • Check out my blog!
  • RocAlphaGo. This is a (not quite finished) replication of DeepMind's original AlphaGo Go-playing AI. What started as a class project turned into the largest single project I've worked on, and I took on a lead-developer role. The project was discontinued when DeepMind released AlphaZero and I became busy with other things, but along the way we built a decently optimized Go engine, ran supervised training on expert play with a variety of neural network architectures, added basic reinforcement learning from self-play, and ended up with a somewhat competitive bot (roughly 2 dan).
  • Variational Auto-Encoders (VAE) Tutorial. For the computer science part of my joint degree, I wrote a short review of Variational Auto-Encoders (currently unpublished, but maybe someday). I created my own Keras classes for VAEs, which I later turned into this tutorial so that others could build their own VAE classes on top of my skeleton code (a minimal sketch of the idea appears after this list).
  • LORDAP (Load Or Run Data Analysis Pipeline). This is a Matlab/Octave system that I've used to manage my data analysis pipelines. It works by caching arbitrary function calls in .mat files and checking file modification times to automatically detect when results should be recomputed (see the Python sketch of the idea after this list).
  • Gaussian Process Factor Analysis. This is my custom Matlab implementation of the GPFA algorithm, which extends previous implementations by allowing for missing data and better handling of unequal time sampling.
  • Quoridor in python. Quoridor is one of my favorite board games. At one point I decided to implement it in Python with a simple AI to play against. The AI uses a fairly dumb tree search, but I optimized the core game engine enough to make brute-force Monte Carlo search at least somewhat competitive!
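
Since the VAE tutorial above is about building your own Keras classes, here is a minimal sketch of the kind of skeleton involved. This is not the code from my tutorial, just a compact illustration of the standard pieces (an encoder producing the parameters of q(z|x), a reparameterized sampling layer, a decoder, and an ELBO loss); the layer sizes and the flattened-MNIST input shape are arbitrary choices for the example.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 2  # arbitrary; small so the latent space is easy to visualize

class Sampling(layers.Layer):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: maps a flattened image to the parameters of q(z|x).
enc_in = keras.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(enc_in)
z_mean = layers.Dense(LATENT_DIM)(h)
z_log_var = layers.Dense(LATENT_DIM)(h)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(enc_in, [z_mean, z_log_var, z], name="encoder")

# Decoder: maps a latent sample back to pixel space.
dec_in = keras.Input(shape=(LATENT_DIM,))
h = layers.Dense(256, activation="relu")(dec_in)
dec_out = layers.Dense(784, activation="sigmoid")(h)
decoder = keras.Model(dec_in, dec_out, name="decoder")

class VAE(keras.Model):
    """Ties the encoder and decoder together and trains them on the ELBO."""
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            recon = self.decoder(z)
            # Bernoulli reconstruction term (summed over pixels) ...
            recon_loss = -tf.reduce_mean(tf.reduce_sum(
                data * tf.math.log(recon + 1e-7)
                + (1.0 - data) * tf.math.log(1.0 - recon + 1e-7), axis=-1))
            # ... plus the analytic KL between q(z|x) and the N(0, I) prior.
            kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
            loss = recon_loss + kl
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss}

# Usage (e.g. on flattened MNIST digits scaled to [0, 1]):
#   vae = VAE(encoder, decoder)
#   vae.compile(optimizer="adam")
#   vae.fit(x_train, epochs=10, batch_size=128)
```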
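
And since LORDAP's core idea (cache the result of an expensive analysis step, and recompute only when its inputs are newer than the cache) is easy to state in code, here is a rough Python analogue. The real tool is Matlab/Octave and caches to .mat files, and it handles more cases than this; the names below (load_or_run, expensive_analysis, raw_data.mat) are made up purely for illustration.

```python
import hashlib
import os
import pickle

def load_or_run(func, args=(), cache_dir="cache", deps=()):
    """Load a cached result for func(*args), or run it and cache the result.

    The cache is treated as stale if any file in `deps` (e.g. raw data files,
    or the source file defining `func`) is newer than the cached result.
    """
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.md5(repr((func.__name__, args)).encode()).hexdigest()
    cache_file = os.path.join(cache_dir, f"{func.__name__}_{key}.pkl")

    if os.path.exists(cache_file):
        cache_time = os.path.getmtime(cache_file)
        if all(os.path.getmtime(d) <= cache_time for d in deps):
            with open(cache_file, "rb") as f:
                return pickle.load(f)  # cache hit: skip the computation

    result = func(*args)  # cache miss (or stale dependency): recompute
    with open(cache_file, "wb") as f:
        pickle.dump(result, f)
    return result

# Example (hypothetical names):
#   result = load_or_run(expensive_analysis, args=(session_id,),
#                        deps=["raw_data.mat", "expensive_analysis.py"])
```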

Teaching

As a graduate student, I've served in the following teaching roles:

  • Co-instructor: Philosophy of Perception. An upper-level undergraduate seminar discussing (visual) perception at the intersection of philosophy and brain science, co-instructed with Alison Peterman from the Philosophy department at Rochester. Fall 2018.
  • TA: Perception and Action. An upper-level undergraduate course on the neuroscience behind sensory processing and sensory-guided decision making. Instructed by Greg DeAngelis. Spring 2017.
  • TA: Social Implications of Computing. An undergraduate writing course on computing, ethics, responsibilities, etc. Instructed by Michael Scott. Spring 2016.
  • TA: Machines and Consciousness. An undergraduate writing course on the philosophy of consciousness, and whether machines could have it. Instructed by Len Schubert. Spring 2015.
  • Miscellaneous lectures: Computational Neuroscience. I've guest-lectured a few times in the undergraduate and graduate computational neuroscience courses on topics like deep learning, data management practices, and basic computational encoding/decoding models.