Notice: This website is mostly outdated as of 2024. A new website is coming soon. Proceed with caution regarding earlier posts.

Computational Neuroscience and Artificial Intelligence Research Overview

7 minute read

This overview is in progress and incomplete. If you have any paper recommendations, feel free to comment below.

Criteria

This research overview focuses on current researchers who meet at least three of the following criteria:

  1. Researching intelligence from both biological and artificial perspectives.
  2. Heavy mathematical/computational focus (including machine learning).
  3. High level of abstraction on Marr’s levels of analysis.
  4. Focus on data analysis over collection.
  5. Developing neurotechnology.
  6. Philosophical and/or AGI bent.

Cross-Institution Groups

  1. Theoretical Frameworks for Intelligence

Datasets

  1. Collaborative Research in Computational Neuroscience

Past Conferences

  1. NAISys @ Cold Spring Harbor Laboratory (March 2020)
  2. Triangulating Intelligence: Melding Neuroscience, Psychology, and AI @ Stanford (April 1, 2020)

Books

  1. Vaina and Passingham. Computational Theories and their Implementation in the Brain: The Legacy of David Marr (2017)
  2. Dayan and Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (2005)

Courses

  1. Stanford: CS 330: Deep Multi-Task and Meta Learning (2019)

Meta-Review (other research overviews)

  1. BNN-ANN Papers by takyamamoto

US Universities

Baylor College of Medicine

  1. Andreas Tolias

Caltech

Computational Neuroscience

  1. Jehoshua Bruck - combines distributed information systems with study of biological circuits
  2. Pietro Perona - computational vision
    1. Non-Parametric Probabilistic Image Segmentation (2007)
    2. Visipedia
  3. Thanos Siapas - neurotechnology, population recordings
  4. Yaser Abu-Mostafa

Machine Learning

  1. Anima Anandkumar
    1. Tensor Decompositions for Learning Latent Variable Models (2014)
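The core trick in the tensor-decomposition paper above is that latent factors can be read off from low-rank structure in multi-way data. A minimal sketch (my own toy construction, not the paper's algorithm): build a rank-1 third-order tensor from known factors, then recover one factor from the leading singular vector of a mode-1 unfolding.

```python
import numpy as np

# Toy illustration: T = a (x) b (x) c is a rank-1 tensor; the mode-1
# unfolding of T is a rank-1 matrix whose leading left singular vector
# is proportional to a.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum("i,j,k->ijk", a, b, c)

M = T.reshape(4, -1)                      # mode-1 unfolding: 4 x 30
u, s, vt = np.linalg.svd(M, full_matrices=False)
a_hat = u[:, 0] * np.sign(u[0, 0]) * np.sign(a[0])   # resolve sign ambiguity

print(np.allclose(a_hat / np.linalg.norm(a_hat), a / np.linalg.norm(a)))
```

Real tensor methods (e.g. for mixture models) apply the same idea to higher-rank moment tensors via iterated power updates rather than a single SVD.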

Columbia

  1. L.F. Abbott
    1. Review of his book: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (2003)
    2. The neuronal architecture of the mushroom body provides a logic for associative learning
  2. John P Cunningham
    1. Structure in neural population recordings: an expected byproduct of simpler phenomena? (2017)

Georgetown

  1. Maximilian Riesenhuber - representation, visual cortex

Harvard University

  1. Samuel Gershman

MIT

  1. Tomaso Poggio - representation, deep learning, visual cortex, biological/artificial intelligence, learning
    1. Hierarchical models of object recognition in cortex (1999)
    2. The Levels of Understanding Framework, Revised (2012)
    3. The Mathematics of Learning (2003)
  2. Fabio Anselmi - representation, visual cortex, machine learning (also at Istituto Italiano di Tecnologia)
    1. Deep Convolutional Neural Nets and Hierarchical Kernel Machines
    2. Representation Learning in Sensory Cortex: a Theory
  3. Polina Anikeeva - neurotechnologies
    1. Editorial overview: neurotechnologies (2018)
  4. Ed Boyden - neurotechnologies
  5. James DiCarlo - deep learning and visual stream
    1. Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex (2014)
  6. Adam Marblestone - integration of deep learning + neuroscience, neurotechnology
    1. Toward an Integration of Deep Learning and Neuroscience (2016)
  7. Ila Fiete - neural population dynamics
    1. Training networks to generate hypotheses about how the brain solves hard navigation problems (2017)
    2. Flexible representation and memory of higher-dimensional cognitive variables with grid cells (2019)
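The hierarchical object-recognition models from the Poggio line of work above alternate template matching ("S" layers) with local max pooling ("C" layers) to build position-tolerant features. A minimal sketch with my own toy sizes and random filters, not the published HMAX parameters:

```python
import numpy as np

# Toy S/C alternation: cross-correlate random 3x3 templates with an image
# (S layer), then max-pool over large spatial blocks (C layer) so the
# responses become locally translation invariant.
rng = np.random.default_rng(3)
image = rng.standard_normal((16, 16))
templates = rng.standard_normal((4, 3, 3))

# S layer: valid cross-correlation of each template with the image.
s = np.array([
    [[np.sum(image[i:i + 3, j:j + 3] * t) for j in range(14)]
     for i in range(14)]
    for t in templates
])                                    # shape (4, 14, 14)

# C layer: max over 7x7 spatial blocks.
c = s.reshape(4, 2, 7, 2, 7).max(axis=(2, 4))
print(c.shape)                        # (4, 2, 2)
```

Stacking several such S/C pairs, with learned rather than random templates, gives the hierarchy that the performance-optimized models in DiCarlo's and Yamins's papers train end to end.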

NYU

  1. Dmitri “Mitya” B. Chklovskii - reverse-engineering the brain at the algorithmic level
  2. Eero Simoncelli - analysis and representation of visual information
  3. Cristina Savin

Machine Learning

  1. Yann LeCun

Princeton University

  1. David Tank - persistent neural activity, neurotechnology
  2. Jonathan Pillow - statistical analysis of neural populations
    1. Unsupervised identification of the internal states that shape natural behavior (2019)
    2. Capturing the dynamical repertoire of single neurons with generalized linear models (2017)
  3. Uri Hasson
    1. Brain-to-Brain coupling: A mechanism for creating and sharing a social world (2013)
  4. Carlos Brody
  5. Yael Niv
  6. Jonathan Cohen
  7. Ken Norman
  8. Sebastian Seung
    1. How the brain keeps the eyes still (1996)
  9. Asif Ghazanfar
    1. Neuroscience Needs Behavior: Correcting a Reductionist Bias (2017)
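The generalized linear models in Pillow's papers above treat spike counts as Poisson with a rate given by an exponentiated linear function of the stimulus. A minimal sketch with my own simulated data and toy fitting loop (real analyses use richer filters and off-the-shelf optimizers):

```python
import numpy as np

# Poisson GLM sketch: spike counts y ~ Poisson(exp(X @ w)), with the
# weights w fit by gradient ascent on the Poisson log-likelihood.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5))            # stimulus features per time bin
w_true = np.array([0.5, -0.3, 0.8, 0.0, 0.2])
y = rng.poisson(np.exp(X @ w_true))           # simulated spike counts

w = np.zeros(5)
for _ in range(1000):
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate) / len(y)          # gradient of mean log-likelihood
    w += 0.05 * grad

print(np.round(w, 2))                         # close to w_true
```

The same likelihood extends naturally to spike-history and coupling filters, which is how these models capture the "dynamical repertoire" of single neurons.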

Stanford

  1. Dan Yamins
    1. Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex (2014)
    2. Flexible Neural Representation for Physics Prediction (2018)
  2. Shaul Druckmann - neural circuits, population dynamics
    1. Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity (2012)
    2. Robust neuronal dynamics in premotor cortex during motor planning (2016)
  3. Surya Ganguli
  4. Krishna Shenoy - neural prosthetics for brain control of movement
  5. Kwabena Boahen - building a neuro-inspired computer
  6. Chelsea Finn - intelligence through robotic interaction at scale
  7. Scott Linderman
    1. Using computational theory to constrain statistical models of neural data

University of Pennsylvania

  1. Danielle S. Bassett - networks, complex systems
    1. On the nature and use of models in network neuroscience (2018)
  2. Konrad Kording
    1. How advances in neural recording affect data analysis (2011)
  3. Lyle Ungar
  4. Vijay Balasubramanian

University of California, Berkeley

Computational Neuroscience

  1. Michael Deweese - auditory attention
  2. Jack Gallant - visual neuroscience
    1. Identifying natural images from human brain activity (2008)
    2. Topographic organization in and near human visual area V4 (2007)
    3. Complete functional characterization of sensory neurons by system identification (2006)
    4. Goal-related activity in area V4 during free viewing visual search: Evidence for a ventral stream salience map (2003)
  3. Alison Gopnik - AI inspired by developmental psychology, Bayesian models of child development
  4. Bruno Olshausen
    1. Probabilistic Models of the Brain: Perception and Neural Function (textbook)
  5. Fritz Sommer

Machine Learning

  1. Pieter Abbeel - deep learning for robotics (reinforcement learning, apprenticeship)
  2. Moritz Hardt - fairness in machine learning
    1. Understanding deep learning requires rethinking generalization (2017)
  3. Sergey Levine - machine learning for complex behavioral skills
    1. Deep Visual Foresight for Planning Robot Motion (2016)
    2. Deep Learning for Robots: Learning from Large-Scale Interaction (2016)
  4. Stuart Russell
  5. Joseph E. Gonzalez - “practical AI”, dynamic neural nets for transfer learning, explainable reinforcement learning, frameworks for deep RL and parameter tuning
  6. Jiantao Jiao - information theory, applied probability
  7. Yi Ma - mathematical principles of high-dimensional sensorial data
  8. Gireeja Ranade - broad AI, control theory
  9. Bin Yu - causal inference
    1. Interpretable machine learning: definitions, methods, and applications (2019)
    2. The DeepTune framework for modeling and characterizing neurons in visual cortex area V4 (2018)
    3. Hierarchical interpretations of neural network predictions (2018)

University of California, Riverside

  1. Fabio Pasqualetti - control theory, complex systems
    1. Controllability of structural brain networks (2015)
    2. Optimally controlling the human connectome: the role of network topology (2016)

University of Connecticut

  1. Ian Stevenson
    1. How advances in neural recording affect data analysis (2011)

University of Texas at Austin

  1. Alex Huth - representation of language
    1. Incorporating Context into Language Encoding Models for fMRI (2018)
    2. The revolution will not be controlled: natural stimuli in speech neuroscience (2018)

University of Washington

  1. Fred Rieke - physics imposing limits on sensory processing (photon counting in visual system)
  2. Adrienne Fairhall - adaptation at single neuron level
    1. The role of adaptation in neural coding
    2. Reconfiguring motor circuits for a joint manual and BCI task.
  3. Eberhard E. Fetz - cortical control of movement, bidirectional BCIs, neural modeling

Washington University in St. Louis

  1. David Michael Kaplan
    1. Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective (2010)
  2. Carl F. Craver
    1. Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective (2010)

Universities outside the US

University College London

  1. Karl Friston - variational Laplacian procedures, generalized filtering for hierarchical Bayesian model inversion

Imperial College London

  1. Claudia Clopath

Gatsby Computational Neuroscience Unit, University College London

  1. Peter Dayan
    1. Review of his book: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (2003)

University of Oxford

  1. Selen Atasoy - harmonic brain modes framework
    1. Human brain networks function in connectome-specific harmonic waves (2016)
    2. Harmonic Brain Modes: a Unifying Framework for Linking Space and Time (2017)
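The "harmonic brain modes" in Atasoy's papers above are eigenvectors of the graph Laplacian of the connectome, so low eigenvalues correspond to smooth, global activity patterns. A toy sketch of the computation with my own stand-in network (a ring of 8 nodes instead of a real connectome):

```python
import numpy as np

# Connectome-harmonics sketch: build the adjacency matrix of a small
# ring network, form the graph Laplacian L = D - A, and take its
# eigendecomposition. The eigenvectors are the "harmonic modes".
n = 8
A = np.zeros((n, n))
for i in range(n):                     # each node links to its two neighbours
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)

print(np.round(eigvals, 3))            # 0 first: the constant (global) mode
```

On a real structural connectivity matrix the same two lines of linear algebra yield the spatial patterns that the papers relate to resting-state networks.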

University of Edinburgh

Computational Neuroscience

  1. David Willshaw
  2. Peggy Seriès - RL and Bayesian models in computational psychiatry
    1. Comprehensive Review: Computational Modelling of Schizophrenia. Neuroscience and Biobehavioral Reviews. (2017)
    2. Changing expectations about speed alters perceived motion direction. (2011)
    3. Is the homunculus aware of sensory adaptation? (2009)
  3. Matthias H Hennig - neural network models of population activity
    1. Statistical models of neural activity, criticality, and Zipf’s law (2019)
    2. Local learning rules to attenuate forgetting in neural networks (2018)
    3. Optimal encoding in stochastic latent-variable models (2018)
  4. Arno Onken - machine learning models in neuro
    1. Categorical encoding of decision variables in orbitofrontal cortex (2019)
    2. Synthesizing realistic neural population activity patterns using Generative Adversarial Networks (2018)
  5. Barbara Webb - insect robotics
    1. The internal maps of insects (2019)

Machine Learning

  1. Christopher Bishop
    1. Pattern Recognition and Machine Learning
  2. Chris Williams - time series, image interpretation
    1. A Framework for the Quantitative Evaluation of Disentangled Representations
    2. Customizing Sequence Generation with Multitask Dynamical Systems (2019)
  3. Amos Storkey
    1. Learning to learn via Self-Critique (2019)
    2. Large-Scale Study of Curiosity-Driven Learning (2018)
  4. Iain Murray
    1. Dynamic Evaluation of Transformer Language Models
  5. Michael U. Gutmann
  6. Charles Sutton (moved to Google)
    1. A Survey of Machine Learning for Big Code and Naturalness

Australian National University

  1. Marcus Hutter - artificial general intelligence
    1. Universal Artificial Intelligence (2004)

Dalle Molle Institute for Artificial Intelligence Research

  1. Jürgen Schmidhuber - artificial general intelligence
    1. Deep Learning Annus Mirabilis
    2. Metalearning
    3. PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem
    4. Evolutionary principles in self-referential learning

Istituto Italiano di Tecnologia

  1. Fabio Anselmi (see MIT)

Maastricht University

  1. Alexander Sack
  2. Rainer Goebel

MILA

  1. Yoshua Bengio

Ruhr University Bochum

  1. Asja Fischer

University of Waterloo

  1. Chris Eliasmith - semantic pointer architecture, SPAUN, Nengo neural simulation, neural engineering
    1. A large-scale model of the functioning brain (2012)
  2. Andreas Stockel - neural engineering
    1. Course on simulating neurobiological systems
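The neural engineering approach behind Eliasmith's Nengo represents a quantity by a population of heterogeneous tuning curves and reads it back out with linear decoders found by least squares. A stripped-down sketch with my own rectified-linear "neurons" (Nengo itself uses spiking models and more careful parameter distributions):

```python
import numpy as np

# NEF-style encode/decode sketch: random tuning curves encode a scalar x,
# and least-squares decoders recover x from the population rates.
rng = np.random.default_rng(2)
n_neurons = 50
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)   # preferred direction

x = np.linspace(-1, 1, 101)
rates = np.maximum(0, gains[:, None] * encoders[:, None] * x + biases[:, None])

# Decoders d solve rates.T @ d ~= x in the least-squares sense.
d, *_ = np.linalg.lstsq(rates.T, x, rcond=None)
x_hat = rates.T @ d

print(np.max(np.abs(x_hat - x)))       # small reconstruction error
```

Chaining such populations, with decoders solved for functions of x rather than x itself, is what lets large models like SPAUN compute with neurons.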

Other Private Institutions

Cold Spring Harbor Laboratory

  1. Anne Churchland - neural machinery underlying decision-making
  2. Anthony Zador - neural circuits and auditory processing, sequencing connectome, AI/neuro bridge
    1. A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains (2019)

Microsoft Research

Montreal

  1. Philip Bachman - deep infomax

Cambridge, UK

  1. Chris Bishop

Janelia Research Campus (in Virginia)

Focus on mechanistic cognitive neuroscience

The Salk Institute

  1. Terrence J. Sejnowski
    1. Spatial Transformations in the Parietal Cortex Using Basis Functions

Google Brain

  1. On the Expressive Power of Deep Neural Networks (2017)
  2. Understanding Deep Learning Requires Rethinking Generalization (2017)
  3. David Sussillo

DeepMind

  1. Andrea Tacchetti
  2. Botvinick (prev Princeton)
    1. Reinforcement Learning, Fast and Slow (2019)
  3. Shane Legg
    1. Machine Super Intelligence
  4. Jane Wang
  5. Timothy Lillicrap

Numenta

  1. Jeff Hawkins
    1. A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex (2019)

UberAI

  1. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

Neuralink

  1. An Integrated Brain-Machine Interface Platform with Thousands of Channels
