A comparison of deep learning and linear-nonlinear cascade approaches to neural encoding

bioRxiv: the Preprint Server for Biology
Theodore H. Moskovitz, Jonathan W. Pillow

Abstract

A large body of work on neural encoding has focused on "cascade" type models such as the linear-nonlinear-Poisson (LNP) model. This approach describes the encoding process as a series of stages: (1) projection of the stimulus onto a bank of linear filters; (2) a nonlinear function combining these filter outputs; and (3) a noisy spike generation process. Here we explore the relationship of the LNP modeling framework to more recent approaches arising from the deep learning literature. Specifically, we show that deep neural network (DNN) and convolutional neural network (CNN) models of neural activity sit firmly within the LNP framework, and correspond to particular parametrizations of the nonlinear stage of the LNP model. Using data from primate retina and primary visual cortex, we compare the performance of LNP models fit with deep learning methods to LNP models fit with traditional estimators, including spike-triggered covariance (STC), information-theoretic spike-triggered averaging and covariance (iSTAC), and maximum likelihood estimators also known as "maximally informative dimensions" (MID). We show that models with nonlinearities parametrized by deep networks achieve higher accuracy for a fixed number of f…
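The three-stage cascade described above can be illustrated with a minimal simulation sketch. This is not the authors' code: the dimensions, filter values, and the exponential nonlinearity with weights `w` are hypothetical choices for illustration. The point is the structure: linear projection, a nonlinear combination of filter outputs (which a DNN/CNN would replace with a learned network), and Poisson spiking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1000 stimulus frames, a 40-sample
# stimulus window, and a bank of K = 2 linear filters.
T, D, K = 1000, 40, 2

stimulus = rng.standard_normal((T, D))   # white-noise stimulus frames
filters = rng.standard_normal((D, K))    # stage 1: bank of linear filters

def nonlinearity(z):
    # Stage 2: combine filter outputs. Here a fixed linear
    # combination passed through an exponential, one simple
    # parametrization; a deep network would replace this function.
    w = np.array([1.0, -0.5])            # illustrative weights
    return np.exp(z @ w - 2.0)

rate = nonlinearity(stimulus @ filters)  # instantaneous firing rate (Hz-like)
spikes = rng.poisson(rate)               # stage 3: Poisson spike generation
```

In this framing, "fitting an LNP model with deep learning methods" means parametrizing `nonlinearity` (and possibly the filters) as a network and optimizing the Poisson log-likelihood of `spikes`, rather than using moment-based estimators such as STC.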

Related Concepts

Learning
Literature
Primates
Visual Cortex
Depth
Nonlinear Dynamics
Biological Neural Networks
