We present a systematic, data-driven approach based on implicit image representations and contrastive learning that identifies and parameterizes the manifold of highly activating stimuli for visual sensory neurons.
We tested our method on simple Gabor-based model neurons with known (and exact) invariances, as well as on neural network models predicting the responses of macaque V1 complex cells. Below are the learned invariance manifolds of two example V1 neurons:
You can read the full paper here.
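As a rough illustration of the implicit-image-representation idea only (not the paper's actual implementation), the sketch below uses a tiny, randomly initialized coordinate network (CPPN-style) that maps pixel coordinates plus a scalar manifold parameter to pixel intensities. In the actual method the network weights are learned, so that varying the parameter traces out equally activating stimuli for a neuron; here the weights are random and the function names are hypothetical.

```python
import numpy as np

def cppn_image(theta, size=32, hidden=16, seed=0):
    """Render an image from a tiny random-weight coordinate network.

    Illustrative sketch: each pixel intensity is computed from its
    (x, y) coordinate plus a scalar manifold parameter `theta` by a
    small MLP. This is the core of an implicit image representation;
    in the paper the weights are learned, here they are random.
    """
    rng = np.random.default_rng(seed)
    # Pixel coordinate grid in [-1, 1]
    xs = np.linspace(-1, 1, size)
    X, Y = np.meshgrid(xs, xs)
    inp = np.stack(
        [X.ravel(), Y.ravel(), np.full(size * size, theta)], axis=1
    )  # shape (size*size, 3)
    W1 = rng.standard_normal((3, hidden))
    W2 = rng.standard_normal((hidden, 1))
    h = np.tanh(inp @ W1)   # hidden features per pixel
    img = np.tanh(h @ W2)   # pixel intensities in (-1, 1)
    return img.reshape(size, size)

img = cppn_image(theta=0.5)
```

Because the whole image is a smooth function of `theta`, sweeping that single parameter produces a continuous family of images, which is what makes this kind of parameterization attractive for describing an invariance manifold.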
This project requires that you have the following installed:

- Docker
- Docker Compose

To get started:

- Clone the repository: `git clone https://github.com/sinzlab/cppn_for_invariances.git`
- Navigate to the project directory: `cd cppn_for_invariances`
- Run the following command inside the directory: `docker-compose run -d -p 10101:8888 jupyterlab`. This builds a Docker image and then starts a container from that image in which the code can be run.
- You can now open the JupyterLab environment in your browser at `localhost:10101`.
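For orientation only: the `jupyterlab` service started above is defined in the repository's own `docker-compose.yml`. A minimal service of roughly this shape could look as follows; every detail below is an assumption for illustration, not the repo's actual file.

```yaml
# Hypothetical sketch; see the repository's docker-compose.yml for the real definition.
services:
  jupyterlab:
    build: .          # build the image from the repo's Dockerfile
    volumes:
      - .:/notebooks  # hypothetical mount point for the project code
# The `-p 10101:8888` flag in the run command publishes JupyterLab's
# default port 8888 on host port 10101.
```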
If you encounter any problems or have suggestions, please open an issue.
@inproceedings{baroni2022learning,
  title={Learning Invariance Manifolds of Visual Sensory Neurons},
  author={Luca Baroni and Mohammad Bashiri and Konstantin F Willeke and Jan Antolik and Fabian H Sinz},
  booktitle={NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations},
  year={2022}
}