Getting to Know Kanaka Rajan

Kanaka Rajan had a circuitous route into computational neuroscience. Initially trained in biomedical engineering, she switched to experimental neuroscience in graduate school at Columbia University. While she enjoyed the challenge of the experiments, she felt limited by what experimental design could answer, and her more mathematical training eventually led her to computational neuroscience.

Kanaka’s research objective is none other than reverse engineering the human brain. Essentially, she’s trying to build an artificial system that works like the human brain in order to learn how our brains work. To do this, she uses AI and machine learning alongside neuroscience theory and experimental research.

“Then of course, the rest is a little bit of hope and prayer, and a little bit of hard science to say, ‘Well, are these the same operating principles that the biological brain uses?’ So I build essentially artificial models that mimic the biological brain but are much more simplified and engineered.”

Using Neural Networks to Represent the Human Brain

This work is undertaken on two levels: the holistic level of behavior, and the more reductive level of neural activity.

One way to build an accurate representation of the human brain is by measuring behavioral output. The idea is that if we can build a model that behaves the same way as an organism, we can better understand what gives rise to that behavior in the organism.

The other part of the work is mapping the biological mechanisms of the brain: understanding how neurons (the cells that make up the brain) work together and identifying the different physical subsystems the brain is built from.

Kanaka builds neural networks that address both perspectives, and ultimately tries to combine them for an accurate representation of the brain as a whole.

The Elevator Problem

An example of a mental task that Kanaka tries to map for her work is called The Elevator Problem. On the surface, it’s simple:

If you’re standing in front of an elevator bank and two doors open, which one do you pick?

However, there are many elements that go into a decision like this: counting the number of people in each elevator, predicting the amount of time each would take, remembering where each elevator goes, all in a split second. We’ll come back to this later.

Recurrent Neural Networks (RNNs) and the Temporal Dichotomy

One of the complex things about the brain is that it works at many different time scales. Individual neurons within the brain fire in milliseconds, but memories can also be stored in these same neurons for a lifetime. Finding an artificial model that can bridge these two vastly different time scales has been a challenge in the field.

Recurrent neural networks (RNNs) are one answer to this temporal dichotomy. Because RNNs have both feedforward and feedback (recurrent) connections, they can produce short-term and longer-range dynamics that mirror neural activity more closely than many other models.
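To make that concrete, here is a minimal sketch of the kind of recurrent dynamics involved, written in Python with NumPy. It is not Kanaka’s actual model, and the network size, time constant, and gain are illustrative numbers; the point is simply that feeding activity back through recurrent connections lets fast, per-step updates accumulate into slower, longer-lived patterns.

```python
# Minimal sketch of a recurrent network (illustrative, not Kanaka's model).
# The recurrent weight matrix W feeds each neuron's activity back into the
# population, so fast per-step updates can build into slower collective dynamics.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
dt, tau = 1.0, 10.0    # simulation step and neuron time constant (arbitrary units)
g = 1.5                # recurrent gain; values above 1 tend to give rich dynamics

# Random recurrent connectivity, scaled so activity neither dies out nor blows up immediately
W = g * rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)

x = rng.standard_normal(n_neurons)     # membrane-potential-like state
rates = []
for t in range(500):
    r = np.tanh(x)                     # firing rates
    x = x + (dt / tau) * (-x + W @ r)  # leaky integration of recurrent input
    rates.append(r.copy())

rates = np.array(rates)                # shape (time, neurons)
print(rates.shape)
```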

Modeling Human Memory with Neural Networks

While the traditional view of memory is as a static representation of the past, a huge body of research has shown that memory is in fact a dynamic process that can be changed and even rewritten.

One of the ideas Kanaka worked on during her postdoc is the concept of memory as a sequence of neurons firing one after another, creating a wave of activity in the brain. This captures both the static side of memory, with each neuron active at a certain level, and its dynamic side, since the activity itself moves across the population over time.
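Here is a toy sketch of that idea, again purely illustrative: neurons wired in a chain so that a pulse of activity travels across the population in a fixed order. Which neurons fire, and in what sequence, is the stored content; the traveling activity is the dynamic process.

```python
# Toy "memory as a wave" sketch (illustrative only): a chain of neurons
# in which each neuron excites the next, so a pulse of activity travels
# across the population in a fixed sequence.
import numpy as np

n = 50
W = np.zeros((n, n))
for i in range(n - 1):
    W[i + 1, i] = 1.2        # neuron i excites neuron i + 1

r = np.zeros(n)
r[0] = 1.0                   # kick the first neuron to start the wave

peaks = []
for t in range(45):
    r = np.tanh(W @ r)       # activity hops one link down the chain each step
    peaks.append(int(np.argmax(r)))

print(peaks[:10])            # the most active neuron marches along: 1, 2, 3, ...
```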

Theory of Mind

While Kanaka wants to be able to present a simple equation for how the mind works, it’s just not possible right now; there’s such variety in the approaches to understanding the brain that it’s hard to combine them all. In order to study the mind, we often have to reduce questions to measurable experiments, usually studying specific behaviors or biological structures. This makes it really hard to make generalizations about the mind, since the evidence is so specific and comes in such different forms. As a theorist, Kanaka tries to combine pieces of evidence from different branches of research, but it’s incredibly difficult to definitively answer any large question about the inner workings of the mind.

Using RNNs as Mental Substrates

There are many reasons why Kanaka likes RNNs, but a big one is that a single RNN can be split into sub-regions in a way that mimics the mammalian brain. For example, one neural circuit in the brain might control the stress response while a different circuit controls planning. Kanaka likes that RNNs can imitate this organization, and part of her work is training specific sub-regions of RNNs for specific behaviors.
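One rough way to picture this, as a sketch rather than the lab’s actual setup: build the recurrent weight matrix out of blocks, dense within each (made-up) region and sparse between regions, so that each block can be trained or probed on its own.

```python
# Rough sketch of a "multi-region" RNN connectivity matrix (illustrative;
# region names, sizes, and densities are invented for the example).
import numpy as np

rng = np.random.default_rng(1)
regions = {"region_A": 40, "region_B": 40, "region_C": 20}   # hypothetical sub-regions
sizes = list(regions.values())
n = sum(sizes)
starts = np.cumsum([0] + sizes[:-1])

W = np.zeros((n, n))
for i, (ri, ni) in enumerate(zip(starts, sizes)):
    for j, (rj, nj) in enumerate(zip(starts, sizes)):
        density = 0.5 if i == j else 0.05       # dense within a region, sparse across regions
        block = rng.standard_normal((ni, nj)) / np.sqrt(nj)
        mask = rng.random((ni, nj)) < density
        W[ri:ri + ni, rj:rj + nj] = block * mask

# Each diagonal block can then be trained or perturbed separately,
# loosely analogous to studying one brain area at a time.
print(W.shape, round(float((W != 0).mean()), 3))
```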

Curriculum Training for RNNs

Kanaka’s lab uses curriculum learning to train its RNNs. This more or less mimics the way a school syllabus is structured: initially presenting a simple idea, then slowly adding layers of complexity.

Here’s where the elevator problem comes back in. For this example, let’s say we’re trying to train the system to choose the elevator with the fewest people.

For the base level of the curriculum, the system is given a choice between two empty elevators and rewarded no matter which one it picks (since the choice is equal). On the next level, one elevator has a single person in it while the other is empty, and the system is only rewarded if it chooses the empty one. You can eventually scale this up to large numbers of people in each elevator, and the system will learn to always choose the one with fewer people.
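As a very simplified sketch of what staged training like this might look like (a tiny logistic model standing in for a full RNN, with made-up stages, counts, and learning rate, not the lab’s actual setup):

```python
# Bare-bones curriculum-training sketch for the elevator task (illustrative
# stand-in, not the lab's setup): a tiny logistic model sees the two occupancy
# counts and learns to pick the emptier elevator, with each curriculum stage
# allowing more people per elevator.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)                                   # weights on (count_left, count_right)

def make_batch(max_people, size=256):
    counts = rng.integers(0, max_people + 1, size=(size, 2)).astype(float)
    labels = (counts[:, 1] < counts[:, 0]).astype(float)   # 1 = right elevator holds fewer people
    return counts, labels

for stage, max_people in enumerate([1, 3, 10, 50]):         # the curriculum: rising difficulty
    for _ in range(1000):
        x, y = make_batch(max_people)
        p = 1.0 / (1.0 + np.exp(-(x @ w)))                  # probability of picking the right-hand elevator
        w -= 0.01 * (x.T @ (p - y)) / len(y)                # logistic-regression gradient step
    x, y = make_batch(max_people)
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    acc = ((p > 0.5) == (y > 0.5)).mean()
    print(f"stage {stage}, up to {max_people} people per elevator: accuracy ~ {acc:.2f}")
```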

An interesting thing that happens with curriculum training is that networks can begin to intuit the correct answer without explicitly counting the number of people in each elevator. The system can start to have a “feeling” about which one is correct, almost like instinct emerging in the deeper levels of the neural network.

The Difficulties of Inferring Behavioral Output

One of the difficult things about only being able to measure behavioral output is that two models can produce the same behavior, and it’s hard to know whether they use the same process to get there. Discerning the actual order of operations is one of the unknown parts of the work, but seeing different models trained on different curricula arrive at the same behavior offers more candidate hypotheses for how human mental processes might work.

The simpler a task is, the more possible curricula there are that could produce the correct behavior. As the intended behavior becomes more complex, the training has to become more specific for the model to come to the correct conclusion.

The Nature of Models and Reality

“I think for me personally, the key is to not take my models so seriously that I conflate them with reality.”

Kanaka recognizes that her models are simulations of biological processes. While they can provide helpful information and stand in for experiments that would be unethical, time-intensive, or impossible in the real world, they can’t map directly onto the biology that has evolved over millennia to make our brains the complex structures we know and love.

The Future of Computational Neuroscience

Kanaka views her work as building tools for deeper inquiry, allowing researchers to ask and test better questions. After all, it’s far easier to measure and alter variables in computer programs than in living organisms.

“At the very, very least they generate predictions that the next experiment can then validate or falsify.”

To hear more about computational neuroscience, you can listen to the full episode here!