**Overview:** A new mathematical model identifies essential connections between neurons, revealing that some connections in the brain may be more essential than others.

**Source:** HHMI

**After a career exploring the mysteries of the universe, a senior scientist at Janelia Research Campus is now probing the mysteries of the human brain, developing new insights into the connections between brain cells.**

Tirthabir Biswas had a successful career as a theoretical high-energy physicist when he came to Janelia on a sabbatical in 2018. Biswas still enjoyed tackling problems about the universe, but the field had lost some of its excitement, with many important questions already answered.

“The neuroscience of today is a bit like the physics of a hundred years ago, when physics had so much data and they didn’t know what was going on and it was exciting,” said Biswas, who is part of the Fitzgerald Lab.

“There’s a lot of information in neuroscience and a lot of data, and they understand some specific big circuits, but there’s still no overarching theoretical understanding and there’s an opportunity to contribute.”

One of the biggest unanswered questions in neuroscience revolves around connections between brain cells. There are hundreds of times more connections in the human brain than there are stars in the Milky Way, but which brain cells are connected, and why, remains a mystery. This limits scientists’ ability to effectively treat mental health conditions and to build more capable artificial intelligence.

The challenge of developing a mathematical theory to better understand these connections was the first problem Janelia Group Leader James Fitzgerald posed when Biswas arrived at his lab.

While Fitzgerald was out of town for a few days, Biswas sat down with pen and paper and used his background in high-dimensional geometry to think about the problem, a different approach from that of neuroscientists, who typically rely on calculus and algebra to tackle such problems. Within days, Biswas had a key insight into the solution and approached Fitzgerald as soon as he returned.

“It looked like this was a really difficult problem, so if I say, ‘I solved the problem,’ he will probably think I’m crazy,” Biswas recalls. “But I decided to say it anyway.” Fitzgerald was skeptical at first, but once Biswas finished formalizing his work, they both realized he was on to something important.

“He had an insight that’s really fundamental to how these networks work that people haven’t had before,” Fitzgerald says. “This insight was made possible by interdisciplinary thinking. It was a flash of brilliance that came from his way of thinking, and it translated directly into this new problem that he had never worked on before.”

Biswas’s idea helped the team develop a new way to identify essential connections between brain cells, which was published on June 29 in *Physical Review Research*. By analyzing neural networks – mathematical models that mimic brain cells and their connections – they showed that certain connections in the brain may be more essential than others.

In particular, they looked at how these networks convert inputs into outputs. For example, an input may be a signal detected by the eye and the output may be the resulting brain activity. They looked at which connection patterns resulted in the same input-output transformation.

As expected, there were an infinite number of possible connections for any input-output combination. But they also found that certain connections exist in every model, leading the team to suggest that these necessary connections might be present in real brains. A better understanding of which connections are more important than others could lead to a greater awareness of how real neural networks in the brain perform calculations.
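This degeneracy is easy to see in a toy model. The sketch below is an illustrative example, not the paper's own construction: it uses threshold-linear (ReLU) neurons, as in the study, and shows two different synaptic weight vectors that produce identical outputs on a fixed set of inputs, because their difference is invisible to those inputs.

```python
import numpy as np

def relu(z):
    """Threshold-linear activation: rectified summed input."""
    return np.maximum(z, 0.0)

# Three presynaptic neurons, two input patterns (one per column).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Two different weight vectors for the same postsynaptic neuron.
w1 = np.array([1.0, 2.0, 0.5])
n = np.array([1.0, 1.0, -1.0])   # orthogonal to both input patterns: X.T @ n == 0
w2 = w1 + 0.3 * n                # a genuinely different connectivity...

# ...yet the input-output transformation is identical on these inputs.
print(relu(w1 @ X))   # [1.5 2.5]
print(relu(w2 @ X))   # [1.5 2.5]
```

Because the difference vector `n` is orthogonal to every input pattern, the two networks cannot be told apart from their input-output behavior alone, which is why infinitely many connectivities can implement the same transformation.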

The next step is for experimental neuroscientists to test this new mathematical theory to see if it can be used to make predictions about what happens in the brain. The theory has direct applications for Janelia’s efforts to map the fly brain connectome and record brain activity in larval zebrafish. Uncovering underlying theoretical principles in these small animals can be used to understand connections in the human brain, where capturing such activity is not yet possible.

“What we’re trying to do is put forward some theoretical ways to understand what really matters and use these simple brains to test those theories,” Fitzgerald says. “Once they are verified in simple brains, the general theory can be used to think about how brain computation works in larger brains.”

## About this neuroscience research news

**Author:** Nanci Bompey
**Source:** HHMI
**Contact:** Nanci Bompey – HHMI
**Image:** The image is in the public domain

**Original research:** Closed access.

“Geometric framework to predict structure from function in neural networks” by Tirthabir Biswas et al. *Physical Review Research*

**Abstract**

**Geometric framework to predict structure from function in neural networks**

Neural computation in biological and artificial networks depends on the nonlinear summation of many inputs.

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative relationships between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on the context.

Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown.

Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold linear neurons.

Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs.
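In the simplest feedforward case, this solution space can be written down directly. The sketch below is a simplified illustration, not the paper's derivation: it assumes all specified responses are positive so every neuron stays in its linear regime, and omits the recurrent weights and rectification the full analysis covers. All weight vectors consistent with the specified responses form a minimum-norm particular solution plus the null space of the input patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 5, 2                  # N input synapses, P specified response patterns (P < N)
X = rng.normal(size=(N, P))  # input patterns, one per column
y = np.array([1.0, 2.0])     # specified steady-state responses (kept positive)

# The constraints w @ X = y are underdetermined: a minimum-norm particular
# solution plus any vector in the null space of the input patterns solves them.
w_min, *_ = np.linalg.lstsq(X.T, y, rcond=None)
_, s, Vt = np.linalg.svd(X.T)
null_basis = Vt[len(s):]     # (N - P) rows spanning {n : X.T @ n == 0}

# Every weight vector in this (N - P)-dimensional affine space reproduces
# the specified responses exactly.
w_other = w_min + null_basis.T @ np.ones(N - P)
assert np.allclose(w_other @ X, y)
```

The dimension of this affine solution space, `N - P`, is why requiring more response patterns (larger `P`) pins down more of the connectivity.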

A generalization that takes noise into account further reveals that the geometry of the solution space can undergo topological transitions as the allowable error increases, which could provide insight into both neuroscience and machine learning.

We ultimately use this geometric characterization to derive certainty conditions that guarantee a non-zero synapse between neurons.
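A deliberately simplified sketch of how such a certainty condition can arise, again in a purely feedforward, fully linear setting (the paper's actual conditions are geometric and handle rectification): when a synaptic weight has no component along the null space of the constraints, it takes the same value in every solution, and if that shared value is nonzero the synapse must exist in any network reproducing the responses.

```python
import numpy as np

# Feedforward toy (linear regime): two specified responses, three synapses.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # synapse 3 never carries input
y = np.array([2.0, 1.0])     # specified positive responses

# Every solution of w @ X = y has the form w = [2, 1, t] for free t:
# the component along e3 (the null space) is unconstrained, while
# w[0] and w[1] are pinned by the data.
w_min, *_ = np.linalg.lstsq(X.T, y, rcond=None)
for t in (-1.0, 0.0, 5.0):
    w = w_min + t * np.array([0.0, 0.0, 1.0])
    assert np.allclose(w @ X, y)    # every such w reproduces the responses
    assert np.isclose(w[0], 2.0)    # so synapse 1 is certainly nonzero
```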

Thus, our theoretical framework could be applied to neural activity data to make rigorous anatomical predictions that generally follow from the model architecture.