Overview: Researchers have developed a new GPU-based machine learning algorithm to help predict the connectivity of networks in the brain.
A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) could help scientists better understand and predict connectivity between different brain regions.
The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation or ReAl-LiFE, can quickly analyze the vast amounts of data generated by diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain.
Using ReAl-LiFE, the team was able to evaluate dMRI data more than 150 times faster than existing state-of-the-art algorithms.
“Tasks that previously took hours to days can be completed in seconds to minutes,” said Devarajan Sridharan, associate professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study published in the journal Nature Computational Science.
Millions of neurons fire in the brain every second, generating electrical pulses that travel through neuronal networks from one point in the brain to another via connecting cables, or “axons.” These connections are essential for the computations that the brain performs.
“Understanding brain connectivity is critical to uncovering brain-behavioral relationships at scale,” said Varsha Sreenivasan, a PhD student at CNS and lead author of the study.
However, conventional approaches to study brain connectivity mostly use animal models and are invasive. In contrast, dMRI scans provide a non-invasive method to study brain connectivity in humans.
The cables (axons) that connect different brain regions are the brain’s information highways. Because bundles of axons are shaped like tubes, water molecules move through them in a directed manner, preferentially along their length. With dMRI, scientists can track this movement to create a comprehensive map of the brain’s network of fibers, called a connectome.
Unfortunately, it is not easy to estimate these connectomes: the data obtained from the scans gives only the net flow of water molecules at each point in the brain.
“Imagine that the water molecules are cars. The information obtained is the direction and speed of the vehicles at any point in space and time without information about the roads. Our job is similar to inferring road networks by observing these traffic patterns,” explains Sridharan.
To accurately identify these networks, conventional algorithms closely match the predicted dMRI signal from the inferred connectome to the observed dMRI signal. Scientists had previously developed an algorithm called Linear Fascicle Evaluation (LiFE) to perform this optimization, but one of the challenges was that it worked on traditional Central Processing Units (CPUs), making the computation time-consuming.
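At its core, LiFE-style evaluation can be framed as a non-negative least squares problem: find non-negative weights for candidate fibers so that the weighted sum of their predicted dMRI signals best matches the measured signal, then prune fibers whose weights go to zero. A minimal sketch of this idea on toy data (the matrix, sizes, and values are illustrative, not the study’s actual pipeline):

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup: each column of A is the dMRI signal predicted by one
# candidate fiber; y is the observed signal (illustrative values only).
rng = np.random.default_rng(0)
A = rng.random((50, 10))              # 50 measurements x 10 candidate fibers
true_w = np.zeros(10)
true_w[[1, 4, 7]] = [0.8, 0.5, 1.2]   # only a few fibers truly contribute
y = A @ true_w

# LiFE-style evaluation: non-negative weights minimizing ||A w - y||_2.
w, residual = nnls(A, y)

# Fibers with (near-)zero weight are pruned from the connectome.
kept = np.flatnonzero(w > 1e-6)
print(kept, residual)
```

On this noiseless toy problem the solver recovers exactly the three contributing fibers; real dMRI data is far larger and noisier, which is what motivates the acceleration described below.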
In the new study, Sridharan’s team modified their algorithm to reduce computational effort in several ways, including by pruning redundant connections (fibers), significantly improving LiFE’s performance.
To further speed up the algorithm, the team also redesigned it to run on specialized electronic chips called Graphics Processing Units (GPUs), the kind found in high-end gaming consoles, which allowed them to analyze data 100-150 times faster than previous approaches.
This improved algorithm, ReAl-LiFE, could also predict how a human subject would behave or perform a particular task. In other words, using the algorithm’s estimated connection strengths for each individual, the team was able to explain variations in behavioral and cognitive test scores in a group of 200 participants.
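The general idea behind this kind of behavior prediction is to treat each individual’s estimated connection strengths as features and regress cognitive test scores on them. A small synthetic illustration of that idea (the data, feature count, and ridge model here are assumptions for the sketch, not the authors’ actual analysis):

```python
import numpy as np

# Synthetic illustration: 200 subjects, 30 connection-strength features.
rng = np.random.default_rng(1)
n_subjects, n_edges = 200, 30
X = rng.normal(size=(n_subjects, n_edges))       # per-subject connection strengths
beta = rng.normal(size=n_edges)                  # hypothetical effect of each connection
scores = X @ beta + 0.1 * rng.normal(size=n_subjects)  # cognitive test scores

# Ridge regression (closed form): beta_hat = (X^T X + lam*I)^-1 X^T y.
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_edges), X.T @ scores)

# Fraction of score variance explained by the connectome features (R^2).
pred = X @ beta_hat
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(round(r2, 3))
```

In the synthetic case the features explain nearly all the variance by construction; with real behavioral data the explained variance is much smaller, which is why large subject pools matter.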
Such an analysis may also have medical applications. “Large-scale data processing is becoming increasingly necessary for big-data neuroscience applications, especially for understanding healthy brain function and brain pathology,” says Sreenivasan.
For example, using the connectomes obtained, the team hopes to identify early signs of aging or deterioration in brain function before they manifest themselves behaviorally in Alzheimer’s patients.
“In another study, we found that an earlier version of ReAl-LiFE outperformed other competing algorithms in distinguishing Alzheimer’s disease patients from healthy controls,” Sridharan says.
He adds that their GPU-based implementation is very general and can be used to tackle optimization problems in many other fields as well.
About this AI and neuroscience research news
Original research: Open access.
“GPU-accelerated connectome discovery at scale” by Devarajan Sridharan et al. Nature Computational Science
GPU-accelerated connectome discovery at scale
Diffusion magnetic resonance imaging and tractography make it possible to estimate anatomical connectivity in the human brain in vivo. But without ground-truth validation, different tractography algorithms can yield widely differing connectivity estimates. While streamline-pruning techniques mitigate this challenge, their slow computation times preclude use in big-data applications.
We present ‘Regularized, Accelerated, Linear Fascicle Evaluation’ (ReAl-LiFE), a GPU-based implementation of a state-of-the-art streamline-pruning algorithm (LiFE), which achieves >100× acceleration over previous CPU-based implementations.
By taking advantage of these accelerations, we overcome important limitations of the LiFE algorithm to generate sparser and more accurate connectomes. We demonstrate the ability of ReAl-LiFE to estimate connectomes with superlative test-retest reliability, while outperforming competing approaches.
In addition, we predicted inter-individual variation in multiple cognitive scores with ReAl-LiFE connectome features. We propose ReAl-LiFE as a fast tool, surpassing the state of the art, for accurate discovery of individualized brain connectomes at scale.
Finally, our GPU-accelerated implementation of a popular non-negative least squares optimization algorithm is broadly applicable to many real-world problems.
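One reason non-negative least squares maps well onto GPUs is that it can be solved by projected gradient descent, where every iteration is just matrix-vector products plus an elementwise clamp, all massively parallel operations. A minimal NumPy sketch of that general technique (the actual ReAl-LiFE implementation differs; this is an assumption-laden illustration):

```python
import numpy as np

def nnls_projected_gradient(A, y, n_iter=2000):
    """Solve min_{w >= 0} 0.5*||A w - y||^2 by projected gradient descent.

    Each step is dense linear algebra (matrix-vector products) plus an
    elementwise max, which is why this style of solver parallelizes well
    on GPU hardware.
    """
    # Step size from the Lipschitz constant of the gradient, sigma_max(A)^2.
    L = np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)
        w = np.maximum(w - grad / L, 0.0)  # gradient step, then project onto w >= 0
    return w

# Toy check against a known non-negative solution.
rng = np.random.default_rng(2)
A = rng.random((40, 8))
w_true = np.abs(rng.normal(size=8))
y = A @ w_true
w_est = nnls_projected_gradient(A, y)
print(np.max(np.abs(w_est - w_true)))
```

Swapping NumPy arrays for their GPU equivalents would move each iteration onto the device unchanged, which is the sense in which such solvers are broadly reusable.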