Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel artificial intelligence model inspired by neural oscillations in the brain, with the goal of significantly advancing how machine learning algorithms handle long sequences of data.
AI often struggles with analyzing complex information that unfolds over long periods of time, such as climate trends, biological signals, or financial data. One new class of AI models, called "state-space models," has been designed specifically to understand these sequential patterns more effectively. However, existing state-space models often face challenges: they can become unstable or require a significant amount of computational resources when processing long data sequences.
To address these issues, CSAIL researchers T. Konstantin Rusch and Daniela Rus have developed what they call “linear oscillatory state-space models” (LinOSS), which leverage principles of forced harmonic oscillators, a concept deeply rooted in physics and observed in biological neural networks. This approach provides stable, expressive, and computationally efficient predictions without overly restrictive conditions on the model parameters.
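To make the idea concrete, here is a minimal, illustrative sketch of how a layer of forced harmonic oscillators can turn an input sequence into a stable hidden-state sequence. This is not the authors' LinOSS formulation or discretization; the function name, parameter shapes, and the simple symplectic-Euler update below are assumptions chosen purely for illustration.

```python
# Illustrative sketch only (not the authors' exact method): one layer of
# independent forced harmonic oscillators driven by an input sequence.
# Continuous dynamics per hidden channel:  x''(t) = -a * x(t) + forcing(t)
# Discretized here with a simple symplectic-Euler step, which stays stable
# for positive frequencies a and a sufficiently small time step dt.
import numpy as np

def oscillatory_ssm_layer(u, a, b, c, dt=0.1):
    """u: input sequence, shape (T, d_in); a: per-channel frequencies (m,);
    b: input projection (m, d_in); c: linear readout (d_out, m).
    Returns an output sequence of shape (T, d_out)."""
    T = u.shape[0]
    m = a.shape[0]
    x = np.zeros(m)            # oscillator positions (hidden state)
    v = np.zeros(m)            # oscillator velocities
    drive = u @ b.T            # project the input onto each oscillator, (T, m)
    states = np.empty((T, m))
    for t in range(T):
        v = v + dt * (-a * x + drive[t])   # velocity update (forced oscillation)
        x = x + dt * v                     # position update
        states[t] = x
    return states @ c.T        # linear readout to d_out channels

# Toy usage with random (hypothetical) parameters:
rng = np.random.default_rng(0)
u = rng.standard_normal((1000, 4))             # length-1000 input with 4 features
a = rng.uniform(0.1, 1.0, size=16)             # positive oscillator frequencies
b = rng.standard_normal((16, 4)) * 0.1
c = rng.standard_normal((2, 16)) * 0.1
y = oscillatory_ssm_layer(u, a, b, c)          # (1000, 2) output sequence
```

Because each channel evolves as a (forced) oscillator rather than a freely learned recurrence, the hidden state neither blows up nor collapses over long sequences, which is the intuition behind the stability the researchers describe.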
"Our goal was to capture the stability and efficiency seen in biological neural systems and translate these principles into a machine learning framework," explains Rusch. "With LinOSSwe can now reliably learn long-range interactionseven in sequences spanning hundreds of thousands of data points or more."
The LinOSS model is unique in ensuring stable prediction while requiring far less restrictive design choices than previous methods. Moreover, the researchers rigorously proved the model’s universal approximation capability, meaning it can approximate any continuous, causal function relating input and output sequences.
Empirical testing demonstrated that LinOSS consistently outperformed existing state-of-the-art models across a range of demanding sequence classification and forecasting tasks. Notably, LinOSS outperformed the widely used Mamba model by nearly a factor of two on tasks involving sequences of extreme length.
Recognized for its significance, the research was selected for an oral presentation at ICLR 2025, an honor awarded to only the top 1 percent of submissions. The MIT researchers anticipate that the LinOSS model could significantly impact any field that would benefit from accurate and efficient long-horizon forecasting and classification, including health-care analytics, climate science, autonomous driving, and financial forecasting.
"This work exemplifies how mathematical rigor can lead to performance breakthroughs and broad applications," Rus says. "With LinOSSwe’re providing the scientific community with a powerful tool for understanding and predicting complex systemsbridging the gap between biological inspiration and computational innovation."
The team expects that the emergence of a new paradigm like LinOSS will give machine learning practitioners a foundation to build upon. Looking ahead, the researchers plan to apply the model to an even wider range of data modalities. Moreover, they suggest that LinOSS could provide valuable insights into neuroscience, potentially deepening our understanding of the brain itself.
Their work was supported by the Swiss National Science Foundation, the Schmidt AI2050 program, and the U.S. Department of the Air Force Artificial Intelligence Accelerator.