Advancing AI’s boundaries at Cold Spring Harbor Laboratory

Kyle Daruwalla, an independent NeuroAI research scholar at Cold Spring Harbor Laboratory, has unveiled pioneering advancements in artificial intelligence that could transform the field of computing. His research focuses on developing AI models inspired by the human brain, emphasizing both efficiency in data processing and environmental sustainability.
Daruwalla’s work centers on neural networks, the fundamental architecture behind modern AI. Unlike traditional models, which rely on the backpropagation algorithm, Daruwalla’s design introduces biologically plausible learning rules, a departure from conventional training methods.
“Traditional AI models use backpropagation to adjust connections between artificial neurons, mimicking brain processes,” explained Daruwalla. “However, this method is not entirely brain-like. In backpropagation, the model processes data, identifies errors, and adjusts connections in a sequential manner. This approach does not reflect the brain’s simultaneous data processing and learning capabilities.”
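To make that sequential pattern concrete, here is a minimal, hypothetical sketch in Python/NumPy (not drawn from Daruwalla’s models; the network sizes and names are invented): a tiny two-layer network processes data in a forward pass, measures its error, and only then propagates that error backward to adjust every connection.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (4, 8))   # input -> hidden connections
W2 = rng.normal(0.0, 0.1, (8, 1))   # hidden -> output connections

def backprop_step(x, target, lr=0.01):
    """One sequential backpropagation update: forward, error, backward."""
    global W1, W2
    # 1) Process the data (forward pass).
    h = np.tanh(x @ W1)
    y = h @ W2
    # 2) Identify the error at the output.
    err = y - target
    # 3) Propagate the error backward, layer by layer; no connection
    #    can update until this global sweep completes.
    grad_W2 = np.outer(h, err)
    grad_h = (W2 @ err) * (1.0 - h ** 2)
    grad_W1 = np.outer(x, grad_h)
    # 4) Only now adjust the connections.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return float((err ** 2).sum())
```

Every call must finish its full backward sweep before a single weight changes, which is the stop-and-go behavior Daruwalla contrasts with the brain.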
Daruwalla’s model introduces the concept of local, real-time learning, in which each connection between neurons updates independently and continuously, without waiting on the rest of the network. This allows for more efficient and dynamic learning, akin to how the human brain functions.
“The brain updates connections in real-time, without pausing,” Daruwalla said. “Our model brings that type of learning to AI, maintaining the structure of existing models but changing how they are trained.”
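A local rule removes that global backward sweep. Continuing the toy network from the sketch above, here is one classic illustration (an Oja-style Hebbian update, a well-known local rule, not Daruwalla’s specific training rule): each weight is adjusted using only the activity of the two neurons it connects, so every connection can adapt continuously as data streams in.

```python
def local_step(x, lr=0.01):
    """One local, unsupervised update in the style of Oja's rule
    (illustrative only). Each weight W1[i, j] changes based solely on
    the presynaptic activity x[i] and the postsynaptic activity h[j],
    information available at the connection itself, with no backward pass.
    """
    global W1
    h = np.tanh(x @ W1)
    # Hebbian term (pre * post) plus a local normalization term
    # that keeps the weights from growing without bound.
    W1 += lr * (np.outer(x, h) - (h ** 2) * W1)
```

Because the update needs only signals present at each connection, it can be applied the moment activity arrives, rather than after a network-wide backward pass.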

A significant motivation behind this research is the need for more energy-efficient AI systems. Daruwalla explained how traditional AI training requires vast computational resources, consuming significant energy and generating substantial carbon emissions. For example, training a model like OpenAI’s GPT-3 can produce emissions equivalent to the total lifetime emissions of five automobiles.
“Our neuromorphic approach could reduce the environmental impact of AI training. Conventional computers move data back and forth between memory and processors, generating heat and consuming energy,” Daruwalla noted. “Neuromorphic hardware, designed to operate like neural networks, minimizes this data movement, leading to lower energy consumption.”
The inspiration for this novel approach stems from the field of neuromorphic computing, which seeks to build computers modeled after the brain. The brain processes vast amounts of information efficiently, using minimal energy. By emulating these processes, Daruwalla said that he aims to create AI models that are both powerful and sustainable.
While this research represents a fundamental shift in how AI is trained, its practical applications are still emerging. Daruwalla predicts that neuromorphic computing will first be adopted in servers running AI models, where energy efficiency can significantly reduce operational costs and environmental impact.
“The immediate application of this technology will likely be in large-scale AI servers,” Daruwalla said. “The current models consume substantial energy, and neuromorphic computing could mitigate this. However, mainstream adoption in consumer devices like smartphones and laptops is still a decade away.”
Daruwalla also said that the transition from theoretical research to practical application involves overcoming several challenges. He acknowledged that while he and his fellow researchers had taken a major step forward, there was still a long way to go in studying and experimenting with the new AI models.
“Implementing these ideas requires fine-tuning the models to bridge the gap between theory and practice,” Daruwalla said. “There’s a lot of experimentation involved to get it to work in real-world scenarios.”
Moving forward, he explained, the research will focus on refining these models and drawing additional inspiration from evolutionary and developmental biology.
“We need to create AI systems that learn efficiently from fewer examples, similar to humans and even animals,” Daruwalla emphasized. “This involves drawing more insights from how natural systems develop and learn.”
Ultimately, Daruwalla’s research is part of a broader effort to rethink the fundamentals of computing. He added that these experiments could help advance our understanding of AI and even fundamentally change the relationship between AI and humanity.
“The way we build computers has remained largely unchanged since the days of John von Neumann. With the plateauing of traditional computational advancements, neuromorphic computing offers a promising path forward,” Daruwalla said. “This is about more than just faster or more efficient AI—it’s about fundamentally redefining how we build and use computers.”