New Algorithm Helps Neural Networks Learn Continuously
Neural networks are pretty good at learning specific tasks, like recognizing handwritten digits. But when they are trained on new tasks, they often suffer from "catastrophic forgetting": they pick up the new skill while losing the ability to perform the original one. For advanced systems, like those in self-driving cars, learning something new can mean starting the training over from scratch.
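To make the phenomenon concrete, here is a minimal sketch of catastrophic forgetting, not drawn from the paper: the toy tasks, model size, and hyperparameters below are invented for illustration. A small PyTorch network is fit to task A, then trained only on task B, and its accuracy on task A collapses.

```python
import torch

torch.manual_seed(0)

# A tiny classifier and two toy tasks with conflicting decision rules.
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

x_a = torch.randn(256, 2); y_a = (x_a[:, 0] > 0).long()  # task A: sign of feature 0
x_b = torch.randn(256, 2); y_b = (x_b[:, 1] > 0).long()  # task B: sign of feature 1

def accuracy(x, y):
    return (net(x).argmax(dim=1) == y).float().mean().item()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

train(x_a, y_a)
print("task A accuracy after training on A:", accuracy(x_a, y_a))  # near 1.0
train(x_b, y_b)
print("task A accuracy after training on B:", accuracy(x_a, y_a))  # typically collapses
```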
In contrast, biological brains, like ours, are much more adaptable. We can easily learn to play a new game without having to relearn basic skills like walking and talking.
Taking a cue from how our brains work, researchers at Caltech have come up with a new algorithm that allows neural networks to continuously update their knowledge without starting from scratch. This algorithm, called the functionally invariant path (FIP) algorithm, could be useful in various fields, from enhancing online shopping recommendations to improving self-driving technology.
The algorithm was created in the lab of Matt Thomson, an assistant professor of computational biology and a Heritage Medical Research Institute (HMRI) Investigator. The findings were published on October 3 in the journal Nature Machine Intelligence.
Thomson and former graduate student Guru Raghavan (PhD ’23) drew inspiration from Caltech’s neuroscience research, specifically from Carlos Lois’s lab. Lois studies how birds can rewire their brains to relearn singing after an injury. Similarly, humans can adapt after brain damage, such as after a stroke, often finding new ways to regain everyday skills.
"This project took years because we wanted to understand how brains learn flexibly," says Thomson. "The challenge was figuring out how to give that same flexibility to artificial neural networks."
To develop the FIP algorithm, the team used a mathematical framework called differential geometry. This lets them adjust a neural network's weights without erasing the information it has already learned.
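The paper's actual construction involves more machinery, but the core intuition can be sketched as follows: step along the new-task gradient only in directions that leave the network's outputs on old-task data approximately unchanged. Everything in this sketch, including the tiny model, the random stand-in data, and the pseudo-inverse projection, is an illustrative assumption rather than the authors' implementation.

```python
import torch

torch.manual_seed(0)

# A tiny network plus stand-in batches for an "old" and a "new" task.
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
x_old, y_old = torch.randn(32, 4), torch.randint(0, 2, (32,))
x_new, y_new = torch.randn(32, 4), torch.randint(0, 2, (32,))
loss_fn = torch.nn.CrossEntropyLoss()
params = list(net.parameters())

def flat_grad(scalar):
    """Gradient of a scalar w.r.t. all weights, flattened into one vector."""
    grads = torch.autograd.grad(scalar, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

for step in range(50):
    # Gradient that improves the new task.
    g_new = flat_grad(loss_fn(net(x_new), y_new))

    # Jacobian of old-task outputs w.r.t. the weights: its rows span the
    # weight-space directions that would change the old-task function.
    out_old = net(x_old).reshape(-1)
    J = torch.stack([flat_grad(out_old[i]) for i in range(out_old.numel())])

    # Project g_new onto the approximate null space of J (the
    # function-preserving directions): g - J^T (J J^T + eps I)^{-1} J g.
    JJt = J @ J.T + 1e-4 * torch.eye(J.shape[0])
    g_fip = g_new - J.T @ torch.linalg.solve(JJt, J @ g_new)

    # Take a small step along the function-preserving direction.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p -= 0.05 * g_fip[offset:offset + n].reshape(p.shape)
            offset += n
```

In the paper's geometric language, these projected steps trace a path through weight space along which the network's behavior on the old task stays approximately invariant, hence the name "functionally invariant path."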
In 2022, with the help of Julie Schoenfeld, Caltech Entrepreneur In Residence, Raghavan and Thomson co-founded a company called Yurts to further develop the FIP algorithm and apply machine learning solutions at scale to a range of problems. Raghavan teamed up with industry experts Ben Van Roo and Jason Schnitzer to make this happen.
The study is titled "Engineering flexible machine learning systems by traversing functionally invariant paths," with Raghavan as the lead author. Alongside Raghavan and Thomson, the Caltech team includes graduate students Surya Narayanan Hari and Shichen Rex Liu, and collaborator Dhruvil Satani. Bahey Tharwat from Alexandria University in Egypt also contributed to the research. The work was funded by the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, the HMRI, the Packard Foundation, and the Rothenberg Innovation Initiative. Thomson is also connected to the Tianqiao and Chrissy Chen Institute.