Science & Technology (Commonwealth Union) – A recent study suggests that next-generation computing algorithms could pave the way for better and more efficient machine learning products. By using machine learning tools to build a digital twin, essentially a virtual replica, of a chaotic electronic circuit, the researchers were able to predict its behavior and effectively control it.
Traditional devices such as thermostats and cruise control rely on linear controllers, employing straightforward rules to guide systems towards a desired state. However, these algorithms struggle when faced with complex behaviors like chaos.
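To make the distinction concrete, here is a minimal Python sketch of the kind of linear feedback rule a thermostat embodies: a proportional controller that pushes a toy room-temperature model toward a setpoint. The plant model, gain, and constants are invented for illustration and are not from the study.

```python
# A proportional (linear) controller: the actuation is simply a fixed
# multiple of the current error. All names and constants here are
# illustrative, not taken from the paper.

def linear_controller(state: float, setpoint: float, gain: float = 0.5) -> float:
    """Return a corrective input proportional to the error."""
    return gain * (setpoint - state)

# Toy plant: the room drifts toward the outside temperature each step.
temperature, outside, setpoint = 15.0, 5.0, 20.0
for step in range(10):
    heat = linear_controller(temperature, setpoint)
    temperature += 0.1 * (outside - temperature) + heat
    print(f"step {step}: T = {temperature:.2f}")
# Note: a proportional-only rule settles below the setpoint (a textbook
# steady-state offset); for smooth systems this is easy to correct, but
# chaotic systems defeat fixed rules like this entirely.
```

Rules like this work well when the system responds smoothly and predictably; in a chaotic circuit, where tiny perturbations grow exponentially, a fixed gain quickly loses its grip.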
In contrast, sophisticated technologies such as self-driving cars and aircraft often employ machine learning-based controllers, which use complex neural networks to learn and apply optimal control strategies. These algorithms, however, come with challenges of their own, particularly their computational cost and difficulty of implementation.
The availability of an effective digital twin could revolutionize the development of future autonomous technologies, says Robert Kent, lead author of the study and a graduate student in physics at The Ohio State University.
“The problem with most machine learning-based controllers is that they use a lot of energy or power and they take a long time to evaluate,” explained Kent. “Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”
He emphasized that these matters are crucial, especially in scenarios where milliseconds can determine life or death, such as the split-second decisions required in self-driving vehicles to avoid accidents.
Recently published in Nature Communications, the study describes the breakthrough: a digital twin compact enough to fit on an inexpensive computer chip small enough to balance on a fingertip, and capable of functioning without an internet connection. The twin was engineered to improve the efficiency and performance of controllers while consuming less power. Its success stems from its training method, reservoir computing, a form of machine learning well suited to learning how systems evolve over time.
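The article itself includes no code, but a bare-bones Python sketch of reservoir computing, in its common echo state network form, shows why training is so cheap: the recurrent "reservoir" is fixed and random, and only a final linear readout is fitted. The reservoir size, scaling factors, and the logistic map used here as a stand-in chaotic signal are all illustrative assumptions, not the circuit or parameters from the paper.

```python
# Minimal echo state network: a fixed random reservoir plus a trained
# linear readout, fitted to predict the next value of a chaotic series.
# Sizes, scalings, and the logistic-map signal are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(n, x0=0.4, r=3.9):
    """Chaotic stand-in signal: the logistic map at r = 3.9."""
    xs, x = np.empty(n), x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1 - x)
    return xs

series = logistic_series(2000)

# Fixed random reservoir: input weights plus a sparse recurrent matrix,
# rescaled so its spectral radius stays below 1 (keeps the dynamics stable).
N = 200
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the signal and record its internal states.
states, x = np.zeros((len(series), N)), np.zeros(N)
for t, u in enumerate(series):
    x = np.tanh(W @ x + W_in * u)
    states[t] = x

# Training touches only the linear readout: one ridge-regression solve
# mapping each reservoir state to the next value of the series.
washout = 100  # discard early states before the reservoir settles
X, y = states[washout:-1], series[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

print("one-step prediction RMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)))
```

Because only `W_out` is learned, training reduces to a single linear solve rather than iterative backpropagation, which is the kind of property that lets such a model live on a small, low-power chip.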
Kent highlighted the advantage of this machine learning architecture, likening it to the dynamic connections within the human brain. The innovation marks a significant stride in computing, particularly for applications such as self-driving vehicles and adaptive heart monitors. While similar chips have found utility in devices like smart fridges, the new model is uniquely equipped to handle dynamic systems that demand rapid adaptation.
“Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” he elaborated.
To test the approach, the researchers tasked their model with complex control problems and compared its results with those of conventional control methods. Their approach proved more accurate than its linear counterpart and markedly less computationally complex than a previous machine learning-based controller.
According to Kent, the gain in accuracy was substantial in certain scenarios. Although the algorithm requires more energy to operate than a linear controller, that tradeoff means the team’s model, once powered up, lasts longer and is significantly more efficient than machine learning-based controllers currently on the market.
“People will find good use out of it just based on how efficient it is,” Kent explained. “You can implement it on pretty much any platform and it’s very simple to understand.” The algorithm was recently made available to other scientists.
Kent explained that besides motivating potential improvements in engineering, there is another important reason to create algorithms that use less power: as society relies more and more on computers and artificial intelligence, demand for data centers is growing rapidly. These facilities consume enormous amounts of power, and experts worry about how future industries will keep pace with that demand.
Additionally, the construction of data centers and large-scale computing projects generates significant carbon emissions, which contribute to environmental problems. Scientists are actively searching for ways to reduce these emissions.
Looking ahead, Kent suggested that future research will likely focus on training the model for other tasks, such as quantum information processing. He believes these advances will have a significant impact on the scientific community.