Thursday, May 2, 2024

Artificial neurons imitate complex brain function in AI computing


Science & Technology, UK (Commonwealth Union) – Artificial intelligence (AI) can solve problems in various ways depending on the nature of the problem, and in recent years it has taken on a variety of functions. One of its key functions has been analyzing large amounts of data to recognize patterns that humans might not be able to identify. AI algorithms can also optimize a system or process by minimizing costs, increasing efficiency, or maximizing profits, and can analyze data to make predictions about future events or outcomes. For example, AI can predict which customers are most likely to churn, which stocks are likely to increase in value, or which medical treatments are most effective for a particular patient.

Scientists have produced atomically thin artificial neurons that can process both light and electric signals in computing. The material enables separate feedforward and feedback routes to exist simultaneously within a neural network, enhancing its ability to solve complex problems.

For many years researchers have been evaluating ways to recreate the versatile computational capabilities of biological neurons in order to build faster, more energy-efficient machine learning systems. A highly promising approach has been the use of memristors: electronic components that store a value by modifying their conductance, and then use that value for in-memory processing.
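To make the in-memory idea concrete, here is a minimal illustrative sketch (not the device model from the study): in a memristor crossbar, each cell's stored conductance G multiplies the applied voltage V via Ohm's law (I = G·V), and the currents sum along each output line via Kirchhoff's current law, so a matrix-vector product happens "in memory" in a single read step.

```python
# Illustrative sketch of in-memory computing with a memristor crossbar.
# Stored conductances act as weights: each cell contributes I = G * V,
# and currents sum along each column (Kirchhoff's current law).

def crossbar_multiply(conductances, voltages):
    """Return column currents for a crossbar of stored conductances.

    conductances: rows = input lines, columns = output lines (stored weights)
    voltages: one applied voltage per input row
    """
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for row, v in zip(conductances, voltages):
        for j, g in enumerate(row):
            currents[j] += g * v  # Ohm's law per cell, summed per column
    return currents

# Example: a 2x2 stored weight matrix applied to an input vector in one step
G = [[0.5, 1.0],
     [2.0, 0.1]]
V = [1.0, 2.0]
print(crossbar_multiply(G, V))  # [4.5, 1.2]
```

The key point is that the multiplication happens where the values are stored, avoiding the memory-to-processor data shuttling of conventional digital hardware.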

However, one of the main obstacles to replicating the complex processes of biological neurons and brains with memristors has been integrating the feedforward and feedback neuronal signals. These mechanisms underpin our cognitive ability to learn complex tasks through rewards and errors.

A study team from the University of Oxford, IBM Research Europe, and the University of Texas has announced a significant feat: the development of atomically thin artificial neurons formed by stacking 2D materials. The findings appeared in Nature Nanotechnology.

In the study, the scientists expanded the functionality of electronic memristors by making them responsive to both optical and electrical signals. This enabled separate feedforward and feedback paths to exist simultaneously within the network. The advance allowed the team to produce winner-take-all neural networks: computational learning programs capable of solving complex machine learning problems, such as unsupervised learning for clustering and combinatorial optimization.
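As a rough software analogy (not the hardware implementation reported in the paper), winner-take-all competition means a group of neurons accumulate input while inhibiting one another, so that only the most strongly driven neuron ends up active:

```python
# Hedged sketch of winner-take-all (WTA) dynamics: lateral inhibition
# suppresses all but the most strongly driven neuron, a primitive that
# underlies clustering-style unsupervised learning.

def winner_take_all(activations):
    """Return a one-hot list marking the neuron with the largest activation."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [1 if i == winner else 0 for i in range(len(activations))]

# Three competing neurons; the middle one receives the strongest drive.
print(winner_take_all([0.2, 0.9, 0.4]))  # [0, 1, 0]
```

In clustering, for instance, each neuron can come to represent one cluster: the winner for a given input "claims" that input, which is why WTA circuits are useful for unsupervised learning.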

The researchers further indicated that 2D materials consist of just a few layers of atoms, and this fine scale gives them exotic properties that can be fine-tuned depending on how the materials are layered. The team used a stack of three 2D materials (graphene, molybdenum disulfide and tungsten disulfide) to form a device whose conductance changes in response to the power and duration of the light or electricity applied to it.

In contrast to digital storage devices, these devices are analog and function in a manner similar to the synapses and neurons of the biological brain. This analog property allows computations in which a sequence of electrical or optical signals sent to the device produces gradual changes in the stored electronic charge. According to the researchers, this method forms the basis of threshold modes for neuronal computation, analogous to the way the brain processes a combination of excitatory and inhibitory signals.
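The threshold mode described above can be sketched in software as a simple integrate-and-threshold neuron (an assumed toy model, not the device's measured physics): excitatory pulses gradually raise the stored charge, inhibitory pulses lower it, and the neuron fires once the charge crosses a threshold.

```python
# Assumed toy model of threshold-mode neuronal computation: signed pulses
# make gradual changes to a stored "charge", and the neuron fires when
# the accumulated charge crosses a threshold.

def threshold_neuron(pulses, threshold=1.0):
    """Accumulate signed pulses; return (fired, final_charge)."""
    charge = 0.0
    for p in pulses:
        charge += p              # gradual change in stored charge
        if charge >= threshold:  # threshold crossing: the neuron fires
            return True, charge
    return False, charge

# Excitatory pulses (+) push the charge up; an inhibitory pulse (-) pulls
# it back down; the final excitatory pulse tips it over the threshold.
fired, charge = threshold_neuron([0.4, 0.5, -0.2, 0.6])
print(fired)  # True
```

The analog accumulation is the point: instead of a single bit flipping, many small signals are summed continuously, as in biological membrane potentials.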

Lead author Dr Ghazi Sarwat Syed, who is a Research Staff Member at IBM Research Europe Switzerland, says “This is a highly exciting development. Our study has introduced a novel concept that surpasses the fixed feedforward operation typically utilised in current artificial neural networks. Besides the potential applications in AI hardware, these current proof-of-principle results demonstrate an important scientific advancement in the wider fields of neuromorphic engineering and algorithms, enabling us to better emulate and comprehend the brain.”
