Any moment in which artificial intelligence (AI) has awed you or strengthened your financial security reflects pivotal contributions made by scientists, mathematicians, and engineers over the years. AI has been instrumental in providing practical solutions, such as text-to-speech systems, that make life a little easier. It would be unfair, however, not to mention the noteworthy contributions of two scientists in this breakthrough domain: John Hopfield, a physicist from Princeton University, and Geoffrey Hinton, a computer scientist from the University of Toronto.
On October 8, 2024, these eminent researchers became Nobel laureates in physics for their pioneering work on artificial neural networks. Although artificial neural networks are inspired by biological neural networks, the laureates' research leaned substantially on statistical physics, justifying an award in physics.
Artificial neural networks grew out of research on biological neurons in living brains. In 1943, Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, proposed a rudimentary model of how a neuron works. In this model, a neuron is linked to its neighboring neurons: it can receive signals from them, combine those signals, and send signals onward to other neurons. Remarkably, the model also specifies that a neuron can weigh signals from different neighbors differently.
For instance, imagine you are uncertain about purchasing a trending smartphone. Typically, you would consult your friends for recommendations and decide based on the majority's opinion. However, if you hold one friend's opinion in high esteem owing to their technical expertise, you might weigh their recommendation more heavily. In the same way, a neuron aggregates weighted signals from its neighbors and uses them to reach a decision.
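The smartphone analogy can be sketched as a simple threshold neuron in the spirit of McCulloch and Pitts. The opinions, weights, and threshold below are invented for illustration, not taken from the original 1943 paper:

```python
# A minimal threshold neuron: fire (return 1) if the weighted sum
# of incoming signals reaches the threshold, otherwise stay silent (0).
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three friends' opinions on buying the phone (1 = recommend, 0 = don't).
opinions = [1, 0, 1]

# The second friend is the tech expert, so their opinion counts double.
weights = [1.0, 2.0, 1.0]

decision = neuron(opinions, weights, threshold=2.0)
print(decision)  # 1: the weighted vote (1 + 0 + 1 = 2) reaches the threshold
```

Note that flipping the expert's weight or raising the threshold changes the decision, which is exactly the point: the same inputs can lead to different outputs depending on how the neuron weighs its neighbors.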
Indeed, the structure of both artificial and biological neural networks enables neurons to aggregate signals from neighbors and pass a signal along to other neurons. A key structural distinction is whether the connections in a network form cycles. On this basis, there are two types of artificial neural network: feedforward and recurrent.
A feedforward neural network, as the name suggests, carries signals in one direction, from input to output, with no loops, and its neurons are arranged in layers. A recurrent neural network, on the other hand, contains circular connections, so the arrangement of its neurons is more intricate.
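The difference between the two network types can be seen in a few lines of code. In this sketch (all weights are made up for illustration), the feedforward function computes its output in a single one-way pass, while the recurrent update feeds the hidden state back into itself, so later outputs depend on earlier inputs:

```python
import math

# Feedforward: signals flow strictly input -> hidden layer -> output.
def feedforward(x, w_hidden, w_out):
    hidden = [math.tanh(sum(xi * w for xi, w in zip(x, ws))) for ws in w_hidden]
    return sum(h * w for h, w in zip(hidden, w_out))

# Recurrent: the previous hidden state h_prev loops back into the update,
# giving the network a form of memory.
def recurrent_step(x, h_prev, w_in, w_rec):
    return math.tanh(w_in * x + w_rec * h_prev)

# One-shot feedforward pass on a two-value input.
y = feedforward([1.0, 0.5], w_hidden=[[0.4, -0.2], [0.1, 0.3]], w_out=[0.7, -0.5])

# Recurrent network processing a sequence: the state h evolves over time.
h = 0.0
for x in [1.0, 0.0, 1.0]:
    h = recurrent_step(x, h, w_in=0.8, w_rec=0.5)
```

The recurrent loop is what makes cycles possible, and it is this looping connectivity that Hopfield's work, discussed next, exploited.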
The concept of artificial neural networks was initially influenced by biology, but fields such as logic, mathematics, and physics soon began to shape its development. A distinguished contribution came from the physicist John Hopfield, who used models from physics to study the recurrent neural networks now known as Hopfield networks. His studies focused on their collective dynamics: just as viral memes and echo chambers arise from simple information exchanges between people in a social network, rich collective behavior arises in these networks from simple interactions between neurons.
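One hallmark of a Hopfield network is associative memory: a stored pattern can be recovered from a corrupted version of itself. Below is a minimal sketch (the pattern and network size are invented for illustration) using the standard Hebbian weight rule and asynchronous threshold updates:

```python
import numpy as np

# Store one pattern of six +/-1 neurons via the Hebbian rule:
# the weight between neurons i and j is s_i * s_j, with no self-connections.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Asynchronously update neurons until the network settles."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            # Each neuron aligns with the weighted sum of its neighbors.
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt the stored pattern by flipping one neuron, then let the
# network's dynamics pull it back to the stored memory.
noisy = np.array([1, 1, 1, -1, 1, -1])
print(recall(noisy))  # [ 1 -1  1 -1  1 -1], the stored pattern
```

The update rule is equivalent to descending an energy function borrowed from the physics of magnetic systems, which is precisely the bridge between statistical physics and neural networks that the Nobel citation highlights.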
The 1980s witnessed a quantum leap in the domain of artificial neural networks with the invention of Boltzmann machines. Named after the 19th-century physicist Ludwig Boltzmann, these networks can generate new patterns, laying the foundation for the modern generative AI revolution. Moreover, Hinton and his co-workers cleverly used Boltzmann machines to train multilayer networks, marking the dawn of the deep learning revolution.
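What makes a Boltzmann machine generative is that it defines a probability distribution over patterns, assigning each state a probability proportional to exp(-E), where the energy E follows Boltzmann's distribution from statistical physics. The toy model below (three units with made-up weights and no biases, enumerated exactly rather than trained) shows how sampling from that distribution "generates" patterns:

```python
import itertools
import math
import random

# Toy Boltzmann machine over 3 binary (+/-1) units; weights are illustrative.
W = {(0, 1): 1.0, (0, 2): -0.5, (1, 2): 0.5}

def energy(s):
    # E(s) = -sum over connected pairs of w_ij * s_i * s_j
    return -sum(w * s[i] * s[j] for (i, j), w in W.items())

# With only 3 units we can enumerate all 8 states and compute the
# Boltzmann distribution exactly: P(s) proportional to exp(-E(s)).
states = list(itertools.product([-1, 1], repeat=3))
unnorm = [math.exp(-energy(s)) for s in states]
Z = sum(unnorm)                      # the partition function
probs = {s: u / Z for s, u in zip(states, unnorm)}

# Generate a new pattern by sampling from the model's distribution;
# low-energy (mutually compatible) states are sampled most often.
random.seed(0)
sample = random.choices(states, weights=unnorm)[0]
```

Real Boltzmann machines have far too many states to enumerate, so they rely on stochastic sampling and learned weights, but the principle is the same: patterns are drawn from a physics-style energy landscape.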
The fact that a Nobel Prize in physics was awarded for AI research highlights the deep connection between physics and AI: ideas from physics stimulated the rise of deep learning, and today deep learning returns the favor by enabling precise, swift simulations of systems ranging from microscopic molecules and materials to Earth's climate. This reciprocal exchange of ideas and practices between physics and AI is a testament to the advances humans can make in creating a sustainable world.