BW Businessworld

Neuromorphic Computing Will Open New Frontiers For Artificial Intelligence

Current Machine Learning algorithms trail humans in many recognition problems, e.g. image, voice and character recognition, especially when noise is present.


One definition of Artificial Intelligence is that it is a type of computer technology concerned with making machines work in an intelligent way, similar to the way the human mind works.

Limitations

While present-day computers can do many things that humans can do, they fall short in at least two respects: Machine Reasoning and Transfer Learning. Léon Bottou, an expert in the field, defined Machine Reasoning as “algebraically manipulating previously acquired knowledge in order to answer a new question”. Transfer Learning refers to the ability to carry learned experience from one context to another.

There is a third limitation: physical size and energy consumption. Supercomputers represent the highest computing speeds, and current machines operate at PFLOPS (10^15 floating-point operations per second). But these are bulky installations housed in dedicated buildings, and they need power in megawatts, while the human brain consumes about 20 W.

AI accelerators

Current Machine Learning algorithms also trail humans in many recognition problems, e.g. image, voice and character recognition, especially when noise is present. These algorithms are almost invariably based on neural networks that are loosely modelled on the neurons of the human brain. Deep Neural Networks (DNNs) contain multiple layers of processing and offer better results. The accuracy of neural networks increases as they train on more data, but that needs more computing power, which has so far been achieved by packing smaller, and hence more, transistors into microprocessors. The latest chips are made using a 7 nm process and have tens of billions of transistors. Heat dissipation and quantum effects limit further improvements here.
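The layered processing mentioned above can be sketched in a few lines of NumPy. This is a minimal, illustrative two-layer network with made-up sizes and random weights, not any production architecture: each layer computes a weighted sum of its inputs followed by a non-linearity, loosely analogous to neurons firing on their combined stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity: pass positive activations, suppress negative ones
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)   # first processing layer
    return hidden @ w2 + b2      # output layer: one score per class

x = rng.normal(size=(4, 8))              # 4 samples, 8 features each
w1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
w2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

out = forward(x, w1, b1, w2, b2)
print(out.shape)  # (4, 3): class scores for each of the 4 samples
```

A "deep" network simply stacks more such layers; training then adjusts the weights from data, which is where the computing power goes.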

Customized chips called AI accelerators offer a way out. GPUs, originally designed for processing images, are being customized for Machine Learning; e.g. Ampere chips from Nvidia reach speeds in PFLOPS. Field Programmable Gate Arrays (FPGAs) offer the flexibility of changing the hardware as DNN frameworks evolve, allow tuning for optimal batch size, and are more power efficient. Another approach uses optimized memory and lower-precision arithmetic (8 or 16 bit) to achieve higher throughput with reduced energy requirements. The Tensor Processing Unit (TPU) v3 from Google has a speed of 90 PFLOPS and consumes about 250 W of power.
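The lower-precision arithmetic mentioned above can be illustrated with a toy symmetric int8 quantization scheme. This is a simplified sketch of the general idea, not any particular accelerator's method: real-valued weights are mapped to 8-bit integers plus one scale factor, so the expensive multiplications can run on cheap integer hardware.

```python
import numpy as np

def quantize_int8(x):
    # Map values to the int8 range [-127, 127] with a single scale factor
    scale = np.max(np.abs(x)) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.2, 0.03, 0.9])
q, scale = quantize_int8(w)

# Dequantizing shows the round trip loses only a small amount of precision
w_restored = q.astype(np.float32) * scale
print(np.max(np.abs(w - w_restored)))  # worst-case error is below one scale step
```

The accuracy cost of this rounding is usually small for inference, which is why 8-bit paths dominate accelerator designs.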

Neuromorphic computing

While AI accelerators increase speed, they do not fundamentally change the way Machine Learning algorithms train. Neuromorphic computing tries to mimic the way the human brain works. The human brain has about 100 billion neurons, and each neuron can have up to 10,000 connections, or synapses, with its neighbours, leading to up to 100 trillion connections. Only a subset of these neurons is active at any time, sending signal pulses to some of their neighbours. Neuromodulators such as dopamine play an important role, seeming to act as a reward mechanism for each signal. One important area of research is the plasticity of the human brain, i.e. its ability to change its own form to suit the function it has to perform.

One way to implement this is through Spiking Neural Networks (SNNs). Here each “neuron” sends independent signals to other neurons. In a network, the connections between neurons carry weights. As signals, or spikes, travel from one neuron to its destinations, the pattern of their timings and weights can convey information, e.g. a separate pattern for each animal in an image recognition problem.
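The usual building block of such networks is the leaky integrate-and-fire (LIF) neuron, which can be sketched in a few lines. The parameter values below are illustrative: the membrane potential integrates incoming current, leaks over time, and the neuron emits a spike (and resets) when the potential crosses a threshold, so information lives in the timing of the spikes.

```python
def lif_neuron(currents, leak=0.9, threshold=1.0):
    # Simulate one leaky integrate-and-fire neuron over discrete time steps
    potential = 0.0
    spikes = []
    for current in currents:
        potential = leak * potential + current  # integrate input, with leakage
        if potential >= threshold:              # threshold crossed: fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady input charges the neuron until it spikes periodically
print(lif_neuron([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Note that the neuron does nothing between spikes; this event-driven behaviour is the source of the power savings discussed below.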

The dynamic mapping of synapses in SNNs is similar to the way the brain learns. Another branch of Machine Learning, Reinforcement Learning, which works on the basis of rewards generated by the environment rather than predefined labelled data, can further tune this. This reward mechanism is similar to the role played by the neuromodulator dopamine. Changes in synaptic weights can be modelled to reflect the plasticity of the brain. Changing weights also imply that a neuron in an SNN spikes with some probability rather than deterministically; signalling by neurons in the human brain is similarly probabilistic.
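The reward-modulated weight change described above can be captured in a toy update rule. This is a deliberately simplified illustration, not a published learning algorithm: a Hebbian term strengthens a synapse when the two neurons it connects fire together, and a global dopamine-like reward signal scales (or reverses) that change.

```python
def update_weight(weight, pre_spike, post_spike, reward, lr=0.1):
    # Hebbian term (pre_spike * post_spike) gated by a global reward signal:
    # coincident firing strengthens the synapse when reward is positive,
    # weakens it when reward is negative, and leaves it alone otherwise.
    return weight + lr * reward * pre_spike * post_spike

w = 0.5
w = update_weight(w, pre_spike=1, post_spike=1, reward=1.0)   # rewarded coincidence: weight rises
w = update_weight(w, pre_spike=1, post_spike=0, reward=1.0)   # no coincidence: weight unchanged
w = update_weight(w, pre_spike=1, post_spike=1, reward=-1.0)  # punished coincidence: weight falls
print(w)
```

Accumulated over many spikes, such weight changes play the role that synaptic plasticity plays in the brain.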

The responses to spikes can also be modulated to represent a continuum of values rather than just ‘0’ or ‘1’, providing an analogue flavour that is closer to the way the brain works. And as neurons work only when spiked, they are not constantly consuming energy, which saves power.

Advantages

A neuromorphic computer would offer many advantages. It could learn from far fewer inputs than current neural networks, learn from unstructured input, deal with noisy input, and consume much less power.

The training of Machine Learning algorithms would shift from powerful cloud-hosted servers even to mobile phones. The plasticity feature would remove the need for Transfer Learning, as the same machine would become multi-purpose; e.g. robots would understand changes in assembly line components and adapt their behaviour accordingly. Neuromorphic chips combined with other cognitive technologies would learn and reason, adding to Machine Reasoning, and would supplement human decision making. The Internet of Things is supposed to deliver trained algorithms remotely, but this may not be needed if self-learning is embedded in the chips themselves. Training Machine Learning algorithms is an important use case driving the development of supercomputers, quantum computers and higher computing speeds in general, and this requirement would reduce.

Current status

Neuromorphic computing is slowly moving to the commercial stage. Intel has created the Loihi chip, which has 130,000 neurons and 130 million synapses and can self-learn. Its Pohoiki Springs system combines 768 chips to provide 100 million neurons. Similarly, IBM demonstrated 16 million neurons and 4 billion synapses using 16 TrueNorth chips. The BrainScaleS physical model at the Human Brain Project (HBP) uses analogue systems to create 4 million neurons and 1 billion synapses on 20 wafers, and HBP’s SpiNNaker system aims to simulate 1 billion neurons. Though these are equivalent to the brain capacity of small mammals rather than humans, there are already practical applications: e.g. Intel demonstrated that its Loihi chips can achieve recognition accuracy using 3,000 times fewer samples than conventional DNNs.

Future

In the next few years, neuromorphic chips will become common and will change the way we work with technology. Computers will think and work as humans do, paving the way for a collaborative way of working. That will only deepen the despair of those who see this as a man-versus-machine battle for supremacy.

Disclaimer: The views expressed in the article above are those of the authors' and do not necessarily represent or reflect the views of this publishing house. Unless otherwise noted, the author is writing in his/her personal capacity. They are not intended and should not be thought to represent official ideas, attitudes, or policies of any agency or institution.


Sandeep K Chhabra

Sandeep K Chhabra is a software professional working as General Manager at Ericsson India Global Services Pvt Ltd (EGIL). He holds a B Tech in Computer Science and Technology from IIT Delhi and has more than 24 years of experience in the IT industry. He is a digital/business transformation expert, startup mentor and evangelist of emerging technologies.
