Scientists have developed a new type of neural network chip that can dramatically improve the efficiency of teaching machines to think like humans.
The network, called a reservoir computing system, could predict words before they are said during a conversation, and help forecast future outcomes based on present data.
Reservoir computing systems, which improve on a typical neural network's capacity and reduce the required training time, have been created in the past with larger optical components.
Researchers from the University of Michigan in the US created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.
Memristors are a special type of resistive device that can both perform logic and store data.
This contrasts with typical computer systems, where processors perform logic separate from memory modules.
For the study, published in the journal Nature Communications, researchers used a special memristor that retains a memory of only recent events.
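One way to picture such short-term memory is a toy model in which the device's internal state rises with each input pulse and decays back toward rest in between, so only recent inputs leave a trace. The decay constant and update rule below are illustrative assumptions, not the device physics reported in the paper.

```python
# Toy model of a memristor with short-term memory (illustrative
# assumptions, not the measured device behaviour): the state g rises
# with each input pulse and decays in between, so only recent inputs
# leave a trace.
import numpy as np

tau = 5.0          # assumed decay time constant (arbitrary units)
g = 0.0            # internal device state
pulses = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]

for t, p in enumerate(pulses):
    g = g * np.exp(-1.0 / tau) + 0.3 * p   # decay, then respond to pulse
    print(f"t={t} input={p} state={g:.3f}")
```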
Inspired by brains, neural networks are composed of neurons or nodes, and synapses, the connections between nodes.
To train a neural network for a task, the network takes in a large set of questions and the answers to those questions.
In this process, known as supervised learning, the connections between nodes are weighted more heavily or lightly to minimise the error in arriving at the correct answer.
Once trained, a neural network can then be tested on new inputs without being given the answers. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
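As a rough illustration of supervised learning (a generic sketch, not the researchers' code), a tiny one-layer network can nudge its connection weights up or down until its answers to a toy logic problem match the known answers. The task and learning rate here are arbitrary choices.

```python
# Minimal supervised learning sketch: weights are adjusted more heavily
# or lightly to shrink the gap between the network's answers and the
# known answers.
import numpy as np

rng = np.random.default_rng(0)

# Toy "questions and answers": inputs X and target labels y (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

w = rng.normal(size=2)   # connection weights
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    pred = sigmoid(X @ w + b)        # network's current answers
    err = pred - y                   # how far off each answer is
    # Weight connections more heavily or lightly to reduce the error.
    w -= 0.5 * (X.T @ err) / len(y)
    b -= 0.5 * err.mean()

print(np.round(sigmoid(X @ w + b)))  # trained answers: [0. 1. 1. 1.]
```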
"A lot of times, it takes days or months to train a network. It is very expensive," said Wei Lu, a professor at the University of Michigan.
Image recognition is also a relatively simple problem, as it does not require any information apart from a static image.
More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred or what has just been said.
"When transcribing speech to text or translating languages, a word's meaning and even pronunciation will differ depending on the previous syllables," Lu said.
This requires a recurrent neural network, which incorporates loops that give the network a memory effect. However, training these recurrent networks is especially expensive, Lu said.
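The memory effect can be sketched in a few lines (again a generic illustration, not code from the study): a hidden state is fed back in at every step, so each output depends on the inputs that came before. Both weight matrices here would normally have to be trained, which is where the cost lies.

```python
# Bare-bones recurrent step: the hidden state h is fed back in at every
# time step, so the network's response depends on what it has just
# seen, not only on the current input.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 5
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input connections
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent loops

h = np.zeros(n_hidden)                   # memory of the sequence so far
for x in rng.normal(size=(10, n_in)):    # a sequence of 10 inputs
    h = np.tanh(W_in @ x + W_rec @ h)    # new state mixes input with memory

# In a full recurrent network, W_in and W_rec would both be trained,
# which is what makes these networks expensive to fit.
```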
Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network with the capability to remember.
This is because the most critical component of the system - the reservoir - does not require training.
When a set of data is fed into the reservoir, the reservoir identifies important time-related features of the data and hands them off in a simpler format to a second network.
This second network then needs only the kind of training used for simpler neural networks: adjusting the weights on the features the reservoir passes on until it reaches an acceptable level of error.
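In software terms, this division of labour resembles an echo state network, a standard reservoir computing scheme. The sketch below (a software analogy, not the memristor hardware) fixes the reservoir's random weights at design time and trains only a linear readout with a single least-squares solve.

```python
# Echo-state-network-style sketch of reservoir computing: the
# reservoir's random weights are fixed when the system is designed,
# and only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 1, 100

W_in = rng.normal(scale=0.5, size=(n_res, n_in))     # fixed, never trained
W_res = rng.normal(scale=1.0, size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep dynamics stable

def run_reservoir(u):
    """Collect the reservoir's time-dependent features for input series u."""
    h = np.zeros(n_res)
    states = []
    for x in u:
        h = np.tanh(W_in @ np.atleast_1d(x) + W_res @ h)
        states.append(h.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u, target = np.sin(t[:-1]), np.sin(t[1:])

H = run_reservoir(u)
# Train ONLY the readout weights, via ridge regression (one linear solve).
W_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_res), H.T @ target)

pred = H @ W_out
print("mean squared error:", np.mean((pred - target) ** 2))
```

Because the recurrent weights inside the reservoir are never fitted, the only training cost is the final linear solve for the readout.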
"The beauty of reservoir computing is that while we design it, we don't have to train it," Lu said.
The team demonstrated the reservoir computing concept using a test of handwriting recognition, a common benchmark for neural networks.
Using only 88 memristors as nodes, where a conventional network would require thousands for the task, the reservoir achieved 91 percent accuracy.