
AI news: Oxford University scientists synthesise human-like thoughts in machines


Oxford University researchers are attempting to recreate human thinking patterns in artificial intelligence machines. Their method, which uses a language-guided imagination (LGI) network, could lead to artificial intelligence (AI) capable of forming mental ideas guided by language. Cognition requires our brains both to comprehend a particular language expression and to use it to organise the flow of ideas in the mind.

For example, if a person notices it is raining, they will internally say, “I need an umbrella” before deciding to bring an umbrella.

As this thought travels through the mind, however, they will automatically understand what the visual input means, and how holding an umbrella will prevent them from getting soaked.

AI machines can now recognise images, process language and sense raindrops.

However, they have not yet acquired this imaginative thinking ability, so far unique to humans.

Humans can achieve such “continual thinking” because we are able to generate mental images guided by language and to extract language representations from real or imagined situations.

Researchers are currently developing Natural Language Processing (NLP) tools that can answer queries in a human-like way.

These, however, are unable to understand language in the same way and with the same depth as humans.

This is because humans have an innate cumulative learning capacity that accompanies them as their brain develops.

This “human thinking system” is associated with particular neural substrates in the brain, the most important of which is the prefrontal cortex.

This part of the brain is the region responsible for working memory: the maintenance and manipulation of information in the mind while a person is performing a task.

In an attempt to reproduce human-like thinking patterns in machines, researchers Feng Qi and Wenchuan Wu created an artificial neural network inspired by the prefrontal cortex.

The researchers wrote: “We proposed a language guided imagination (LGI) network to incrementally learn the meaning and usage of numerous words and syntaxes, aiming to form a human-like machine thinking process.”

The LGI network developed by Qi and Wu has three key components: a vision system, a language system and a synthetic prefrontal cortex.

The vision system is composed of an encoder, which disentangles input images (whether received by the network or imagined) into abstract population representations, and an imagination decoder, which reconstructs imagined scenarios from higher-level representations.

The second sub-system, the language system, mimics a function of the human brain by extracting quantity information and converting it into text symbols.

The final component of their network mimics the human prefrontal cortex, combining language and vision representations to predict text symbols and manipulated images.
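The paper's actual model is a trained neural network, but the interplay of the three components can be illustrated schematically. The following is a minimal sketch only, not the authors' implementation: every function name, dimension and random "weight" matrix here is invented for the example, and the loop simply shows the see–name–imagine cycle the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, LATENT, VOCAB = 64, 16, 10  # illustrative sizes, not from the paper

# Vision system: encoder to an abstract representation, decoder back to an image
W_enc = rng.normal(size=(LATENT, IMG)) * 0.1
W_dec = rng.normal(size=(IMG, LATENT)) * 0.1

def encode(image):
    return np.tanh(W_enc @ image)          # abstract population representation

def imagine(latent):
    return np.tanh(W_dec @ latent)         # reconstructed / imagined scenario

# Language system: map a representation to a discrete text symbol
W_lang = rng.normal(size=(VOCAB, LATENT)) * 0.1

def to_symbol(latent):
    return int(np.argmax(W_lang @ latent))  # index into a toy vocabulary

# Synthetic "prefrontal cortex": combine language and vision representations
W_pfc = rng.normal(size=(LATENT, LATENT + VOCAB)) * 0.1

def pfc_step(latent, symbol):
    one_hot = np.zeros(VOCAB)
    one_hot[symbol] = 1.0
    return np.tanh(W_pfc @ np.concatenate([latent, one_hot]))

# One pass of the "thinking loop": see, name, imagine, feed back, repeat
image = rng.normal(size=IMG)
latent = encode(image)
for _ in range(3):
    symbol = to_symbol(latent)          # language extracted from the scene
    latent = pfc_step(latent, symbol)   # PFC updates the mental state
    image = imagine(latent)             # imagined image fed back as input
    latent = encode(image)
```

The key design point the article highlights is the feedback: the decoder's imagined image is re-encoded and fed back through the loop, so language and vision representations continually interact.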

Qi and Wu evaluated their LGI network in a series of experiments and found that it successfully acquired eight different syntaxes or tasks in a cumulative way.

Their technique also formed the first “machine thinking loop”, showing an interaction between imagined pictures and language texts.

In the future, the LGI network developed by the researchers could aid the development of more advanced AI capable of human-like thinking strategies, such as visualisation and even imagination.

The researchers added: “LGI has incrementally learned eight different tasks, with which a machine thinking loop has been formed and validated by the proper interaction between language and vision system.

“Our paper provides a new architecture to let the machine learn, understand and use language in a human-like way that could ultimately enable a machine to construct fictitious mental scenarios and possess intelligence.”


