Have you ever been totally dominated by the computer player in a video game? A new artificial intelligence system takes on all comers at a handful of old Atari titles, and it does so after learning the rules bit by bit, like a human. Its creators claim this is just the beginning of what it can do. In a few years, it may be driving you to work.
What is Artificial Intelligence (AI)?
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think.
Goals of AI:
- To Create Expert Systems − Systems that exhibit intelligent behavior: they learn, demonstrate, explain, and advise their users.
- To Implement Human Intelligence in Machines − Creating systems that understand, think, learn, and behave like humans.
Having learned to navigate the way humans do, Google's latest artificial intelligence program, from DeepMind, has beaten gamers at labyrinth games, The Guardian reported. The setting is a virtual environment in which DeepMind's agent and human gamers are tasked with finding their way through a chain of rooms and roads laid out at random. Equipped with a system of artificial grid cells, the AI is quicker and takes advantage of openings that occasionally appear in the game.
Senior research fellow Dharshan Kumaran said: "It behaves like an animal, taking direct routes wherever possible and shortcuts whenever they appear. It even surpasses a professional player." Previously, the scientists had discovered artificial grid cells emerging inside the AI (grid cells, the brain cells that are the basis for navigation, were identified in humans and mammals in 2005).
As the AI trained and moved across the terrain, the scientists found that it developed electrical activity quite similar to that occurring in the specialized brain cells that support navigational skills. After discovering these artificial grid cells, the DeepMind team created an enhanced version of the program, and as a result it beat professional gamers at the game. This marks an important milestone in the field of artificial intelligence.
"What we're trying to do is use the human brain as an inspiration," Google DeepMind researcher Demis Hassabis told reporters in a telephone conference call about the research, published in Thursday's issue of the journal Nature. "This is the first rung of the ladder to showing that a general learning system can work end to end, from pixels to actions, even on tasks that humans find difficult."
And as anyone who played games in the '70s and '80s will remember, Atari 2600 games were definitely difficult. The AI outscored humans on 23 of the 49 games tested, including Road Runner, Space Invaders and Breakout, and came close on many more.
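The reward-driven learning loop behind such a game-playing agent can be illustrated with a toy sketch. The example below is a minimal tabular Q-learning agent on a hypothetical five-state track, not DeepMind's actual deep Q-network (which learned from raw screen pixels with a convolutional network); it only shows the trial-and-error update rule that such systems build on.

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3):
    """Tabular Q-learning on a toy 1-D track: the agent starts at state 0
    and earns a reward of +1 for reaching the rightmost state.
    Actions: 0 = move left, 1 = move right."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-value per (state, action)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value).
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# After training, "right" should be preferred in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(4)))
```

The same principle scales up: replace the lookup table with a deep network that maps screen pixels to Q-values, and the agent can learn games it was never explicitly programmed to play.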
The state of the AI's learning can be visually inspected, which shows how it has clustered and categorized different types of data.
Although AI has demonstrated superhuman ability at object recognition, the game of Go, and poker, the ability to navigate in space has remained a separate challenge. This result reveals the potential of intelligent programs that simulate human brain activity.
In addition, the discovery paves the way for computer engineers to build models that help scientists understand the human brain. "Without human or animal experiments, we can use AI and artificial neural networks to understand how the brain operates and performs its various functions," says Caswell Barry, a co-author of the study and a neuroscientist at University College London.
In a report in the journal Nature (May 9), the scientists describe how they built a deep neural network, a computer program with layers of artificial neurons that process information. They then taught the program to navigate in a basic virtual space, feeding it velocity-coding signals like those produced in the mouse brain.
Once trained, the AI became responsive and could predict where it was heading in the virtual environment. About a quarter of the artificial neurons in one layer of the deep network behaved like biological grid cells. In other words, the AI had arrived at the same strategy for exploring the world that the mammalian brain uses. "We were amazed at how well the program performed," Barry said.
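The task the network was trained on is known as path integration: accumulating velocity signals over time to keep track of where you are, the same bookkeeping a rat's brain performs as it runs. The sketch below shows that underlying computation directly with a simple dead-reckoning loop; it is an illustration of the task itself, not of DeepMind's neural network.

```python
import numpy as np

def integrate_path(velocities, dt=0.1, start=(0.0, 0.0)):
    """Dead reckoning ("path integration"): accumulate velocity samples
    over fixed time steps to estimate the current 2-D position."""
    pos = np.array(start, dtype=float)
    trajectory = [pos.copy()]
    for v in velocities:
        pos += np.asarray(v, dtype=float) * dt  # position += velocity * time step
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Walk east at 1 m/s for 10 steps, then north at 1 m/s for 10 steps.
vels = [(1.0, 0.0)] * 10 + [(0.0, 1.0)] * 10
path = integrate_path(vels)
print(path[-1])  # final position estimate: [1. 1.]
```

The network was given only the velocity stream as input and had to learn to produce the position estimate itself; grid-cell-like activity emerged as its internal solution to exactly this accumulation problem.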
Grid cells are the basis for navigation in humans and other mammals. Each cell fires in a hexagonal pattern, and the patterns, which vary in scale from cell to cell, overlap to form an invisible grid that is thought to help mammals perceive their location and calculate the shortest path to a destination.
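The hexagonal firing pattern of a single grid cell is commonly modeled in neuroscience as the sum of three plane waves whose directions are 60 degrees apart. The sketch below implements that textbook idealization (it is a standard model of biological grid cells, not the learned units in DeepMind's network); `scale` and `phase` are illustrative parameters setting the grid's spacing and offset.

```python
import numpy as np

def grid_cell_rate(x, y, scale=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): summing three
    cosine plane waves oriented 60 degrees apart yields a hexagonal
    lattice of firing peaks with the given spatial scale."""
    rate = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
        # Wave vector for one of the three orientations.
        k = (4 * np.pi / (np.sqrt(3) * scale)) * np.array([np.cos(theta), np.sin(theta)])
        rate += np.cos(k[0] * (x - phase[0]) + k[1] * (y - phase[1]))
    return rate

# The rate peaks at the cell's phase offset (here the origin), where all
# three cosines equal 1, and falls off away from the lattice points.
print(grid_cell_rate(0.0, 0.0))  # -> 3.0, the maximum possible rate
```

Cells with different `scale` values produce overlapping grids of different sizes, which is what lets a population of such cells pinpoint a unique location.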