
ARTIFICIAL NEURAL NETWORKS GET EVEN SMARTER

ROBOTS ARE getting closer to achieving human-like dexterity. Computers are getting better at recognizing speech and writing. Many of these advances are attributable to artificial neural networks, or ANNs for short.

Briefly, an ANN mimics the workings of the human brain. However, this doesn’t tell me much because I don’t understand how the brain works, what with axons, synapses and all that jazz. Instead, let’s ignore the physiology and focus on results: Neural networks, human and artificial, process information. Both are able to make order out of the overkill of sensory input. And ANNs, like humans, are able to learn by example.


A simple neuron accepts multiple inputs of data and rules of learning. Based on these, it produces a single output. This and the following image are from Stergiou and Siganos’s excellent tutorial on the subject.

In its learning mode, a neuron is trained to fire or not, depending on specific input patterns. Its firing rules are of the digital yes/no, on/off type. Complex neurons have weighted inputs that nuance the decision-making; their operations are akin to analog computing. An ANN has interconnected arrays of these complex neurons.


A more sophisticated neuron with weighted inputs.
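
To make the weighted-input idea concrete, here is a minimal Python sketch of such a neuron; the inputs, weights and threshold are illustrative values of my own, not anything from the tutorial. Each input is multiplied by its weight, the products are summed, and the neuron fires only if the sum clears the threshold.

```python
def weighted_neuron(inputs, weights, threshold):
    """A weighted-input neuron: fire (1) if the weighted sum of the
    inputs reaches the threshold, otherwise stay quiet (0)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Illustrative values only: three inputs, the second weighted most heavily.
inputs = [1, 0, 1]          # which sensory signals are present
weights = [0.3, 0.8, 0.4]   # how much each input matters
print(weighted_neuron(inputs, weights, threshold=0.5))  # -> 1 (it fires)
```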

The first artificial neuron was formulated back in 1943 by a neurophysiologist, Warren McCulloch, and a mathematical logician, Walter Pitts. Alas, the information-processing technology of the day couldn’t exploit their theoretical construct. Today, though, computers have achieved the power and sophistication to handle big data. And the latest research in “deep learning” has taken ANNs beyond the capabilities of conventional programming.

Conventional programming solves problems with algorithms, that is, sets of unambiguous instructions. By contrast, today’s ANNs use huge collections of highly interconnected neurons working in parallel and learning patterns by example. The two approaches are complementary, not competing. Often, conventional computing supervises ANN operation.
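
As a toy illustration of the contrast, and strictly my own sketch rather than anything from the article’s sources, the snippet below solves the same problem twice: a conventional function spells out the logical-OR rule explicitly, while a one-neuron network is shown only labeled examples and nudges its weights until it reproduces the same behavior.

```python
# Conventional programming: the rule is spelled out explicitly.
def logical_or(a, b):
    return 1 if (a == 1 or b == 1) else 0

# ANN-style: the same behavior is *learned* from labeled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # sweep the training set repeatedly
    for (a, b), target in examples:
        fired = 1 if weights[0]*a + weights[1]*b + bias > 0 else 0
        error = target - fired           # how wrong was the neuron?
        weights[0] += rate * error * a   # nudge the weights toward the answer
        weights[1] += rate * error * b
        bias += rate * error

print([1 if weights[0]*a + weights[1]*b + bias > 0 else 0
       for (a, b), _ in examples])       # -> [0, 1, 1, 1], matching logical_or
```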

As its name implies, deep learning enhances the way an ANN establishes its rules of neuron firing. Researchers at DeepMind, now part of Google, developed a new approach to deep learning and tested it on classic Atari 2600 games, including Breakout, Enduro, River Raid, Seaquest and Space Invaders. Their ANN achieved “a level comparable to that of a professional human game tester across a set of 49 games.”


Atari’s Space Invaders is one of the games at which DeepMind’s ANN excels.

Stressing this, the researchers wrote that their approach “bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.” A full discussion can be seen at “Human-level control through deep reinforcement learning.”
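
DeepMind’s agent pairs a deep convolutional network with reinforcement learning; at its heart is a Q-learning update, in which the estimate of an action’s value is nudged toward the reward received plus the discounted value of the best follow-on action. The sketch below shows only that core update, in a minimal tabular form on a made-up five-state “game” of my own; it omits the convolutional network, experience replay and everything else that makes the real system work.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right"]

# A trivial stand-in "game": states 0..4 in a row; only reaching state 4
# pays a reward. (Entirely made up here for illustration.)
def step(state, action):
    next_state = max(0, min(4, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4       # (state, reward, done)

Q = defaultdict(float)           # Q[(state, action)] -> estimated future reward
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore sometimes (and whenever the estimates are tied),
        # otherwise exploit the action with the higher estimate.
        if random.random() < epsilon or Q[(state, "left")] == Q[(state, "right")]:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Core Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(ACTIONS, key=lambda a: Q[(0, a)]))          # -> "right"
```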

Researchers at the University of California, Berkeley have designed the Berkeley Robot for the Elimination of Tedious Tasks. BRETT learns human-like dexterity in tasks hitherto beyond robotic capability. The researchers’ work in advanced ANNs is described in “End-to-End Training of Deep Visuomotor Policies.” Their deep convolutional neural networks (CNNs) have more than 92,000 parameters.
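
For a sense of what such a network looks like in code, here is a rough PyTorch sketch of the general idea: convolutional layers read a camera image and fully connected layers emit torques for a seven-joint arm. The layer sizes and shapes are illustrative choices of mine, not the architecture from the Berkeley paper, and the printed parameter count belongs to this toy network, not to the paper’s 92,000-parameter figure.

```python
import torch
import torch.nn as nn

# A rough visuomotor-policy sketch: a camera image goes in, motor torques
# come out. Layer sizes here are illustrative, not those of the paper.
policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=7, stride=2),   # RGB image -> feature maps
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=5, stride=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),                     # shrink the features to 4x4
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 40),
    nn.ReLU(),
    nn.Linear(40, 7),                            # torques for a 7-joint arm
)

image = torch.randn(1, 3, 240, 240)              # one fake 240x240 camera frame
torques = policy(image)
print(torques.shape)                             # torch.Size([1, 7])
print(sum(p.numel() for p in policy.parameters()))  # parameter count of this toy
```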


BRETT has learned to reverse the motion before twisting on a cap. Image from UC Berkeley News Center.

The UC Berkeley researchers write, “This method can learn a number of manipulation tasks that require close coordination between vision and control, including inserting a block into a shape sorting cube, screwing on a bottle cap, fitting the claw of a toy hammer under a nail with various grasps and placing a coat hanger on a clothes rack.”

Everything but the hammer I’m good at. ds

© Dennis Simanaitis, SimanaitisSays.com, 2015

