Think like a human
Posted by metaphorical on 3 April 2007
One of the things that drew me to being a tech writer was science, and the clever things scientists come up with to test hypotheses. A question that scientists have been working on for several years, according to an article in today’s NY Times’s science section, is whether animals have what psychologists call “episodic memory”:
Endel Tulving, a Canadian psychologist, defined episodic memory as the ability to recall the details of personal experiences: what happened, where it happened, when it happened and so on.
Episodic memory was also unique to our species, Dr. Tulving maintained. For one thing, he argued that episodic memory required self-awareness. You can’t remember yourself if you don’t know you exist. He also argued that there was no evidence animals could recollect experiences, even if those experiences left an impression on them.
Tulving seems to assume that animals have no self-awareness, though it’s hard to imagine how he would argue for it except by making unsubstantiated claims, for example that animals have no episodic memory. Perhaps that’s why the unique-to-humans claim rang false to Nicola Clayton, a comparative psychologist now at the University of Cambridge. The Times quotes her as thinking, “Hang on, that doesn’t make sense.” Next came the good part—she thought up an experiment that would show animal behavior inconsistent with Tulving’s belief.
Dr. Clayton began to test western scrub jays to see if they met any of the criteria for episodic memory. The jays can hide several thousand pieces of food each year and remember the location of each one. Dr. Clayton wondered if scrub jays simply remembered locations, or if they remembered the experience of hiding the food.
She ran an experiment using two kinds of food: moth larvae and peanuts. Scrub jays prefer larvae to peanuts while the larvae are still fresh. When the larvae are dead for a few hours, the jays prefer peanuts. Dr. Clayton gave the birds a chance to hide both kinds of food and then put them in another cage. She later returned the birds to their caches, in some cases after four hours and in other cases after five days.
The time the scrub jays spent away from their caches had a big effect on the type of food they looked for. The birds that waited four hours tended to dig up larvae, and the birds that had to wait for five days passed the larvae by and dug up peanuts instead. (To make sure they were not just picking up the smell of rotten larvae and avoiding those spots, Dr. Clayton dumped out the caches as soon as the birds had made them, and filled all of them with fresh sand.)
In 1998, Dr. Clayton and her colleagues published the results of their experiment, declaring that scrub jays met the standards for “episodic-like” memory.
Brain scan studies of episodic memory show a link between recollections of the past and thoughts of the future.
Daniel Schacter, a psychologist, and his colleagues at Harvard University recently studied how brains function as people think about past experiences and imagine future ones. Constructing an episodic memory causes a distinctive network of brain regions to become active. As a person then adds details to the memory, the network changes, as some regions quiet down and others fire up.
The researchers then had their subjects think about themselves in the future. Many parts of the episodic memory network became active again.
So Clayton and other researchers have been looking for evidence that animals plan for the future, and they’ve started to find it.
All this is a long way around toward pointing out an article in the current issue of Spectrum, “Think Like a Human,” by Jeff Hawkins, which I was fortunate enough to edit. One piece of fortune was meeting and spending time with Hawkins, who is a uniquely interesting guy. Both before and after he revolutionized the PDA industry at Palm Computing and then Handspring, he has been obsessed with the question of how the human brain works and whether we could make machines work more like it.
In 2002 Hawkins founded, and funded, the Redwood Neuroscience Institute, which is now attached to UC Berkeley, to push science’s understanding of neocortical anatomy and physiology. Its researchers came up with a key concept: a fundamental node, similar to a neuron, that can learn from observation. Arranged in a hierarchy, these nodes form what they call HTM, which stands for “hierarchical temporal memory.” Hawkins recounts:
A colleague of mine, Dileep George, was aware of my work and created the missing link. He showed how HTM could be modeled as a type of Bayesian network, a well-known technique for resolving ambiguity by assigning relative probabilities in problems with many conflicting variables. George also demonstrated that we could build machines based on HTM.
His prototype application was a vision system that recognized line drawings of 50 different objects, independent of size, position, distortion, and noise. Although it wasn’t designed to solve a practical problem, it was impressive, for it did what no other vision system we were aware of could do.
In 2005, with a theory of the neocortex, a mathematical expression of that theory, and a working prototype, George and I decided to start Numenta, in Menlo Park, Calif. Our experience in industry and academia taught us that people move more quickly in industry, especially if there is an opportunity to build exciting products and new businesses. Today, the RNI continues as the Redwood Center for Theoretical Neuroscience at UC Berkeley. George and 15 other employees work at Numenta, and I split my time between Numenta and Palm.
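Hawkins doesn’t give code, but the idea of “resolving ambiguity by assigning relative probabilities” can be sketched with a toy Bayesian calculation. Everything here, the object names, features, and probability numbers, is my own invented illustration, not Numenta’s model: given noisy observed features, infer which hidden cause most likely produced them.

```python
# Toy Bayesian inference (illustrative only -- not Numenta's HTM code):
# infer which object most likely caused a set of observed features.

PRIORS = {"dog": 0.5, "cat": 0.5}

# P(feature present | object) -- made-up numbers for illustration.
LIKELIHOODS = {
    "dog": {"floppy_ears": 0.8, "whiskers": 0.4, "tail": 0.9},
    "cat": {"floppy_ears": 0.1, "whiskers": 0.9, "tail": 0.9},
}

def posterior(observed):
    """Return P(object | observed features), normalized over candidates."""
    scores = {}
    for obj, prior in PRIORS.items():
        p = prior
        for feat, present in observed.items():
            p_feat = LIKELIHOODS[obj][feat]
            p *= p_feat if present else (1.0 - p_feat)
        scores[obj] = p
    total = sum(scores.values())
    return {obj: p / total for obj, p in scores.items()}

# Floppy ears tip the balance toward "dog" despite the shared features.
print(posterior({"floppy_ears": True, "whiskers": True, "tail": True}))
```

The point of the sketch is the mechanism: conflicting evidence (whiskers favor one cause, floppy ears another) never has to be resolved by a hard rule; each candidate cause just accumulates a relative probability.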
Numenta, the for-profit company Hawkins formed in 2005, aims to commercialize some of the discoveries made at RNI. It has already had some success in visual pattern recognition, for example, getting a computer to recognize pictures of dogs as dogs.
We have built and tested enough HTMs of sufficient complexity to know that they work. They work on at least some difficult and useful problems, such as handling distortion and variances in visual images. Thus we can identify dogs as such, in simple images, whether they face right or left, are big or small, are seen from the front or the rear, and even in grainy or partially occluded images.
It’s not hard to think of other applications, such as speech recognition and locomotion. If Numenta can create electronic brains that are good problem solvers, it could go a long way toward the creation of general-purpose robots.
One of RNI’s chief findings was that the process of learning is in a very fundamental way temporal.
Strange though it may seem, we cannot learn to recognize pictures without first training on moving images. You can see why in your own behavior. When you are confronted with a new and confusing object, you pick it up and move it about in front of your eyes. You look at it from different directions and top and bottom. As the object moves and the patterns on your retina change, your brain assumes that the unknown object is not changing. Nodes in a [computer model of the brain] assemble differing input patterns together under the assumption that two patterns that repeatedly occur close in time are likely to share a common cause. Time is the teacher.
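The common-cause rule in that passage, that two patterns repeatedly occurring close together in time probably share a cause, can be sketched in a few lines. This is my own minimal illustration, not Numenta’s algorithm: count which pattern labels follow one another in a stream, then merge the frequently adjacent ones into groups.

```python
# Minimal sketch (my illustration, not Numenta's HTM) of "time is the
# teacher": patterns that repeatedly occur adjacent in a temporal stream
# get grouped under a common cause.

from collections import Counter

def temporal_groups(stream, min_count=2):
    """Group labels that appear next to each other at least min_count times."""
    adjacency = Counter()
    for a, b in zip(stream, stream[1:]):
        if a != b:
            adjacency[frozenset((a, b))] += 1

    # Union-find: merge labels linked by frequent transitions.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for x in stream:           # seed every label as its own group
        find(x)
    for pair, count in adjacency.items():
        if count >= min_count:
            a, b = tuple(pair)
            parent[find(a)] = find(b)

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

# Two views of a dog alternate in time; two views of a cat do likewise.
stream = ["dog_left", "dog_right", "dog_left", "dog_right", "dog_left",
          "cat", "cat_side", "cat", "cat_side"]
print(temporal_groups(stream))
```

Run on the sample stream, the dog views cluster together and the cat views cluster together, even though nothing in the labels themselves says so; only their temporal adjacency does. That is the sense in which, as Hawkins puts it, time is the teacher.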
The two senses of time, that of RNI’s dog-learning and the one involved in Clayton’s scrub-jay larvae-caching, are quite different, of course, and it will be an interesting question to see whether a computer can be given episodic memory, and of what use it would be, especially in a robot.
The connection between the two sets of research is that both are coming at the idea of intelligent behavior from completely different directions. In each case as well, results from neurobiology, such as those brain scans of episodic memory, have been a starting point: in the one case for animal psychology, and in the other for computer science and robotics.
We have an enormous amount still to learn. But science will get there, with new theories based on clever experiments that answer tough questions.