After our earlier discussion on deep learning this week, news emerged that Google acquired an artificial intelligence gaming company called DeepMind. I read a post on TechCrunch about the DeepMind acquisition, in which Darrell Etherington speculates that the acquisition will bring Google closer to its goal of developing the ultimate human-computer interface. This interface might be what Etherington refers to as the perfect virtual “personal valet.”
Since the DeepMind acquisition, quite a bit has been written about Demis Hassabis. One of the most informative articles I found on Hassabis was published in The Independent. To summarize briefly: Hassabis was a chess prodigy at 13 and a successful commercial games designer at 16; he founded his own games company after earning a computer science degree and then went on to earn degrees in cognitive neuroscience. Those are just a few highlights. In short, he’s a pretty sharp guy. His background in systems neuroscience is interesting. Systems neuroscience studies how networks of neurons and synapses interact to support higher brain functions.
I ran across a video of a presentation that Hassabis did at the Singularity Summit 2010 on the topic of Artificial General Intelligence (AGI). In the presentation, Hassabis talks about the difference between biologically inspired machine learning models and non-biological models. He’s a proponent of biologically inspired models. In fact, he advocates what he calls a systems neuroscience approach to building AGI.
Hassabis describes the biological approach as using the brain as a blueprint for ways that machine learning algorithms might work. He advocates combining discoveries from the field of systems neuroscience with the best of existing machine learning techniques. Where we don’t know how to build a specific machine learning algorithm, he says, let’s look at systems neuroscience to get ideas on how the brain addresses the problem.
One of the areas Hassabis investigates is whether reinforcement learning techniques should be used to create AGI (and to construct machine learning algorithms). In the video, Hassabis presents the results of research with monkeys that shows the power of reinforcement learning. In an experiment, monkeys were given a random reward of a drop of juice, which resulted in a noticeable uptick in the firing of a dopamine neuron. After that, the same monkeys were shown a light and, immediately afterward, given the reward. After a certain number of repetitions of this pattern, the dopamine neurons fired at the appearance of the light, which had become a reliable predictor of the juice reward, instead of firing when the juice was delivered. The monkeys had learned to use the light as a predictor of the reward, and the dopamine neurons supported this learning. As a final step in the experiment, the light was shown but no juice reward followed it. Over time, the dopamine neurons gradually reduced their firing when the monkeys saw the light. The monkeys were re-trained to no longer treat the light as a predictor of the reward.
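The dopamine responses in this experiment are often modeled as reward prediction errors. As a rough illustration of that idea (my own sketch, not code from the talk), here is a minimal Rescorla-Wagner-style simulation: the learned value `v` of the light stands in for the dopamine response at the cue, and the prediction error `reward - v` stands in for the response at the juice. The function name and parameters are hypothetical.

```python
def run_phase(v, n_trials, reward, alpha=0.2):
    """Simulate repeated light -> outcome trials.

    v is the learned value of the light (how much juice it predicts).
    Because the light itself arrives unpredictably, the modeled
    "dopamine" response at the light is v, and the response at the
    outcome is the prediction error reward - v. Returns the final
    value and a per-trial list of (cue_response, outcome_response).
    """
    trace = []
    for _ in range(n_trials):
        cue_response = v            # response when the light appears
        error = reward - v          # prediction error at the outcome
        trace.append((cue_response, error))
        v += alpha * error          # Rescorla-Wagner update
    return v, trace

# Phase 1: light reliably followed by a drop of juice (reward = 1.0).
v, training = run_phase(0.0, n_trials=50, reward=1.0)
# Phase 2: light shown, juice withheld (extinction).
v, extinction = run_phase(v, n_trials=50, reward=0.0)
```

On the first training trial the response is all at the juice (`cue_response` 0, `error` 1); by the end of training it has transferred to the light; during extinction the response to the light gradually decays back toward zero, matching the firing pattern described above.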
Dopamine systems seem to be very effective at reinforcing learning, and thereby at training us and other animals to make accurate predictions. Since that’s the case, Hassabis suggests that we should look to this brain system, and others like it, as models for machine learning processes. That’s my interpretation of his Singularity Summit presentation, but watch the video yourself to verify.
It’s an exciting and eventful time for machine learning and certainly for intelligent virtual assistants and virtual agent technologies. Will personal virtual assistants and smart advisors of the future be based on biologically inspired algorithms? Some may already be. We’ll have to wait a bit to find out whether using the brain as a blueprint will lead to more capable virtual agents.