The Icelandic Institute for Intelligent Machines (IIIM) might be the next tech organisation to make international headlines. Boasting an impressive staff of computer scientists, physicists, and electronics engineers, IIIM is actively working on creating machines that possess artificial intelligence. The goal is an ambitious one, bringing its own set of unique technological and ethical challenges. We spoke with IIIM’s Managing Director, Dr. Kristinn R. Þórisson, to learn more.
What exactly qualifies as an “intelligent machine”? How does this differ from artificial intelligence, if at all?
To answer the second part first: There is no real difference. The field of artificial intelligence aims at building intelligent machines. The difficulty—and this is a big one—lies in answering the question, “What is intelligence?”
You see, most people I know have an intuitive notion of what “intelligence” is. But this is typically not what computer scientists and engineers mean when they use the term. Intelligence refers to operational features of a special kind of system. Intelligent systems in nature, such as fully grown dogs and humans, can handle a range of complex data, work with time limits, deal with novel things, reason, invent things—well, maybe not so much the dogs—anyway, naturally intelligent systems can learn to do these things, improving measurably every day, week, month, and year: life-long learning. No artificial systems can do any of this yet. All of these properties are typically part of what most people mean by the term “intelligence”—and many of them, for instance life-long learning, have not been addressed in any significant way by any branch of AI for all of its sixty or more years. In some sense humans are “intelligent machines,” but only when we get artificial intelligence that can really understand things—in the vernacular meaning of that term—can we start to compare it to human intelligence in any meaningful way.
What are some of the more promising advances your team has made in this field? And what are the biggest challenges?
IIIM does very little basic research—this we leave to the universities. The Center for Design of Intelligent Agents at Reykjavik University is one of our close collaborators; they have made contributions to AI on various fronts.
The biggest challenge in bringing advanced automation to industry, and in allowing academia to work more closely with it, lies in the way these two worlds operate on different timescales and are driven in opposite directions by their goals: universities are driven to think as far into the future as possible while still sounding convincing, while industry is driven by quarterly earnings. A lot of public funding goes to waste for lack of closer collaboration. The only way to bridge that gap is to take direct action—by instituting something like IIIM.
We now have several “instruments”—collaboration formats, intellectual property arrangements, and so on—that let us bridge very effectively between basic research and applied R&D. We have provided some of our industry partners with solutions that would have cost far more to obtain in other ways, if they could have been obtained at all within the required timeframes.
Who’s expressed an interest in having such hardware and software, and why?
Some of IIIM’s industry partners are interested in machine learning solutions, while others want help with system integration and design. Both require specialised personnel who are highly proficient in cutting-edge research on systems, networks, and AI algorithms, who understand timelines and deadlines, and who can easily adopt an efficient work ethic.
THE ULTIMATE TOOL FOR MANIACS EVERYWHERE
It’s interesting to see you already have an ethics policy in place for intelligent machines. What prompted that?
Our view on this is very simple: Scientists need to think about the moral implications of their work, especially the potential negative uses of the knowledge they contribute to society, and take a clear stance on it. In my experience, the scientists who want their work to benefit everyone vastly outnumber those who are perfectly okay with abuse and violations of human rights. For an institute like IIIM, whose purpose is to improve society and life on this planet for all, the choice is a rather obvious one. Our new Ethics Policy codifies that aim in very clear terms: we don’t want to participate in activities that can increase instability or heighten tension between groups and nations.
The biggest concern, however, is the kind of nightmarish future that many science fiction authors have predicted, where a small elite takes control of the vast population by privileged access to powerful technologies. Although some of this trend is already discernible in many societies today, artificial intelligence could possibly kick this into high gear. Of course, artificial intelligence coupled with modern weaponry is in a sense the ultimate tool for maniacs everywhere.
Anything you’re working on right now that you can tell us about?
We recently reached a major milestone in developing a self-programming AI, which is ultimately what “real artificial intelligence” requires. We have shown this machine to be capable of learning highly complex spatio-temporal tasks that no other machine learning system has come close to handling.
Another thing we’re looking at—and this will produce results within the next two years, I think—is new ways of evaluating intelligence. It turns out that IQ tests, as psychologists administer them, work only for humans and animals, and just barely at that. AI researchers haven’t come up with any good ideas for how to compare the diverse set of systems we call “artificially intelligent.”
My colleagues and I are also looking deeply into the relationship between computation and physics, which we believe is a more or less completely ignored issue. Whoever gets to the bottom of that relationship will instantaneously revolutionise both computing and AI, possibly causing these to merge into a brand new field of research of “truly intelligent machines.”
One of the classic fears about artificial intelligence is that it will replace workers and lead to greater unemployment; that it will benefit the ruling class more than the working class. Do you think this is necessarily so? Why or why not?
It has been clear from the beginning of the industrial revolution that some human labor would be replaced by machines. The advent of AI is simply the extension of this effect into the information age. There may be reason for concern about the speed at which this can happen when we are mostly dealing with software—when the inherent sluggishness and cost of hardware do not slow adoption as much.
There is also reason for concern about any use that could tilt the scales even faster towards a widened income gap, which directly affects power and decision-making. The individuals, groups, and institutions best positioned to apply automation to their ends will also be in a position to abuse that power. We should be watchful and use every means possible to ensure prosperity and equality for all. This is why we have instituted the Ethics Policy, of which we are very proud.
How would you respond to people worried that tech advances in this direction only increase our dependency on technology?
This is in some ways the ultimate technology to become dependent on—in much the same way that a manager relies on staff to get things done. Whether this is better or worse than our current reliance on technology doesn’t depend only on the technology and its deployment, but on a number of other functions in society, such as our educational system, our monetary and value-generation system, and our systems of government, to name a few.
Seen from another angle, given that many of the problems we must address in the coming decades and centuries may be quite a bit more difficult than the ones we face at present, we could use a bit more brainpower to come up with better plans, ideas, and perhaps even make new scientific discoveries that can help with that.
For a majority of people on Earth, knowledge has helped reduce suffering, ensure survival, and increase quality of life. The remaining work in that respect is, to some extent, not getting done for lack of knowledge per se, but because of the way we structure, distribute, and control wealth—and because of a serious lack of instruments for mobilising the wealth of Western nations in ways that can improve the state of affairs elsewhere on the planet. We could use some ideas and leadership for solving this deadlock. Whether they come from individuals, groups of people, machines, or some combination shouldn’t matter.