by Peter High, published on Forbes
There has been a lot written about the transformational power of artificial intelligence. If you are a regular reader of this column, you have gained the perspectives of eight of the leading thinkers on the topic. (See links to each below.) Nick Bostrom is perhaps the most influential thinker on safety concerns associated with the march toward artificial intelligence. He calls artificial intelligence “the single most important and daunting challenge that humanity has ever faced.”
Bostrom is an extraordinary polymath, having earned degrees in physics, philosophy, mathematical logic, and neuroscience. In many ways, he personifies the need for thinkers to collaborate at the intersection of disciplines in order to fully understand the opportunities and challenges represented by artificial intelligence. In his bestselling book, Superintelligence: Paths, Dangers, Strategies, the Oxford University professor and the founding Director of the Future of Humanity Institute highlights that just as the fate of gorillas depends on the actions of humans rather than on gorillas themselves, the fate of humanity may come to depend on superintelligent machines. He points out that we have the advantage in that we are the authors of this fate, unlike our primate relatives. He worries that we are not taking full advantage, however.
His work has profoundly influenced leading thinkers such as Elon Musk, Bill Gates, and Stephen Hawking. In this wide-ranging interview, Bostrom explains his concerns with artificial intelligence and offers thoughts on how we might address them.
(To listen to an unabridged audio version of this interview, please click this link. This is the ninth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, Oren Etzioni of the Allen Institute for Artificial Intelligence, Neil Jacobstein of Singularity University, and Geoff Hinton of Google.)
Peter High: Nick, you have described yourself as an uninterested student prior to age 15, but you experienced a profound awakening that led you to ambitious intellectual pursuits. At university you studied physics, philosophy, mathematical logic, and neuroscience, and I am sure that this is not an exhaustive list. You are perhaps the first among the elite group I have had the pleasure of speaking with to personify this need to be a polymath, having covered so many different topics. I am sure that does not mean you do not require collaboration with people in these and many other areas, but I wonder: how did it occur to you, and why did you elect to pursue so much breadth in addition to depth in your studies? This seems not to be the norm among thinkers who operate in a similar space.
Nick Bostrom: I was following my instinct as to what I thought was interesting and potentially important from an intellectual point of view, and what I thought were interesting and important insights, ideas, and techniques in a number of different academic fields. I would say that quite a few of my colleagues here at the research institute also have multi-disciplinary backgrounds, having studied more than one subject at university or having done a master's in one field before switching to a different field for their Ph.D.
High: In 2004, you were among the founders of the Institute for Ethics and Emerging Technologies. Not only were you studying the opportunities represented in the various areas that we just described, but you were also thinking about the ethical aspects of developing technology. How did ethics become something relevant to you?