Shane Dunn joins our #SciFi, #AI, and the Future of War series to discuss Good News from the Vatican by Robert Silverberg. The story presents a thought-provoking discussion of human perceptions of the strengths, weaknesses, and proper place of artificial intelligence.
The science fiction story considered here is Good News from the Vatican by Robert Silverberg (1971). The story is set around a group of acquaintances gathering in a cafe near St. Peter’s Square during a papal conclave. The conclave to elect a new pope is deadlocked between two candidates. To break the stalemate, a compromise candidate is required, which brings a robot cardinal into favour. The robot cardinal is described as being ‘tall and distinguished with a fine voice and a gentle smile’ with ‘something inherently melancholy about his manner’. The cafe-dwellers are split on the suitability of a robot as pope. Those who might be considered more spiritual, an aged bishop and a young rabbi, are in favour. The ‘swingers’ (hipsters?), those who might be described as less spiritual or not so clearly attached to a doctrinal, moral philosophy, are opposed.
It is interesting to note that both our gentlemen of the cloth, one quite elderly and one fairly young, support this remarkable departure from tradition.
The positions of the observers in the cafe on the prospect of a robot pope are the core of the story. They draw attention to the difficulty of predicting different people’s reactions to the spread of technology into realms heretofore held to be the province of human judgement. In the story, those with a more overtly philosophic perspective accept, and even welcome, the prospect of a robot pope, while those who might be described as less analytical find the concept offensive. Irrespective of the narrative presented here, a point that can be taken is that it is not necessarily easy to understand how people will react to increasing machine autonomy, and that people’s fundamental values can be expected to play a crucial role in their acceptance, or otherwise, of such technological concepts.
It is noteworthy that a critical theme presented in favour of the robot’s candidacy is that a robot pope would more readily embrace a broad ecumenism, suggesting a machine would be better placed to overcome centuries of entrenched human bigotry.
“If he’s elected,” says Rabbi Mueller, “he plans an immediate time-sharing agreement with the Dalai Lama and a reciprocal plug-in with the head programmer of the Greek Orthodox Church, just for starters. I’m told he’ll make an ecumenical overture to the Rabbinate as well.”
The story is presented in a matter-of-fact manner that makes the concept of a robot pope seem plausible. Backroom deal-making and disagreements in cafes – everyday occurrences, coupled with the election of a new pope who just happens to be a robot. Maybe this plausibility stems from the nature of the modern papacy where, while the pope might be said to have significant moral authority, he has little power. In this sense, the pope might be considered as taking on the role of an adviser, or a guide, while humans remain free to make their own decisions. The story stops with the pope’s election, and it is left to the reader’s imagination how it will play out. However, on the face of it, there seems no reason to assume that a robot in such a role signals the end of humanity.
This short story has much to offer in stimulating discussion about the prospect of future artificial intelligence with the following as prospective starting points:
the manner of presentation of a ‘machine intelligence’ – how does presentation, anthropomorphised or otherwise, influence acceptance of machine advice?
the importance and fickleness of people’s values – are there generalisations that can be made about how people will respond to different types of machine autonomy, or is it so individually value-based that no generalisations can be made?
the nature of the role played by an office like the papacy – is a machine intelligence more acceptable in an advisory rather than a decision-making role?
the notion that a machine will be free of human biases – recent experience suggests that human biases are not removed through the application of algorithms and that such biases can become more insidious when hidden in code and training data.
Dr Shane Dunn completed his Bachelor’s degree in Aeronautical Engineering at the Royal Melbourne Institute of Technology in 1986 and was awarded a PhD by the University of Melbourne in 1992. Shane has over 30 years’ experience in the Defence Science and Technology Group and has published research in a range of disciplines including solid mechanics; thermodynamics; structural dynamics and aeroelasticity; unmanned and autonomous systems; distributed computing architectures; and artificial intelligence and machine learning. He is currently the Scientific Adviser to the Joint Domain in the Australian Department of Defence. The views expressed are his alone and do not reflect the opinion of the Defence Science and Technology Group, the Department of Defence, or the Australian Government.