
National AI Strategy


Gordon R. Dickson’s 1965 short science fiction story ‘Computers Don’t Argue’ told of a series of misunderstandings and miscommunications arising from human reliance upon technology: with no human oversight, decisions were left entirely to machines. The story centres on confusion over a library book, a copy of Robert Louis Stevenson’s ‘Kidnapped’, which inadvertently leads to the arrest and ultimate execution of the book’s borrower for the kidnapping and death of its author. It sounds highly implausible yet remains an uneasy read. Although over fifty years have passed since its publication, the tale it told is not far removed from the reality we now experience daily, where many human interactions have been replaced by machines.


If there are risks, the current UK government has not been deterred from advancing such developments. In September 2021 it outlined its ‘AI Strategy’, which aims to invest and plan long-term for the AI ‘ecosystem’, support the transition to an AI-based economy, and ensure the UK gets the national and international governance of AI technologies right. The government outlines that this will be achieved through broad public trust and support, and by involving individuals it perceives to be ‘talented’ in this sector, although it notes that society more generally will play a role too. The proposed plans cover the next decade, by the end of which the UK should be an AI and science superpower. Subsequently, on 10 February 2022 the government announced £23 million to fund scholarships for AI conversion courses aimed at those from groups underrepresented in the industry.


One area in which artificial intelligence is currently used is job recruitment. Amongst other prominent companies, Vodafone and PwC have been noted for using AI to filter applications. Proposed EU legislation aims to regulate such systems, since biased automated filtering may infringe upon human rights and break existing laws. This legislation exists in tandem with the further development of AI by companies like Zuckerberg’s ‘Meta’, which claims its AI supercomputer is currently the fifth fastest in the world and will soon become the fastest outright. Such developments have seen Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, call for international treaties to regulate the technology, as he, like many other experts, believes machines will be more intelligent than humans within the present century.


Cambridge University have suggested that developments in AI have the potential to be as transformative, in virtually all spheres of life, as was the Industrial Revolution of the late eighteenth century. This is reflected, in part, in the accelerating pace at which AI systems are ‘trained’. Such progress has been noted as challenging ‘Moore’s Law’, which in its simplest form suggests that the number of transistors on a microchip, and hence its power, doubles roughly every two years while the cost of computing halves; in short, information processing technology changes rapidly. Many have argued that such development will lead to a ‘singularity’. Opinions differ as to what this singularity will actually look like: ‘The Economist’ has proposed defining it as the point at which exponential technological progress ultimately escapes human control, though opinion remains divided on the actual outcome. The concept also overlaps with ‘transhumanism’, the philosophy which holds that advancing technology can, and inevitably will, be made widely available to enhance the human condition, extending both longevity and cognition.
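To make the scale of that claim concrete, a back-of-the-envelope calculation (illustrative only, assuming a clean doubling every two years) shows how quickly the compounding runs away:

```python
# Illustrative only: a quantity doubling every two years compounds
# into enormous multiples over fairly modest time spans.
for years in (10, 20, 50):
    doublings = years / 2
    print(f"{years} years -> {2 ** doublings:,.0f}x the transistors")

# Output:
# 10 years -> 32x the transistors
# 20 years -> 1,024x the transistors
# 50 years -> 33,554,432x the transistors
```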


The potential for machines to ‘think’ has long been considered. In his 1950 paper ‘Computing Machinery and Intelligence’, Alan Turing asked: “Can machines think?”, and introduced the concept of the Turing test, intended to determine whether a machine could convincingly imitate a human. Questions about the possibility of thinking machines reach back to the philosopher Descartes and his 1637 treatise ‘Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences’, notable for introducing the statement ‘I think, therefore I am’. Descartes observed that ‘automata’ (self-operating machines) are capable of responding to human interaction but cannot do so in the way another human can, thereby drawing a distinction between human and machine on a linguistic basis. Descartes may be forgiven for not foreseeing that this gap might one day be closed, but he did anticipate the basic principle behind the Turing test. What is now known as the ‘Imitation Game’ involves a human and a computer in separate rooms, with a judge putting questions to both via written notes; the machine passes if the judge cannot reliably tell which respondent is human. Joseph Weizenbaum’s program ELIZA, created in 1966, and Kenneth Colby’s PARRY, created in 1972, were both reported to have fooled human interrogators, and their approach is mimicked by today’s chat-bots. It is worth noting that the Turing test has been widely criticised for not actually proving whether a machine can think. Turing himself warned that: “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control”.
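To illustrate how little machinery ELIZA-style conversation actually requires, the following is a minimal, hypothetical sketch in Python, not Weizenbaum’s original code: it matches keywords in the user’s statement and reflects them back as questions, the same trick many of today’s simpler chat-bots still rely on.

```python
import re

# A few illustrative ELIZA-style rules: a regex that matches part of the
# user's statement, and a template that reflects it back as a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first person for second person so reflected phrases read naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default reply when no rule matches

if __name__ == "__main__":
    # Prints: Why do you feel trapped by your computer?
    print(respond("I feel trapped by my computer"))
```

That a routine this shallow could fool some interrogators is precisely why critics argue the test measures imitation rather than thought.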


On a more mundane level, this could be considered a beneficial development. The automation of repetitive processes has been with us for decades, and whilst it may have affected employment in certain industries, it has hardly been viewed as unwelcome or a matter of concern. In August 2021, Elon Musk announced the ‘Tesla Bot’, a humanoid robot intended to carry out repetitive or dangerous work. The robot uses technologies similar to those of Tesla’s self-driving cars, and a bipedal robot is not dissimilar to those created by other robotics groups, such as Boston Dynamics. Although Musk claimed a prototype would be complete within the year, it is unclear what progress has been made, and some claim the announcement was merely a ‘joke’. Whether or not it becomes reality, some have argued for the positives: AI could fill gaps in the labour market and remove humans from potentially dangerous jobs. Others have noted the negatives: such machines could replace many human jobs, and with labour shortages eliminated, workers’ ability to secure better conditions would diminish. Whilst AI capable of replacing human labour, even in more menial jobs, may remain largely distant, for some it is already an inevitability that the development of existing technology will dictate. A more profound issue arises when we reach the point where control of the process is no longer confined to a programmed algorithm.


In effectively accepting that such developments may ultimately prove inevitable, there have been proposals for how governments should monitor progress in AI. One suggests that the government should gather and synthesise data from the AI industry in order to measure and monitor the technology’s impacts. Given that developers are already under pressure to demonstrate their risk mitigation capabilities, concerns regarding further development are fully justified. In Dickson’s 1965 short story, the machines were based on the relatively primitive technology of the time, significantly less capable than those of today. Currently, the vast majority of AI systems are limited to artificial ‘narrow intelligence’, performing a defined, programmable task. Artificial ‘general intelligence’, by contrast, is anticipated to be capable of learning and understanding, developing to the point where it can make its own judgements.


Human fallibility in virtually all spheres of life is widely recognised and judged accordingly. If artificial intelligence is to replace human judgement on the grounds that it is simply more reliable, we could question who or what will judge the AI. Whilst this issue is not likely to confront us in the near future, in the midst of government efforts to develop this technology, it is a question worthy of consideration.

 

Written by Frances Rigby

