In Defence of AI

  • journal86
  • Jun 6
  • 2 min read

Updated: Jul 23

by Captain Dave Meaney

AI isn’t something to be afraid of in and of itself, but rather something that can fundamentally enhance the way we go about our business, with huge potential to generate advantage in almost any conceivable area. In 2023, the British Computer Society wrote an open letter to government calling for AI to be recognised as ‘a transformational force for good, not an existential threat to humanity’.


AI is also not sentient: no implementation of AI has an awareness of itself in the way humans do. In the words of Ada Lovelace: “[AI] has no pretensions whatever to originate anything… its province is to assist us in making available what we are already acquainted with”. Ultimately, AI is a set of tools that we can use to help us do things with information better than we can do them ourselves. In that regard the concept is not a new one, but it is powerful when thinking about our ability to achieve decision advantage in Defence.


It seems clear that the thing we really need to be afraid of is the possibility that our adversaries could achieve “AI Superiority” or, worse still, “AI Dominance” in a way that might enable them to out-think and out-manoeuvre us across any and all of the modern battlespace domains. It stands to reason, therefore, that we must guard against being over-cautious in our approach to the adoption of AI tools, lest we be left behind by our adversaries. Whilst policy makers and planners are well aware of this imperative, the key, as with all things, is in effective execution.


This article examines why building from these foundations towards the actual and effective embedding of AI across Defence is imperative, and how we might go about doing this. It combines lessons from history with modern approaches to rapid adoption from the IT industry.
