Category: Insights
Editor, correspondent, and business writer for leading publications, news wires and research organizations in India and the Gulf region.
14 November 2024
Background
“Deontic” and “DevOps” may seem unrelated at first glance, originating as they do in distinct fields: philosophy and technology, respectively. Deontic refers to obligations, permissions, and prohibitions, while DevOps is the practice of integrating software development and IT operations. The fusion of these two terms in the context of an artificial intelligence (AI) project speaks volumes about its disruptive and collaborative potential and the enormous shifts it is bringing about in the security sector.
The United States Defense Advanced Research Projects Agency (DARPA) has launched the Human-AI Communications for Deontic Reasoning DevOps program, known as CODORD, to develop AI systems capable of understanding and reasoning about human obligations, permissions, and prohibitions. Its key focus is creating automated methods for translating such deontic knowledge from natural language (such as English or French) into formal logical language, thereby reducing the time and cost of knowledge acquisition. While still in its infancy, CODORD holds potential for both military and civilian applications.
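To make the idea concrete, the sketch below shows what machine-usable deontic knowledge might look like once translated from natural language. It is purely illustrative: the rule names, data structures, and example statements are invented here and do not reflect CODORD’s actual formalism, which remains under development.

```python
# Illustrative only: a toy encoding of deontic statements, not CODORD's formalism.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    OBLIGATORY = "obligatory"
    PERMITTED = "permitted"
    PROHIBITED = "prohibited"

@dataclass(frozen=True)
class DeonticRule:
    modality: Modality   # obligation, permission, or prohibition
    action: str          # the action the rule governs

# Hand-translated from natural language; automating this step is CODORD's stated goal.
rules = [
    DeonticRule(Modality.OBLIGATORY, "report_contact"),   # "Units must report contact."
    DeonticRule(Modality.PROHIBITED, "cross_boundary"),   # "Crossing the boundary is forbidden."
    DeonticRule(Modality.PERMITTED,  "request_support"),  # "Units may request support."
]

def check(action: str) -> str:
    """Return the deontic status of an action under the rule set."""
    for rule in rules:
        if rule.action == action:
            return rule.modality.value
    return "unregulated"  # no rule covers this action

print(check("cross_boundary"))  # -> prohibited
```

The hand-translation step shown in the comments is precisely the slow, expert-driven bottleneck a CODORD-style pipeline would automate.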
In military contexts, projects such as CODORD could help AI systems accurately interpret and convey a commander’s intent, improving decision-making in complex or high-pressure situations. Because current methods of translating human knowledge into logical programming languages are slow and costly, requiring specialized knowledge engineers, this is seen as a significant and timely shift. Beyond AI’s transformative role in navigating complex decision-making processes in the security sector, such moves also signal the primacy of adhering to rules, ethics, and operational directives while simplifying and expediting systems and processes.
Industry watchers believe such programs could apply broadly across military and civilian sectors, helping AI reason about compliance with laws and regulations as well as operational policies, ethics, and contracts. [1] For example, AI could assist in operations planning, autonomous systems, healthcare, and financial compliance, providing explanations in natural language that non-experts can easily understand. Developments such as these lie at the core of the churn AI has brought about in numerous spheres, including the security sector.
‘Critical Infrastructure’ in the Security Ecosystem
Studies indicate that AI can identify patterns and detect anomalies, significantly improving security monitoring for both military and civilian use. [2] AI-driven surveillance systems exemplify this, with Video Content Analytics (VCA) providing essential support to security personnel and operators. By automating threat detection, AI minimizes the need for manual review of lengthy video footage, making the process more efficient for security teams.
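As a rough illustration of the anomaly-flagging pattern described above, the sketch below marks unusual per-frame feature vectors with an off-the-shelf algorithm (scikit-learn’s IsolationForest). It is a toy on invented data; production VCA systems extract features from live video with deep models rather than random numbers.

```python
# Minimal sketch of anomaly flagging over per-frame features (invented data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))   # features from routine footage
unusual = rng.normal(4.0, 1.0, size=(5, 4))    # features from atypical events

# Fit on routine footage; anything that deviates strongly gets isolated quickly.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(unusual)                 # -1 marks an anomaly
print(flags)  # flagged frames go to a human reviewer instead of manual playback
```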
AI’s integration has transformed how security systems approach threat detection. In the words of Mats Thulin, Director of Core Technologies at Axis Communications, deep learning (DL) technologies, a subset of AI, have significantly increased the accuracy of analytics solutions, “leading to more reliable and efficient security systems.” [3] These gains enable security professionals to proactively tackle potential threats before they escalate, and they have opened up many new applications beyond traditional safety and security functions.
A Geneva Centre for Security Sector Governance report highlights how the rapid growth of AI is reshaping the security and justice sectors. The report posits that AI enhances preparedness and resilience during crises such as pandemics: countries with advanced technological solutions in place before COVID-19 showed greater resilience (see Graphic 2). [4] According to the report, AI can also increase these sectors’ transparency by improving data collection and storage, enabling more data-driven reporting on needs and usage patterns and supporting more effective medium- and long-term governance.
One primary reason most sectors, especially the security sector, cannot ignore AI is that it is increasingly seen as part of “critical infrastructure,” a term traditionally referring to the physical and cyber systems crucial to a nation’s functioning. This criticality implies that any disruption to these systems could severely impact national security; accordingly, AI systems are now treated as part of the security apparatus.
Making AI part of critical infrastructure starts with acknowledging its use in critical sectors and recognizing it as a vital component that requires safeguarding, given its essential role in supporting and optimizing traditional infrastructure operations. Anything critical to essential infrastructure also carries significant geopolitical consequences, affecting the balance of power, international relations, and strategic decision-making. For this reason, countries are progressively incorporating AI into their national security frameworks even as the geopolitical landscape undergoes enormous transformation in several critical areas.
Hallucinations and the Pitfalls of Over-Reliance
Despite the rapid strides and dramatic advancements, not everything is hunky-dory in the world of AI. Its pitfalls are all the more relevant to the security sector, given their implications for the overall apparatus, and numerous instances are being cited in this regard. One is AI’s continued limitation in understanding the nuances of complex scenes and human behavior. The term hallucination [5] describes the phenomenon of AI producing inaccurate or fabricated statements and suggestions and presenting them as fact. Industry insiders say things worsen when AI summarizes inaccurate or made-up content to cover up the error. This is counterproductive, even disastrous, in any ecosystem, let alone in security-related matters.
AI’s reliance on the human interface, especially in decision-making, has also been emphasized. The common refrain is that there is no alternative to keeping the “human in the loop”: a balance is needed between adopting new capabilities and mitigating their risks, and only human oversight can ensure that. The other issue being highlighted is the reliability of AI-generated applications, which are often less secure than desired. [6] Because AI models are trained mostly on publicly available data, the code they generate is not always secure, a gap that still needs addressing.
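A classic illustration of the kind of flaw that can surface in generated code is unparameterized SQL. The scenario below is invented for illustration and is not drawn from the cited report; it contrasts the insecure pattern with the standard fix.

```python
# An insecure pattern that generated code can reproduce, next to the fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: string interpolation lets the input rewrite the query itself.
unsafe = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(unsafe), len(safe))  # 1 0 -> the injection matched; the safe query did not
```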
Another issue lurking in the background is the security of data shared on major AI platforms. Since these platforms are used internally by organizations both inside and outside government control, the associated risk of data and security breaches increases, leading to unfair outcomes and sometimes disastrous consequences. Assigning responsibility when things go wrong also remains a work in progress as AI increasingly automates complex decision-making processes.
Of Human Capability and AI ‘Black Boxes’
AI systems may exceed human capabilities in specific tasks, especially in the security sector. However, human beings bring moral reasoning, intuition, and empathy to decision-making, which AI lacks. Artificial intelligence is already integral to decision-making in many companies and is increasingly being adopted to guide policy and public-sector decisions globally. [7] As a result, AI’s capacity to process vast amounts of data and make decisions can produce overly centralized power structures, another factor the security sector needs to watch out for.
There are also more generic concerns that cut across industries and compel us to view AI with a degree of suspicion. Job losses due to AI-driven automation can aggravate economic inequality, particularly in sectors where routine tasks are easily automated. Left unchecked, this could leave underprivileged segments without viable employment options, with long-term ramifications for national security. Another challenge is overconfidence in AI’s capabilities, which breeds complacency among human operators. [8] It is akin to drivers becoming over-reliant on their machines, or careless with them, often causing significant damage.
Concerns about AI over-reliance are also closely tied to the “black box” phenomenon. The term describes the fact that many AI systems, especially those using deep learning, function in a way that is difficult for humans to understand, including the engineers who developed them. These systems pass massive amounts of data through multiple layers of complex algorithms.
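A toy example makes the point: even a drastically simplified “network” of two weight layers yields answers without any human-readable rationale. The weights below are random stand-ins for learned parameters; real systems contain millions or billions of them.

```python
# Toy illustration of the "black box" point: a decision reduces to layers of
# learned weights, none of which individually explains the outcome.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))  # stand-in "learned" weights, layer 1
W2 = rng.normal(size=8)       # stand-in "learned" weights, layer 2

def predict(x: np.ndarray) -> float:
    hidden = np.maximum(0, x @ W1)  # layer 1: ReLU over a weighted mix of inputs
    return float(hidden @ W2)       # layer 2: another weighted mix

score = predict(np.array([0.2, -1.3, 0.7, 0.1]))
print(score)  # the number is computable, but the "why" lives in 40 opaque weights
```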
Regulation is another key factor that will define AI’s relevance and future. It has been observed that the lower one goes in the layers of governance, the more difficult it becomes to regulate AI, [9] requiring the entire regulatory mechanism to up its game. Like the rest of the world, the GCC and the broader Middle East region are trying to keep up with the frantic pace of AI development, and the region’s security apparatus is closely monitoring developments in the sector.
A Strategy& report suggests that GCC defense and security forces must grasp the AI opportunity, noting that some countries are already integrating AI. “They could adopt AI more widely to enjoy the full benefits in terms of information superiority, such as for operations, and improvements, such as logistics and predictive maintenance,” it says. According to the report, GCC forces could select AI pilot schemes, build experience and expertise, and change their operating model to enable broader AI deployment.
Conclusions
Artificial intelligence has had a profound dual impact on the security sector worldwide, offering enhanced operational efficiency while introducing risks such as bias, privacy concerns, and new vulnerabilities. It is doubly consequential for the security sector because, even as it strengthens defenses, attackers looking to jeopardize systems can weaponize it, and adversarial attacks can exploit persisting weaknesses.
Reliance on large datasets, especially in surveillance, has raised ethical questions about data privacy and use. Responsible management, ethical governance, and collaborative security frameworks are essential as AI becomes critical to national security and global geopolitical strategies. The very systems designed to protect may themselves be targeted by sophisticated cyberattacks, making the balance between AI’s potential to secure and to destabilize a vital issue in security discourse.
Ehtesham Shahid is an editor and researcher based in the UAE. X: @e2sham
References:
[1] U.S. Department of Defense. DIB AI Principles: Supporting Document. 31 Oct. 2019, https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB_AI_PRINCIPLES_SUPPORTING_DOCUMENT.PDF.
[2] Clarke, Alexander. “How AI Could Change Threat Detection.” TechTarget, https://www.techtarget.com/searchsecurity/tip/How-AI-could-change-threat-detection. Accessed 24 Oct. 2024.
[3] Axis Communications. “AI Security: Navigating the Challenges and Opportunities.” Axis Newsroom, 10 Oct. 2023, https://newsroom.axis.com/blog/ai-security.
[4] Geneva Centre for Security Sector Governance (DCAF). Artificial Intelligence in Security Sector Governance and Reform: An Advisory Note. 2021, https://www.dcaf.ch/sites/default/files/imce/ISSAT/Adv-note-AI-SSGR.pdf.
[5] Wikipedia contributors. “Hallucination (Artificial Intelligence).” Wikipedia, 20 Oct. 2024, https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence).
[6] Securiti. “Generative AI Security: Addressing Risks of AI-Generated Content.” Securiti, https://securiti.ai/generative-ai-security/. Accessed 24 Oct. 2024.
[7] World Economic Forum. “How Artificial Intelligence Will Transform Decision-Making.” World Economic Forum, 22 Sept. 2023, https://www.weforum.org/agenda/2023/09/how-artificial-intelligence-will-transform-decision-making/.
[8] Taha, Hanadi, et al. “Exploring Explainability of AI Systems: A Human-Centered Approach.” International Journal of Human-Computer Interaction, vol. 39, no. 7, 2023, pp. 689-705, https://doi.org/10.1080/10447318.2023.2301250.
[9] West, Darrell M. “The Three Challenges of AI Regulation.” Brookings Institution, 10 Aug. 2023, https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/.