Impacts of AI on Public Safety and the Homeland Security Enterprise | Threats and Challenges

Artificial intelligence (AI) has the potential to change the world for the better, but it is not without risk. At a May 2023 security summit at Vanderbilt University, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), assessed that AI could be the "most powerful capability of our time" while warning that it could equally become the "most powerful weapon of our time" (Vasquez, 2023). That one of the nation's top cyber experts would characterize AI not merely as a weapon, but as the "most" powerful weapon of our time, speaks to the vast potential for threat actors to use AI to harm life and property. This article is the second in a three-part series on artificial intelligence from the Maryland-National Capital Region Emergency Response System (MDERS) and explores several emerging threats posed by AI.

Much of the concern centers on generative AI, a form of narrow AI. Narrow AI is any artificial intelligence that has been trained or developed to perform specific tasks or analyses (IBM, 2023). Generative AI can create many forms of content, from images to computer code to human voices. AI-created media is commonly referred to as a "deepfake," and while earlier deepfakes were riddled with errors, the quality of deepfake media improves as the technology advances. For example, in April 2023, a song titled "Heart on My Sleeve" appeared online, supposedly written and performed by the popular musicians Drake and The Weeknd. The song matched the style and vocals of the two artists so closely that their music label had to issue statements clarifying that the artists had not released a new song (Coscarelli, 2023).

The ability of AI to mimic well-known voices could be exploited by threat actors to fabricate audio clips of politicians or world leaders (Allen, 2023). ChatGPT, the form of generative AI discussed in the previous article, itself acknowledges the concerns homeland security leaders have with AI. When prompted with questions about the cybersecurity risks of AI, ChatGPT provided numerous responses, including recognition of the potential for deepfakes to spread disinformation or even impersonate individuals (Steed, 2023).

Beyond deepfakes, there is concern that threat actors will use generative AI to write malicious computer code or modify existing malware, capabilities that could cause serious damage in the digital realm. During the same summit at Vanderbilt, Easterly also raised concerns that generative AI could provide terrorists with instructions for developing chemical and biological weapons (Steed, 2023).

Advances in AI pose significant risks and threats across numerous fields. Fortunately, ongoing research is exploring how AI can be used to counter these sophisticated threats and create opportunities for the betterment of society. The next and final article in this series will cover promising beneficial uses of artificial intelligence in the homeland security and public safety fields.

Answer to the Previous Article: The second to last paragraph of the previous article on artificial intelligence was written with the assistance of ChatGPT.


Allen, G. (2023, September 19). Advanced Technology: Examining Threats to National Security. Center for Strategic and International Studies.

Coscarelli, J. (2023, April 24). An A.I. Hit of Fake "Drake" and "The Weeknd" Rattles the Music World. New York Times.

IBM. (2023). What is Artificial Intelligence (AI)? IBM.

Steed, M. (2023, August 23). Responsible AI: The Solution To Generative AI’s Threats. Forbes.

Vasquez, C. (2023, May 5). Top US cyber official warns AI may be the ‘most powerful weapon of our time.’ CYBERSCOOP.