Artificial intelligence (AI) has the potential to change the world for the better, but it is not without risk. During a May 2023 security summit at Vanderbilt University, Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly offered an assessment of AI that recognized its potential to be the “most powerful capability of our time” while warning of its equal potential to be the “most powerful weapon of our time” (Vasquez, 2023). This characterization of AI by one of the nation’s top cyber experts as not just a weapon, but the “most” powerful weapon of our time, speaks to the vast potential for threat actors to use AI to harm life and property. This article is the second in a three-part series on artificial intelligence from the Maryland-National Capital Region Emergency Response System (MDERS) and explores several emerging threats posed by AI.
Much of the concern about narrow AI centers on generative AI. Narrow AI is any artificial intelligence that has been trained or developed to perform specific tasks or analyses (IBM, 2023). Generative AI is AI that can create numerous forms of content, from images to computer code to human voices. AI-created media is commonly referred to as a “deepfake,” and while earlier deepfakes were riddled with errors, the quality of deepfake media improves as the technology advances. For example, in April 2023, a song titled “Heart on My Sleeve” appeared online, supposedly written and performed by popular musicians Drake and The Weeknd. The song so closely matched the style and vocals of the two artists that their music label had to issue statements clarifying that the artists had not released a new song (Coscarelli, 2023).

The ability of AI to mimic well-known voices could be exploited by threat actors to create fabricated audio clips of politicians or world leaders (Allen, 2023). ChatGPT, a form of generative AI discussed in the previous article, recognizes the concerns homeland security leaders have with AI. When prompted with questions about the cybersecurity concerns surrounding AI, ChatGPT provided numerous responses, including recognition of the potential for deepfakes to spread disinformation or even impersonate individuals (Steed, 2023).
There are also concerns that threat actors will use generative AI to develop computer code or modify existing malware, capabilities that could inflict serious damage in the digital realm. During the same summit at Vanderbilt, Easterly raised concerns that generative AI could provide terrorists with instructions for developing chemical and biological weapons (Steed, 2023).
Advances in AI pose significant risks and threats to numerous fields. Fortunately, there is ongoing research on how to use AI to counter these sophisticated threats and create opportunities for the betterment of society. The next and final article in this series will cover promising beneficial uses of artificial intelligence in the homeland security and public safety fields.
Answer to the Previous Article: The second to last paragraph of the previous article on artificial intelligence was written with the assistance of ChatGPT.
References
Allen, G. (2023, September 19). Advanced Technology: Examining Threats to National Security. Center for Strategic and International Studies. https://www.csis.org/analysis/advanced-technology-examining-threats-national-security
Coscarelli, J. (2023, April 24). An A.I. Hit of Fake “Drake” and “The Weeknd” Rattles the Music World. New York Times. https://www.nytimes.com/2023/04/19/arts/music/ai-drake-the-weeknd-fake.html
IBM. (2023). What is Artificial Intelligence (AI)? IBM. https://www.ibm.com/topics/artificial-intelligence
Steed, M. (2023, August 23). Responsible AI: The Solution To Generative AI’s Threats. Forbes. https://www.forbes.com/sites/forbesfinancecouncil/2023/08/23/responsible-ai-the-solution-to-generative-ais-threats/?sh=4ccff2e03cac
Vasquez, C. (2023, May 5). Top US cyber official warns AI may be the ‘most powerful weapon of our time.’ CyberScoop. https://cyberscoop.com/easterly-warning-weapons-artificial-intelligence-chatgpt/

The program opened with a panel discussion on public order and crowd control. Darrell Darnell moderated the conversation and was joined by Glendale Fire Chief Ryan Freeburg; Philadelphia Office of Emergency Management Homeland Security Program Manager Gary Spector; Yale University’s Associate Vice President for Public Safety and Community Engagement Ronnell Higgins; and Craig DeAtley, Emergency Department Physician and Director of the Institute of Emergency Management at the MedStar Washington Hospital Center. The panelists explored discipline-specific and multiagency coordination in planning for and responding to large-scale public order events. Symposium attendees posed questions to the panelists to facilitate discourse on a range of public order topics. The breadth of this discussion supplied participants with lessons learned and best practices for responding to large-scale planned and unplanned public order events.


Over the past four years, MCPD, MCFRS, and OEMHS have partnered with MDERS to develop, utilize, and expand their small unmanned aircraft system (sUAS) capabilities. By focusing on improving information gathering and situational awareness, MDERS and Montgomery County public safety agencies collaborated to establish a systematic framework that provided the planning, training, and equipment needed to bolster emergency response procedures. sUAS capabilities help capture essential information and maintain shared situational awareness among public safety agencies, enabling them to formulate appropriate response measures in evolving environments.