Workshop Abstract
Generative AI tools such as ChatGPT, Bard, Bing Chat, DALL-E, and Stable Diffusion, and the mathematical models that power them, have captured the attention of the public at large as well as artificial intelligence and cybersecurity experts. Their ability to create convincingly written prose, poetry, song lyrics, software code, images, and even videos has led many to consider them the biggest thing to impact education and the workplace since the handheld calculator. However, for policymakers and security professionals already struggling with the implications of mis- and disinformation, “deepfakes”, and controlling access to and use of proprietary, confidential, or classified information, these tools pose perhaps the biggest techno-social challenge since strong cryptography. Notable researchers and entrepreneurs, including AI researchers themselves, have called for a pause on these lines of research until the risks are better understood. At the same time, the use of increasingly advanced artificial intelligence, from machine learning to ever more capable general-purpose agents that can be fine-tuned for specific purposes and collaborate in real time, presents new opportunities for automating or streamlining the management of cyber risks and responses to cyber-attacks. This workshop will identify priority research gaps and objectives that the cybersecurity research community at large can work on together to further understand the potential impacts and develop solutions. Results of the workshop will be briefed to the general audience immediately following the workshop.
Workshop Organizers
Dr. Benjamin Blakely, Argonne National Laboratory
Neil Fendley, Johns Hopkins Applied Physics Laboratory
Dr. Bradford Kline, National Security Agency
Workshop Location
The workshop was held on September 21, 2023 (10 a.m. – 3 p.m.) in conjunction with the 2023 National Cybersecurity Education Colloquium at Moraine Valley Community College in Palos Hills, Illinois.
Updated: Dec 6, 2024