AI-Generated Cybersecurity Overview Video

Using AI prompts to edit the background music of a video

UXU Competency Alignment

Needs assessment and task analysis: Started with a needs assessment to define project goals as well as user needs.
Design frameworks and theories: Considered cognitive load, simple and intuitive use, consistency, and affordance.
Usability engineering or iterative design through prototyping: Iterated on and improved the script and video with the lab director, who relayed feedback from the development team.
Diversity and inclusion methods to increase efficient and effective use: Considered diversity, accessibility, and inclusive design principles related to neurodiversity throughout the design and development of the script and video.

Description

During my internship, I assisted the Information Experience Lab at the University of Missouri with uSucceed, a virtual reality (VR) game that aims to enhance employment prospects in cybersecurity for individuals with conditions such as autism, ADHD, and dyslexia.

My first task centered on creating a 2–3-minute video using generative artificial intelligence (AI) to provide an overview of cybersecurity to neurodiverse adults. I used ChatGPT to develop the script and in.video AI to produce the video. A version of this video will become an asset within the VR game.

Image 1: Editing the AI-generated video using prompts.

Details

Course: IS_LT 9480: User Experience and Usability Internship
Semester: Spring 2024
Project Type: Individual
Role: Intern, designer, writer, editor

Learning Process

I iterated on the script for the uSucceed Cybersecurity Overview Video using ChatGPT. This entailed providing a prompt and then revising my prompts based on ChatGPT's responses and feedback from the lab director. I also considered how written text sounds different from spoken language. At times, I edited the script directly, applying the principles of accessibility and plain writing to ensure the script used concrete language and examples and avoided jargon. I then created an initial draft of the video by entering the script into in.video AI.

When I edited the video, I asked questions as I reviewed the pacing and flow. Does the video tell a story with a beginning, middle, and end? Does anything stick out in the audio or video? For example, does the visual content match the audio content? Does the narrator's voice sound natural? Is the background music balanced with the narrator's voice?

After this review, I used prompts to make edits: reduce the animations and flashing; change the background music; and choose a less jarring narrator voice (Image 1). Additionally, we knew our learners would require some accessibility accommodations, so I asked the tool to remove flickering animations and abrupt transitions. Finally, I asked the tool to add a closing slide acknowledging the use of AI in the creation of the video.

Reflection

This project required me to plan the design and development of the video while collaborating with the lab director. I analyzed the needs of both the players and the game developers. As I iterated on the AI-generated content, I focused on thinking holistically, from script to video to VR game, and on making recommendations based on user experience and accessibility heuristics. After completing this project, I had the opportunity to present my experience with and thoughts about the in.video AI tool to the professor, lab director, and other interns. I considered the needs of this audience to determine what they might find useful to know about the affordances and constraints of the tool.