Generative video AI enables users to quickly produce new content from a variety of inputs. These platforms reduce barriers to creation by guiding creators on what drives engagement and by showing relevant content to viewers, which lets creators work faster and deliver more value. Generative AI video tools can help create high-quality marketing and product demo videos, increasing brand awareness and driving conversions. I created a couple of promotional, educational videos using generative video AI; the first, video #1, was a talking-head practice run.
This space is expanding quickly, and I hope to revisit it multiple times over the coming weeks to see how it changes. I began exploring the tools with only a vague goal in sight. First, I researched the history of screen printing using ChatGPT and asked for links, which I followed to learn more. While exploring the AI platforms, I hit on the idea of the Mona Lisa explaining screen printing, including the period around the time she was painted. Video #1 became a rough draft I could use to expand the idea and practice with the tools for video #2. When you list all of the resources required to produce a video using traditional methods (scriptwriting, locations, equipment, crew, talent, editing), the expenses add up. AI-generated video requires none of these things: it can take a simple source text file, or even a concise prompt, and turn it into a complete video with music, graphics, and lifelike virtual presenters.
I even encouraged ChatGPT to be my teacher and apply the 80/20 rule: I asked it to summarize the history, giving me the 20% that covers the other 80%, which led to the critical points in the second video. For example, video #2 covers katazome and bingata (pronounced incorrectly, because English-language AI does not seem to handle Japanese pronunciation very well), the Samuel Simon patent, and the Pop Art movement.
ChatGPT wrote the first draft of the video #1 script based on the prompt: “Can you write a 2-minute video script for this historic content, from the perspective of the Mona Lisa, using the dialect she might have spoken in?”
Video #2 is shorter.
I prompted ChatGPT to write a script for the Mona Lisa in the style of Ernest Hemingway. Hemingway’s writing is often described as “economical” or “spare”: he avoided flowery language and ornate descriptions in favor of a more straightforward, concise style. He believed in the power of understatement, and his writing often featured short, declarative sentences that conveyed a great deal of meaning in few words.
Next, I prompted the AI to remove 50 seconds from the script so I could shorten the video even more. I did this in small 10-second steps so I could control what was cut and keep the flow. For the first video, including all the research, I spent a few hours learning as much as possible and developing the idea; for the second video, I was in the chat for about 15 minutes. Traditional videos must be shot in real time and often require multiple takes and hours in the editing room, not to mention all the work that goes into preparing for a shoot. AI tools can generate a finished product almost instantaneously.
I generated images using BlueWillow on my Discord channel. BlueWillow is a free image generator and makes some great images, especially for internet-based work like video. Its output is detailed, and it generally does not require the kind of specialized prompting that more advanced ideas in MidJourney, like tee-shirt designs, call for. In this case, using the Mona Lisa image to reference screen printing throughout history was perfect, and the prompt “/imagine Mona Lisa in the style of….” yielded all the styles needed. This took about 15 minutes. The affordability and speed of generative AI also make it endlessly scalable: once your source material is ready, there is not much difference in cost between producing a single video and producing hundreds in different languages or with other virtual presenters, images, or content variations, so a series can be created quickly.
Next, I used Heygen.com to make the Mona Lisa speak. It’s a fantastic site for generative AI video, and the Mona Lisa is one of its free avatars. You gain free minutes over a period of days, but they also offer a paid service, including a voice AI that replicates a recorded voice for $99. That could make the whole video more fluid and add value with a unique vocal style; I wonder if voices will become a sellable item, like platelets or …well, you know. This step took nine minutes on video #2, seven of which were render minutes, and then I was on to the next step. When it costs nothing to generate a video, nothing stops you from customizing and personalizing video content for your target audience. Generative AI can help you reach vast new market segments by creating tailored experiences for individual viewers, incorporating bespoke scripts and imagery.
After that, I tried Descript, an AI video editor that breaks the video down into its transcript. Adding a forward slash (“/”) to the text creates breaks in the edit where you can adjust the image, and if you delete parts of the transcript, the tool edits the video to match. It’s an exciting tool, and as a final editing tool it could be really amazing, though it still lacks many of the more advanced features of Final Cut and its peers. Still, other editors will need to adopt this tech quickly as things progress, and/or Descript will become more powerful. Editing was the longest part of the second video; it took me about an hour, and I definitely could have refined it more than I did. Doing both videos got me used to the Descript workspace, and I would use it again, or at least keep an eye on it to see how it changes.
Everything I experimented with in this production was free.
Some more examples of generative AI video ideas:
Deepfake Morgan Freeman