Amazon Nova Reel 1.1: Featuring up to 2-minutes multi-shot videos

By Advanced AI Editor | April 7, 2025 | 10 min read
At re:Invent 2024, we announced Amazon Nova models, a new generation of foundation models (FMs), including Amazon Nova Reel, a video generation model that creates short videos from text descriptions and optional reference images (together, the “prompt”).

Today, we introduce Amazon Nova Reel 1.1, which provides quality and latency improvements in 6-second single-shot video generation compared to Amazon Nova Reel 1.0. This update also lets you generate multi-shot videos up to 2 minutes in length with a consistent style across shots. You can either provide a single prompt for a video of up to 2 minutes composed of 6-second shots, or design each shot individually with custom prompts. This gives you new ways to create video content through Amazon Bedrock.

Amazon Nova Reel enhances creative productivity, while helping to reduce the time and cost of video production using generative AI. You can use Amazon Nova Reel to create compelling videos for your marketing campaigns, product designs, and social media content with increased efficiency and creative control. For example, in advertising campaigns, you can produce high-quality video commercials with consistent visuals and timing using natural language.

To get started with Amazon Nova Reel 1.1 
If you're new to the Amazon Nova Reel models, go to the Amazon Bedrock console, choose Model access in the navigation panel, and request access to the Amazon Nova Reel model. Once granted, access applies to both Nova Reel 1.0 and 1.1.

After gaining access, you can try Amazon Nova Reel 1.1 directly from the Amazon Bedrock console, AWS SDK, or AWS Command Line Interface (AWS CLI).

To test the Amazon Nova Reel 1.1 model in the console, choose Image/Video under Playgrounds in the left menu pane. Then choose Nova Reel 1.1 as the model and input your prompt to generate video.

Amazon Nova Reel 1.1 offers two modes:

Multishot Automated – In this mode, Amazon Nova Reel 1.1 accepts a single prompt of up to 4,000 characters and produces a multi-shot video that reflects that prompt. This mode doesn’t accept an input image.
Multishot Manual – For those who desire more direct control over a video’s shot composition, with manual mode (also referred to as storyboard mode), you can specify a unique prompt for each individual shot. This mode does accept an optional starting image for each shot. Images must have a resolution of 1280×720. You can provide images in base64 format or from an Amazon Simple Storage Service (Amazon S3) location.
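Under these two modes, the request payloads differ only in their task-specific parameters. A minimal sketch of the two shapes (the helper function and its signature are my own, but the field names match the full examples later in this post):

```python
import random


def build_model_input(task_type: str, *, text: str = None, shots: list = None) -> dict:
    """Sketch of the two Nova Reel 1.1 request shapes (not an official helper)."""
    config = {
        "fps": 24,
        "dimension": "1280x720",
        "seed": random.randint(0, 2147483648),
    }
    if task_type == "MULTI_SHOT_AUTOMATED":
        # Single prompt of up to 4,000 characters; duration must be set explicitly.
        config["durationSeconds"] = 120
        return {
            "taskType": task_type,
            "multiShotAutomatedParams": {"text": text},
            "videoGenerationConfig": config,
        }
    # MULTI_SHOT_MANUAL: one prompt (and optional image) per 6-second shot;
    # the duration follows from the number of shots.
    return {
        "taskType": task_type,
        "multiShotManualParams": {"shots": shots},
        "videoGenerationConfig": config,
    }
```

Either dictionary can then be passed as the modelInput argument of StartAsyncInvoke, as the demos below show.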

For this demo, I use the AWS SDK for Python (Boto3) to invoke the model through the Amazon Bedrock API: the StartAsyncInvoke operation starts an asynchronous invocation that generates the video, and GetAsyncInvoke checks on the progress of the video generation job.

This Python script creates a 120-second video using MULTI_SHOT_AUTOMATED as the taskType parameter, with a text prompt created by Nitin Eusebius.

import random
import time

import boto3

AWS_REGION = "us-east-1"
MODEL_ID = "amazon.nova-reel-v1:1"
SLEEP_SECONDS = 15  # Interval at which to check video generation progress
S3_DESTINATION_BUCKET = "s3://"  # Set to your S3 bucket URI

video_prompt_automated = "Norwegian fjord with still water reflecting mountains in perfect symmetry. Uninhabited wilderness of Giant sequoia forest with sunlight filtering between massive trunks. Sahara desert sand dunes with perfect ripple patterns. Alpine lake with crystal clear water and mountain reflection. Ancient redwood tree with detailed bark texture. Arctic ice cave with blue ice walls and ceiling. Bioluminescent plankton on beach shore at night. Bolivian salt flats with perfect sky reflection. Bamboo forest with tall stalks in filtered light. Cherry blossom grove against blue sky. Lavender field with purple rows to horizon. Autumn forest with red and gold leaves. Tropical coral reef with fish and colorful coral. Antelope Canyon with light beams through narrow passages. Banff lake with turquoise water and mountain backdrop. Joshua Tree desert at sunset with silhouetted trees. Iceland moss-covered lava field. Amazon lily pads with perfect symmetry. Hawaiian volcanic landscape with lava rock. New Zealand glowworm cave with blue ceiling lights. 8K nature photography, professional landscape lighting, no movement transitions, perfect exposure for each environment, natural color grading"

bedrock_runtime = boto3.client("bedrock-runtime", region_name=AWS_REGION)

model_input = {
    "taskType": "MULTI_SHOT_AUTOMATED",
    "multiShotAutomatedParams": {"text": video_prompt_automated},
    "videoGenerationConfig": {
        "durationSeconds": 120,  # Must be a multiple of 6 in range [12, 120]
        "fps": 24,
        "dimension": "1280x720",
        "seed": random.randint(0, 2147483648),
    },
}

invocation = bedrock_runtime.start_async_invoke(
    modelId=MODEL_ID,
    modelInput=model_input,
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": S3_DESTINATION_BUCKET}},
)

invocation_arn = invocation["invocationArn"]
job_id = invocation_arn.split("/")[-1]
s3_location = f"{S3_DESTINATION_BUCKET}/{job_id}"
print(f"\nMonitoring job folder: {s3_location}")

while True:
    response = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    status = response["status"]
    print(f"Status: {status}")
    if status != "InProgress":
        break
    time.sleep(SLEEP_SECONDS)

if status == "Completed":
    print(f"\nVideo is ready at {s3_location}/output.mp4")
else:
    print(f"\nVideo generation status: {status}")

After the first invocation, the script periodically checks the status until the creation of the video has been completed. I pass a random seed to get a different result each time the code runs.
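An open-ended while True loop can poll forever if a job stalls. A hedged variant with a timeout (wait_for_completion and its parameter names are my own, not part of the Bedrock API):

```python
import time


def wait_for_completion(get_status, sleep_seconds=15, timeout_seconds=3600):
    """Poll get_status() until it leaves "InProgress" or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status != "InProgress":
            return status  # Terminal state, e.g. "Completed" or "Failed"
        time.sleep(sleep_seconds)
    raise TimeoutError(f"job still in progress after {timeout_seconds}s")
```

With the Bedrock client from the script above, get_status would be a callable such as lambda: bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)["status"].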

I run the script:

Status: InProgress
. . .
Status: Completed
Video is ready at s3:////output.mp4

After a few minutes, the script is completed and prints the output Amazon S3 location. I download the output video using the AWS CLI:

aws s3 cp s3:////output.mp4 output_automated.mp4
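If you prefer to stay in Python, the same download can be done with Boto3. A sketch, where parse_s3_uri is a hypothetical helper of mine and the URI is assumed to follow the standard s3://bucket/key form:

```python
def parse_s3_uri(s3_uri: str) -> tuple:
    """Split an s3://bucket/key URI into (bucket, key)."""
    bucket, _, key = s3_uri.removeprefix("s3://").partition("/")
    return bucket, key


def download_output(s3_uri: str, local_path: str) -> None:
    """Download the generated video from S3 to a local file."""
    import boto3  # Imported lazily so the parsing helper works without boto3 installed

    bucket, key = parse_s3_uri(s3_uri)
    boto3.client("s3").download_file(bucket, key, local_path)
```

For example, download_output(f"{s3_location}/output.mp4", "output_automated.mp4") mirrors the AWS CLI command above.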

This is the video that this prompt generated:

When using MULTI_SHOT_MANUAL as the taskType parameter, you provide a prompt for each shot, and the durationSeconds parameter is not needed; the video length follows from the number of 6-second shots.

I use a multi-shot prompt created by Sanju Sunny and run this Python script:

import random
import time

import boto3


def image_to_base64(image_path: str):
    """Helper function which converts an image file to a base64 encoded string."""
    import base64

    with open(image_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
    return encoded_string.decode("utf-8")


AWS_REGION = "us-east-1"
MODEL_ID = "amazon.nova-reel-v1:1"
SLEEP_SECONDS = 15  # Interval at which to check video generation progress
S3_DESTINATION_BUCKET = "s3://"  # Set to your S3 bucket URI

video_shot_prompts = [
    # Example of using an S3 image in a shot.
    {
        "text": "Epic aerial rise revealing the landscape, dramatic documentary style with dark atmospheric mood",
        "image": {
            "format": "png",
            "source": {
                "s3Location": {"uri": "s3:///images/arctic_1.png"}
            },
        },
    },
    # Example of using a locally saved image in a shot
    {
        "text": "Sweeping drone shot across surface, cracks forming in ice, morning sunlight casting long shadows, documentary style",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_2.png")},
        },
    },
    {
        "text": "Epic aerial shot slowly soaring forward over the glacier's surface, revealing vast ice formations, cinematic drone perspective",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_3.png")},
        },
    },
    {
        "text": "Aerial shot slowly descending from high above, revealing the lone penguin's journey through the stark ice landscape, arctic smoke washes over the land, nature documentary styled",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_4.png")},
        },
    },
    {
        "text": "Colossal wide shot of half the glacier face catastrophically collapsing, enormous wall of ice breaking away and crashing into the ocean. Slow motion, camera dramatically pulling back to reveal the massive scale. Monumental waves erupting from impact.",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_5.png")},
        },
    },
    {
        "text": "Slow motion tracking shot moving parallel to the penguin, with snow and mist swirling dramatically in the foreground and background",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_6.png")},
        },
    },
    {
        "text": "High-altitude drone descent over pristine glacier, capturing violent fracture chasing the camera, crystalline patterns shattering in slow motion across mirror-like ice, camera smoothly aligning with surface.",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_7.png")},
        },
    },
    {
        "text": "Epic aerial drone shot slowly pulling back and rising higher, revealing the vast endless ocean surrounding the solitary penguin on the ice float, cinematic reveal",
        "image": {
            "format": "png",
            "source": {"bytes": image_to_base64("arctic_8.png")},
        },
    },
]

bedrock_runtime = boto3.client("bedrock-runtime", region_name=AWS_REGION)

model_input = {
    "taskType": "MULTI_SHOT_MANUAL",
    "multiShotManualParams": {"shots": video_shot_prompts},
    "videoGenerationConfig": {
        "fps": 24,
        "dimension": "1280x720",
        "seed": random.randint(0, 2147483648),
    },
}

invocation = bedrock_runtime.start_async_invoke(
    modelId=MODEL_ID,
    modelInput=model_input,
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": S3_DESTINATION_BUCKET}},
)

invocation_arn = invocation["invocationArn"]
job_id = invocation_arn.split("/")[-1]
s3_location = f"{S3_DESTINATION_BUCKET}/{job_id}"
print(f"\nMonitoring job folder: {s3_location}")

while True:
    response = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    status = response["status"]
    print(f"Status: {status}")
    if status != "InProgress":
        break
    time.sleep(SLEEP_SECONDS)

if status == "Completed":
    print(f"\nVideo is ready at {s3_location}/output.mp4")
else:
    print(f"\nVideo generation status: {status}")

As in the previous demo, after a few minutes, I download the output using the AWS CLI:
aws s3 cp s3:////output.mp4 output_manual.mp4

This is the video that this prompt generated:

More creative examples
When you use Amazon Nova Reel 1.1, you’ll discover a world of creative possibilities. Here are some sample prompts to help you begin:

Color Burst, created by Nitin Eusebius

prompt = "Explosion of colored powder against black background. Start with slow-motion closeup of single purple powder burst. Dolly out revealing multiple powder clouds in vibrant hues colliding mid-air. Track across spectrum of colors mixing: magenta, yellow, cyan, orange. Zoom in on particles illuminated by sunbeams. Arc shot capturing complete color field. 4K, festival celebration, high-contrast lighting"

Shape Shifting, created by Sanju Sunny

prompt = "A simple red triangle transforms through geometric shapes in a journey of self-discovery. Clean vector graphics against white background. The triangle slides across negative space, morphing smoothly into a circle. Pan left as it encounters a blue square, they perform a geometric dance of shapes. Tracking shot as shapes combine and separate in mathematical precision. Zoom out to reveal a pattern formed by their movements. Limited color palette of primary colors. Precise, mechanical movements with perfect geometric alignments. Transitions use simple wipes and geometric shape reveals. Flat design aesthetic with sharp edges and solid colors. Final scene shows all shapes combining into a complex mandala pattern."

Music was added to all the example videos manually by the AWS Video team before uploading.

Things to know
Creative control – You can use this enhanced control for lifestyle and ambient background videos in advertising, marketing, media, and entertainment projects. Customize specific elements such as camera motion and shot content, or animate existing images.

Mode considerations – In automated mode, you can write prompts of up to 4,000 characters. In manual mode, each shot accepts a prompt of up to 512 characters, and you can include up to 20 shots in a single video. Consider planning your shots in advance, similar to creating a traditional storyboard. Input images must match the 1280×720 resolution requirement. The service automatically delivers your completed videos to your specified S3 bucket.
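These limits can be checked client-side before submitting a job. A small sketch (the constants and function name are mine, derived from the limits stated in this post):

```python
# Limits as documented for Amazon Nova Reel 1.1 (see "Things to know")
MAX_AUTOMATED_PROMPT_CHARS = 4000
MAX_SHOT_PROMPT_CHARS = 512
MAX_SHOTS = 20


def validate_manual_shots(shots: list) -> None:
    """Raise ValueError if the shot list exceeds the documented limits."""
    if not 1 <= len(shots) <= MAX_SHOTS:
        raise ValueError(f"expected 1-{MAX_SHOTS} shots, got {len(shots)}")
    for i, shot in enumerate(shots):
        if len(shot["text"]) > MAX_SHOT_PROMPT_CHARS:
            raise ValueError(
                f"shot {i} prompt exceeds {MAX_SHOT_PROMPT_CHARS} characters"
            )
```

Running this before StartAsyncInvoke turns a rejected job into an immediate, local error.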

Pricing and availability – Amazon Nova Reel 1.1 is available in Amazon Bedrock in the US East (N. Virginia) AWS Region. You can access the model through the Amazon Bedrock console, AWS SDK, or AWS CLI. As with all Amazon Bedrock services, pricing follows a pay-as-you-go model based on your usage. For more information, refer to Amazon Bedrock pricing.

Ready to start creating with Amazon Nova Reel? Visit the Amazon Nova Reel AWS AI Service Card to learn more, and dive into Generating videos with Amazon Nova in the documentation. Explore Python code examples in the Amazon Nova model cookbook repository, enhance your results using the Amazon Nova Reel prompting best practices, and discover video examples in the Amazon Nova Reel gallery, complete with the prompts and reference images that brought them to life.

The possibilities are endless, and we look forward to seeing what you create! Join our growing community of builders at community.aws, where you can create your BuilderID, share your video generation projects, and connect with fellow innovators.

— Eli



