Omni Video 2
  • Create
  • Agent
  • AI Image
  • AI Video
  • Pricing
Now officially launched and accessible to all public community members · March 2026

Wan 2.7: Advanced AI Video Generator

Wan 2.7, Alibaba’s latest text-to-video and image-to-video model, delivers first/last frame control, multi-reference input, and intuitive instruction-based editing. Create polished 5–15 second videos at 720P or 1080P resolution.

Detailed descriptions of scenes, actions, and styles produce better generation quality.


Join Discord community
Content policy notice
Any non-compliant generation will fail. Real-person faces, NSFW content, violence, and potentially infringing inputs may be rejected by model-level safety checks. Stylized art, fictional characters, products, and AI-generated subjects tend to work better.
Standout Capabilities of Wan 2.7

Why Wan 2.7 Is a Top Video Generation Choice

Wan 2.7 expands Alibaba’s video generation toolkit with first/last frame control, multi-reference input, intuitive instruction-based editing, and support for up to 15-second clip outputs.

First / Last Frame Control

Define your opening and closing frames upfront. Wan 2.7 generates the seamless motion between those two points, giving you precise cinematic control without complex prompt work.

Lock in your desired starting and ending compositions before generation starts
Perfect for product reveals, character transitions, and clean scene cuts
Eliminates the guesswork of hitting a specific final shot

Multi-Reference Input

Upload up to 5 reference videos at once, and let Wan 2.7 use them to align character design, environment details, and motion style across your output.

Add up to 5 reference videos to guide the final output’s look and feel
Preserves tight visual consistency for characters and entire scenes
Ideal for brand campaigns, fashion shoots, and projects needing consistent product placement

Instruction-Based Editing

Refine existing videos using plain natural language. Tweak backgrounds, adjust lighting, swap clothing, or alter style entirely without starting a full re-generation from scratch.

Outline your desired changes in simple text, no complex timeline editing required
Swap backgrounds, update outfits, or tweak lighting with a quick prompt
Iterate quickly without losing the original clip’s core motion and timing

Up to 15 Seconds

Create clips up to 15 seconds long — three times longer than prior Wan models, making it perfect for full product showcases or short narrative scenes.

Choose output durations of 5, 10, or 15 seconds to match your scene’s needs
Offers 480P, 720P, and 1080P resolution options
Supports both 16:9 landscape and 9:16 portrait orientation for flexible output
Discover Additional Models

Alternative Video Generation Models

Compare Wan 2.7 to other leading video generation models available on this platform.

Kling v3.0

Native audio-integrated video generation using Kling’s 3.x motion language


Kling v3.0 Pro

Professional-tier Kling 3.x output with enhanced fidelity and refined detail


Hailuo 02

MiniMax’s newest video generation model, featuring robust motion dynamics


PixVerse V6: Advanced AI Video Generator

The latest flagship model from PixVerse, PixVerse V6 delivers industry-leading cinematic video generation with 20+ cinema-grade camera controls, native audio synchronization, a multi-shot scene engine, and 1080p clips up to 15 seconds long. Turn text prompts or source images into polished cinematic footage.

FAQ

About Omni Video 2, Google Omni, and current AI video generation support

What is Wan 2.7?

Wan 2.7 is Alibaba Tongyi Lab’s newest video generation model, launched in March 2026. Building on Wan 2.6, it adds first/last frame control, support for up to 5 concurrent reference video inputs, 9-grid image input, instruction-based editing, and more accurate motion physics.

What is first/last frame control in Wan 2.7?

First/last frame control (branded FLF2V) lets you lock in both the opening and closing frames of your target video. Wan 2.7 automatically generates the smooth motion between those two points, granting you precise cinematic oversight. Just define your starting and ending compositions, and the model will interpolate the full sequence of action.

How long can videos be with Wan 2.7?

Wan 2.7 lets you create clips ranging from 2 to 15 seconds in length — a major jump from prior Wan models that topped out at roughly 5 seconds. On this platform, you can select preset durations of 5, 10, or 15 seconds for your generated videos.

What modes does Wan 2.7 support?

Wan 2.7 supports text-to-video, image-to-video, first/last frame video (FLF2V), and instruction-based video editing. On this dedicated page, you can access text-to-video and image-to-video generation workflows.
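As an illustration only, a generation request covering these modes might be assembled as below. The field names, mode identifiers, and validation rules are hypothetical assumptions for the sketch, not a documented Wan 2.7 or platform API — consult the platform's actual documentation before integrating.

```python
# Hypothetical request builder for a Wan 2.7 generation job.
# All field names and structure here are illustrative assumptions,
# not part of any official Wan 2.7 or platform API.

def build_request(mode, prompt, duration=5, resolution="1080P", aspect="16:9"):
    """Assemble a generation request dict for the given mode."""
    # Modes described on this page: text-to-video, image-to-video,
    # first/last frame (FLF2V), and instruction-based editing.
    supported_modes = {"text-to-video", "image-to-video", "flf2v", "edit"}
    if mode not in supported_modes:
        raise ValueError(f"unsupported mode: {mode}")
    # The platform exposes preset durations of 5, 10, or 15 seconds.
    if duration not in (5, 10, 15):
        raise ValueError("duration must be 5, 10, or 15 seconds")
    return {
        "model": "wan-2.7",
        "mode": mode,
        "prompt": prompt,
        "duration_seconds": duration,
        "resolution": resolution,
        "aspect_ratio": aspect,
    }

req = build_request("text-to-video",
                    "A slow dolly shot of a ceramic teapot at sunrise")
print(req["mode"], req["duration_seconds"])  # text-to-video 5
```

The point of the sketch is the shape of the choice space — one mode, one preset duration, one resolution tier, one aspect ratio per job — rather than any specific endpoint.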

What resolutions does Wan 2.7 support?

Wan 2.7 delivers output at 480P, 720P, and 1080P resolutions. You can use both 16:9 landscape and 9:16 portrait aspect ratios to match your project’s needs.
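For planning downstream editing or delivery, the resolution tiers and aspect ratios above map to pixel dimensions in the usual way (the tier names the short side of the frame). The helper below is a minimal sketch of that arithmetic, not part of any Wan 2.7 API:

```python
# Map resolution tiers and aspect ratios to pixel dimensions.
# Illustrative helper only -- not part of any official Wan 2.7 API.

# Short-side pixel counts for each resolution tier.
RESOLUTIONS = {"480P": 480, "720P": 720, "1080P": 1080}

def output_dimensions(resolution: str, aspect: str) -> tuple[int, int]:
    """Return (width, height) for a resolution tier and aspect ratio."""
    short = RESOLUTIONS[resolution]
    w_ratio, h_ratio = (int(n) for n in aspect.split(":"))
    if w_ratio >= h_ratio:
        # Landscape: the height is the short side.
        return (short * w_ratio // h_ratio, short)
    # Portrait: the width is the short side.
    return (short, short * h_ratio // w_ratio)

print(output_dimensions("1080P", "16:9"))  # (1920, 1080)
print(output_dimensions("1080P", "9:16"))  # (1080, 1920)
```

So a 16:9 clip at 1080P renders at 1920×1080, while the same tier in 9:16 portrait renders at 1080×1920.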

Is Wan 2.7 open source?

Wan 2.1 was fully open-sourced under the Apache 2.0 license. At launch, Wan 2.7’s official open-source release status had not been confirmed — visit the Alibaba Wan GitHub repository at github.com/Wan-Video for the most up-to-date details.

How does Wan 2.7 compare to Wan 2.6?

Wan 2.7 introduces several key upgrades over Wan 2.6: first/last frame control, 9-grid multi-image input, support for up to 5 reference video inputs, and instruction-based editing, all of which were absent in the prior model. It also boosts maximum clip length to 15 seconds, and improves motion physics accuracy and character consistency across generated content.

Still have unanswered questions? Our team is here to assist you.

Join Discord
Resources
  • Blog
  • Create
  • Scenes
  • Works
  • Prompts
  • Image to Prompt
  • Batch Image to Prompt
Company & Legal
  • About
  • Contact
  • Privacy Policy
  • Terms of Service
  • Refund Policy
Image Models
  • Z-Image
  • GPT-4o
  • Flux 2
  • Flux 2 Pro
  • Flux 2 Klein
  • Qwen Image 2
  • Seedream 4.0
  • Seedream 4.5
  • Seedream 5.0
  • Grok Imagine
  • Nano Banana Pro
  • Nano Banana Flash
  • Nano Banana 2
Video Models
  • Google Veo 3.1
  • Google Veo 3.1 Lite
  • Google Veo 3.1 Pro
  • Seedance 1.5 Pro
  • Seedance Fast
  • Seedance Quality
  • Seedance 2.0
  • Hailuo 02
  • Kling v2.6
  • Kling v2.5 Turbo
  • Kling v2.1
  • Kling v2.1 Master
  • Kling O1
  • Kling v3.0
  • Kling v3.0 Pro
Friends
  • Omni Video 2
  • Seedream AI
  • Kling AI
Omni Video 2

omni video 2 prompts · Current model generation · Google Omni watchlist

X (Twitter) · Discord · Email

Omni Video 2 is an independent third-party AI video workspace and Google Omni watchlist. We are not affiliated with Google, Gemini, Veo, OpenAI, ByteDance, or any model provider. Model availability, names, pricing, and capabilities may change.

© 2026 Omni Video 2. All Rights Reserved. DREAMEGA INFORMATION TECHNOLOGY LLC

[email protected]