Omni Video 2
  • Create
  • Agent
  • AI Image
  • AI Video
  • Pricing
Now officially launched and available to all members of the public community · March 2026

Wan 2.7: Advanced AI Video Generator

Wan 2.7, Alibaba’s latest text-to-video and image-to-video model, delivers first/last frame control, multi-reference input, and intuitive instruction-based editing. Create polished 5–15 second videos at 720P or 1080P resolution.

Detailed descriptions of scenes, actions, and styles produce better generation quality

2 seconds · 5 seconds · 15 seconds

Join the Discord community
Content policy notice
Any generation that does not comply with the rules will fail. Real people's faces, NSFW content, violence, and potentially infringing inputs may be rejected by model-level safety checks. Stylized art, fictional characters, products, and AI-generated subjects usually work best.
Standout Capabilities of Wan 2.7

Why Wan 2.7 Is a Top Video Generation Choice

Wan 2.7 expands Alibaba’s video generation toolkit with first/last frame control, multi-reference input, intuitive instruction-based editing, and support for up to 15-second clip outputs.

First / Last Frame Control

Define your opening and closing frames upfront. Wan 2.7 generates the seamless motion between those two points, giving you precise cinematic control without complex prompt work.

Lock in your desired starting and ending compositions before generation starts
Perfect for product reveals, character transitions, and clean scene cuts
Eliminates the guesswork of hitting a specific final shot
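
The frame-locking workflow above can be sketched as a request payload. This is a hypothetical illustration only: the field names (`first_frame`, `last_frame`, `duration`, `resolution`) and the `"wan-2.7"` model identifier are assumptions, not the actual Wan 2.7 API schema.

```python
import base64

def build_flf2v_request(prompt, first_frame_bytes, last_frame_bytes,
                        duration_s=5, resolution="1080P"):
    """Assemble a hypothetical first/last frame (FLF2V) request.

    All field names here are illustrative stand-ins, not the real API.
    """
    return {
        "model": "wan-2.7",
        "prompt": prompt,
        # Both anchor frames are supplied up front; the model interpolates
        # the motion between them.
        "first_frame": base64.b64encode(first_frame_bytes).decode("ascii"),
        "last_frame": base64.b64encode(last_frame_bytes).decode("ascii"),
        "duration": duration_s,
        "resolution": resolution,
    }

req = build_flf2v_request("product reveal, slow dolly-in",
                          b"<opening png bytes>", b"<closing png bytes>",
                          duration_s=10)
```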

Multi-Reference Input

Upload up to 5 reference videos at once, and let Wan 2.7 use them to align character design, environment details, and motion style across your output.

Add up to 5 reference videos to guide the final output’s look and feel
Preserves tight visual consistency for characters and entire scenes
Ideal for brand campaigns, fashion shoots, and projects needing consistent product placement
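
A minimal sketch of the 5-clip limit described above, using hypothetical helper and field names (nothing here is the official Wan 2.7 SDK):

```python
MAX_REFERENCE_VIDEOS = 5  # documented cap on concurrent reference clips

def build_reference_list(paths):
    """Package reference clips for a multi-reference request (hypothetical)."""
    if len(paths) > MAX_REFERENCE_VIDEOS:
        raise ValueError(
            f"Wan 2.7 accepts at most {MAX_REFERENCE_VIDEOS} reference videos"
        )
    return [{"type": "reference_video", "uri": p} for p in paths]

refs = build_reference_list(["character_turnaround.mp4", "set_walkthrough.mp4"])
```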

Instruction-Based Editing

Refine existing videos using plain natural language. Tweak backgrounds, adjust lighting, swap clothing, or alter style entirely without starting a full re-generation from scratch.

Outline your desired changes in simple text, no complex timeline editing required
Swap backgrounds, update outfits, or tweak lighting with a quick prompt
Iterate quickly without losing the original clip’s core motion and timing
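
As a sketch, an instruction-based edit needs only the source clip and a plain-text change request. The payload shape and field names below are assumptions for illustration, not the real Wan 2.7 API:

```python
def build_edit_request(source_video_uri, instruction):
    """Hypothetical instruction-edit payload: no timeline data required."""
    return {
        "model": "wan-2.7",
        "mode": "instruction_edit",
        "video": source_video_uri,   # original clip whose motion is preserved
        "instruction": instruction,  # plain-language description of the change
    }

edit = build_edit_request("clip_001.mp4",
                          "swap the background to a beach at dusk")
```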

Up to 15 Seconds

Create clips up to 15 seconds long — three times longer than prior Wan models, making it perfect for full product showcases or short narrative scenes.

Choose output durations of 5, 10, or 15 seconds to match your scene’s needs
Offers 480P, 720P, and 1080P resolution options
Supports both 16:9 landscape and 9:16 portrait orientation for flexible output
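
The duration, resolution, and orientation presets above can be checked before submitting a job. This validator simply mirrors the options listed on this page; the function and constant names are illustrative, not part of any official SDK:

```python
VALID_DURATIONS = {5, 10, 15}                 # seconds
VALID_RESOLUTIONS = {"480P", "720P", "1080P"}
VALID_ASPECTS = {"16:9", "9:16"}

def validate_output_settings(duration, resolution, aspect):
    """Reject settings outside the presets listed on this page."""
    if duration not in VALID_DURATIONS:
        raise ValueError(f"duration must be one of {sorted(VALID_DURATIONS)}")
    if resolution not in VALID_RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if aspect not in VALID_ASPECTS:
        raise ValueError(f"unsupported aspect ratio: {aspect}")
    return {"duration": duration, "resolution": resolution, "aspect": aspect}

settings = validate_output_settings(15, "1080P", "9:16")
```
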
Discover Additional Models

Alternative Video Generation Models

Compare Wan 2.7 to other leading video generation models available on this platform.

Kling v3.0

Native audio-integrated video generation using Kling’s 3.x motion language

Browse our carefully curated selection of complementary AI models

Kling v3.0 Pro

Professional-tier Kling 3.x output with enhanced fidelity and refined detail

Hailuo 02

MiniMax’s newest video generation model, featuring robust motion dynamics

PixVerse V6: Advanced AI Video Generator

The latest flagship model from PixVerse, PixVerse V6 delivers industry-leading cinematic video generation with 20+ cinema-grade camera controls, native audio synchronization, a multi-shot scene engine, and 1080p clips up to 15 seconds long. Turn text prompts or source images into polished cinematic footage.

FAQs

Frequently Asked Questions

About Omni Video 2, Google Omni, and current AI video generation support

What is Wan 2.7?

Wan 2.7 is Alibaba Tongyi Lab’s newest video generation model, launched in March 2026. Building on Wan 2.6, it adds first/last frame control, support for up to 5 concurrent reference video inputs, 9-grid image input, instruction-based editing, and more accurate motion physics.

What is first/last frame control in Wan 2.7?

First/last frame control (branded FLF2V) lets you lock in both the opening and closing frames of your target video. Wan 2.7 automatically generates the smooth motion between those two points, granting you precise cinematic oversight. Just define your starting and ending compositions, and the model will interpolate the full sequence of action.

How long can videos be with Wan 2.7?

Wan 2.7 lets you create clips ranging from 2 to 15 seconds in length — a major jump from prior Wan models that topped out at roughly 5 seconds. On this platform, you can select preset durations of 5, 10, or 15 seconds for your generated videos.

What modes does Wan 2.7 support?

Wan 2.7 supports text-to-video, image-to-video, first/last frame video (FLF2V), and instruction-based video editing. On this dedicated page, you can access text-to-video and image-to-video generation workflows.

What resolutions does Wan 2.7 support?

Wan 2.7 delivers output at 480P, 720P, and 1080P resolutions. You can use both 16:9 landscape and 9:16 portrait aspect ratios to match your project’s needs.
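
For planning asset sizes, the nominal pixel dimensions follow from the "P" label (the shorter side) and the aspect ratio. The rounding below reflects common video conventions (e.g. 854×480 at 480P); exact encoder output is an assumption and may differ slightly:

```python
def frame_size(resolution, aspect):
    """Nominal (width, height) for a resolution label and aspect ratio."""
    short = int(resolution.rstrip("P"))          # the "P" number is the short side
    long_side = round(short * 16 / 9 / 2) * 2    # rounded to an even pixel count
    return (long_side, short) if aspect == "16:9" else (short, long_side)
```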

Is Wan 2.7 open source?

Wan 2.1 was fully open-sourced under the Apache 2.0 license. At launch, Wan 2.7’s official open-source release status had not been confirmed — visit the Alibaba Wan GitHub repository at github.com/Wan-Video for the most up-to-date details.

How does Wan 2.7 compare to Wan 2.6?

Wan 2.7 introduces several key upgrades over Wan 2.6: first/last frame control, 9-grid multi-image input, support for up to 5 reference video inputs, and instruction-based editing, all of which were absent in the prior model. It also boosts maximum clip length to 15 seconds, and improves motion physics accuracy and character consistency across generated content.

Still have unanswered questions? Our team is here to help

Join Discord
Resources
  • Blog
  • Create
  • Scenes
  • Works
  • Prompts
  • Image to Prompt
  • Batch Image to Prompt
Company & Legal
  • About
  • Contact
  • Privacy Policy
  • Terms of Service
  • Refund Policy
Image Models
  • Z-Image
  • GPT-4o
  • Flux 2
  • Flux 2 Pro
  • Flux 2 Klein
  • Qwen Image 2
  • Seedream 4.0
  • Seedream 4.5
  • Seedream 5.0
  • Grok Imagine
  • Gemini 3 Pro Image
  • Nano Banana Flash
  • Nano Banana 2
Video Models
  • Google Veo 3.1
  • Google Veo 3.1 Lite
  • Google Veo 3.1 Pro
  • Seedance 1.5 Pro
  • Seedance Fast
  • Seedance Quality
  • Seedance 2.0
  • Hailuo 02
  • Kling v2.6
  • Kling v2.5 Turbo
  • Kling v2.1
  • Kling v2.1 Master
  • Kling O1
  • Kling v3.0
  • Kling v3.0 Pro
Friends
  • Omni Video 2
  • Seedream AI
  • Kling AI
Omni Video 2

omni video 2 prompts · Current model generation · Omni Google watchlist

X (Twitter) · Discord · Email

Omni Video 2 is an independent third-party AI workspace and the Omni Google watchlist. We are not affiliated with Google, Gemini, Veo, OpenAI, ByteDance, or any model provider. Model availability, names, pricing, and capabilities are subject to change.

© 2026 Omni Video 2 All Rights Reserved. DREAMEGA INFORMATION TECHNOLOGY LLC

[email protected]