FireRed Image Edit is an open-source, general-purpose image editing model by Xiaohongshu, trained on 1.6 billion samples for production-grade editing. This service provides a hosted interface for the model.
ByteDance Flagship

Seedance 2.0

ByteDance's next-generation AI video model with the revolutionary @-reference system. Combine text, images, video clips, and audio in a single prompt. Native audio-video synchronization, V2V editing, and up to 2K resolution at 30fps — all in one unified generation.

About Seedance 2.0

Seedance 2.0 is ByteDance's most advanced AI video generation model, unveiled in February 2026. It adopts a unified multimodal audio-video joint generation architecture supporting 4 input modalities simultaneously — text, up to 9 images, up to 3 video clips, and up to 3 audio tracks. The ground-breaking @-reference system lets you tag specific elements in your prompt and bind them to uploaded references for granular control over camera movement, character appearance, audio rhythm, and visual style. Outputs reach up to 2K resolution with native synchronized audio including multilingual lip-sync, sound effects, and background music.

Key Features of Seedance 2.0

@-Reference System

Revolutionary reference tagging using @Image, @Video, and @Audio labels in your prompt. Bind specific elements to uploaded files for precise control over camera movement, character actions, audio rhythm, and visual style.
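The tagging convention above is easy to check client-side before submitting a job. The sketch below is a hypothetical helper (not part of any official Seedance SDK) that verifies every @-tag in a prompt is bound to an uploaded reference file:

```python
import re

def check_prompt_references(prompt: str, uploads: dict) -> list:
    """Return @-tags used in the prompt with no uploaded file bound to them.

    `uploads` maps tag names such as "Image1" or "Audio1" to local file
    paths. The @Image/@Video/@Audio tag syntax follows the convention
    described above; the helper itself is an illustrative sketch.
    """
    tags = re.findall(r"@(Image\d+|Video\d+|Audio\d+)", prompt)
    return [tag for tag in tags if tag not in uploads]

prompt = ("@Image1 walks through @Image2 with camera movement "
          "from @Video1 and background music from @Audio1")
uploads = {"Image1": "hero.png", "Image2": "alley.png", "Video1": "dolly.mp4"}
missing = check_prompt_references(prompt, uploads)  # ["Audio1"]
```

Catching an unbound tag like this locally avoids submitting a request that cannot resolve all of its references.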

4-Modality Input

Combine text, up to 9 images, up to 3 video clips, and up to 3 audio tracks in a single generation request. Seedance 2.0 is the first model to process all four input types simultaneously.
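The per-request caps above (up to 9 images, 3 video clips, and 3 audio tracks) can be enforced before upload. A minimal sketch, assuming a hypothetical client-side check rather than an official SDK call:

```python
# Per-request caps stated above; Seedance 2.0 accepts all four
# modalities (text plus these three file types) in a single request.
LIMITS = {"images": 9, "videos": 3, "audios": 3}

def validate_inputs(images, videos, audios):
    """Raise ValueError if any modality exceeds its documented cap."""
    counts = {"images": len(images), "videos": len(videos),
              "audios": len(audios)}
    for kind, n in counts.items():
        if n > LIMITS[kind]:
            raise ValueError(f"too many {kind}: {n} > {LIMITS[kind]}")
    return counts
```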

Native Audio-Video Sync

Joint audio-video synthesis produces lip-sync dialogue, sound effects, and background music synchronized with the visual output. Supports multilingual lip-sync with phoneme-level precision.

V2V Video Editing

Edit existing videos through reference-to-video mode. Transfer motion patterns, camera paths, and pacing from uploaded clips. Change outfits, modify actions, or replace elements while preserving the original structure.

2K Resolution & 30fps

Native 2K (2048×1080) output at 30 fps, with lower-resolution tiers at 480p, 720p, and 1080p. Video duration ranges from 4 to 15 seconds per generation.
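From the figures above (30 fps native output, 4 to 15 second clips), the frame budget per generation follows directly. The preset table and helper below are illustrative, not an official API; the 480p/720p/1080p pixel dimensions are assumed to be the common 16:9 sizes, while 2K is stated as 2048×1080:

```python
# Output tiers listed above. Only the 2K dimensions are stated on the
# page; the 16:9 sizes for the lower tiers are an assumption.
RESOLUTIONS = {"480p": (854, 480), "720p": (1280, 720),
               "1080p": (1920, 1080), "2K": (2048, 1080)}
MIN_SECONDS, MAX_SECONDS, FPS = 4, 15, 30

def frames_for(duration_s: int) -> int:
    """Frame count at native 30 fps, clamping duration to the 4-15 s range."""
    clamped = max(MIN_SECONDS, min(MAX_SECONDS, duration_s))
    return clamped * FPS

frames_for(10)  # 300 frames; a maximum-length 15 s clip is 450 frames
```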

Multi-Shot Character Consistency

Upload multiple reference images of the same character from different angles. Seedance 2.0 maintains consistent faces, clothing, body proportions, and accessories across multiple generated clips.

Official Showcase

Explore Seedance 2.0's capabilities in multimodal reference control, native audio generation, and video editing

Multi-reference prompt combining all modalities
@-Reference System

“@Image1 walks through @Image2 with camera movement from @Video1 and background music from @Audio1”

Character motion guided by audio beat reference
@-Reference System

“@Image1 character dances with rhythm from @Audio1 in @Image3 environment”

Lip-sync dialogue with visual content
Native Audio Generation

“A person giving a presentation with synchronized English speech and slide transitions”

Narration synchronized with cooking actions
Native Audio Generation

“Cooking tutorial with step-by-step narration and ambient kitchen sounds”

Testimonials

What Creators Say About Seedance 2.0

Alex Kim: “The @-reference system is genuinely revolutionary. I can extract camera movements from a reference clip and apply them instantly — it's a completely new creative workflow.”

Priya Sharma: “Native audio sync saves hours of post-production. The lip-sync quality is surprisingly precise even with non-English dialogue.”

Lucas Müller: “V2V editing lets me enhance existing footage without reshooting. Seedance 2.0 is now a core tool in our production pipeline.”

Yuki Tanaka: “The 4-modality input is a game-changer. I can bring a character design, a camera movement reference, and background music all into one prompt and get exactly what I envisioned.”

Explore More AI Video Models

Veo 3.1 Free AI Video Generator
Veo 3.1 is Google DeepMind's most advanced free AI video generator with native audio generation. It creates synchronized sound effects, dialogue, and environmental audio alongside 1080p video at 24 FPS — all available online with no watermark. Generate unlimited HD videos up to 8 seconds per clip, extendable to 60+ seconds.

Wan 2.6

Wan 2.6 is Alibaba's video generation model delivering high-quality videos with diverse style support, smooth motion, and cinematic output from text prompts and reference images.

Sora 2

Sora 2 is OpenAI's flagship video generation model capable of producing high-quality videos from both text descriptions and image inputs. It understands complex scene compositions, character interactions, camera movements, and real-world physics to deliver cinematic results. Sora 2 represents a major leap in AI video generation with improved temporal consistency, longer duration support, and more faithful prompt interpretation.

Kling 2.6

Kling 2.6 is Kuaishou's latest AI video generation model, recognized for its exceptional motion quality and cinematic output. Built on advanced spatiotemporal modeling, Kling 2.6 produces videos with fluid character movement, dynamic camera transitions, and rich visual detail. It supports both text-to-video and image-to-video generation, making it a versatile tool for creators seeking professional-quality AI video content.

Grok Video

Grok Video (powered by Grok Imagine Video) is xAI's video generation model built directly into the Grok ecosystem. Powered by the proprietary Aurora engine, it converts text prompts or static images into short video clips with synchronized audio. What sets Grok Video apart is its speed — clips generate in seconds, not minutes — combined with real-time web data access for current, relevant visual references. The model prioritizes prompt adherence and natural motion coherence, making it ideal for rapid social media content, quick prototyping, and iterative creative workflows.

Grok Imagine

Grok Imagine is xAI's image generation model, producing photorealistic imagery and creative compositions from natural language prompts with minimal restrictions on creative expression.

Start Creating with Seedance 2.0

Experience Seedance 2.0 — the most advanced video generator from ByteDance, free online

Trusted by 10,000+ users


Available models and credit costs:

  • Veo 3.1: 30 credits
  • Sora 2: 30 credits
  • Wan 2.6: 80 credits
  • Kling Motion Control: 55 credits
  • Kling 2.6: 55 credits
  • Seedance 1.5 Pro: 30 credits
  • Seedance 2: 76 credits
  • Seedance 2 Fast: 64 credits
  • Grok Imagine: 20 credits
  • Grok Video: 10 credits