## TL;DR
Seedance 2.0 from ByteDance is now live on Renderful. Two variants: Seedance 2.0 at $1.03/generation and Seedance 2.0 Fast at $0.83/generation. Generate video with native audio, combine up to 9 images + 3 videos + 3 audio files as references, and output at up to 720p with 4–15 second durations. If you're already on Renderful, it's a one-line model ID change.
We're excited to announce that Seedance 2.0 by ByteDance is now available through the Renderful API. It generates video with synchronized native audio, supports multi-modal reference inputs, and produces physics-based motion — all accessible through our standard REST API. No ByteDance account required.
## See It in Action
All videos generated by Seedance 2.0 via the Renderful API. More examples on the model page.
## What Seedance 2.0 Brings to the API
Seedance 2.0 is ByteDance's most advanced video generation model. Here's what it adds to Renderful:
### Native Audio Generation
Video and audio are generated together — dialogue, ambient sound, music, and effects are synthesized as part of the generation. No separate audio pipeline needed. Audio is on by default (2x cost) and can be toggled off.
### Multi-Reference Input
Combine up to 9 reference images, 3 reference videos, and 3 reference audio files in a single request. Use `@Image 1`, `@Video 1`, and `@Audio 1` syntax in your prompt to reference each file.
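These limits and the `@Image N` prompt syntax can be checked client-side before a request is submitted. A minimal sketch — the helper name and return shape are illustrative, not part of the Renderful API; only the per-type limits and the `@Image`/`@Video`/`@Audio` syntax come from this post:

```python
import re

# Per-type reference limits for Seedance 2.0 on Renderful.
LIMITS = {"Image": 9, "Video": 3, "Audio": 3}

def check_references(prompt, images=(), videos=(), audios=()):
    """Validate reference counts and @Image/@Video/@Audio mentions in a prompt.

    Returns a list of problem strings; an empty list means the request looks valid.
    """
    provided = {"Image": len(images), "Video": len(videos), "Audio": len(audios)}
    problems = []
    for kind, limit in LIMITS.items():
        if provided[kind] > limit:
            problems.append(f"too many {kind.lower()} references: {provided[kind]} > {limit}")
    # Every "@Image 1"-style mention must point at a provided file (1-indexed).
    for kind, index in re.findall(r"@(Image|Video|Audio) (\d+)", prompt):
        if int(index) < 1 or int(index) > provided[kind]:
            problems.append(f"@{kind} {index} has no matching reference file")
    return problems
```

Running this before submission catches a prompt that mentions `@Audio 1` without an accompanying audio file, rather than burning a paid generation on it.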
### First & Last Frame Control
Define both the starting and ending frames of your video for precise composition and storytelling.
### Physics-Based Motion
Realistic fluid dynamics, cloth movement, gravity, and object collisions — built into the model, no prompt engineering required.
### Flexible Output
4–15 second videos at 480p or 720p. Six aspect ratios: 16:9, 9:16, 1:1, 4:3, 3:4, and 21:9 ultrawide.
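These output constraints can be enforced locally before a request is sent; a minimal sketch, where the function name and error messages are illustrative and only the ranges come from this post:

```python
# Output constraints listed above for Seedance 2.0.
ASPECT_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4", "21:9"}
RESOLUTIONS = {"480p", "720p"}

def validate_output(duration, resolution="720p", aspect_ratio="16:9"):
    """Raise ValueError if output settings fall outside Seedance 2.0's supported ranges."""
    if not 4 <= duration <= 15:
        raise ValueError(f"duration must be 4-15 seconds, got {duration}")
    if resolution not in RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(RESOLUTIONS)}")
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
```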
## API Integration
Seedance 2.0 follows Renderful's standard generation format. If you're already integrated, it's a one-line model ID change:
```json
{
  "model": "bytedance/seedance-2.0",
  "input": {
    "prompt": "A cinematic chase through neon-lit streets at night, rain reflecting city lights, dramatic orchestral score",
    "aspect_ratio": "16:9",
    "duration": 10,
    "resolution": "720p",
    "generate_audio": true
  },
  "webhook": "https://your-app.com/webhook"
}
```

### Multi-Reference Example
Combine images, videos, and audio as creative references, and point to them in your prompt with `@Image`, `@Video`, and `@Audio` syntax:
```json
{
  "model": "bytedance/seedance-2.0",
  "input": {
    "prompt": "Replace the character with the figure from @Image 1, replicate the movements from @Video 1, use the music from @Audio 1",
    "reference_image_urls": [
      "https://example.com/character.png"
    ],
    "reference_video_urls": [
      "https://example.com/motion-ref.mp4"
    ],
    "reference_audio_urls": [
      "https://example.com/soundtrack.mp3"
    ],
    "aspect_ratio": "9:16",
    "duration": 8,
    "generate_audio": true
  },
  "webhook": "https://your-app.com/webhook"
}
```

Full API reference in the Renderful docs.
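In code, submitting a request like the ones above is a single authenticated POST. A minimal Python sketch — the endpoint URL and bearer-token auth are assumptions for illustration (check the Renderful docs for the actual base URL and auth scheme); the payload fields match the JSON examples in this post:

```python
import json
import urllib.request

# Hypothetical endpoint; see the Renderful docs for the real base URL.
API_URL = "https://api.renderful.com/v1/generations"

def build_request(prompt, model="bytedance/seedance-2.0", webhook=None, **input_opts):
    """Assemble a generation payload in the format shown above."""
    payload = {"model": model, "input": {"prompt": prompt, **input_opts}}
    if webhook:
        payload["webhook"] = webhook
    return payload

def submit(payload, api_key):
    """POST the payload; the finished video is delivered later to the webhook URL."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request(
    "Replace the character with the figure from @Image 1",
    reference_image_urls=["https://example.com/character.png"],
    duration=8,
    generate_audio=True,
    webhook="https://your-app.com/webhook",
)
```

Because results arrive asynchronously via the webhook, the POST returns quickly and your app stays free to queue further generations.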
## Pricing
Two variants, both with full multi-reference and native audio support:
| Model | Cost | Speed | Best For |
|---|---|---|---|
| Seedance 2.0 | $1.03/gen | Standard | Maximum quality, complex scenes |
| Seedance 2.0 Fast | $0.83/gen | Fast | Speed-sensitive workflows, shorter clips |
Audio generation is on by default and doubles the base cost. Set `generate_audio: false` to generate video-only at half cost. Full pricing details.
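Those two rules make spend easy to estimate up front. A small sketch, assuming the listed per-generation prices are the audio-on (default) prices and video-only runs at half that, per the paragraph above:

```python
# Listed per-generation prices with audio on (the default), from the table above.
PRICES = {"bytedance/seedance-2.0": 1.03, "bytedance/seedance-2.0-fast": 0.83}

def estimate_cost(model, n_generations=1, generate_audio=True):
    """Estimate spend in dollars; video-only runs at half the listed price."""
    per_gen = PRICES[model]
    if not generate_audio:
        per_gen /= 2
    return round(per_gen * n_generations, 4)
```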
## How It Compares
Seedance 2.0 alongside other video models on Renderful:
| Feature | Seedance 2.0 | Kling 2.6 | Sora 2 |
|---|---|---|---|
| Native Audio | Yes | No | No |
| Max Resolution | 720p | 1080p | 1080p |
| Physics Simulation | Yes | Partial | Partial |
| Multi-Reference | 9 img + 3 vid + 3 audio | 1 | 1 |
| Max Duration | 15s | 10s | 20s |
| First & Last Frame | Yes | No | No |
## Get Started
1. **Get an API key** — create a Renderful account if you don't have one. Free credits included.
2. **Try the playground** — test prompts and references directly on the model page before writing code.
3. **Integrate** — use model ID `bytedance/seedance-2.0` or `bytedance/seedance-2.0-fast`. Results delivered via webhook.
## Start Using Seedance 2.0
Generate video with native audio, multi-reference input, and physics-based motion through a single API call.