Seedance 2.0 API Coming to Renderful
ByteDance's Seedance 2.0 is a major leap for AI video generation — and we're bringing it to the Renderful API. Generate cinematic videos with native audio, 2K resolution, and physics-based realism, all through a single API call.
What is Seedance 2.0?
Seedance 2.0 is ByteDance's next-generation AI video model. Unlike most video generation models that produce silent clips, Seedance 2.0 generates video and audio together natively — dialogue, ambient sound, music, and effects are all synthesized as part of the same forward pass. No post-processing, no separate audio pipeline.
The model also introduces physics-based simulation for realistic object motion, multi-modal reference input (up to 12 images), and natural language video editing. It outputs at 2K resolution and renders 30% faster than its predecessor.
What This Means for API Users
When Seedance 2.0 lands on Renderful, you'll be able to access all its capabilities through our standard REST API. Here's what changes for your workflow:
Video + Audio in One Request
No more chaining a video generation call with a separate TTS or audio synthesis call. One request returns a complete video file with synchronized audio.
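As a sketch of what that collapses to, here is a minimal helper that builds a single request body asking for video and audio together. The field names mirror the request preview later in this post, but the exact schema is not final, so treat them as assumptions:

```python
def build_generation_request(prompt: str, duration: int = 10) -> dict:
    """Build one Seedance 2.0 request body that returns video plus
    synchronized audio. Field names are illustrative until the
    published schema lands in the docs."""
    return {
        "model": "seedance/seedance-2-0",
        "input": {
            "prompt": prompt,
            "duration": duration,
            "audio": True,  # one flag replaces a separate TTS/audio call
        },
    }

body = build_generation_request("A rainy neon-lit chase scene")
```

Compare that to the old workflow: generate a silent clip, call a TTS or audio model, then mux the two tracks yourself.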
2K Resolution Output
Request up to 2048×2048 resolution directly. No upscaling step needed for high-quality deliverables.
Multi-Reference Input
Pass up to 12 reference images in a single request to control character appearance, environment, and style with precision.
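A sketch of how a client might attach references and enforce the 12-image cap before the request ever leaves your app. The `reference_images` field name is an assumption pending the published schema:

```python
MAX_REFERENCES = 12  # Seedance 2.0's stated per-request limit

def with_references(body: dict, image_urls: list[str]) -> dict:
    """Return a copy of a request body with reference images attached,
    rejecting anything over the 12-image cap client-side."""
    if len(image_urls) > MAX_REFERENCES:
        raise ValueError(f"at most {MAX_REFERENCES} reference images per request")
    merged = dict(body)
    merged["input"] = {**merged.get("input", {}), "reference_images": image_urls}
    return merged

request = with_references(
    {"model": "seedance/seedance-2-0", "input": {"prompt": "A knight in a misty forest"}},
    ["https://example.com/hero.png", "https://example.com/forest.png"],
)
```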
Physics-Aware Generation
Realistic fluid dynamics, cloth movement, gravity, and object collisions without prompt engineering hacks.
Text-Based Video Editing
Pass a natural language instruction to modify an existing video — change lighting, swap objects, adjust timing — through the API.
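An editing request would likely pair a source video with an instruction string. This is a sketch under assumed field names (`video`, `instruction`); the docs will carry the final schema:

```python
def build_edit_request(video_url: str, instruction: str) -> dict:
    """Build a text-based edit request for an existing video.
    Field names are assumptions, not the documented schema."""
    return {
        "model": "seedance/seedance-2-0",
        "input": {
            "video": video_url,
            "instruction": instruction,
        },
    }

edit_request = build_edit_request(
    "https://example.com/clip.mp4",
    "shift the lighting to dusk and slow the final shot",
)
```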
API Integration Preview
Seedance 2.0 will follow our standard prediction API format. Here's a preview of what the request will look like:
```json
{
  "model": "seedance/seedance-2-0",
  "input": {
    "prompt": "A cinematic chase through neon-lit streets at night, rain reflecting city lights, dramatic orchestral score",
    "aspect_ratio": "16:9",
    "duration": 10,
    "audio": true,
    "resolution": "2k"
  },
  "webhook": "https://your-app.com/webhook"
}
```

The exact API schema will be published in our docs when the model is available. If you're already integrated with Renderful, switching to Seedance 2.0 will be a one-line model ID change.
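That one-line change, sketched against an existing request body (the 1.5 Pro model ID here is illustrative; check the docs for the exact IDs):

```python
# An existing request body targeting a current Seedance model...
request_body = {
    "model": "seedance/seedance-1-5-pro",
    "input": {"prompt": "A drone shot over a misty coastline"},
}

# ...upgrades to Seedance 2.0 by swapping a single field:
request_body["model"] = "seedance/seedance-2-0"
```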
Supported Generation Modes
Seedance 2.0 will be available across multiple generation modes on Renderful:
- Text-to-video: describe a scene, get a video with audio.
- Image-to-video: animate a still image with motion and sound.
- Video editing: modify existing videos with text instructions.
- Multi-reference: control characters and scenes with up to 12 reference images.
How Seedance 2.0 Compares
Seedance 2.0 sits alongside other top-tier models already available on Renderful. Here's a quick comparison:
| Feature | Seedance 2.0 | Kling 2.6 | Sora 2 |
|---|---|---|---|
| Native Audio | Yes | No | No |
| Max Resolution | 2K | 1080p | 1080p |
| Physics Simulation | Yes | Partial | Partial |
| Multi-Reference Input | Up to 12 | 1 | 1 |
| Text-Based Editing | Yes | No | No |
Getting Ready
Seedance 2.0 will be available on Renderful soon. If you want to be among the first to use it:
1. Create a Renderful account if you don't have one yet. You'll get free credits to start.
2. Set up webhooks so you can receive async generation results. Seedance 2.0 videos take a few seconds to generate.
3. Start with our existing Seedance models (1.0 Lite, 1.0 Pro, 1.5 Pro) to familiarize yourself with the API format. Upgrading to 2.0 will be seamless.
Be First to Access Seedance 2.0
Create your Renderful account now and start building with our existing Seedance models. When 2.0 launches, you'll be ready to go.