WAN 2.7 Reference-to-Video
Character-consistent video from reference images/videos.
README
Alibaba / WAN 2.7 Reference-to-Video (wan-2.7-r2v)
WAN 2.7 Reference-to-Video creates character-consistent videos from reference images and videos while maintaining appearance, voice tone, and style.
Highlights
- Reference images – Provide up to 5 images for character/object consistency.
- Reference videos – Provide up to 3 video clips for motion and style reference.
- Reference voice – Add voice audio for character voice consistency.
- Multi-character – Supports distinct references for multiple characters in a scene.
Parameters
- prompt* – Text description to guide video generation. Reference inputs with labels like "Image 1" and "Video 1".
- negative_prompt – Things to avoid in the video.
- reference_images – Reference images (up to 5).
- reference_videos – Reference videos (up to 3).
- reference_voice_url – Audio for voice reference consistency.
- resolution – Output video resolution:
  - 720p
  - 1080p
- duration – Video duration in seconds (2–10).
- enable_prompt_expansion – Enable LLM prompt rewriting for better quality.
- seed – Random seed for reproducibility (0–2147483647).
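Putting the parameters above together, a request body might look like the following sketch. The field names come from the list above, but the exact JSON schema (nesting under `input`, the `model` key) is an assumption to verify against the API documentation:

```python
# Sketch of a wan-2.7-r2v request payload built from the documented
# parameters. The surrounding structure ("model"/"input" keys) is an
# assumption; check the official docs before relying on it.
payload = {
    "model": "wan-2.7-r2v",
    "input": {
        # Reference specific inputs by label in the prompt text.
        "prompt": "Image 1 walks through a rainy street; Video 1 sets the camera style",
        "negative_prompt": "blurry, distorted faces",
        "reference_images": [             # up to 5 image URLs
            "https://example.com/character.jpg",
        ],
        "reference_videos": [             # up to 3 video URLs
            "https://example.com/motion.mp4",
        ],
        "resolution": "720p",             # "720p" or "1080p"
        "duration": 5,                    # seconds, 2-10
        "enable_prompt_expansion": True,  # LLM prompt rewriting
        "seed": 42,                       # 0-2147483647, for reproducibility
    },
}
print(payload["model"])
```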
Pricing
$1.00 per generation
| Resolution | Price |
|---|---|
| 720p | $0.12 |
| 1080p | $0.18 |
How to use
1. Provide reference images and/or videos as URL arrays.
2. Use labels like "Image 1" and "Video 1" in the prompt to reference specific inputs.
3. Set resolution (720p/1080p) and duration (2–10 s).
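The input limits from the steps above (at most 5 images, 3 videos, a 2–10 s duration, and two resolutions) can be checked client-side before sending a request. This helper is an illustration, not part of any official SDK; the API performs its own validation:

```python
def validate_r2v_inputs(images, videos, duration, resolution):
    """Fail fast on inputs that violate wan-2.7-r2v's documented limits.

    Hypothetical client-side helper; limits are taken from the
    parameter list in this page.
    """
    if len(images) > 5:
        raise ValueError("at most 5 reference images allowed")
    if len(videos) > 3:
        raise ValueError("at most 3 reference videos allowed")
    if not 2 <= duration <= 10:
        raise ValueError("duration must be 2-10 seconds")
    if resolution not in ("720p", "1080p"):
        raise ValueError("resolution must be '720p' or '1080p'")
    return True
```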
Pro tips
- Clear, front-facing reference images produce the best character consistency.
- The combined total of images + videos cannot exceed 5.
- Use reference_voice_url for lip-sync and voice tone consistency.
Use cases
- Creating consistent character animations from photos.
- Generating videos with specific people, objects, or styles.
- Multi-character storytelling with consistent appearances.
Frequently Asked Questions
What is the WAN 2.7 Reference-to-Video API?
It is an API for generating character-consistent video from reference images and videos, maintaining each character's appearance, voice tone, and style.
How much does WAN 2.7 Reference-to-Video cost via API?
WAN 2.7 Reference-to-Video costs $1.00 per generation through Renderful's API. No subscription required — pay only for what you use.
How do I use WAN 2.7 Reference-to-Video via API?
Sign up for a free Renderful API key, then send a POST request to the /v1/predictions endpoint with model "wan-2.7-r2v". See the documentation at renderful.ai/docs for code examples in Python, JavaScript, and cURL.
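As a sketch of that flow: the `/v1/predictions` path and the `"wan-2.7-r2v"` model name come from the answer above, while the base URL, auth header scheme, and body layout are assumptions to confirm against renderful.ai/docs:

```python
import json
import urllib.request

API_KEY = "YOUR_RENDERFUL_API_KEY"  # placeholder


def create_prediction(prompt, reference_images, resolution="720p", duration=5):
    """Build a POST request for a wan-2.7-r2v prediction.

    Endpoint path and model name are from the FAQ above; the base URL,
    Bearer auth header, and JSON body shape are assumptions.
    """
    body = json.dumps({
        "model": "wan-2.7-r2v",
        "input": {
            "prompt": prompt,
            "reference_images": reference_images,
            "resolution": resolution,
            "duration": duration,
        },
    }).encode()
    return urllib.request.Request(
        "https://api.renderful.ai/v1/predictions",  # assumed base URL
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",   # assumed header scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Send with urllib.request.urlopen(req), then poll or register a webhook
# for the finished video, as described below.
req = create_prediction("Image 1 dances on a beach", ["https://example.com/ref.jpg"])
print(req.full_url)
```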
What type of content does WAN 2.7 Reference-to-Video generate?
WAN 2.7 Reference-to-Video is a reference-to-video model by Alibaba. Key features include up to 5 reference images, up to 3 reference videos, and reference voice audio.
Is the WAN 2.7 Reference-to-Video API fast?
WAN 2.7 Reference-to-Video has medium generation speed. Results are delivered via polling or webhook callback for seamless integration.
