- Are these prompts officially endorsed by ByteDance?
- No. Seedance2Video is an independent service built on the public Seedance 2.0 API. These prompts were tested on the same API endpoint that powers our generator, but they are not authored or reviewed by ByteDance.
- Why does the same prompt produce different results each run?
- Generative video models sample randomly unless you lock the seed. To reproduce a winning take, copy the seed from the result you liked and reuse it on the next run.
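As a concrete sketch of seed reuse, here is a minimal payload builder. The field names (`prompt`, `seed`, `aspect_ratio`) are illustrative assumptions, not the documented Seedance 2.0 request schema — check the API reference for the real keys.

```python
from typing import Optional

# Hypothetical payload shape: key names are assumptions for illustration,
# not the official Seedance 2.0 schema.
def build_payload(prompt: str, seed: Optional[int] = None) -> dict:
    """Build a generation request. Pass the seed from a take you liked
    to reproduce it; leave it None to let the model sample freely."""
    payload = {"prompt": prompt, "aspect_ratio": "16:9"}
    if seed is not None:
        payload["seed"] = seed  # locks sampling so reruns match
    return payload

first = build_payload("a fox leaps over a frozen creek")
retake = build_payload("a fox leaps over a frozen creek", seed=421337)
```

The point is the workflow, not the keys: omit the seed while exploring, then pin it once a take works.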
- How do I write a Seedance prompt for a TikTok ad?
- Use a 9:16 aspect ratio and a 3–5 second duration, and front-load the prompt so the visual hook lands in the first three seconds of the clip. Lead with the subject and one strong action, then add style and lighting.
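The ordering above can be sketched as a request builder. The key names (`aspect_ratio`, `duration_seconds`) are assumptions for illustration, not the documented API schema.

```python
# Illustrative only: key names are assumptions, not the official schema.
def tiktok_ad_request(subject: str, action: str, style: str, lighting: str) -> dict:
    # Hook first: subject and one strong action lead the prompt;
    # style and lighting trail.
    return {
        "prompt": f"{subject} {action}, {style}, {lighting}",
        "aspect_ratio": "9:16",
        "duration_seconds": 4,  # within the 3-5 second range above
    }

req = tiktok_ad_request(
    "a barista in a neon-lit cafe",
    "pours latte art in one smooth motion",
    "handheld vertical ad style",
    "warm tungsten light",
)
```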
- Do Seedance prompts work for image-to-video?
- Yes, with one rule: only describe what should change. Anything you do not mention is treated as fixed from the reference image. See the image-to-video category for runnable examples.
- Where can I learn more before writing my own prompt?
- Read the 5-part formula and parameter cheatsheet at the top of this page, then start by copying the closest prompt from the library and changing one slot at a time.
- How do I prompt Seedance for slow motion?
- Two levers. First, name the speed explicitly in the action clause of the prompt — for example "in slow motion" or "moves slowly". Second, lower motion strength one notch from your usual setting. Combining the two reads more cinematic than either alone. Avoid "ultra slow motion" or "bullet time" — Seedance does not recognize those as cinematic vocabulary, and they become noise in the prompt.
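The two levers together, as a sketch — the motion-strength scale here is an assumption for illustration; use whatever range your generator exposes:

```python
# Sketch of the "two levers": the motion_strength scale (1-10) is an
# assumption, not a documented Seedance parameter range.
def slow_motion(prompt: str, motion_strength: int):
    slowed = f"{prompt}, in slow motion"    # lever 1: name the speed
    strength = max(1, motion_strength - 1)  # lever 2: drop one notch
    return slowed, strength

prompt, strength = slow_motion("a dancer spins under falling confetti", 6)
```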
- Can Seedance follow a multi-shot script in a single prompt?
- Up to about three shots, yes. Use a numbered shot list — "Shot 1: ... Shot 2: ... Shot 3: ..." — and keep the subject, lighting, and style consistent across shots so the model has anchors. Beyond three shots, generate each one separately and stitch them in your editor; longer single-prompt scripts increase the chance of identity drift between shots.
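A minimal sketch of that shot-list structure, repeating the subject and style anchors in every shot as described above (the helper and its three-shot cap are illustrative, not an official tool):

```python
# Hypothetical helper: repeats the subject and style anchors in each shot
# so the model has consistent references, per the guidance above.
def shot_script(subject: str, style: str, actions: list) -> str:
    if len(actions) > 3:
        raise ValueError("generate shots beyond three separately and stitch")
    return " ".join(
        f"Shot {i}: {subject} {action}, {style}."
        for i, action in enumerate(actions, start=1)
    )

script = shot_script(
    "a courier in a red jacket",
    "cinematic, overcast daylight",
    ["waits at a crosswalk", "weaves through traffic", "arrives at a doorway"],
)
```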
- How do I keep characters consistent across multiple Seedance clips?
- Three techniques. First, write a stable subject description block (age, hair, clothing, distinguishing feature) and reuse the exact wording across prompts. Second, lock the seed once you find a take you like and reuse it. Third, for higher fidelity, generate a key reference frame, then run image-to-video using that frame as the reference — this maintains identity better than text-only prompts.
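The first technique is the easiest to operationalize: keep the subject block as one constant and interpolate it verbatim into every prompt. A sketch:

```python
# One stable subject block, reused word-for-word across clips.
SUBJECT = ("a woman in her 30s, short silver hair, olive field jacket, "
           "small scar above the left eyebrow")

clip_a = f"{SUBJECT} reads a map under a streetlamp, cinematic"
clip_b = f"{SUBJECT} boards a night train, cinematic"
```

Any paraphrase of the block (e.g. "silver-haired woman") counts as a new description to the model, so treat it like an identifier: change it nowhere or everywhere.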
- Why is my Seedance output flickering or jittery?
- Flicker usually traces to one of three causes: motion strength too high for the scene, two contradictory style anchors fighting each other (e.g. "cinematic" plus "anime"), or two camera moves named in one prompt. Drop motion one notch, keep two style anchors max, and pick one camera move per prompt. Lock the seed and re-run to confirm the fix.
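The three checks above can be run as a quick pre-flight lint. The vocabulary lists and the motion threshold are assumptions for illustration (and the matching is naive substring search), not an official Seedance dictionary:

```python
# Illustrative lint for the three flicker causes; word lists and the
# motion threshold are assumptions, and matching is naive substring search.
CAMERA_MOVES = ["dolly-in", "orbit", "handheld", "tracking", "pan"]
STYLE_ANCHORS = ["cinematic", "anime", "watercolor", "photorealistic"]

def flicker_warnings(prompt: str, motion_strength: int) -> list:
    p = prompt.lower()
    warnings = []
    if motion_strength > 7:  # threshold is an assumption
        warnings.append("motion strength high for most scenes")
    if sum(s in p for s in STYLE_ANCHORS) > 2:
        warnings.append("more than two style anchors")
    if sum(m in p for m in CAMERA_MOVES) > 1:
        warnings.append("more than one camera move")
    return warnings
```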
- Can I copyright a video generated from a Seedance prompt?
- Copyright law on AI output varies by jurisdiction and is evolving rapidly through 2026. In the US, the Copyright Office has held that purely AI-generated work is not copyrightable; substantial human authorship in the editing or compositing process can confer copyright on the resulting work. Similar principles apply in the EU and UK. For commercial use, our paid plans grant a usage license — see the pricing page — but a commercial license is distinct from copyright ownership. Consult counsel for high-stakes uses.
- How do I prompt Seedance for a specific camera lens or shot type?
- Seedance recognizes common shot vocabulary: extreme close-up, close-up, medium close-up, medium, wide, establishing wide, low-angle, high-angle, top-down, over-the-shoulder. For lens feel, "shallow depth of field" or "macro lens" works better than "85mm lens" — the model understands optical effects more reliably than focal-length numerics. Pair the shot type with one camera move (static, slow dolly-in, handheld, orbit, tracking) for stability.
- Will these prompts work on older Seedance 1.5 Pro?
- Most will work, but with reduced quality. The 5-part formula and parameter logic transfer cleanly to Seedance 1.5 Pro, but expect a shorter maximum duration (capped at 8 seconds), no native audio, and weaker multi-shot coherence. Drop motion strength one notch when porting from 2.0 to 1.5 — the 1.5 model is more sensitive to high motion settings.
- How are these prompts tested?
- Each prompt in this library was run on the same Seedance 2.0 API endpoint that powers the generator embedded on this page. Prompts that produced unstable output across three test seeds were rewritten or replaced before publication. The library is re-tested whenever ByteDance ships a model update, with the dateModified field on the page reflecting the most recent verification pass.