Tencent’s DynamiCrafter demonstrates its narrow advantage over Stable Video Diffusion and Pika Labs

China’s Tencent, the home of WeChat, has announced a new version of its open-source video generation model, DynamiCrafter – see the project’s GitHub page. Like its rivals, it uses the diffusion method to turn captions and still images into seconds-long videos.

What is the diffusion method?

Diffusion models in machine learning are inspired by the physical phenomenon of diffusion, in which particles move from areas of high concentration to areas of low concentration. The models are trained by progressively adding noise to a dataset and then learning to reverse that process – so that, at generation time, they can turn simple noise into complex, realistic data.
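The “add noise, then learn to reverse it” idea can be sketched in a few lines. The snippet below is an illustrative toy, not DynamiCrafter’s actual code: the noise schedule, timestep count and array shapes are all hypothetical choices, and the learned denoising network is omitted – it only shows the forward (noising) process that such models are trained to invert.

```python
import numpy as np

# Hypothetical linear noise schedule over T timesteps (a common toy choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise amounts
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative fraction of signal retained

def forward_noise(x0, t, rng):
    """Sample x_t from the forward process q(x_t | x_0).

    At small t the result is mostly the original data; at large t it is
    almost pure Gaussian noise. A diffusion model is trained to predict
    (and so undo) the added noise eps.
    """
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))          # stand-in for an image
x_early, _ = forward_noise(x0, 10, rng)   # still close to the original
x_late, _ = forward_noise(x0, T - 1, rng) # almost entirely noise
```

Generation then runs this process in reverse: starting from pure noise, the trained network repeatedly estimates and removes the noise until a clean image (or, for video models, a sequence of frames) remains.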

What’s new in DynamiCrafter?

DynamiCrafter’s pixel resolution is up from 320 x 640 in October 2023 to 640 x 1024, and the team behind DynamiCrafter say their image animation technique can be applied to “more general visual content” than its competitors’. “The key idea is to utilize the motion prior of text-to-video diffusion models by incorporating the image into the generative process as guidance.

“In comparison, traditional techniques mainly focus on animating natural scenes with stochastic dynamics (e.g. clouds and fluid) or domain-specific motions (e.g. human hair or body motions).”

The demo below shows DynamiCrafter performing well – with a bit more animation – alongside Stable Video Diffusion and Pika Labs.

It’s not just Tencent that is active in this field in China, either. TikTok‘s parent ByteDance (with MagicVideo), Baidu (with UniVG) and Alibaba (whose VGen is open source) are all at it, too.

If you would like to speak to somebody about getting started with generative video for a forthcoming project, contact Colin Birch or John Rowe for an initial chat about what you might need.

Source: TechCrunch

Main image c/o ScreenRant