Recent vision–language model (VLM)–based approaches have achieved impressive results on SVG generation. However, because they generate only text and lack visual signals during decoding, they often struggle with complex semantics and fail to produce visually appealing or geometrically coherent SVGs.
We introduce DuetSVG, a unified multimodal model that jointly generates image tokens and corresponding SVG tokens in an end-to-end manner. DuetSVG is trained on both image and SVG datasets. At inference, we apply a novel test-time scaling strategy that leverages the model’s native visual predictions as guidance to improve SVG decoding quality. Extensive experiments show that DuetSVG outperforms existing methods, producing visually faithful, semantically aligned, and syntactically clean SVGs across a wide range of applications.
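To make the idea of visually guided decoding concrete, the sketch below shows one plausible best-of-N scheme: the model's own predicted raster image is used to score sampled SVG decodings. The `model.generate_image` and `model.generate_svg` helpers and the negative-MSE scoring rule are illustrative assumptions, not DuetSVG's actual test-time scaling procedure; only the rasterization uses real `cairosvg`/`PIL` APIs.

```python
import io

import numpy as np
import cairosvg
from PIL import Image


def rasterize_svg(svg_code: str, size: tuple[int, int]) -> np.ndarray:
    """Render SVG code to an RGB array (real cairosvg/PIL calls)."""
    png = cairosvg.svg2png(bytestring=svg_code.encode(),
                           output_width=size[1], output_height=size[0])
    return np.asarray(Image.open(io.BytesIO(png)).convert("RGB"))


def visually_guided_decode(model, prompt: str, n_candidates: int = 8) -> str:
    """Hypothetical best-of-N decoding guided by the model's own image prediction.

    `model.generate_image` and `model.generate_svg` are assumed stand-ins for
    DuetSVG's interfaces; the scoring rule is an illustrative choice.
    """
    predicted = model.generate_image(prompt)      # native visual prediction, HxWx3 uint8
    best_svg, best_score = None, float("-inf")
    for _ in range(n_candidates):
        svg_code = model.generate_svg(prompt)     # one sampled SVG decoding
        render = rasterize_svg(svg_code, predicted.shape[:2])
        # Prefer the candidate whose rendering best matches the predicted image.
        score = -float(np.mean((render.astype(np.float32)
                                - predicted.astype(np.float32)) ** 2))
        if score > best_score:
            best_svg, best_score = svg_code, score
    return best_svg
```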
As a unified model, DuetSVG accepts multimodal inputs, including text prompts, SVG code, and raster images.
We use the Janus-Pro text tokenizer for text prompts.
For images, an Understanding (Und.) Encoder extracts semantic features, while a Generation (Gen.) Encoder converts images into discrete visual tokens.
Two MLP aligners project the encoder outputs into the same feature space as the text embeddings.
A Generation (Gen.) head predicts image tokens, and a language modeling (LM) head predicts SVG tokens.
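The following is a minimal sketch of how these components could be wired in a single forward pass, assuming illustrative module names, dimensions, and a simplified backbone; it is not the actual DuetSVG implementation.

```python
import torch
import torch.nn as nn


class DuetSVGSketch(nn.Module):
    """Illustrative layout of the DuetSVG-style components (assumed shapes/names)."""

    def __init__(self, text_vocab=32000, img_vocab=16384, d_model=2048, und_dim=1024):
        super().__init__()
        self.text_embed = nn.Embedding(text_vocab, d_model)      # text / SVG tokens
        self.und_aligner = nn.Sequential(                        # MLP aligner for Und. encoder features
            nn.Linear(und_dim, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.gen_embed = nn.Embedding(img_vocab, d_model)        # discrete visual tokens from the Gen. encoder
        self.gen_aligner = nn.Sequential(                        # MLP aligner for visual-token embeddings
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.backbone = nn.TransformerEncoder(                   # stand-in for the autoregressive backbone
            nn.TransformerEncoderLayer(d_model, nhead=16, batch_first=True),
            num_layers=2)                                        # (causal masking omitted for brevity)
        self.lm_head = nn.Linear(d_model, text_vocab)            # predicts SVG / text tokens
        self.gen_head = nn.Linear(d_model, img_vocab)            # predicts image tokens

    def forward(self, text_ids, und_feats, gen_ids):
        # Project every modality into the shared embedding space and concatenate.
        seq = torch.cat([
            self.text_embed(text_ids),                   # (B, T_text, d_model)
            self.und_aligner(und_feats),                 # (B, T_und, d_model)
            self.gen_aligner(self.gen_embed(gen_ids)),   # (B, T_gen, d_model)
        ], dim=1)
        h = self.backbone(seq)
        return self.lm_head(h), self.gen_head(h)
```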
We begin with large-scale text-to-image (T2I) pretraining in Stage 1.
This stage strengthens the model's capacity to produce visually appealing, clean images with clear geometric primitives and flat colors.
In Stage 2, we perform supervised fine-tuning (SFT) across multiple tasks, including T2I, text-to-SVG (T2SVG), and image-to-SVG (I2SVG), under a unified next-token prediction objective with a cross-entropy loss over interleaved multimodal outputs.
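As a rough sketch of this unified objective, the snippet below applies cross-entropy over an interleaved target sequence in which each position is scored by the head matching its modality. The boolean modality mask and the equal weighting of the two terms are assumptions made for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def unified_next_token_loss(lm_logits, gen_logits, targets, is_image_token):
    """Cross-entropy over an interleaved multimodal target sequence (illustrative).

    lm_logits:      (B, T, text_vocab)  SVG/text predictions from the LM head
    gen_logits:     (B, T, image_vocab) image-token predictions from the Gen. head
    targets:        (B, T) next-token ids, each in its own modality's vocabulary
    is_image_token: (B, T) bool mask, True where the target is an image token
    """
    mask = is_image_token.reshape(-1)
    tgt = targets.reshape(-1)

    # SVG/text positions are scored by the LM head, image positions by the Gen. head.
    lm_loss = F.cross_entropy(
        lm_logits.reshape(-1, lm_logits.size(-1))[~mask], tgt[~mask])
    gen_loss = F.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1))[mask], tgt[mask])

    # Single summed objective over both modalities (equal weighting assumed).
    return lm_loss + gen_loss
```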