📝 Selected Publications
Remote Sensing Change Detection

A Novel Multi-Branch Self-Distillation Framework for Optimizing Remote Sensing Change Detection
Ziyuan Liu, Jiawei Zhang, Wenyu Wang, Yuantao Gu
- We propose the MBSD training framework, a novel and generalizable self-distillation learning paradigm designed for change detection (CD) tasks.
- The proposed framework is single-stage, end-to-end, and incurs only a modest memory and time increase during training, with no extra cost during inference.

M$^2$CD: A Unified MultiModal Framework for Optical-SAR Change Detection with Mixture of Experts and Self-Distillation
Ziyuan Liu, Jiawei Zhang, Wenyu Wang, Yuantao Gu
- We propose a unified MultiModal CD framework (M$^2$CD), which is highly versatile and robust, compatible with various backbone architectures.
- By introducing modality-specialized MoE modules into the backbone and innovatively proposing an Optical-to-SAR transition path (O2SP) for self-distillation guidance, we reduce the feature space discrepancies between different modalities and alleviate the model’s burden in processing multimodal data.
- Extensive experiments on the CAU-Flood dataset demonstrate that M$^2$CD outperforms all SOTA methods.

JL1-CD: A New Benchmark for Remote Sensing Change Detection and a Robust Multi-Teacher Knowledge Distillation Framework
Ziyuan Liu, Ruifei Zhu, Long Gao, Yuanxiu Zhou, Jingyu Ma, Yuantao Gu
- We introduce JL1-CD, a new sub-meter, all-inclusive open-source CD dataset comprising 5,000 pairs of remote sensing image patches with a resolution of 0.5–0.75 meters.
- We propose a multi-teacher knowledge distillation (MTKD) framework, which significantly improves the performance of CD models with various network architectures and parameter sizes, without any additional computational or time cost during inference.
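The multi-teacher distillation idea above can be illustrated with a minimal, self-contained sketch (function names and the uniform teacher weighting are illustrative assumptions, not the paper's actual implementation): the student's softened predictions are pulled toward a weighted blend of several teachers' softened outputs, so only the student is needed at inference time.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def multi_teacher_kd_loss(student_logits, teachers_logits, weights, temperature=2.0):
    """Distillation loss: KL between the weighted blend of the teachers'
    softened distributions and the student's softened distribution."""
    student_probs = softmax(student_logits, temperature)
    teacher_dists = [softmax(t, temperature) for t in teachers_logits]
    # Blend the teachers according to their weights (e.g. per-sample quality).
    blended = [sum(w * d[i] for w, d in zip(weights, teacher_dists))
               for i in range(len(student_probs))]
    return kl_divergence(blended, student_probs)
```

The loss vanishes when the student already matches the blended teacher distribution, and in training it would typically be added to the ordinary supervised CD loss.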

SAR Image Processing

DSRKD: Joint Despeckling and Super-Resolution of SAR Images via Knowledge Distillation
Ziyuan Liu, Shaoping Wang, Ying Li, Yuantao Gu, Quan Yu
- We propose the DSRKD network, whose encoder–upscaling–decoder architecture supports SR at multiple resolution scales. The encoder and decoder can be swapped for more complex modules, and different distillation methods can be added easily, making DSRKD a simple and efficient baseline for despeckling SR.
- We innovatively combine the SR of speckled images with denoising using knowledge distillation (KD), significantly improving the performance of the model without introducing any additional computational overhead during inference.

SAR Image Compression With Inherent Denoising Capability Through Knowledge Distillation
Ziyuan Liu, Shaoping Wang, Yuantao Gu
- We innovatively utilize a KD mechanism to combine compression with denoising for speckled images. Our method exhibits superior performance compared to other methods on both synthetic and SAR datasets.
- The distillation mechanism is applicable to various learned image compression algorithms and consistently enhances their performance.
- This enhancement is achieved without introducing extra complexity in terms of computational time and memory utilization during inference.

Others
- [Diffusion Model] Improving Diffusion-based Inverse Algorithms under Few-Step Constraint via Learnable Linear Extrapolation, Jiawei Zhang, Ziyuan Liu, Leon Yan, Gen Li, Yuantao Gu. arXiv 2025.
- [Open Source] Open-CD: A Comprehensive Toolbox for Change Detection, Kaiyu Li, …, Ziyuan Liu, …. ACM MM 2025.