The code repository for "Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning" (CVPR 2025) in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:
```bibtex
@inproceedings{zhou2025duct,
    title={Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning},
    author={Da-Wei Zhou and Zi-Wen Cai and Han-Jia Ye and Lijun Zhang and De-Chuan Zhan},
    booktitle={CVPR},
    year={2025},
}
```
- [02/2025] Code has been released.
- [02/2025] Accepted to CVPR 2025.
- [10/2024] arXiv paper has been released.
Domain-Incremental Learning (DIL) involves the progressive adaptation of a model to new concepts across different domains. While recent advances in pre-trained models provide a solid foundation for DIL, learning new concepts often results in the catastrophic forgetting of pre-trained knowledge. Specifically, sequential model updates can overwrite both the representation and the classifier with knowledge from the latest domain. Thus, it is crucial to develop a representation and corresponding classifier that accommodate all seen domains throughout the learning process. To this end, we propose DUal ConsolidaTion (DUCT) to unify and consolidate historical knowledge at both the representation and classifier levels. By merging the backbone of different stages, we create a representation space suitable for multiple domains incrementally. The merged representation serves as a balanced intermediary that captures task-specific features from all seen domains. Additionally, to address the mismatch between consolidated embeddings and the classifier, we introduce an extra classifier consolidation process. Leveraging class-wise semantic information, we estimate the classifier weights of old domains within the latest embedding space. By merging historical and estimated classifiers, we align them with the consolidated embedding space, facilitating incremental classification. Extensive experimental results on four benchmark datasets demonstrate DUCT's state-of-the-art performance.
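For intuition, below is a minimal sketch of the representation-consolidation step, assuming "merging the backbone of different stages" means weighted parameter averaging; the function name, uniform default weights, and the averaging of every state-dict entry are illustrative assumptions, and the paper's actual merging scheme may differ.

```python
import copy

import torch


def merge_models(models, weights=None):
    """Merge per-stage models by weighted parameter averaging.

    A sketch only: uniform weights are assumed by default, and every
    entry of the state dict (including integer buffers such as
    BatchNorm counters) is averaged the same way.
    """
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    states = [m.state_dict() for m in models]
    merged = copy.deepcopy(models[0])
    with torch.no_grad():
        new_state = {
            name: sum(w * s[name].float() for w, s in zip(weights, states))
            for name in states[0]
        }
    # load_state_dict casts each averaged tensor back to its original dtype.
    merged.load_state_dict(new_state)
    return merged
```

The classifier-consolidation step plays the complementary role: classifier weights of old domains are re-estimated in the merged embedding space using class-wise semantic information and merged with the historical classifiers, so the classifier matches the consolidated representation; see the paper for details.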
You can create a conda environment and run the following command to install the dependencies.
```bash
conda install --file requirements.txt
```
Four datasets are involved in the paper: CDDB, CORe50, DomainNet, and Office-Home. Follow the guideline below to prepare them for reproduction.
- Download the datasets manually from the recommended sources.
- Check that each dataset has been downloaded properly (a sanity-check sketch is provided after this list). The dataset directory is expected to have the following structure:
```
CDDB
├── biggan
├── crn
├── ...
└── wild

CORe50
├── s1
├── ...
├── s11
├── labels.pkl
├── LUP.pkl
└── paths.pkl

DomainNet
├── clipart
├── ...
├── sketch
├── clipart_test.txt
├── clipart_train.txt
└── ...

OfficeHome
├── Art
├── Clipart
├── Product
└── Real_World
```
- Specify the dataset path in the config file under the `configs` directory (a small patching sketch also follows this list): `"data_path": "$Your/Dataset/Path/Here$"`
To reproduce the results, run the following command (the template config shown targets DomainNet):

```bash
python main.py --config configs/Template_domainnet_duct.json
```
The main results are listed below.
This repo is based on S-Prompts and PILOT.
If you have any questions, please contact me via email or open an issue.