LiNeS: Post-Training Layer Scaling Prevents Forgetting and Enhances Model Merging
Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences
Localizing Task Information for Improved Model Merging and Compression