Johns Hopkins University
Low-Rank Adapter (LoRA) weight matrices share the same universal subspace.
We can utilize publicly available pretrained LoRAs to find this principal subspace.
Efficient Training and Inference for Diverse Tasks and Modalities
A single EigenLoRAx (bottom) can replace multiple LoRAs (top), even for complex and multimodal tasks like text-to-image generation, without any significant drop in performance. This leads to significant training (100x) and memory (18x) savings.
4 LoRAs vs. 1 EigenLoRAx comparison.
We introduce EigenLoRAx, a parameter-efficient finetuning method that recycles existing adapters to create a principal subspace aligned with their shared domain knowledge, which can be further augmented with orthogonal basis vectors in low-resource scenarios. This enables rapid adaptation to new tasks by learning only lightweight coefficients on the principal components of the subspace, eliminating the need to finetune entire adapters. EigenLoRAx requires significantly fewer parameters and less memory, improving efficiency for both training and inference. Our method demonstrates strong performance across diverse domains and tasks, offering a scalable solution for edge-based applications, personalization, and equitable deployment of large models in resource-constrained environments.
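As a concrete illustration of the adaptation step, here is a minimal sketch (not the released code): the principal components are kept frozen and only a small vector of per-layer coefficients is trained. The class name, tensor shapes, and the `principal_components` argument are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' implementation): adapt to a new
# task by learning only a coefficient vector over a fixed principal subspace.
import torch
import torch.nn as nn

class EigenLoRAxLayer(nn.Module):
    def __init__(self, principal_components: torch.Tensor):
        # principal_components: (k, d_out, d_in) frozen basis of weight updates.
        super().__init__()
        self.register_buffer("components", principal_components)
        self.coeffs = nn.Parameter(torch.zeros(principal_components.shape[0]))

    def delta_weight(self) -> torch.Tensor:
        # The weight update is a learned linear combination of the fixed components.
        return torch.einsum("k,koi->oi", self.coeffs, self.components)

    def forward(self, x: torch.Tensor, base_weight: torch.Tensor) -> torch.Tensor:
        # y = x @ (W + delta_W)^T, with the pretrained base weight kept frozen.
        return x @ (base_weight + self.delta_weight()).T

# Only k scalars per layer are trained, instead of a full low-rank adapter.
k, d_out, d_in = 16, 64, 64
layer = EigenLoRAxLayer(torch.randn(k, d_out, d_in))
y = layer(torch.randn(8, d_in), base_weight=torch.randn(d_out, d_in))
```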
Recycle Pretrained LoRAs for Increased Efficiency
Low-Rank Adapters (LoRA) use low-rank matrices for task-specific finetuning. We observe that LoRA adapters share a principal subspace across task domains. By recycling pretrained adapters, we extract task-invariant principal components, enabling efficient representation of both existing and future LoRAs using compact task-specific coefficients. This improves training speed, parameter efficiency, and memory usage. In low-resource settings, where pretrained adapters are scarce, we augment the subspace with randomly initialized components, orthogonalized via the Gram-Schmidt process so that they complement the extracted subspace without redundancy.
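The sketch below illustrates this pipeline under assumed shapes and names (`lora_updates`, `k`, `n_extra`): the principal subspace is extracted from stacked pretrained LoRA updates via SVD, then padded with Gram-Schmidt-orthogonalized random directions for the low-resource case. It is a simplified illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the released code): extract a shared principal
# subspace from pretrained LoRA updates of one layer, then augment it with random
# directions made orthogonal to the extracted components via Gram-Schmidt.
import torch

def principal_subspace(lora_updates: torch.Tensor, k: int) -> torch.Tensor:
    # lora_updates: (n_adapters, d_out, d_in) stacked low-rank weight updates.
    n, d_out, d_in = lora_updates.shape
    flat = lora_updates.reshape(n, -1)                  # each update as one vector
    _, _, vh = torch.linalg.svd(flat, full_matrices=False)
    return vh[:k].reshape(k, d_out, d_in)               # top-k principal components

def augment_with_orthogonal(components: torch.Tensor, n_extra: int) -> torch.Tensor:
    # Add random directions orthogonal to the extracted subspace (Gram-Schmidt).
    k, d_out, d_in = components.shape
    basis = [components[i].reshape(-1) for i in range(k)]
    extras = []
    while len(extras) < n_extra:
        v = torch.randn(d_out * d_in)
        for b in basis + extras:
            v = v - (v @ b) / (b @ b) * b               # remove projection onto b
        if v.norm() > 1e-6:                             # keep only non-degenerate draws
            extras.append(v / v.norm())
    return torch.cat([components, torch.stack(extras).reshape(n_extra, d_out, d_in)])

# Example: 500 pretrained adapters compressed to 16 shared components, plus 4 extras.
updates = torch.randn(500, 32, 32)
components = augment_with_orthogonal(principal_subspace(updates, k=16), n_extra=4)
```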
LoRAs Share the Same Principal Subspace!
The principal subspace of 500 LoRAs trained on separate tasks using the Mistral-7B-Instruct model. Most of the information is stored in the top 16 principal components.
Up to 18x Less Memory Needed

Replace Multiple LoRAs with a Single EigenLoRAx

Up to 2x Faster Training

Works Even in a Sparse-Domain Setup
Up to 100x Fewer Trainable Parameters!
GLUE benchmark results. In all cases, higher values indicate better performance.
@misc{kaushik2025eigenloraxrecyclingadaptersprincipal,
      title={EigenLoRAx: Recycling Adapters to Find Principal Subspaces for Resource-Efficient Adaptation and Inference}, 
      author={Prakhar Kaushik and Ankit Vaidya and Shravan Chaudhari and Alan Yuille},
      year={2025},
      eprint={2502.04700},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.04700}, 
}