Speaker
Grigorii Davydenko
(Moscow Institute of Physics & Technology (MIPT))
Description
Language models have become central to many AI applications, and effective fine-tuning
is essential to adapt them to specific tasks. Traditional methods such as Low-Rank
Adaptation (LoRA) add fixed-rank adapters to all layers, which is often memory-inefficient
because layer selection and rank allocation are not optimal. We propose SimplexLoRA, a
novel fine-tuning framework that adaptively scales adapter ranks using simplex-constrained
weighting, improving both memory usage and performance.
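The abstract does not spell out how the simplex-constrained weighting is realized, so the sketch below is only an illustration of the general idea, not the authors' method: per-layer scores are mapped onto the probability simplex via a softmax, a total rank budget is split across layers in proportion to those weights, and each layer receives a LoRA adapter of the resulting rank. The names `allocate_ranks`, `LoRALinear`, the softmax-based allocation rule, and all numeric values are assumptions made for this example.

```python
# Hypothetical sketch of simplex-constrained rank allocation for LoRA adapters.
# The allocation rule and module names are illustrative assumptions, not SimplexLoRA itself.
import torch
import torch.nn as nn


def allocate_ranks(layer_scores: torch.Tensor, total_rank_budget: int) -> list[int]:
    """Map per-layer scores onto the probability simplex and split a rank budget accordingly."""
    weights = torch.softmax(layer_scores, dim=0)       # weights >= 0 and sum to 1 (simplex constraint)
    raw = weights * total_rank_budget                  # continuous per-layer allocation
    return torch.clamp(raw.round().long(), min=0).tolist()


class LoRALinear(nn.Module):
    """A frozen linear layer with an additive low-rank adapter of layer-specific rank."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # freeze the pretrained weights
        self.rank = rank
        if rank > 0:
            self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scaling = alpha / rank
        else:                                          # rank 0: this layer gets no adapter
            self.lora_A = self.lora_B = None
            self.scaling = 0.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        if self.rank > 0:
            out = out + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
        return out


if __name__ == "__main__":
    # Example: 4 layers with (assumed) importance scores and a total adapter rank budget of 32.
    scores = torch.tensor([0.1, 2.0, 0.5, 1.2])
    ranks = allocate_ranks(scores, total_rank_budget=32)
    layers = [LoRALinear(nn.Linear(64, 64), r) for r in ranks]
    print(ranks)  # layers with higher scores receive larger adapter ranks
```

In this toy allocation, a layer whose weight rounds to zero simply keeps its frozen pretrained weights, which is one way adaptive rank scaling can save memory relative to giving every layer the same fixed rank.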
Primary authors
Grigorii Davydenko
(Moscow Institute of Physics & Technology (MIPT))
Igor Shalygin
(Moscow Institute of Physics & Technology (MIPT))
Co-author
Andrey Veprikov
(Moscow Institute of Physics & Technology (MIPT))