# Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration

[](https://scorecard.dev/viewer/?uri=github.com/IntelLabs/C_PEAL)
<!-- UNCOMMENT AS NEEDED
[](https://github.com/IntelLabs/ConvAssist/actions/workflows/run_unittests.yaml)
[](https://pytorch.org/get-started/locally/)

-->

This repository will host the code for the paper **"Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration"**.

Stay tuned! The code will be released soon.

## Abstract
Active Learning (AL) has emerged as a powerful approach for minimizing labeling costs by selectively sampling the most informative data for neural network model development. Effective AL for large-scale vision-language models requires addressing challenges in uncertainty estimation and efficient sampling, given the vast number of parameters involved. In this work, we introduce a novel parameter-efficient learning methodology that incorporates an uncertainty calibration loss within the AL framework. We propose a differentiable loss function that promotes uncertainty calibration for effectively selecting fewer, more informative data samples for fine-tuning. Through extensive experiments across several datasets and vision backbones, we demonstrate that our solution can match and exceed the performance of complex feature-based sampling techniques while being computationally very efficient. Additionally, we investigate the efficacy of prompt learning versus low-rank adaptation (LoRA) in sample selection, providing a detailed comparative analysis of these methods in the context of efficient AL.
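
Until the code is released, the general idea can be illustrated with a minimal sketch: a cross-entropy objective augmented with a simple calibration penalty, paired with entropy-based sample selection. The function names `calibrated_loss` and `select_most_uncertain`, the penalty form, and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def calibrated_loss(logits, labels, lam=0.5):
    """Cross-entropy plus a toy calibration penalty (illustrative only).

    The penalty is the gap between mean confidence and mean accuracy;
    in an autograd framework the analogous expression (with a soft
    accuracy surrogate) can be made differentiable end to end.
    """
    probs = softmax(logits)
    n = labels.shape[0]
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    confidence = probs.max(axis=1)          # per-sample max probability
    accuracy = (probs.argmax(axis=1) == labels).mean()
    penalty = abs(confidence.mean() - accuracy)
    return ce + lam * penalty

def select_most_uncertain(logits, k):
    """Entropy-based acquisition: indices of the k samples the current
    model is least certain about, as candidates for labeling."""
    probs = softmax(logits)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]
```

A better-calibrated model makes the entropy scores above a more trustworthy acquisition signal, which is the motivation for coupling the calibration penalty with the sampling step.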

## Citation
If you find this work useful, please consider citing our previous work:

```bibtex
@article{narayanan2024parameter,
  title={Parameter-Efficient Active Learning for Foundational models},
  author={Narayanan, Athmanarayanan Lakshmi and Krishnan, Ranganath and Machireddy, Amrutha and Subedar, Mahesh},
  journal={arXiv preprint arXiv:2406.09296},
  year={2024}
}
```

## License
Details about the license will be provided upon release.