
Commit 2da3462

Update README.md
1 parent b58cf63 commit 2da3462

1 file changed: README.md (24 additions & 3 deletions)
@@ -1,8 +1,29 @@
-# il-opensource-template
-![GitHub License](https://img.shields.io/github/license/IntelLabs/il-opensource-template)
-[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/IntelLabs/il-opensource-template/badge)](https://scorecard.dev/viewer/?uri=github.com/IntelLabs/il-opensource-template)
+# Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration
+![GitHub License](https://img.shields.io/github/license/IntelLabs/C_PEAL)
+[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/IntelLabs/C_PEAL/badge)](https://scorecard.dev/viewer/?uri=github.com/IntelLabs/C_PEAL)
 <!-- UNCOMMENT AS NEEDED
 [![Unit Tests](https://github.com/IntelLabs/ConvAssist/actions/workflows/run_unittests.yaml/badge.svg?branch=covassist-cleanup)](https://github.com/IntelLabs/ConvAssist/actions/workflows/run_unittests.yaml)
 [![pytorch](https://img.shields.io/badge/PyTorch-v2.4.1-green?logo=pytorch)](https://pytorch.org/get-started/locally/)
 ![python-support](https://img.shields.io/badge/Python-3.12-3?logo=python)
 -->
+
+This repository will host the code for the paper **"Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration"**.
+
+Stay tuned! The code will be released soon.
+
+## Abstract
+Active Learning (AL) has emerged as a powerful approach for minimizing labeling costs by selectively sampling the most informative data for neural network model development. Effective AL for large-scale vision-language models requires addressing challenges in uncertainty estimation and efficient sampling, given the vast number of parameters involved. In this work, we introduce a novel parameter-efficient learning methodology that incorporates an uncertainty calibration loss within the AL framework. We propose a differentiable loss function that promotes uncertainty calibration, enabling the selection of fewer, more informative data samples for fine-tuning. Through extensive experiments across several datasets and vision backbones, we demonstrate that our solution can match or exceed the performance of complex feature-based sampling techniques while being computationally far more efficient. Additionally, we investigate the efficacy of prompt learning versus low-rank adaptation (LoRA) in sample selection, providing a detailed comparative analysis of these methods in the context of efficient AL.
+
+## Citation
+If you find this work useful, please consider citing our previous work:
+
+```
+@article{narayanan2024parameter,
+  title={Parameter-Efficient Active Learning for Foundational models},
+  author={Narayanan, Athmanarayanan Lakshmi and Krishnan, Ranganath and Machireddy, Amrutha and Subedar, Mahesh},
+  journal={arXiv preprint arXiv:2406.09296},
+  year={2024}
+}
+```
+
+## License
+Details about the license will be provided upon release.
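
Since the repository's code has not been released yet, the snippet below is only a minimal, speculative sketch of what a differentiable uncertainty-calibration penalty of the kind described in the abstract might look like in PyTorch, loosely modeled on accuracy-versus-uncertainty (AvU) objectives from the prior literature. The function name `avu_calibration_loss`, the entropy-based uncertainty score, and the fixed threshold are illustrative assumptions, not the authors' implementation.

```python
# Speculative sketch only -- NOT the released C_PEAL implementation.
import torch
import torch.nn.functional as F

def avu_calibration_loss(logits: torch.Tensor,
                         labels: torch.Tensor,
                         unc_threshold: float = 0.5) -> torch.Tensor:
    """Penalize accurate-but-uncertain and inaccurate-but-certain predictions."""
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)  # confidence = top-1 probability
    # Normalized predictive entropy in [0, 1] serves as the uncertainty score.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    unc = entropy / torch.log(torch.tensor(float(logits.shape[-1])))

    accurate = (preds == labels).float()
    certain = (unc <= unc_threshold).float()

    # Soft counts for the four accuracy-versus-uncertainty outcomes.
    n_ac = (accurate * certain * conf * (1 - torch.tanh(unc))).sum()
    n_au = (accurate * (1 - certain) * conf * torch.tanh(unc)).sum()
    n_ic = ((1 - accurate) * certain * (1 - conf) * (1 - torch.tanh(unc))).sum()
    n_iu = ((1 - accurate) * (1 - certain) * (1 - conf) * torch.tanh(unc)).sum()

    # The ratio shrinks as mass concentrates in the two well-calibrated
    # cells: accurate-and-certain, inaccurate-and-uncertain.
    return torch.log1p((n_au + n_ic) / (n_ac + n_iu + 1e-12))
```

In an active-learning loop, such a penalty might be added to the task loss during parameter-efficient fine-tuning (e.g., `loss = F.cross_entropy(logits, labels) + beta * avu_calibration_loss(logits, labels)`), with the same uncertainty score reused to rank unlabeled samples for annotation; both choices are assumptions for illustration, not details confirmed by the paper.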
