This repository contains an implementation of an unsupervised domain adaptation (UDA) technique, developed as the practical part of the course "Machine Learning Module II" (Deep Learning).
For reference, this paper has been taken into consideration.
In this assignment we are asked to build, train, and evaluate a deep model able to counteract the negative impact of a domain shift, i.e. the situation in which the training and test sets have different distributions. This can be seen as a simulation of a realistic scenario where only the training dataset is available.
As in the majority of UDA frameworks, the quality of our technique is assessed by measuring the gain over a simple baseline (here, a ResNet-34 is used as the reference).
In this work, we had two different datasets at our disposal (see the Dataset section for details) and we were expected to test our technique in both directions: once taking the first dataset as source and the second as target, and vice versa. This allows us to assess whether the model is biased toward a specific direction.
The basic object recognition task consists in correctly predicting, given an input image depicting an object, the label associated with that object. In our setting things are slightly different, since we consider a domain shift: the model is trained in a supervised fashion, exploiting the ground-truth labels on the source domain, and tested on the target domain without explicitly using its labels (labels in the target domain are used only to compute the cumulative accuracy).
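The evaluation protocol above can be sketched as follows; `predict_fn` and `target_set` are illustrative names (not from the notebook), and the key point is that target labels appear only in the scoring step:

```python
# Target labels are never seen during training; they are used here
# exclusively to compute the cumulative accuracy on the target domain.
def cumulative_accuracy(predict_fn, target_set):
    correct = 0
    for image, label in target_set:   # label used ONLY for scoring
        correct += int(predict_fn(image) == label)
    return correct / len(target_set)

# Toy usage with a dummy predictor over (image, label) pairs:
toy_set = [("img_a", 0), ("img_b", 1), ("img_c", 1)]
acc = cumulative_accuracy(lambda img: 1, toy_set)  # always predicts class 1
```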
Briefly, we were required to:
- assess the performance of a simple model to use as a reference (baseline)
- implement a more sophisticated model to achieve a gain in performance
- test the baseline model exploiting also the labels of the test set, to obtain an upper-bound reference
All three steps need to be performed using both datasets, in turn, as training and test sets.
The dataset used in this assignment is the Adaptiope object recognition dataset.
This dataset includes images from three domains (product, real life, and synthetic).
More in detail, for the sake of this assignment
In this repository, you can find the adopted solution in the 232088_229709.ipynb notebook.
| Model | Test Accuracy |
|---|---|
| Baseline | |
| Upper Bound | |
| SymNet | |
| Model | Test Accuracy |
|---|---|
| Baseline | |
| Upper Bound | |
| SymNet | |
| Model | Average Test Accuracy | |
|---|---|---|
| Baseline | | |
| Upper Bound | | |
| SymNet | | |
Since the previous table showcases results obtained using a lower number of epochs in the different training steps (
| Model | Average Test Accuracy | |
|---|---|---|
| Baseline | | |
| Upper Bound | | |
| SymNet | | |
The gain we obtain by using SymNet is
| Model | Average Test Accuracy | |
|---|---|---|
| Baseline | | |
| Upper Bound | | |
| SymNet | | |
The gain we obtain by using SymNet is
Across the multiple runs, our implementation, besides showing an improvement in accuracy, proves to be more stable as well.
To further confirm the results obtained, a t-test has been performed, which shows a p-value of
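Such a significance check can be sketched with a two-sample t-test on the per-run accuracies of the two models; the numbers below are toy placeholders, not the assignment's actual results:

```python
from scipy import stats

# Toy per-run target accuracies (placeholders, NOT the measured results):
baseline_accs = [0.61, 0.63, 0.60, 0.62, 0.61]
symnet_accs = [0.68, 0.69, 0.67, 0.70, 0.68]

# Two-sample t-test: a small p-value indicates that the difference in mean
# accuracy between the two models is unlikely to be due to chance.
t_stat, p_value = stats.ttest_ind(baseline_accs, symnet_accs)
```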