- Create a virtual environment

```bash
conda create -n TAGSAM python=3.11
conda activate TAGSAM
```

- Install PyTorch manually

```bash
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
pip install torch_geometric
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
```

- Install requirements

```bash
pip install -r requirements.txt
```

- Login to wandb
```bash
wandb login
```

All results will be logged to wandb. If you do not want to use wandb, you can set `WANDB_MODE` to `disabled` in the config file, or export it as an environment variable:

```bash
export WANDB_MODE=disabled
```

You first need to train a teacher/expert model on the original TAG; this process is generally referred to as the buffer.
```bash
python buffer.py --dataset_name computer
```

Then you can condense/distill the TAG into a smaller one.

```bash
python distill.py --dataset_name computer --syn_size 200
```

You can automatically perform asynchronous evaluation during the condensation process if you set `async_eval` to `True`.
Note that you need to make sure the `eval_gpu` setting is correct; otherwise, you may run into issues such as the GPU not being available, reduced efficiency, or memory overflow (when `gpu` and `eval_gpu` refer to the same device).
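If asynchronous evaluation is driven by the config file, the relevant entries might look like the sketch below. The key names and layout are assumptions for illustration; check the repo's actual config file for the exact spelling.

```yaml
# Hypothetical config fragment -- verify key names against the repo's config
async_eval: true   # evaluate asynchronously during condensation
gpu: 0             # device used for condensation
eval_gpu: 1        # device used for evaluation; keep it different from `gpu`
                   # to avoid memory overflow on a shared device
```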
Additionally, you can also manually trigger the evaluation if needed.

```bash
python eval.py --dataset_name computer --syn_size 200
```
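Since the same-device overflow noted above is easy to hit, a launch script could validate the device settings up front. The sketch below is a hypothetical helper (not part of the repo) that mirrors the caution about `gpu` and `eval_gpu`:

```python
def check_eval_gpu(gpu: int, eval_gpu: int, n_devices: int) -> None:
    """Validate the eval_gpu setting before launching async evaluation.

    Hypothetical helper; argument names are assumptions, not repo API.
    """
    # Catch an eval_gpu index that points at a GPU the machine does not have.
    if not 0 <= eval_gpu < n_devices:
        raise ValueError(
            f"eval_gpu={eval_gpu} is not an available device (only {n_devices} found)"
        )
    # Catch the shared-device case the README warns about (memory overflow).
    if eval_gpu == gpu:
        raise ValueError(
            "gpu and eval_gpu are the same device; this can cause memory overflow"
        )

# Example: condense on GPU 0, evaluate on GPU 1, with 2 devices present.
check_eval_gpu(gpu=0, eval_gpu=1, n_devices=2)  # passes silently
```

In a real launcher, `n_devices` could come from `torch.cuda.device_count()` before the training process starts.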