# TAGSAM

## Preparation

1. Create a virtual environment:

   ```bash
   conda create -n TAGSAM python=3.11
   conda activate TAGSAM
   ```

2. Install PyTorch manually:

   ```bash
   pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
   pip install torch_geometric
   pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.4.1+cu121.html
   ```

3. Install the remaining requirements:

   ```bash
   pip install -r requirements.txt
   ```

4. Log in to wandb:

   ```bash
   wandb login
   ```

All results will be logged to wandb. If you do not want to use wandb, you can disable it by setting the `WANDB_MODE` environment variable:

```bash
export WANDB_MODE=disabled
```

## Pre-process

You first need to train a teacher/expert model on the original TAG; this step is generally referred to as building the buffer.

```bash
python buffer.py --dataset_name computer
```

## Condensation

You can then condense/distill the TAG into a smaller synthetic one:

```bash
python distill.py --dataset_name computer --syn_size 200
```

## Evaluation

Evaluation runs automatically and asynchronously during the condensation process if you set `async_eval` to `True`.

Note that you need to make sure the `eval_gpu` setting is correct; otherwise you may run into issues such as the GPU being unavailable, reduced efficiency, or memory overflow (when `gpu` and `eval_gpu` refer to the same device).
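For illustration, the settings above might be combined like this in a config file. This is a hypothetical sketch: only `async_eval`, `gpu`, and `eval_gpu` are named in this README, and the actual config format and file layout depend on the repository.

```yaml
# Hypothetical config sketch (key names beyond async_eval/gpu/eval_gpu are assumptions)
async_eval: true   # evaluate synthetic graphs while condensation is still running
gpu: 0             # device used for condensation/distillation
eval_gpu: 1        # should differ from `gpu` to avoid contention and memory overflow
```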

Additionally, you can trigger the evaluation manually if needed:

```bash
python eval.py --dataset_name computer --syn_size 200
```
