Unofficial PyTorch implementation of Spatial Sensitive Grad-CAM (SSGrad-CAM) and Spatial Sensitive Grad-CAM++ (SSGrad-CAM++) for generating instance-specific heatmaps for object detectors.
This repository provides minimal reference code to generate heatmaps.
Our implementation heavily references the VX-CODE implementation and builds on Detectron2. For DETR, we additionally refer to the official DETR repository.
- `generate_heatmap.py`: CLI to run SSGrad-CAM / SSGrad-CAM++ and save heatmaps
- Sample runner scripts for Faster R-CNN and DETR (`run_on_faster.sh`, `run_on_detr.sh`)
- Templates for configs (`configs/`), model weights (`models/`), and dataset references (`datasets/`)
- Helper script to download experiment weights: `weights_dl.sh`
- Helper script to download MS-COCO 2017 data: `coco_dl.sh`
If the detectron2 submodule was not fetched at clone time, initialize/update it first:

```shell
git submodule update --init --recursive
```
- Set up a Python environment and install dependencies (PyTorch, etc.).
- Download weights with `bash weights_dl.sh` and place them under `models/` if needed. (The Google Drive files are the official weights provided by the original repositories.)
- Download MS-COCO data (e.g., 2017 val images/annotations) into `datasets/` with the expected detectron2 structure. You can use `bash coco_dl.sh` to fetch val2017 + annotations by default. If COCO is not available, the scripts can still run on the sample images in `datasets/test_imgs`.
- Run examples: `bash run_on_faster.sh` or `bash run_on_detr.sh`. Inside each script, switch `METHOD`/`OUTPUTDIR` between `ssgradcam` and `ssgradcampp` to choose SSGrad-CAM vs. SSGrad-CAM++. Output directories are set inside each script via `--output-dir`.
- For direct CLI usage, check options with `python generate_heatmap.py --help`.
Example heatmaps generated by running generate_heatmap.py. Warmer colors highlight regions that contribute most to the detector’s prediction.
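As a generic illustration of how such overlays are produced (a sketch only, not the repository's actual rendering code in `generate_heatmap.py`, whose colormap and blending may differ): a CAM-style activation map is typically normalized to [0, 1], passed through a colormap so that high activations become warm colors, and alpha-blended onto the input image.

```python
import numpy as np
from matplotlib import colormaps


def overlay_heatmap(image, cam, alpha=0.5):
    """Blend a CAM activation map onto an RGB image of shape (H, W, 3) in [0, 1].

    Generic sketch of the common CAM overlay recipe; the function name and
    parameters are illustrative, not part of this repository's API.
    """
    # Normalize activations to [0, 1] so the colormap covers the full range.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Map activations to RGBA, then drop alpha; warm colors = strong contribution.
    colored = colormaps["jet"](cam)[..., :3]
    # Alpha-blend the colored map over the original image.
    return (1 - alpha) * image + alpha * colored


# Toy example: a 4x4 activation gradient over a black image.
img = np.zeros((4, 4, 3))
cam = np.arange(16, dtype=float).reshape(4, 4)
overlay = overlay_heatmap(img, cam)  # shape (4, 4, 3), values stay in [0, 1]
```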
For implementing SSGrad-CAM, we refer to the following ICIP paper and its extended journal version published in IEEE Access.
@INPROCEEDINGS{9897350,
author={Yamauchi, Toshinori and Ishikawa, Masayoshi},
booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
title={Spatial Sensitive GRAD-CAM: Visual Explanations for Object Detection by Incorporating Spatial Sensitivity},
year={2022},
pages={256-260},
doi={10.1109/ICIP46576.2022.9897350}}
@ARTICLE{11270886,
author={Yamauchi, Toshinori},
journal={IEEE Access},
title={Spatial Sensitive Grad-CAM: Toward Instance-Specific Explanations for Object Detectors by Incorporating Spatial Sensitivity},
year={2025},
volume={13},
pages={202086-202102},
doi={10.1109/ACCESS.2025.3637887}}
For implementing SSGrad-CAM++, we refer to the following CVPR workshop paper and its extended journal version published in CVIU.
@InProceedings{Yamauchi_2024_CVPR,
author = {Yamauchi, Toshinori},
title = {Spatial Sensitive Grad-CAM++: Improved Visual Explanation for Object Detectors via Weighted Combination of Gradient Map},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2024},
pages = {8164-8168}
}
@article{YAMAUCHI2026104658,
title = {Spatial Sensitive Grad-CAM++: Towards High-Quality Visual Explanations for Object Detectors via Weighted Combination of Gradient Maps},
journal = {Computer Vision and Image Understanding},
volume = {264},
pages = {104658},
year = {2026},
issn = {1077-3142},
doi = {10.1016/j.cviu.2026.104658},
url = {https://www.sciencedirect.com/science/article/pii/S1077314226000251},
author = {Toshinori Yamauchi},
}
- ICIP 2022: https://ieeexplore.ieee.org/document/9897350
- IEEE Access (extended version): https://ieeexplore.ieee.org/document/11270886
