 
 Here are the three Markdown tables, each showing only the models already supported in each category:
 
 Below are three Markdown tables that list only the supported models. Each table includes a column for the model name and a column for its reference. Since specific references were not provided, a placeholder "[Reference unknown]" is used.
 
 1. **Predictive Models**: Done: GREA, SGIR, IRM, GIN/GCN w/ virtual, DIR. TODO: SMILES-based LSTM/Transformers, more
 2. **Generative Models**: Done: Graph DiT, GraphGA, DiGress. TODO: GDSS, more
 3. **Representation Models**: Done: MoAMa, AttrMasking, ContextPred, EdgePred. Many pretrained models from HF. TODO: checkpoints, more
 
-### Predictive Models
-
-| Model | Reference |
-|----------------------|---------------------|
-| SGIR | [Semi-Supervised Graph Imbalanced Regression. KDD 2023](https://dl.acm.org/doi/10.1145/3580305.3599497) |
-| GREA | [Graph Rationalization with Environment-based Augmentations. KDD 2022](https://dl.acm.org/doi/abs/10.1145/3534678.3539347) |
-| DIR | [Discovering Invariant Rationales for Graph Neural Networks. ICLR 2022](https://arxiv.org/abs/2201.12872) |
-| SSR | [SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks. NeurIPS 2022](https://arxiv.org/abs/2206.07096) |
-| IRM | [Invariant Risk Minimization](https://arxiv.org/abs/1907.02893) |
-| RPGNN | [Relational Pooling for Graph Representations. ICLR 2019](https://arxiv.org/abs/1903.02541) |
-| GNNs | [Graph Convolutional Networks. ICLR 2017](https://arxiv.org/abs/1609.02907) and [Graph Isomorphism Network. ICLR 2019](https://arxiv.org/abs/1810.00826) |
-| Transformer (SMILES) | [Attention is All You Need. NeurIPS 2017](https://arxiv.org/abs/1706.03762) based on SMILES strings |
-| LSTM (SMILES) | [Long short-term memory (Neural Computation 1997)](https://ieeexplore.ieee.org/abstract/document/6795963) based on SMILES strings |
-
-### Generative Models
-
-| Model | Reference |
-|------------|---------------------|
-| Graph DiT | [Graph Diffusion Transformers for Multi-Conditional Molecular Generation. NeurIPS 2024](https://openreview.net/forum?id=cfrDLD1wfO) |
-| DiGress | [DiGress: Discrete Denoising Diffusion for Graph Generation. ICLR 2023](https://openreview.net/forum?id=UaAD-Nu86WX) |
-| GDSS | [Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations. ICML 2022](https://proceedings.mlr.press/v162/jo22a/jo22a.pdf) |
-| MolGPT | [MolGPT: Molecular Generation Using a Transformer-Decoder Model. Journal of Chemical Information and Modeling 2021](https://pubs.acs.org/doi/10.1021/acs.jcim.1c00600) |
-| GraphGA | [A Graph-Based Genetic Algorithm and Its Application to the Multiobjective Evolution of Median Molecules. Journal of Chemical Information and Computer Sciences 2004](https://pubs.acs.org/doi/10.1021/ci034290p) |
-
-### Representation Models
-
-| Model | Reference |
-|--------------|---------------------|
-| MoAMa | [Motif-aware Attribute Masking for Molecular Graph Pre-training. LoG 2024](https://arxiv.org/abs/2309.04589) |
-| AttrMasking | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
-| ContextPred | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
-| EdgePred | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
-| InfoGraph | [InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. ICLR 2020](https://arxiv.org/abs/1908.01000) |
-| Supervised | Supervised pretraining |
-| Pretrained | More than ten pretrained models from [Hugging Face](https://huggingface.co) |
+See the [Overview](#overview) section.
 
 > **Note**: This project is in active development, and features may change.
 
@@ -189,8 +155,43 @@ model.set_params(verbose=True) |
 predictions = model.predict(smiles_list)
 ```
 
-<!-- ### Using Checkpoints for Benchmarking
-_(Coming soon)_ -->
+## Overview
+
+### Predictive Models
+
+| Model | Reference |
+|----------------------|---------------------|
+| SGIR | [Semi-Supervised Graph Imbalanced Regression. KDD 2023](https://dl.acm.org/doi/10.1145/3580305.3599497) |
+| GREA | [Graph Rationalization with Environment-based Augmentations. KDD 2022](https://dl.acm.org/doi/abs/10.1145/3534678.3539347) |
+| DIR | [Discovering Invariant Rationales for Graph Neural Networks. ICLR 2022](https://arxiv.org/abs/2201.12872) |
+| SSR | [SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks. NeurIPS 2022](https://arxiv.org/abs/2206.07096) |
+| IRM | [Invariant Risk Minimization](https://arxiv.org/abs/1907.02893) |
+| RPGNN | [Relational Pooling for Graph Representations. ICLR 2019](https://arxiv.org/abs/1903.02541) |
+| GNNs | [Graph Convolutional Networks. ICLR 2017](https://arxiv.org/abs/1609.02907) and [Graph Isomorphism Network. ICLR 2019](https://arxiv.org/abs/1810.00826) |
+| Transformer (SMILES) | [Attention is All You Need. NeurIPS 2017](https://arxiv.org/abs/1706.03762) based on SMILES strings |
+| LSTM (SMILES) | [Long short-term memory (Neural Computation 1997)](https://ieeexplore.ieee.org/abstract/document/6795963) based on SMILES strings |
+
+### Generative Models
+
+| Model | Reference |
+|------------|---------------------|
+| Graph DiT | [Graph Diffusion Transformers for Multi-Conditional Molecular Generation. NeurIPS 2024](https://openreview.net/forum?id=cfrDLD1wfO) |
+| DiGress | [DiGress: Discrete Denoising Diffusion for Graph Generation. ICLR 2023](https://openreview.net/forum?id=UaAD-Nu86WX) |
+| GDSS | [Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations. ICML 2022](https://proceedings.mlr.press/v162/jo22a/jo22a.pdf) |
+| MolGPT | [MolGPT: Molecular Generation Using a Transformer-Decoder Model. Journal of Chemical Information and Modeling 2021](https://pubs.acs.org/doi/10.1021/acs.jcim.1c00600) |
+| GraphGA | [A Graph-Based Genetic Algorithm and Its Application to the Multiobjective Evolution of Median Molecules. Journal of Chemical Information and Computer Sciences 2004](https://pubs.acs.org/doi/10.1021/ci034290p) |
+
+### Representation Models
+
+| Model | Reference |
+|--------------|---------------------|
+| MoAMa | [Motif-aware Attribute Masking for Molecular Graph Pre-training. LoG 2024](https://arxiv.org/abs/2309.04589) |
+| AttrMasking | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
+| ContextPred | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
+| EdgePred | [Strategies for Pre-training Graph Neural Networks. ICLR 2020](https://arxiv.org/abs/1905.12265) |
+| InfoGraph | [InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. ICLR 2020](https://arxiv.org/abs/1908.01000) |
+| Supervised | Supervised pretraining |
+| Pretrained | More than ten pretrained models from [Hugging Face](https://huggingface.co) |
 
 ## Project Structure
 
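A note on the snippet in the hunk above: the `set_params(verbose=True)` / `predict(smiles_list)` calls imply a scikit-learn-style estimator interface. The sketch below mocks that interface so the call pattern can be run standalone; `MockPredictiveModel` and its placeholder outputs are illustrative assumptions, not the package's actual API.

```python
# Illustrative mock of the estimator interface implied by the README snippet.
# The class name and the placeholder scores are assumptions for this sketch;
# the real predictive models return task-specific values.

class MockPredictiveModel:
    def __init__(self, verbose=False):
        self.verbose = verbose

    def set_params(self, **params):
        # scikit-learn convention: update attributes, return self for chaining
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def predict(self, smiles_list):
        # Return one placeholder score per input SMILES string.
        if self.verbose:
            print(f"predicting {len(smiles_list)} molecules")
        return [0.0 for _ in smiles_list]


model = MockPredictiveModel()
model.set_params(verbose=True)
predictions = model.predict(["CCO", "c1ccccc1"])  # ethanol, benzene
```

Following the scikit-learn convention, `set_params` returns `self`, so configuration calls can also be chained before `predict`.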