We use a Graph Convolutional Network followed by a Feed Forward Network to predict property scores.

### Setup

```bash
conda install numpy pandas matplotlib hyperopt tensorboardX torch_dct
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
conda install -c rdkit rdkit
```

### Hyperparameter Optimization

We first find optimal hyperparameters using Bayesian Optimization. Currently, the hyperparameters that can be tuned this way are 1) the depth of the GCN encoder, 2) the dimensions of the message vectors, 3) the number of layers in the Feed Forward Network, and 4) the dropout constant. The code can be run as follows:

`python hyperparameter_optimization.py --data_path <data_path> --dataset_type <dataset_type> --num_iters <num_iters> --config_save_path <config_save_path>`

where `<data_path>` is the path to the csv file in which the SMILES strings and the corresponding property scores are stored, `<dataset_type>` can be `regression` or `dopamine`, corresponding to MSE loss and adaptive robust loss respectively, `<num_iters>` is the number of epochs, and `<config_save_path>` is the path to the json file where the configurations are to be saved. For example:

`python hyperparameter_optimization.py --data_path data/dopamine_nodup.csv --dataset_type dopamine --num_iters 100 --config_save_path config.json`

### Training

We can use the configurations obtained from Hyperparameter Optimization or directly train the model by running the following code:
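The `--config_save_path` file is plain JSON, so the tuned settings can be inspected or reused directly. A minimal sketch — the key names here are hypothetical; the actual schema is whatever `hyperparameter_optimization.py` writes:

```python
import json

# Hypothetical tuned hyperparameters; the real key names depend on
# what hyperparameter_optimization.py actually saves.
config = {"depth": 4, "hidden_size": 300, "ffn_num_layers": 2, "dropout": 0.2}

# Write the file, as --config_save_path would.
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# Reload the tuned settings later, e.g. before training.
with open("config.json") as f:
    best = json.load(f)

print(best["depth"])  # -> 4
```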

We use Proximal Policy Optimization (PPO) as the Reinforcement Learning pathway.

```bash
pip install tensorflow
conda install mpi4py
pip install networkx==1.11
```

- Install OpenAI baseline dependencies

```bash
sudo apt-get install libosmesa6-dev
cd rl-baselines
pip install -e .
```

- Install customized gym molecule env

```bash
cd gym-molecule
pip install -e .
```

### Run Experiments

This section contains the code to run the 5 experiments presented in the paper. The general command line for running the code is as follows:

`mpirun -np <np> python run_molecule.py --is_conditional <is_conditional> --reward_type <reward_type> --dataset <dataset> --model_path <model_path> [--model2_path <model2_path> --sa_ratio <sa_ratio> --gan_step_ratio <gan_step_ratio> --gan_final_ratio <gan_final_ratio> --conditional <conditional>]`

where `<np>` is the number of parallel processes to run, `<is_conditional>` is 1 when the generative process is initialized with a molecule and 0 otherwise, `<reward_type>` is `pki` for single-objective optimization and `multi` for multi-objective optimization, `<dataset>` is `zinc` when taking ZINC as the expert dataset and `dopamine` when taking dopamine BindingDB as the expert dataset, `<model_path>` is the path to the trained model (.pt file), `<model2_path>` is the path to the second trained model (.pt file, valid only when the reward type is `multi`), `<sa_ratio>` is the weight of the SA score in the final reward, `<gan_step_ratio>` is the weight of the stepwise discriminator reward in the final reward, `<gan_final_ratio>` is the weight of the final discriminator reward in the final reward, and `<conditional>` is `dopamine_25` or `dopamine_15`, which is valid only when `<is_conditional>` is 1.

- Experiment 1
`mpirun -np 6 python run_molecule.py --is_conditional 0 --reward_type pki --dataset zinc --model_path <model_path> --sa_ratio 2 --gan_step_ratio 2 --gan_final_ratio 3`

- Experiment 2
`mpirun -np 6 python run_molecule.py --is_conditional 0 --reward_type pki --dataset dopamine --model_path <model_path> --sa_ratio 2 --gan_step_ratio 2 --gan_final_ratio 3`

- Experiment 3
`mpirun -np 6 python run_molecule.py --is_conditional 1 --conditional dopamine_25 --reward_type pki --dataset zinc --model_path <model_path> --sa_ratio 2 --gan_step_ratio 2 --gan_final_ratio 3`

- Experiment 4
`mpirun -np 6 python run_molecule.py --is_conditional 1 --conditional dopamine_15 --reward_type pki --dataset zinc --model_path <model_path> --sa_ratio 2 --gan_step_ratio 2 --gan_final_ratio 3`

- Experiment 5
`mpirun -np 6 python run_molecule.py --is_conditional 1 --conditional dopamine_25 --reward_type multi --dataset zinc --model_path <model_path> --model2_path <model2_path> --sa_ratio 2 --gan_step_ratio 2 --gan_final_ratio 3`
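As a reading aid only: the three ratio flags suggest the final reward is a weighted combination of the property score, the SA score, and the discriminator rewards. A hypothetical sketch of such a combination — this is an assumption for illustration, not the repository's actual formula, which lives in `run_molecule.py`:

```python
def combined_reward(property_reward, sa_score, step_disc, final_disc,
                    sa_ratio=2.0, gan_step_ratio=2.0, gan_final_ratio=3.0):
    """Hypothetical weighted sum mirroring the flag defaults used in the
    experiment commands above; not the actual implementation."""
    return (property_reward
            + sa_ratio * sa_score
            + gan_step_ratio * step_disc
            + gan_final_ratio * final_disc)

# With the weights from the commands above (2, 2, 3) and unit inputs:
print(combined_reward(1.0, 1.0, 1.0, 1.0))  # -> 8.0
```

Raising `--sa_ratio` relative to the discriminator weights would, under this reading, push generation toward more synthesizable molecules at the cost of similarity to the expert dataset.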