# ### Write the problem class
#
# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from `pina.operator` module. Again, we'll consider Equation (1) and represent it in **PINA**:
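# Equation (1) itself is not reproduced in this excerpt, so as a library-free illustration of what a residual means, the sketch below checks a toy heat equation $u_t = u_{xx}$ with finite differences instead of the `pina.operator` differential operators. The equation and the exact solution here are stand-ins, not the tutorial's Equation (1).

```python
import math

# Toy stand-in for a PINA-style residual: for the heat equation
# u_t = u_xx, the residual is r = u_t - u_xx, which vanishes for
# an exact solution such as u(x, t) = exp(-t) * sin(x).
def u(x, t):
    return math.exp(-t) * math.sin(x)

def residual(x, t, h=1e-4):
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)               # central difference in t
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2   # second difference in x
    return u_t - u_xx

# The residual is numerically zero everywhere for the exact solution.
print(max(abs(residual(0.1 * i, 0.5)) for i in range(1, 10)))
```

# In **PINA** the same idea is expressed symbolically: the operators act on the network output, and the solver drives the residual towards zero.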
# Once we have defined the problem and generated the data, we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced topics, see the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callback.MetricTracker`.
trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid the model summary at the beginning of training (optional)
# train
trainer.train()
# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed via `trainer.logged_metrics`.
# In[8]:
# inspecting final loss
trainer.logged_metrics
# By using `matplotlib` we can also do some qualitative plots of the solution.
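# The plotting code itself is not shown in this excerpt. A minimal sketch of such a qualitative plot might look like the following; the data arrays here are placeholders, where in the tutorial `predicted` would be the network output on the sample points and `exact` the known analytical solution.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
import math

# Placeholder data: stand-ins for the PINN prediction and the true solution
# evaluated on the same grid of points.
x = [i / 100 for i in range(101)]
predicted = [math.sin(math.pi * xi) for xi in x]
exact = [math.sin(math.pi * xi) for xi in x]

plt.plot(x, exact, label="exact")
plt.plot(x, predicted, "--", label="PINN")
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.savefig("solution_comparison.png")
```

# Plotting the two curves on the same axes makes any mismatch between prediction and reference immediately visible.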
# The predicted solution is overlaid on the actual one, and the two are nearly indistinguishable. We can also take a look at the loss using `TensorBoard`:
# In[ ]:
print('\nTo load TensorBoard run %load_ext tensorboard in a notebook cell')
print("To visualize the loss you can run tensorboard --logdir 'tutorial_logs' on your terminal\n")
# As we can see, the loss has not reached a minimum, suggesting that we could train for longer! Alternatively, we can inspect the loss using callbacks. Here we use `MetricTracker` from `pina.callback`:
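# Since `CSVLogger` is the default logger, the training history can also be read directly from its CSV output without TensorBoard. The sketch below parses an in-memory stand-in for such a file; the actual path (e.g. something like `tutorial_logs/version_0/metrics.csv`) and column name are assumptions about the default `lightning` layout.

```python
import csv
import io

# Tiny in-memory stand-in for a CSVLogger metrics file:
# one row per logged step, last row holds the final loss.
sample = """step,train_loss
0,0.91
500,0.12
1000,0.03
"""

rows = list(csv.DictReader(io.StringIO(sample)))
losses = [float(r["train_loss"]) for r in rows]
print(f"final loss: {losses[-1]}")
```

# Reading the raw CSV is handy for quick scripted checks, e.g. verifying that the loss is still decreasing at the end of training.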
# We will now focus on solving the KS equation using the `SupervisedSolver` class
# and the `AveragingNeuralOperator` model. As done in the [FNO tutorial](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial5/tutorial.ipynb) we now create the `NeuralOperatorProblem` class with `AbstractProblem`.
# train, only CPU and avoid model summary at beginning of training (optional)
trainer = Trainer(solver=solver,
                  max_epochs=40,
                  accelerator='cpu',           # we train on CPU
                  enable_model_summary=False,  # and avoid the model summary at the beginning of training (optional)
                  log_every_n_steps=-1,
                  batch_size=5,
                  train_size=1.0,
                  val_size=0.0,
                  test_size=0.0)
trainer.train()
# We can now see some plots for the solutions
# In[6]:
sample_number = 2
no_sol=no_sol[5])
# As we can see, we obtain nice results considering the short training time and the difficulty of the problem!
# Let's take a look at the training and testing error:
# In[7]:
from pina.loss import PowerLoss
error_metric = PowerLoss(p=2)  # we use the MSE loss
print(f'Testing error: {float(err_test):.3f}')
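# `PowerLoss(p=2)` averages the p-th power of the absolute error over all entries, which for p=2 coincides with the MSE. A stdlib sketch of that reduction (the real `PowerLoss` operates on tensors, and the data below is illustrative):

```python
def power_loss(pred, true, p=2):
    # Mean of |pred - true|**p over all entries; for p=2 this is the MSE.
    return sum(abs(a - b) ** p for a, b in zip(pred, true)) / len(pred)

pred = [0.1, 0.4, 0.8]
true = [0.0, 0.5, 1.0]
print(power_loss(pred, true))  # mean of 0.01, 0.01, 0.04, i.e. about 0.02
```

# Varying `p` changes how strongly large errors are penalised; p=2 is the standard choice for regression-style comparisons like the one above.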
# As we can see the error is pretty small, which agrees with what we can see from the previous plots.
# ## What's next?
#
# Now you know how to solve a time dependent neural operator problem in **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#
# 2. We provide a more challenging dataset [Data_KS2.mat](dat/Data_KS2.mat), where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, $\phi_k \in [0, 2\pi]$, for longer training runs
#
# 3. Compare the performance between the different neural operators (you can even try to implement your favourite one!)
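# The harder dataset in point 2 varies amplitudes, wavenumbers, and phases of the initial condition. A sketch of such a sampler, assuming initial conditions of the form $u_0(x) = \sum_k A_k \sin(\ell_k x + \phi_k)$ (the functional form is an assumption inferred from the parameter names; the grid size and domain length below are arbitrary choices):

```python
import math
import random

def sample_initial_condition(n_modes=3, n_points=64, length=2 * math.pi, seed=0):
    """Random sum of sines u0(x) = sum_k A_k * sin(l_k * x + phi_k).

    Assumed ranges, matching the text: A_k ~ U[-0.5, 0.5],
    l_k in {1, 2, 3}, phi_k ~ U[0, 2*pi].
    """
    rng = random.Random(seed)
    modes = [(rng.uniform(-0.5, 0.5), rng.choice([1, 2, 3]), rng.uniform(0, 2 * math.pi))
             for _ in range(n_modes)]
    xs = [length * i / n_points for i in range(n_points)]
    return [sum(A * math.sin(l * x + phi) for A, l, phi in modes) for x in xs]

u0 = sample_initial_condition()
print(len(u0))  # one value per grid point; |u0| is bounded by sum of |A_k|
```

# Sampling many such initial conditions and evolving them with a KS solver is exactly how datasets like Data_KS2.mat are typically built.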