Currently, the sensitivity is computed as np.sum(matrix**2, axis=0). This forces NumPy to first materialize matrix**2, which can be costly in terms of memory if matrix is large (as it usually is). Moreover, matrix is usually computed as weights @ jacobian, another memory-costly operation.
An alternative would be to always use np.einsum for it:
- Without weights:
np.einsum("ij,ij->j", jacobian, jacobian)
- With weights:
np.einsum("i,ij,ij->j", weights, jacobian, jacobian)
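A quick numerical sketch of the equivalence, assuming weights is a 1-D array of per-row weights (the array names and sizes are illustrative, not from any particular codebase):

```python
import numpy as np

rng = np.random.default_rng(42)
jacobian = rng.standard_normal((1000, 5))
weights = rng.uniform(0.5, 2.0, size=1000)

# Unweighted: column-wise sum of squares, without materializing jacobian**2
sens = np.einsum("ij,ij->j", jacobian, jacobian)
assert np.allclose(sens, np.sum(jacobian**2, axis=0))

# Weighted: sums weights[i] * jacobian[i, j]**2 over rows in one pass,
# without forming any intermediate weighted matrix
sens_w = np.einsum("i,ij,ij->j", weights, jacobian, jacobian)
assert np.allclose(sens_w, np.sum(weights[:, None] * jacobian**2, axis=0))
```

Note that if the weighted matrix is built as diag(weights) @ jacobian, then np.sum(matrix**2, axis=0) squares the weights as well, so reproducing it exactly with the einsum form requires passing weights**2 as the first operand.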