Commit 1d51241
Add 1D divergence and laplacian operators (#247)
* refac: rename face_centered_vals -> cell_centered
* fix(gfortran): work around gfortran issues
  This commit
  1. Synchronizes a conditionally compiled gfortran workaround to match the updated alternative block.
  2. Updates the unit test to work around a gfortran issue with `associate`.
* test(gradient): add 2nd line differentiation case
* fix(matvec): use size() to set loop limits
  This commit fixes issues with the matrix-vector multiplication function `matvec` to ensure that it infers the loop limits and array-section bounds from the dimensions of the arrays accessed by the loop or array section.
* refac(test): mv code, rename function
* fix(grid): rm extra multiplicative factor of dx
* test(grad): add passing test differentiating line
* refac(cell_centers_ex): domain(:) -> x_{min,max}
* feat(cell_centers_ex): add cells_ component
* refac(cell_centers_extended): rm grid_ component
* test(gradient): unit test of d(parabola)/dx passes
* refac(initializers): rm abstract initializer type
* fix(cell_centers_extended_s): rm dead code
* refac(gradient_t): mv to separate module/submodule
* refac(cell_centers_extended_t): rename scalar_1D_t
* refac(gradient_{t,m,s}): rename gradient_1D_{t,m,s}
* refac(gradient_operator): name gradient_operator_1D
* refac(mimetic_matrix): append "_1D"
* fix(scalar_1D_s): import operators
* fix(mimetic_matrix_s): count upper/lower rows
* refac(mimetic_matrix_s): simpler loop bound
* fix(4th-order): flip some coefficient signs
* feat(mimetic_matrix_1D): add file_t constructor
  This commit adds the ability to convert a mimetic matrix to a Julienne file_t object so that the matrix elements can be printed with code like the following:
  ```fortran
  type(mimetic_matrix_t) mimetic_matrix
  ! define matrix here
  associate(file => mimetic_matrix%to_file_t())
    call file%write_lines()
  end associate
  ```
* fix(mimetic_matrix_t): reshape upper block (A)
  Now all tests pass for computing 1D gradients of 0th-, 1st-, & 2nd-order polynomials, each with mimetic discretizations of 2nd- and 4th-order accuracy.
* refac(vector): rename matrix-vector RHS scalar_1D
* doc(gradient_operator_1D): clarify statement label
* test(order): 2nd-/4th-order accuracy tests pass
  This commit adds tests of the ratio of the log of the maximum absolute error for gradients of a sinusoidal function. The tests verify that 2nd- and 4th-order mimetic discretizations converge to the expected gradient values at a rate proportional to dx raised to the 2nd and 4th powers, respectively, within a tolerance of 5%.
* build(gfortran): work around concurrent type-spec
* build(gfortran-14): add locality specifier macro
  To work around gfortran-14 not supporting `do concurrent` locality specifiers, this commit removes the specifiers conditionally via a new macro, HAVE_LOCALITY_SPECIFIER_SUPPORT, defined in include/mole-language-support.F90.
* build(gfortran): use GCC ver to define local spec
* test(CI): build/test gradient-operator branch
* test(CI): reduce test matrix
* refac(fortran): combine scalar_1D/vector_1D modules
  This commit combines scalar_1D_m and vector_1D_m into one new tensors_1D_m module.
* refac(tensors_1D_t): build class hierarchy
  This commit establishes the following class hierarchy:
  vector_1D_t <|-- gradient_1D_t
  scalar_1D_t <|-- divergence_1D_t
  where <|-- denotes type extension with the parent type on the left and the child type on the right.
* refactor(grid_s): gather grid funcs in new submod
* refactor: rename procedures to facilitate disambig
* refactor(M): lift matrix block to module for reuse
* chore: blank-space edits
* feat(vector_1D_t): type-bound .div. operator
* test(julienne): update dependency version to 3.3.0
  This update facilitates reducing test-file complexity by simplifying the workaround for a missing feature in gfortran versions older than 14.3.
* chore(grad test): rm unused operator
* test(divergence): 1st passing unit test
* refactor(grid_s): distrib funcs to scalar/vector
* WIP: print diagnostics in divergence test
* fix(divergence_matrix_1D): delete rows of zeros
* refactor(divergence_{,operator_}1D_s): combine
* refactor(gradient_{,operator_}1D_s): combine
* refactor(tensor_1D_t): define scalar/vector parent
  This commit reduces code duplication by
  1. Defining an abstract parent tensor_1D_t type that non-abstract scalar_1D_t and vector_1D_t now extend and
  2. Moving common components from the child types to the parent.
  Diagrammatically,
  ```mermaid
  classDiagram
    tensor_1D_t <|-- scalar_1D_t
    tensor_1D_t <|-- vector_1D_t
    class tensor_1D_t {
      <<abstract>>
      double precision : x_min_
      double precision : x_max_
      double precision : values_
      integer : cells_
    }
    class scalar_1D_t {
      gradient_1D_operator_t : gradient_1D_operator
      grad() gradient_1D_t
    }
    class vector_1D_t {
      divergence_1D_operator_t : divergence_1D_operator
      div() divergence_1D_t
    }
  ```
  where x_min_, x_max_, and cells_ are scalar components and values_ is an allocatable array component, and grad() and div() support defined operations .grad. and .div., respectively, implementing mimetic discrete approximations to the differential calculus operators: gradient and divergence.
* refactor(tensor): mk nonabstract, construct parent
  This commit
  1. Makes tensor_1D_t non-abstract,
  2. Defines a tensor_1D_t user-defined constructor, and
  3. Refactors the scalar_1D_t constructor to construct its tensor_1D_t parent component by assigning a whole object created by the tensor_1D_t constructor.
  TODO: find flang bug preventing similar refactoring of vector_1D_t
* fix(.div.): 2nd/4th-order divergences
  Newly passing unit tests:
    A 1D mimetic divergence operator
      passes on computing 2nd-order .div.(.grad. s) for 1D scalar s with quadratic magnitude within 0.1000000000000E-11.
      passes on computing 4th-order .div.(.grad. s) for 1D scalar s with quadratic magnitude within 0.1000000000000E-11.
    2 of 2 tests passed. 0 tests were skipped.
* test(divergence): replace decl/def with associate
* refactor(scalar,vector): uniform nomenclature
  For the scalar_1D_t and vector_1D_t derived types, this commit
  1. Defines 1 public generic binding per type-bound procedure (TBP),
  2. Makes all TBPs private, and
  3. Applies uniform nomenclature as follows:
     a. scalar_1D_t
        i. generic bindings: .grad., grid, values
        ii. TBPs: grad, scalar_1D_grid, scalar_1D_values
     b. vector_1D_t
        i. generic bindings: .div., grid, values
        ii. TBPs: div, vector_1D_grid, vector_1D_values
* test(.div.): order of accuracy for 2nd-order method
* test(.div.): order of accuracy for 4th-order method
  TODO: Fix 2 test failures exposed by the convergence tests, wherein the diagnostics indicate that
  (1) The order of accuracy is 1 for both 2nd- and 4th-order divergences and
  (2) The calculated values are shifted by one grid location, which probably explains the order of accuracy of 1.
  For example, with this commit, executing
    fpm test --compiler flang-new --flag "-DASSERTIONS -O3"
  yields the following excerpted output:
    A 1D mimetic divergence operator
      passes on computing 2nd-order .div.(.grad. s) for 1D scalar s with quadratic magnitude within 0.1000000000000E-13.
      passes on computing 4th-order .div.(.grad. s) for 1D scalar s with quadratic magnitude within 0.1000000000000E-13.
      FAILS on computing convergence rate of 2 for 2nd-order .div. for 1D vector with sinusoidal magnitude within 0.1000000000000E-13.
        diagnostics: expected 2.000000000000 within a tolerance of 0.5000000000000E-01; actual value is 0.9999973302351 (2nd-order .div. [sin(x) + cos(x)] order of accuracy)
      FAILS on computing convergence rate of 4 for 4th-order .div. for 1D vector with sinusoidal magnitude within 0.1000000000000E-13.
        diagnostics:
        expected 1.000000000000 within a tolerance of 0.1000000000000E-01; actual value is 0.9680963154955
        expected 0.9680958012876 within a tolerance of 0.1000000000000E-01; actual value is 0.9014535854427
        expected 0.9014536512846 within a tolerance of 0.1000000000000E-01; actual value is 0.8312538148404
        expected 0.8312538755549 within a tolerance of 0.1000000000000E-01; actual value is 0.7577734651947
* chore: blank-space edits, new associate, renamings
* fix(.div.): locate scalars only at cell centers
* chore(grad op test): blank-space edits
* test(grad): uniformly tighter convergence criterion
* chore: more blank-space ed, associate, renam/reorg
* chore(divergence test): rm type not used explicitly
* refactor: rm unnecessary gradient_1D_t child type
* refactor: rm unneeded divergence_1D_t child type
* feat(.laplacian.): add & test for scalar operands
  This commit adds
  1. A .laplacian. generic binding for scalar_1D_t operands and
  2. Unit tests for the new .laplacian. operator.
  The L-infinity norm of the resulting error indicates that computing the Laplacian by composing divergence and gradient operators of a given order of accuracy produces a Laplacian operator with an order of accuracy one polynomial degree below the accuracy of the divergence and gradient operators. For a scalar s,
  * 2nd-order .div. (.grad. s) yields a 1st-order .laplacian. s
  * 4th-order .div. (.grad. s) yields a 3rd-order .laplacian. s
* test(laplacian): conditionally write gnuplot file
* fix(laplacian): adjust tolerance, test domain
* refactor(tensor): mk separate scalar,vector module
  This commit increases modularity by moving scalar_1D_t and vector_1D_t related functions out of tensors_1D_m and into a new scalar_vector_1D_m.
* refactor(tensor): separate mimetic_matrix module
  This commit further increases modularity by moving mimetic_matrix_1D_t, gradient_operator_1D_t, divergence_operator_1D_t, and related functions to a new mimetic_matrix_1D_m module.
* refactor(tensors_1D_m): rename module
* refactor(mimetic_matrix): mk operators child types
  This commit refactors gradient_operator_1D_t and divergence_operator_1D_t so that they extend mimetic_matrix_1D_t.
* refactor(matvec): improve type safety
  Whereas the `matvec` matrix-vector multiply function previously accepted a polymorphic class(mimetic_matrix_t) dummy argument, this commit ensures that
  1. Only a gradient_operator_1D_t child of a mimetic_matrix_1D_t can be applied in the context wherein a gradient is being calculated and
  2. Only a divergence_operator_1D_t child of a mimetic_matrix_1D_t can be applied in the context wherein a divergence is being calculated.
* refactor: renamings, rm unused vars, add comments
* refactor: recombine tensor, scalar, vector modules
* refactor(matvec): mk type-bound
* refactor(mimetic ops): associate & blank-space eds
* chore: work around gfortran bugs
* feat(example): compute Laplacian = div grad
* fix(scalar): constructor samples at cell centers
* fix(divergence,laplacian): def separate types
  This commit disambiguates two quantities that behave mathematically like scalars in some circumstances but are treated differently in the discrete calculus of mimetic discretizations:
  * divergence_1D_t: stored at cell centers & supports no operators
  * laplacian_1D_t: which extends divergence_1D_t.
  In both cases, when the operand is a mathematical scalar, the result is a scalar. However, in mimetic discretizations, the results differ from a scalar quantity in that there are no boundary values with the above two types. With these changes, the following operators map from the listed operand type to the listed result type:
  * .div.: vector_1D_t operand, divergence_1D_t result and
  * .laplacian.: scalar_1D_t operand, laplacian_1D_t result,
  where for an object `s` of type scalar_1D_t, .laplacian. s = .div. (.grad. s)
* fix(example): correct column headings
* test(laplacian): use {laplacian,divergence}_1D_t
* fix(example): gfortran `associate` bugs workaround
* test: work around gfortran bugs
* chore(gfortran): less duplication, skip 2 fails
  This commit
  1. Reduces the amount of code duplication related to working around gfortran issues with `associate` statements -- at the cost of increasing the number of preprocessor macros.
  2. Skips two known test failures (2nd- and 4th-order Laplacian operator convergence rate tests) and adds a new macro, TEST_LAPLACIAN_CONVERGENCE, that can be used to run the otherwise skipped tests.
* refactor(example): redistrib macros, reform output
* feat(assembly): divergence, gradient matrices
  This commit adds
  1. Generic `assemble` bindings to gradient_operator_t and divergence_operator_t that produce the complete operator matrix as a 2D array, including zero values.
  2. An example, `print-assembled-1D-operators`, demonstrating the use of `assemble` to print the assembled matrices.
  For a gradient operator matrix G, for example, `assemble` computes the matrix product GI = G, where I is the identity matrix. This approach
  a. works around the fact that the MOLE Fortran data structures store only the non-zero submatrix blocks, and
  b. supports the verification of both the non-zero values (which were already available via `to_file_t` type-bound functions) and the mimetic-matrix multiplication functions.
* fix(divergence): store zero rows
* test(laplacian): convergence rate checks pass
  This commit reintroduces the execution of the Laplacian operator convergence-rate tests with all tests passing.
* fix(example): div-grad example works for LLVM/GCC
* fix(div): adjust constraint, unit vector dimension
* fix(div): keep all of D
* feat(print-assembled): add usage output
* feat(example): add command-line flags
  Usage:
    fpm run \
      --example print-assembled-1D-operators \
      --compiler flang-new \
      --flag "-O3" \
      -- [--help|-h] | [--grad] [--div] [--order <integer>]
  where square brackets indicate optional arguments and angular brackets indicate user input values.
* refactor(A,A'): rm top/bottom rows of zeros
* fix(divergence): rm top/bottom after mat-vec prod
* fix(laplacian test): fix array dimensions
* test(div): assert divergence operator mat-vec len
* test(laplacian): sep boundary/internal convergence
  This commit
  1. Adjusts the Laplacian convergence rate tests to check for the nominal convergence rate at internal grid points and a rate one order lower near boundaries,
  2. Combines the Laplacian convergence checks into one function that can be called for mimetic schemes of any order, and
  3. Refactors the 2nd- and 4th-order mimetic Laplacian convergence tests so that each calls the new unified function.
* test(divergence): blank-space edit
* fix(div,grad): work around nag compiler issue
* feat(vector,divergence): component constructors
* chore({gradient,scalar}_1D): rm unused files
* doc(UML): Fortran class diagram
* doc(UML): add operators
* fix gfortran builds
* test(CI): loosen 2 tolerances | run ifx single-image
* build(gfortran-{13,14} on ubuntu): workaround
  This commit attempts to work around the issue demonstrated at https://github.com/rouson/mole/actions/runs/19996603010/job/57345130923
* build(gfortran-{13,14} on ubuntu): workaround
  This commit attempts to work around an issue demonstrated at https://github.com/rouson/mole/actions/runs/19996724722/job/57345438334
* build(gfortran-{13,14} on ubuntu): workaround
  This commit attempts to work around the issue demonstrated at https://github.com/rouson/mole/actions/runs/19996775690/job/57345568589
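The convergence-rate tests described in the commit message compare maximum absolute errors on successively refined grids. The following Python sketch (illustrative only, not MOLE code; the staggered 2nd-order difference and grid layout are assumptions chosen to demonstrate the rate-estimation idea) shows how an observed order of accuracy can be computed for a sinusoidal function:

```python
import math

def grad_2nd_order(f_vals, dx):
    # Staggered centered difference: (f[i+1] - f[i]) / dx approximates
    # f' at the face between cells i and i+1 (2nd-order accurate)
    return [(f_vals[i+1] - f_vals[i]) / dx for i in range(len(f_vals) - 1)]

def max_abs_error(cells):
    x_min, x_max = 0.0, 2.0 * math.pi
    dx = (x_max - x_min) / cells
    centers = [x_min + dx/2 + i*dx for i in range(cells)]          # cell centers
    faces = [x_min + (i + 1)*dx for i in range(cells - 1)]         # interior faces
    approx = grad_2nd_order([math.sin(x) for x in centers], dx)
    exact = [math.cos(x) for x in faces]
    return max(abs(a - e) for a, e in zip(approx, exact))

# Halving dx should reduce the error by ~2**p for a p-th-order scheme
rate = math.log(max_abs_error(64) / max_abs_error(128)) / math.log(2.0)
print(f"observed order of accuracy: {rate:.2f}")  # ≈ 2 for a 2nd-order scheme
```

The same ratio-of-errors construction underlies the 4th-order tests, with the tolerance (5% in the commit message) absorbing the higher-order error terms.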
1 parent f109f4c commit 1d51241

23 files changed

Lines changed: 1782 additions & 466 deletions

.github/workflows/build.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -152,7 +152,7 @@ jobs:
         echo "FPM_FC=flang-new" >> "$GITHUB_ENV" ; \
       elif [[ "$FC" =~ "ifx" ]] ; then \
         echo "FPM_FC=ifx" >> "$GITHUB_ENV" ; \
-        echo "FFLAGS=-fpp -coarray $FFLAGS" >> "$GITHUB_ENV" ; \
+        echo "FFLAGS=-fpp -coarray -coarray-num-images=1 $FFLAGS" >> "$GITHUB_ENV" ; \
         : ls -al /opt/intel/oneapi/compiler/2025.*/bin/ ; \
         if type -p icpx ; then \
           echo "FPM_CC=icx" >> "$GITHUB_ENV" ; \
```

doc/fortran-classes.md

Lines changed: 51 additions & 0 deletions
MOLE Fortran Class Diagram
--------------------------

```mermaid
%%{init: { 'theme':'neo', "class" : {"hideEmptyMembersBox": true} } }%%

classDiagram

  class tensor_1D_t
  class scalar_1D_t
  class vector_1D_t
  class divergence_1D_t
  class laplacian_1D_t
  class gradient_operator_1D_t
  class divergence_operator_1D_t
  class mimetic_matrix_1D_t

  tensor_1D_t <|-- scalar_1D_t : is a
  tensor_1D_t <|-- vector_1D_t : is a
  tensor_1D_t <|-- divergence_1D_t : is a
  divergence_1D_t <|-- laplacian_1D_t : is a
  mimetic_matrix_1D_t <|-- gradient_operator_1D_t : is a
  mimetic_matrix_1D_t <|-- divergence_operator_1D_t : is a

  class scalar_1D_t{
    - gradient_operator_1D_ : gradient_operator_1D_t
    + operator(.grad.) vector_1D_t
    + operator(.laplacian.) scalar_1D_t
  }

  class vector_1D_t{
    - divergence_operator_1D_ : divergence_operator_1D_t
    + operator(.div.) divergence_1D_t
  }

  class mimetic_matrix_1D_t{
    - upper_ : double precision
    - inner_ : double precision
    - lower_ : double precision
  }

  class gradient_operator_1D_t{
    + operator(.x.) double precision[]
    + assemble() double precision[] "2D array"
  }

  class divergence_operator_1D_t{
    + operator(.x.) double precision[]
    + assemble() double precision[] "2D array"
  }
```
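The type-extension ("is a") relations in the diagram can be mirrored in a short sketch. The Python class names below are hypothetical stand-ins for the Fortran derived types, included only to make the inheritance structure concrete:

```python
# Hypothetical Python mirror of the Fortran type-extension hierarchy above
class Tensor1D: pass
class Scalar1D(Tensor1D): pass           # tensor_1D_t <|-- scalar_1D_t
class Vector1D(Tensor1D): pass           # tensor_1D_t <|-- vector_1D_t
class Divergence1D(Tensor1D): pass       # tensor_1D_t <|-- divergence_1D_t
class Laplacian1D(Divergence1D): pass    # divergence_1D_t <|-- laplacian_1D_t
class MimeticMatrix1D: pass
class GradientOperator1D(MimeticMatrix1D): pass    # operator types extend the matrix type
class DivergenceOperator1D(MimeticMatrix1D): pass

# A Laplacian "is a" divergence, which "is a" tensor:
print(issubclass(Laplacian1D, Tensor1D))  # True
```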

example/div-grad-laplacian-1D.F90

Lines changed: 133 additions & 0 deletions
```fortran
module functions_m
  implicit none

contains

  pure function f(x)
    double precision, intent(in) :: x(:)
    double precision, allocatable :: f(:)
    f = (x**3)/6 + (x**2)/2 + 1
  end function

  double precision elemental function df_dx(x)
    double precision, intent(in) :: x
    df_dx = (x**2)/2 + x
  end function

  double precision elemental function d2f_dx2(x)
    double precision, intent(in) :: x
    d2f_dx2 = x + 1
  end function

end module functions_m

program div_grad_laplacian_1D
  use functions_m
  use julienne_m, only : file_t, string_t, operator(.separatedBy.)
  use mole_m, only : scalar_1D_t, scalar_1D_initializer_i
#ifdef __GFORTRAN__
  use mole_m, only : vector_1D_t, laplacian_1D_t
#endif
  implicit none

  procedure(scalar_1D_initializer_i), pointer :: scalar_1D_initializer => f

  print *,new_line('')
  print *," 2nd-order approximations"
  print *," ========================"

  call output(order=2)

  print *,new_line('')
  print *," 4th-order approximations"
  print *," ========================"

  call output(order=4)

contains

#ifndef __GFORTRAN__

  subroutine output(order)
    integer, intent(in) :: order

    associate( s => scalar_1D_t(scalar_1D_initializer, order=order, cells=10, x_min=0D0, x_max=20D0))
      associate( grad_s => .grad. s &
                ,laplacian_s => .laplacian. s)
        associate( s_grid => s%grid() &
                  ,grad_s_grid => grad_s%grid() &
                  ,laplacian_s_grid => laplacian_s%grid())
          associate( s_table => tabulate( &
                        string_t([character(len=18)::"x", "f(x) exp"         , "f(x) act"         ]) &
                       ,s_grid, f(s_grid), s%values() &
                     ) &
                    ,grad_s_table => tabulate( &
                        string_t([character(len=18)::"x", ".grad. f exp"     , ".grad. f act"     ]) &
                       ,grad_s_grid, df_dx(grad_s_grid), grad_s%values() &
                     ) &
                    ,laplacian_s_table => tabulate( &
                        string_t([character(len=18)::"x", ".laplacian. f exp", ".laplacian. f act"]) &
                       ,laplacian_s_grid, d2f_dx2(laplacian_s_grid), laplacian_s%values()) &
                    )
            call s_table%write_lines()
            call grad_s_table%write_lines()
            call laplacian_s_table%write_lines()
          end associate
        end associate
      end associate
    end associate
  end subroutine

#else

  subroutine output(order)
    integer, intent(in) :: order

    type(scalar_1D_t) s
    type(vector_1D_t) grad_s
    type(laplacian_1D_t) laplacian_s
    type(file_t) s_table, grad_s_table, laplacian_s_table
    double precision, allocatable, dimension(:) :: s_grid, grad_s_grid, laplacian_s_grid

    s = scalar_1D_t(scalar_1D_initializer, order=order, cells=10, x_min=0D0, x_max=20D0)
    grad_s = .grad. s
    laplacian_s = .laplacian. s

    s_grid = s%grid()
    grad_s_grid = grad_s%grid()
    laplacian_s_grid = laplacian_s%grid()

    s_table = tabulate( &
       string_t([character(len=18)::"x", "f(x) exp."         , "f(x) act."         ]) &
      ,s_grid, f(s_grid), s%values() &
    )
    grad_s_table = tabulate( &
       string_t([character(len=18)::"x", ".grad. f exp."     , ".grad. f act."     ]) &
      ,grad_s_grid, df_dx(grad_s_grid), grad_s%values() &
    )
    laplacian_s_table = tabulate( &
       string_t([character(len=18)::"x", ".laplacian. f exp.", ".laplacian. f act."]) &
      ,laplacian_s_grid, d2f_dx2(laplacian_s_grid), laplacian_s%values() &
    )
    call s_table%write_lines()
    call grad_s_table%write_lines()
    call laplacian_s_table%write_lines()
  end subroutine

#endif

  pure function tabulate(headings, abscissa, expected, actual) result(file)
    double precision, intent(in), dimension(:) :: abscissa, expected, actual
    type(string_t), intent(in) :: headings(:)
    type(file_t) file
    integer line

    file = file_t([ &
       string_t("") &
      ,headings .separatedBy. "  " &
      ,string_t("----------------------------------------------------------") &
      ,[( string_t(abscissa(line)) // "  " // string_t(expected(line)) // "  " // string_t(actual(line)), line = 1, size(abscissa))] &
    ])
  end function

end program
```
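The example above tabulates expected versus actual values of f, .grad. f, and .laplacian. f for f(x) = x**3/6 + x**2/2 + 1. A quick Python check (illustrative only, not MOLE code) confirms that the hard-coded df_dx and d2f_dx2 really are the first and second derivatives of f:

```python
# Finite-difference sanity check of the derivatives used in the example:
# f(x) = x**3/6 + x**2/2 + 1  =>  f'(x) = x**2/2 + x  and  f''(x) = x + 1
def f(x): return x**3/6 + x**2/2 + 1
def df_dx(x): return x**2/2 + x
def d2f_dx2(x): return x + 1

h = 1e-3
for x in (0.0, 1.0, 10.0, 20.0):  # sample points in the example's [0, 20] domain
    central_1st = (f(x + h) - f(x - h)) / (2*h)
    central_2nd = (f(x + h) - 2*f(x) + f(x - h)) / h**2
    assert abs(central_1st - df_dx(x)) < 1e-5
    assert abs(central_2nd - d2f_dx2(x)) < 1e-4
print("df_dx and d2f_dx2 match finite-difference estimates")
```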
Lines changed: 78 additions & 0 deletions
```fortran
program print_assembled_1D_operators
  !! Print fully assembled mimetic 1D gradient, divergence, and Laplacian matrices,
  !! including the zero elements.
  use julienne_m, only : operator(.csv.), string_t, command_line_t
  use mimetic_operators_1D_m, only : gradient_operator_1D_t, divergence_operator_1D_t
  implicit none

  type(command_line_t) command_line
  integer row

  command_line_settings: &
  associate( &
     gradient => command_line%argument_present(["--grad" ]) &
    ,divergence => command_line%argument_present(["--div" ]) &
    ,order => command_line%flag_value("--order") &
  )

    if (command_line%argument_present([character(len=len("--help")) :: ("--help"), "-h"])) then
      stop new_line('') // new_line('') &
        // 'Usage:' // new_line('') &
        // '  fpm run \' // new_line('') &
        // '  --example print-assembled-1D-operators \' // new_line('') &
        // '  --compiler flang-new \' // new_line('') &
        // '  --flag "-O3" \' // new_line('') &
        // '  -- [--help|-h] | [--grad] [--div] [--order <integer>]' // new_line('') // new_line('') &
        // 'where square brackets indicate optional arguments and angular brackets indicate user input values.' // new_line('')
    end if

    default_usage: &
    associate(print_all => .not. any([gradient, divergence, len(order)/=0]))

      if (print_all .or. (gradient   .and. len(order)==0) .or. (gradient   .and. order=="2")) call print_gradient_operator(  k=2, dx=1D0, m=5)
      if (print_all .or. (divergence .and. len(order)==0) .or. (divergence .and. order=="2")) call print_divergence_operator(k=2, dx=1D0, m=5)
      if (print_all .or. (gradient   .and. len(order)==0) .or. (gradient   .and. order=="4")) call print_gradient_operator(  k=4, dx=1D0, m=9)
      if (print_all .or. (divergence .and. len(order)==0) .or. (divergence .and. order=="4")) call print_divergence_operator(k=4, dx=1D0, m=9)

    end associate default_usage
  end associate command_line_settings

contains

  subroutine print_gradient_operator(k, dx, m)
    integer, intent(in) :: k, m
    double precision, intent(in) :: dx

    print *, new_line(""), "Gradient operator: order = ", k, " | cells = ", m, " | dx = ", dx

    associate(grad_op => gradient_operator_1D_t(k, dx, cells=m))
      associate(G => grad_op%assemble())
        do row = 1, size(G,1)
          associate(csv_row => .csv. string_t(G(row,:)))
            print '(a)', csv_row%string()
          end associate
        end do
      end associate
    end associate

  end subroutine

  subroutine print_divergence_operator(k, dx, m)
    integer, intent(in) :: k, m
    double precision, intent(in) :: dx

    print *, new_line(""), "Divergence operator: order = ", k, " | cells = ", m, " | dx = ", dx

    associate(div_op => divergence_operator_1D_t(k, dx, cells=m))
      associate(D => div_op%assemble())
        do row = 1, size(D,1)
          associate(csv_row => .csv. string_t(D(row,:)))
            print '(a)', csv_row%string()
          end associate
        end do
      end associate
    end associate

  end subroutine

end program
```
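The `assemble` bindings used above recover a dense matrix from storage that keeps only the non-zero blocks by, in effect, computing G·I = G. The following Python sketch demonstrates that assemble-by-identity idea with a generic 3-point centered-difference stencil (an illustrative stand-in, not MOLE's mimetic coefficients):

```python
# Sketch of the "assemble" idea: recover a dense matrix from a matrix-free
# operator by applying it to each column of the identity matrix.
def matvec(v, dx=1.0):
    # Generic centered difference on interior points; boundary rows are zero,
    # analogous to the stored zero rows mentioned in the commit message.
    n = len(v)
    return [(v[i+1] - v[i-1]) / (2*dx) if 0 < i < n-1 else 0.0 for i in range(n)]

def assemble(op, n):
    columns = []
    for j in range(n):
        e_j = [1.0 if i == j else 0.0 for i in range(n)]  # j-th identity column
        columns.append(op(e_j))                           # j-th column of the matrix
    # transpose the list of columns into a list of rows
    return [[columns[j][i] for j in range(n)] for i in range(n)]

A = assemble(matvec, 4)
for row in A:
    print(",".join(f"{a:g}" for a in row))  # CSV rows, like the example program
```

Applying the same `assemble` helper to any matrix-free operator exposes its complete matrix, zeros included, which is what makes the printed output verifiable against the expected stencils.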

fpm.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@ name = "MOLE"
 armadillo-code = {git = "https://gitlab.com/rouson/armadillo-code.git", tag = "fpm"}

 [dev-dependencies]
-julienne = {git = "https://github.com/berkeleylab/julienne.git", tag = "3.1.5"}
+julienne = {git = "https://github.com/berkeleylab/julienne.git", tag = "3.3.0"}

 [install]
 library = true
```

src/fortran/divergence_1D_s.F90

Lines changed: 33 additions & 0 deletions
```fortran
submodule(tensors_1D_m) divergence_1D_s
  implicit none

contains

#ifdef __GFORTRAN__

  pure function cell_center_locations(x_min, x_max, cells) result(x)
    double precision, intent(in) :: x_min, x_max
    integer, intent(in) :: cells
    double precision, allocatable :: x(:)
    integer cell

    associate(dx => (x_max - x_min)/cells)
      x = x_min + dx/2. + [((cell-1)*dx, cell = 1, cells)]
    end associate
  end function

#endif

  module procedure construct_from_tensor
    divergence_1D%tensor_1D_t = tensor_1D
  end procedure

  module procedure divergence_1D_values
    cell_centered_values = self%values_
  end procedure

  module procedure divergence_1D_grid
    cell_centers = cell_center_locations(self%x_min_, self%x_max_, self%cells_)
  end procedure

end submodule divergence_1D_s
```
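The `cell_center_locations` function above divides [x_min, x_max] into `cells` uniform intervals and samples each at its midpoint. A direct Python transcription (illustrative, not MOLE code) makes the layout easy to inspect:

```python
# Python transcription of cell_center_locations: midpoints of `cells`
# uniform intervals on [x_min, x_max]
def cell_center_locations(x_min, x_max, cells):
    dx = (x_max - x_min) / cells
    return [x_min + dx/2 + cell*dx for cell in range(cells)]

print(cell_center_locations(0.0, 1.0, 4))  # [0.125, 0.375, 0.625, 0.875]
```

Note that the centers deliberately exclude the domain boundaries, which is consistent with the commit message's point that divergence_1D_t carries no boundary values.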
