
Commit 1306d14

Merge pull request #1167 from openfheorg/dev
Updates to v1.5.1
2 parents df495ba + 47becae

142 files changed

Lines changed: 1025 additions & 1107 deletions


CMakeLists.txt

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ project (OpenFHE C CXX)
 
 set(OPENFHE_VERSION_MAJOR 1)
 set(OPENFHE_VERSION_MINOR 5)
-set(OPENFHE_VERSION_PATCH 0)
+set(OPENFHE_VERSION_PATCH 1)
 set(OPENFHE_VERSION ${OPENFHE_VERSION_MAJOR}.${OPENFHE_VERSION_MINOR}.${OPENFHE_VERSION_PATCH})
 
 set(CMAKE_CXX_STANDARD 17)

benchmark/src/ckks-bootstrapping.cpp

Lines changed: 2 additions & 0 deletions
@@ -73,6 +73,8 @@ struct boot_config {
     { 1 << 17, 1 << 5, 59, 60, 0, 10, 1, {1, 1}, SPARSE_ENCAPSULATED, FLEXIBLEAUTO},
     { 1 << 17, 1 << 16, 59, 60, 0, 10, 2, {4, 4}, SPARSE_ENCAPSULATED, FLEXIBLEAUTO},
     { 1 << 17, 1 << 5, 59, 60, 0, 10, 2, {1, 1}, SPARSE_ENCAPSULATED, FLEXIBLEAUTO},
+    { 1 << 16, 1 << 15, 55, 60, 3, 1, 1, {3, 3}, UNIFORM_TERNARY, FLEXIBLEAUTO}, // GPU0
+    { 1 << 16, 1 << 14, 50, 53, 7, 10, 1, {3, 3}, SPARSE_TERNARY, FLEXIBLEAUTO}, // GPU1
     // TODO: enable following once STC Composite Scaling operational
     // { 1 << 17, 1 << 16, 78, 96, 0, 10, 2, {4, 4}, SPARSE_TERNARY, COMPOSITESCALINGAUTO},
 };

docs/sphinx_rsts/modules/core/math/sampling.rst

Lines changed: 1 addition & 1 deletion
@@ -200,7 +200,7 @@ Generic Sampler
     }
 
     /*Create the sampler object*/
-    int base = std::log(CENTER_COUNT)/std::log(2);
+    int base = std::log2(CENTER_COUNT);
     DiscreteGaussianGeneratorGeneric dggGeneric(peikert_samplers,stdBase,base,SMOOTHING_PARAMETER);
 
     /*Generate Integer */

docs/static_docs/Best_Performance.md

Lines changed: 8 additions & 9 deletions
@@ -1,13 +1,13 @@
 # Building OpenFHE for Best Performance
 
 The default build configuration of OpenFHE focuses on portability and ease of installation.
-As a result, the runtime performace for the default configuration is often significantly worse than for the optimal configuration.
+As a result, runtime performace for the default configuration is often significantly worse than for the optimal configuration.
 
-There are three important CMake flags that affect the runtime performance:
+There are three important CMake flags that affect runtime performance:
 * `WITH_NATIVEOPT` allows the user to turn on/off machine-specific optimizations. By default, it is set to OFF for maximum portability of generated binaries.
 * `NATIVE_SIZE` specifies the word size used internally for "small" integers. By default, it is set to 64. However, when used moduli are 28 bits or below,
 it is more efficient to set it to 32.
-* `WITH_OPENMP` allows the user to turn on multithreading using OpenMP. By default, it is set to ON, and all threads are available for OpenMP multithreading. The OMP_NUM_THREADS environment variable can be used to set the number of threads available in parallel regions.
+* `WITH_OPENMP` allows the user to turn on/off multithreading using OpenMP. By default, it is set to ON, and all threads are available for OpenMP multithreading. The `OMP_NUM_THREADS` environment variable can be used to limit the number of threads available in parallel regions.
 
 The compiler used is also important. We recommend using more recent compiler versions to achieve the best runtime performance.
 
@@ -33,16 +33,15 @@ Typically, the default configuration for schemes in the `pke` module is only to
 
 # Multithreading Configuration using OpenMP
 
-OpenFHE uses loop parallelization via OpenMP to speed up some lower-level (mostly polynomial) operations. This loop parallelization gives the biggest improvement in the `pke` module and only provides modest speed-up in the `binfhe` module.
-
-From a bird's eye view, the built-in OpenFHE loop parallelization is applied at the following levels:
+OpenFHE uses loop parallelization via OpenMP to speed up some lower-level (mostly polynomial) operations. From a bird's eye view, built-in OpenFHE loop parallelization is applied at the following levels:
 * For many Double-CRT operations (used for BGV, BFV, and CKKS implemented using RNS in OpenFHE), loop parallelization over the number of RNS limbs is automatically applied. The biggest benefit is seen when the multiplicative depth is not small (in deeper computations). For BGV and CKKS, the number of RNS limbs is roughly the same as the multiplicative depth set by the user (it is 1 or 2 larger). In BFV, it gets more complicated, but the number of RNS limbs is still proportional to the multiplicative depth.
 * A higher-level loop parallelization is employed for CKKS bootstrapping and scheme switching between CKKS and FHEW/TFHE.
-* Loop parallelization is also used for all schemes during key generation (but this does not have effect on the online operations).
+* Loop parallelization is also used for all schemes during key generation (but this does not have effect on online operations).
 
-When developing C++ applications based on OpenFHE, it is advised to use OpenMP parallelization at the application level, e.g., when independent operations on multiple ciphertexts are performed, application-level OpenMP loop parallelization can be turned on. The scaling of performance with the number of cores in this setup can approach the "ideal" linear scaling if the dimension of the loop is comparable to the number of cores. Note that turning on OpenMP parallelization at the application level typically turns off the lower-level OpenMP loop parallelization (i.e., we do not use nested loop parallelization in OpenMP), so application-level loop parallelization should be used only when you know that the application loop dimension is higher than what is expected for built-in OpenFHE OpenMP loop parallization.
+When developing C++ applications based on OpenFHE, it is advised to use OpenMP parallelization at the application level, e.g., when independent operations on multiple ciphertexts are performed, application-level OpenMP loop parallelization can be turned on.
+The scaling of performance with the number of cores in this setup can approach the "ideal" linear scaling if the dimension of the loop is comparable to the number of cores. Note that turning on OpenMP parallelization at the application level typically turns off the lower-level OpenMP loop parallelization (i.e., we do not use nested loop parallelization in OpenMP), so application-level loop parallelization should be used only when you know that the application loop dimension is higher than what is expected for built-in OpenFHE OpenMP loop parallization.
 
-Within OpenFHE, the use of hyperthreading can lead to decreased performance so the `OMP_NUM_THREADS` environment variable should not be set higher than the number of physical cores.
+Within OpenFHE, the use of hyperthreading can lead to decreased performance so the `OMP_NUM_THREADS` environment variable should be set no higher than the number of physical cores (usually half the number of logical cores). Performance also depends heavily on the execution environment. While a dedicated server allows OpenFHE to utilize all physical cores in isolation, running on a personal laptop often introduces contention with background processes and OS tasks. In such contentious environments, using every available thread can lead to frequent context switching and cache thrashing, ultimately degrading performance. To mitigate this on a general-purpose machine, it may be beneficial to further limit `OMP_NUM_THREADS` to a number slightly less than the number of physical cores, leaving sufficient headroom for the rest of the system. Benchmarking with a varying number of threads should be performed to determine the optimal `OMP_NUM_THREADS` value for your given system and workload.
 
 If an alternative parallelization mechanism is used, e.g., pthreads, C++11 threads, or multiprocessing, OpenMP should be turned off by setting the `WITH_OPENMP` CMake flag to OFF.
 
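The flags discussed in this document can be combined at configure time. A hedged sketch of such a build configuration (the source-directory path and the thread count of 8 are illustrative; the flag names are those documented above):

```shell
# Configure OpenFHE with the performance-related CMake flags from the document.
# WITH_NATIVEOPT=ON enables machine-specific optimizations (binaries become
# less portable); NATIVE_SIZE=64 is the default word size for "small" integers.
cmake -DWITH_NATIVEOPT=ON -DNATIVE_SIZE=64 -DWITH_OPENMP=ON ..

# Limit OpenMP to the number of physical cores before running benchmarks.
export OMP_NUM_THREADS=8
```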

docs/static_docs/Release_Notes.md

Lines changed: 11 additions & 0 deletions
@@ -1,3 +1,14 @@
+04/10/2026: OpenFHE 1.5.1 (stable) is released
+
+* Fixes the compilation error when OpenMP is disabled using the CMake flag (#1131)
+* Fixes BGV/BFV ring dimension underestimation for certain multiparty scenarios (#1157)
+* Fixes a BFV decryption error for HPS* variants in the multiparty setting (#1139)
+* Improves (slightly, i.e., by ~1 bit) the precision of Chebyshev series coefficients for CKKS functional bootstrapping (#1156)
+* Makes the unit test battery run faster on desktop machines (#1138)
+* Code cleanup and minor bug fixes
+
+The detailed list of changes is available at https://github.com/openfheorg/openfhe-development/issues?q=is%3Aissue+milestone%3A%22Release+1.5.1%22
+
 02/26/2026: OpenFHE 1.5.0 (development) is released
 
 * Optimizes CKKS bootstrapping and polynomial evaluation for multithreaded scenarios (#908)

src/binfhe/examples/boolean-serial-binary-dynamic-large-precision.cpp

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ int main() {
     uint32_t Q = 1 << logQ;
 
     int q = 4096; // q
-    int factor = 1 << int(logQ - log2(q)); // Q/q
+    int factor = 1 << int(logQ - std::log2(q)); // Q/q
     int p = cc1.GetMaxPlaintextSpace().ConvertToInt() * factor; // Obtain the maximum plaintext space
 
     std::cout << "Generating keys." << std::endl;

src/binfhe/examples/boolean-serial-json-dynamic-large-precision.cpp

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ int main() {
     uint32_t Q = 1 << logQ;
 
     int q = 4096; // q
-    int factor = 1 << int(logQ - log2(q)); // Q/q
+    int factor = 1 << int(logQ - std::log2(q)); // Q/q
     int p = cc1.GetMaxPlaintextSpace().ConvertToInt() * factor; // Obtain the maximum plaintext space
 
     std::cout << "Generating keys." << std::endl;

src/binfhe/examples/eval-decomp.cpp

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ int main() {
     uint32_t Q = 1 << logQ;
 
     int q = 4096; // q
-    int factor = 1 << int(logQ - log2(q)); // Q/q
+    int factor = 1 << int(logQ - std::log2(q)); // Q/q
     uint64_t P = cc.GetMaxPlaintextSpace().ConvertToInt() * factor; // Obtain the maximum plaintext space
 
     // Sample Program: Step 2: Key Generation

src/binfhe/examples/eval-sign.cpp

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ int main() {
     uint32_t Q = 1 << logQ;
 
     int q = 4096; // q
-    int factor = 1 << int(logQ - log2(q)); // Q/q
+    int factor = 1 << int(logQ - std::log2(q)); // Q/q
     int p = cc.GetMaxPlaintextSpace().ConvertToInt() * factor; // Obtain the maximum plaintext space
 
     // Sample Program: Step 2: Key Generation

src/binfhe/include/binfhe-constants.h

Lines changed: 4 additions & 4 deletions
@@ -69,10 +69,10 @@ enum BINFHE_PARAMSET {
     STD256Q, // more than 256 bits of security for quantum attacks : 2^(-70)
     STD256Q_3, // STD256Q for 3 binary inputs : 2^(-65)
     STD256Q_4, // STD256Q for 4 binary inputs : 2^(-50)
-    STD128_LMKCDEY, // STD128 optimized for LMKCDEY (using Gaussian secrets) : 2^(-60)
+    STD128_LMKCDEY, // STD128 optimized for LMKCDEY : 2^(-60)
     STD128_3_LMKCDEY, // STD128_LMKCDEY for 3 binary inputs : 2^(-90)
     STD128_4_LMKCDEY, // STD128_LMKCDEY for 4 binary inputs : 2^(-60)
-    STD128Q_LMKCDEY, // STD128Q optimized for LMKCDEY (using Gaussian secrets) : 2^(-60)
+    STD128Q_LMKCDEY, // STD128Q optimized for LMKCDEY : 2^(-60)
     STD128Q_3_LMKCDEY, // STD128Q_LMKCDEY for 3 binary inputs : 2^(-95)
     STD128Q_4_LMKCDEY, // STD128Q_LMKCDEY for 4 binary inputs : 2^(-55)
     STD192_LMKCDEY, // STD192 optimized for LMKCDEY (using Gaussian secrets) : 2^(-65)
@@ -81,10 +81,10 @@ enum BINFHE_PARAMSET {
     STD192Q_LMKCDEY, // STD192Q optimized for LMKCDEY (using Gaussian secrets) : 2^(-65)
     STD192Q_3_LMKCDEY, // STD192Q_LMKCDEY for 3 binary inputs : 2^(-65)
     STD192Q_4_LMKCDEY, // STD192Q_LMKCDEY for 4 binary inputs : 2^(-105)
-    STD256_LMKCDEY, // STD256 optimized for LMKCDEY (using Gaussian secrets) : 2^(-60)
+    STD256_LMKCDEY, // STD256 optimized for LMKCDEY : 2^(-60)
     STD256_3_LMKCDEY, // STD256_LMKCDEY for 3 binary inputs : 2^(-80)
     STD256_4_LMKCDEY, // STD256_LMKCDEY for 4 binary inputs : 2^(-70)
-    STD256Q_LMKCDEY, // STD256Q optimized for LMKCDEY (using Gaussian secrets) : 2^(-80)
+    STD256Q_LMKCDEY, // STD256Q optimized for LMKCDEY : 2^(-80)
     STD256Q_3_LMKCDEY, // STD256Q_LMKCDEY for 3 binary inputs : 2^(-70)
     STD256Q_4_LMKCDEY, // STD256Q_LMKCDEY for 4 binary inputs : 2^(-50)
     LPF_STD128, // STD128 configured with lower probability of failures : 2^(-135)
