Commit 1d8a9a1

Merge pull request #14760 from DefectDojo/release/2.57.3
Release: Merge release into master from: release/2.57.3
2 parents 1a8b491 + 6113e53 commit 1d8a9a1

33 files changed: 2450 additions & 240 deletions

components/package.json

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {
   "name": "defectdojo",
-  "version": "2.57.2",
+  "version": "2.57.3",
   "license" : "BSD-3-Clause",
   "private": true,
   "dependencies": {

docs/content/releases/pro/changelog.md

Lines changed: 26 additions & 0 deletions
@@ -12,6 +12,32 @@ For Open Source release notes, please see the [Releases page on GitHub](https://
 
 ## Apr 2026: v2.57
 
+### Apr 20, 2026: v2.57.2
+
+* **(Pro UI)** Search and filter state is now preserved when closing a Finding from a Finding list, so you don't lose your place after editing.
+* **(Risk Acceptance)** Bulk Edit no longer leaves Simple Risk Acceptance findings in an inconsistent "Active + Risk Accepted" state. Reactivating a previously risk-accepted Finding now behaves correctly.
+* **(Risk SLA)** Creating a Risk SLA no longer silently coerces unchecked `enforce_*_risk` options to `True`.
+* **(Surveys)** Fixed survey access for both authenticated users and anonymous links.
+* **(Universal Parser)** Non-ASCII scan names no longer cause a `UnicodeEncodeError` on import. CSV files with `""`-escaped quotes in multiline fields now parse correctly.
+* **(API)** Import/Reimport now validates consistency between ID-based and name-based identifiers, catching mismatched payloads earlier.
+* **(Permissions)** Moving an Engagement between Products now requires appropriate permission on both the source and target Product.
+* **(Reports)** Fixed a CSS overflow issue in rendered reports. Cleaned up endpoint template rendering for user fields.
+* **(Tools)** `govulncheck` parser now records `fix_available` and `fix_version`. Risk Recon parser now validates URLs via a shared SSRF utility. Added Mozilla Foundation security advisories as a supported Vulnerability ID source.
+
+### Apr 13, 2026: v2.57.1
+
+* **(Pro UI)** Object-level history views no longer default to a 31-day date filter, so the full history is visible on load.
+* **(Pro UI)** Audit Log "changes" filter now searches only the names of changed fields, reducing false matches.
+* **(Pro UI)** Predefined Finding filters now sync UI state correctly, so the active filter indicator reflects the applied filter.
+* **(Deduplication)** Added a UI for global component deduplication settings, behind a feature flag.
+* **(Rules Engine)** Fixed a preview timeout that occurred when rules were previewed against large Finding sets.
+* **(Universal Parser)** CSV/XML query path now displays correctly in the Universal Parser UI.
+* **(Import)** Additional parameters are now stored in import settings, making them available for reuse on reimport.
+* **(Tools)** Wazuh 4.8 parser now correctly attaches endpoints and locations to findings.
+* **(Tools)** Invicti parser now uses `FirstSeenDate` when populating Finding dates when `DD_USE_FIRST_SEEN` is enabled.
+* **(Tools)** `govulncheck` parser fixed for NDJSON output.
+* **(Tools)** Added CNNVD as a supported Vulnerability ID source.
+
 ### Apr 7, 2026: v2.57.0
 
 * **(Custom Enrichment)** On-prem administrators can now configure custom URLs for EPSS and KEV enrichment data sources under **Settings → Finding Enrichment Settings**. Each source (EPSS scores and CISA Known Exploited Vulnerabilities) can be independently enabled and pointed to an internal mirror or proxy. A **Test Configuration** button validates connectivity before saving. Findings with CVE IDs are automatically enriched with EPSS score/percentile and KEV status during enrichment runs.

Lines changed: 174 additions & 0 deletions

@@ -0,0 +1,174 @@
---
title: "Adding the dd-orch Database on Upgrade"
toc_hide: true
weight: -20260501
description: "Provisioning the dojodb-ddorch PostgreSQL database and pointing DefectDojo Pro at it on an existing self-hosted installation."
audience: pro
---

Starting with 2.57.3, DefectDojo Pro requires a second PostgreSQL database, `dojodb-ddorch`, used by the new `ddorch` orchestrator service. The existing `dojodb` database continues to be used by the main Django application.

This guide walks through adding `dojodb-ddorch` to an existing self-hosted PostgreSQL instance and pointing DefectDojo at it.

## Prerequisites

- PostgreSQL 16 is already installed and running on the DB server.
- The `dojodbusr` role already exists with a known password.
- `dojodb` is already created and reachable from the DefectDojo app server.
- `listen_addresses` in `postgresql.conf` is already configured for remote access.
- You have upgraded to the release that ships the `ddorch` and `ddorch-workers` services.

> **A note on the database name:** `dojodb-ddorch` contains a hyphen, so it must be double-quoted in every SQL statement (`"dojodb-ddorch"`).

## Part 1: Provision the Database

### 1. Create the new database

On the PostgreSQL server, open a `psql` session as the `postgres` superuser:

```bash
sudo -i -u postgres psql --username postgres
```

Create the database, grant privileges to the existing `dojodbusr` role, and transfer ownership:

```sql
CREATE DATABASE "dojodb-ddorch";
GRANT ALL PRIVILEGES ON DATABASE "dojodb-ddorch" TO dojodbusr;
ALTER DATABASE "dojodb-ddorch" OWNER TO dojodbusr;
\q
```

**Example session:**

```
root@dbserver:~# sudo -i -u postgres psql --username postgres
psql (16.8)
Type "help" for help.

postgres=# CREATE DATABASE "dojodb-ddorch";
CREATE DATABASE
postgres=# GRANT ALL PRIVILEGES ON DATABASE "dojodb-ddorch" TO dojodbusr;
GRANT
postgres=# ALTER DATABASE "dojodb-ddorch" OWNER TO dojodbusr;
ALTER DATABASE
postgres=# \q
```

> **PostgreSQL 15+ note:** Ownership covers schema rights for the owner, but if you ever connect as a non-owner role you will also need to grant schema privileges inside the new database:
>
> ```sql
> \c "dojodb-ddorch"
> GRANT ALL ON SCHEMA public TO dojodbusr;
> ```

### 2. Allow the app server to connect

Edit `/etc/postgresql/16/main/pg_hba.conf` and add a new line for `dojodb-ddorch` next to the existing `dojodb` entry.

**(a) Preferred — restrict to the DefectDojo app server's IP.**

Supposing the app server's IP is `9.9.9.9`, add:

```
host    dojodb-ddorch    dojodbusr    9.9.9.9/32    scram-sha-256
host    postgres         dojodbusr    9.9.9.9/32    scram-sha-256
```

**(b) Alternative — allow from any host.**

```
host    dojodb-ddorch    dojodbusr    0.0.0.0/0    scram-sha-256
host    postgres         dojodbusr    0.0.0.0/0    scram-sha-256
```

> **Note:** The lines in `pg_hba.conf` are whitespace-delimited. The easiest way to add this line is to copy/paste the existing `dojodb` line and change the database name.

**Alternative using `echo` (if no text editor is available):**

```bash
# For specific IP (replace 9.9.9.9 with your app server IP):
echo "host dojodb-ddorch dojodbusr 9.9.9.9/32 scram-sha-256" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf
echo "host postgres dojodbusr 9.9.9.9/32 scram-sha-256" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf

# OR for all hosts:
echo "host dojodb-ddorch dojodbusr 0.0.0.0/0 scram-sha-256" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf
echo "host postgres dojodbusr 0.0.0.0/0 scram-sha-256" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf
```

### 3. Reload PostgreSQL

Changes to `pg_hba.conf` only require a reload — no restart is needed:

```bash
sudo systemctl reload postgresql
```

Verify the reload was picked up:

```bash
sudo systemctl status postgresql
```

### 4. Verify connectivity from the app server

From the **DefectDojo app server**, confirm `dojodbusr` can reach the new database. Replace `<db-server-ip>` with your DB server's IP and `<password>` with the password set for `dojodbusr`:

```bash
psql "host=<db-server-ip> dbname=dojodb-ddorch user=dojodbusr password=<password>" -c "SELECT 1;"
```

A successful response of `?column?` with a value of `1` confirms the database is reachable and the credentials are valid.

## Part 2: Point DefectDojo at the New Database

Only the `ddorch` service connects to the new database directly. The main Django application reaches the orchestrator over gRPC, so `DD_DATABASE_URL` does **not** change.

### 1. Set the orchestrator database URL

The `ddorch` service reads its connection string from the `DD_DATABASE_URL` environment variable and **automatically appends `-ddorch` to the database name** in whatever URL you pass it. This means you can reuse the same connection string you already use for the main Django application — no need to construct a second URL by hand.

On startup, ddorch rewrites the database name in this URL from `dojodb` to `dojodb-ddorch` and connects to the database you created in Part 1.
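The rewrite can be pictured with a short sketch. This is illustrative only: `ddorch_db_url` is a hypothetical helper, not part of the actual service, which performs the transformation internally.

```python
from urllib.parse import urlsplit, urlunsplit

def ddorch_db_url(dd_database_url: str) -> str:
    # Hypothetical helper mirroring what ddorch does on startup:
    # append "-ddorch" to the database name in DD_DATABASE_URL.
    parts = urlsplit(dd_database_url)
    dbname = parts.path.lstrip("/")  # e.g. "dojodb"
    return urlunsplit(parts._replace(path="/" + dbname + "-ddorch"))

# Reusing the main application's connection string unchanged:
url = "postgres://dojodbusr:secret@db.internal:5432/dojodb"
print(ddorch_db_url(url))
# postgres://dojodbusr:secret@db.internal:5432/dojodb-ddorch
```

The service logs this same transformation (`from: dojodb`, `to: dojodb-ddorch`) at startup, which is the first thing to check when verifying the configuration.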
### 2. Restart the orchestrator services

From the deployment directory, recreate the two orchestrator containers so they pick up the new environment:

```bash
docker compose up -d ddorch ddorch-workers
```

Docker Compose will detect the environment change and recreate the containers. The `ddorch` service runs its own schema migrations against `dojodb-ddorch` on startup — no manual migration command is required.

### 3. Verify ddorch connected and migrated the new database

The most direct signal that the database is correctly wired up is the ddorch startup log. Check the last hundred lines:

```bash
docker compose logs ddorch --tail=100
```

Look for three log lines in sequence:

```
{"level":"INFO","msg":"Appending database name to DATABASE_URL","from":"dojodb","to":"dojodb-ddorch"}
INFO Running migrations current_schema_version=<N> next_version=<M> migrations_to_apply=<K>
{"level":"INFO","msg":"starting server","port":9871}
```

What each line proves:

- **`Appending database name to DATABASE_URL ... to: dojodb-ddorch`** — ddorch received your URL and derived the orch database name correctly.
- **`Running migrations ... migrations_to_apply=0`** — ddorch connected to `dojodb-ddorch` and found the schema at the expected version. On a first-ever boot against a fresh database you may see `migrations_to_apply=<N>` with a non-zero value and no subsequent error — this means ddorch just created the tables from scratch. Both outcomes indicate success.
- **`starting server ... port:9871`** — ddorch is up and listening.

If instead you see an error such as `FATAL: password authentication failed`, `no pg_hba.conf entry for host`, or `database "dojodb-ddorch" does not exist`, the database is not reachable — revisit Part 1 before proceeding.

Also confirm both orchestrator containers are running:

```bash
docker compose ps ddorch ddorch-workers
```

Both should report `Up`. With ddorch migrated and the workers container running, your installation is now using the new `dojodb-ddorch` database.

docs/content/triage_findings/finding_deduplication/PRO__deduplication_tuning.md

Lines changed: 6 additions & 3 deletions
@@ -34,7 +34,7 @@ To adjust Same Tool Deduplication:
 
 ### Available Deduplication Algorithms
 
-DefectDojo Pro offers three deduplication methods for same-tool deduplication:
+DefectDojo Pro offers the following deduplication methods for same-tool deduplication:
 
 #### Hash Code
 Uses a combination of selected fields to generate a unique hash. When selected, a third dropdown will appear showing the fields being used to calculate the hash.
@@ -47,6 +47,9 @@ This algorithm can be useful when working with SAST scanners, or situations wher
 #### Unique ID From Tool or Hash Code
 Attempts to use the tool's unique ID first, then falls back to the hash code if no unique ID is available. This provides the most flexible deduplication option.
 
+#### Global Component
+Matches findings by component name and version across **all Products** in the instance, rather than within a single Product or Engagement. Intended for SCA tools where the same vulnerable dependency appears in many Products. This algorithm is off by default and must be enabled by DefectDojo Support. See [Global Component Deduplication](/triage_findings/finding_deduplication/pro__global_component_deduplication/) for details.
+
 ## Cross Tool Deduplication
 
 Cross Tool Deduplication is disabled by default, as deduplication between different security tools requires careful configuration due to variations in how tools report the same vulnerabilities.
@@ -59,7 +62,7 @@ To enable Cross Tool Deduplication:
 2. Change the **Deduplication Algorithm** from "Disabled" to "Hash Code"
 3. Select which fields should be used for generating the hash in the **Hash Code Fields** dropdown
 
-Unlike Same Tool Deduplication, Cross Tool Deduplication only supports the Hash Code algorithm, as different tools rarely share compatible unique identifiers.
+Cross Tool Deduplication supports the Hash Code algorithm, which is suitable for most workflows, as different tools rarely share compatible unique identifiers. For SCA tools reporting the same dependencies, [Global Component Deduplication](/triage_findings/finding_deduplication/pro__global_component_deduplication/) is also available as a cross-tool option (off by default).
 
 ## Reimport Deduplication
 
@@ -76,7 +79,7 @@ When configuring Reimport Deduplication:
 1. Select the **Security Tool** (Universal or Generic Parser)
 2. Choose the appropriate **Deduplication Algorithm**
 
-The same three algorithm options are available for Reimport Deduplication as for Same Tool Deduplication:
+The following algorithm options are available for Reimport Deduplication:
 - Hash Code
 - Unique ID From Tool
 - Unique ID From Tool or Hash Code
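To make the Hash Code option in the diff above concrete, here is a minimal sketch of field-based hashing. The field names and the join-and-hash scheme are hypothetical, not DefectDojo's implementation; in the real Tuner the hashed fields are chosen from a dropdown.

```python
import hashlib

def finding_hash(finding: dict, fields: list) -> str:
    # Concatenate the configured fields and hash them. Findings that agree
    # on every hashed field deduplicate, regardless of fields that were
    # left out of the configuration (e.g. line number).
    material = "|".join(str(finding.get(f, "")) for f in fields)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

hash_fields = ["title", "cwe", "file_path"]
f1 = {"title": "SQL Injection", "cwe": 89, "file_path": "app/db.py", "line": 10}
f2 = {"title": "SQL Injection", "cwe": 89, "file_path": "app/db.py", "line": 42}

print(finding_hash(f1, hash_fields) == finding_hash(f2, hash_fields))
# True: "line" is not among the hashed fields, so the two dedupe
```

This is also why unstable fields (such as line numbers) make poor hash inputs: including them splits what should be one Finding into many.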

Lines changed: 87 additions & 0 deletions

@@ -0,0 +1,87 @@
---
title: "Global Component Deduplication (Pro)"
description: "Deduplicate Software Composition Analysis Findings by component name and version across all Products"
weight: 5
audience: pro
---

Global Component Deduplication is a DefectDojo Pro algorithm that identifies duplicate Findings across **all Products** based on the component name and version they reference. It is intended for Software Composition Analysis (SCA) tools, where the same vulnerable dependency (for example, `timespan@2.3.0`) may appear in many Products, and you want DefectDojo to treat those occurrences as duplicates of a single original Finding.

Unlike the other deduplication algorithms, Global Component matching is **not scoped to a single Product or Engagement**. A Finding imported into Product B can be marked as a duplicate of an older Finding in Product A, even if the two Products are unrelated.

## Enabling the Global Component Algorithm

Global Component Deduplication is gated behind a feature flag and is **off by default**. To request that it be enabled on your instance, contact [DefectDojo Support](mailto:support@defectdojo.com).

Once the feature is enabled, **Global Component** will become available as an option in the **Deduplication Algorithm** dropdown for both Same Tool and Cross Tool Deduplication settings in the Tuner.

## Configuring Global Component Deduplication

Global Component can be applied to Same-Tool Deduplication, Cross-Tool Deduplication, or both, and is configured per security tool from **Settings > Pro Settings > Deduplication Settings**.

### Same-Tool

Use Same-Tool Deduplication with the Global Component algorithm when you want to deduplicate findings from a single SCA tool across multiple Products.

1. Open the **Same Tool Deduplication** tab.
2. Select the SCA tool from the **Security Tool** dropdown (for example, `Dependency Track Finding Packaging Format (FPF) Export`).
3. Set the **Deduplication Algorithm** to **Global Component**.
4. Submit the form.

Hash Code Fields are not used by this algorithm and are hidden when it is selected.

### Cross-Tool

Use Cross-Tool Deduplication with the Global Component algorithm when you want to deduplicate findings of the same component across different SCA tools and Products.

Cross-tool matching requires Global Component to be configured on **each** tool that should participate.

1. Open the **Cross Tool Deduplication** tab.
2. For each tool to include: select it from the **Security Tool** dropdown, set the algorithm to **Global Component**, and submit.

## How Matching Works

A new Finding is marked as a duplicate of an existing Finding when:

- The component name and component version match exactly, **and**
- An older Finding with the same component name and version exists anywhere in the DefectDojo instance — in any Product or Engagement.

Component version matching is exact. A Finding for `timespan@2.3.0` will **not** deduplicate against one for `timespan@2.3.1`.

The Engagement-scoped deduplication setting is ignored for this algorithm; matching is always global.
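The matching rule above reduces to an exact name-and-version comparison with no Product scoping. A minimal sketch follows; the dictionary field names are hypothetical and stand in for the corresponding Finding attributes, not DefectDojo's internals.

```python
def is_global_component_duplicate(new: dict, existing: dict) -> bool:
    # Exact component name + version match. Product and Engagement are
    # deliberately not consulted: matching is always global.
    return (new["component_name"] == existing["component_name"]
            and new["component_version"] == existing["component_version"])

original = {"component_name": "timespan", "component_version": "2.3.0", "product": "Application 0"}
same_dep = {"component_name": "timespan", "component_version": "2.3.0", "product": "Application 1"}
newer_dep = {"component_name": "timespan", "component_version": "2.3.1", "product": "Application 3"}

print(is_global_component_duplicate(same_dep, original))   # True: cross-Product duplicate
print(is_global_component_duplicate(newer_dep, original))  # False: version differs, no match
```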
## Example

Assume Global Component is enabled on `Dependency Track Finding Packaging Format (FPF) Export` (Same Tool) and on a Generic Findings Import tool (Cross Tool):

| Step | Import | Into Product | Result |
| --- | --- | --- | --- |
| 1 | Dependency Track scan for `timespan@2.3.0` | Application 0 | 1 active Finding created |
| 2 | Same Dependency Track scan | Application 1 | 1 Finding created, marked as duplicate of the Application 0 Finding |
| 3 | Generic Findings Import for `timespan@2.3.0` | Application 2 | 1 Finding created, marked as duplicate of the Application 0 Finding (cross-tool match) |
| 4 | Dependency Track scan for `timespan@2.3.1` | Application 3 | 1 active Finding created — different version, no match |

Each duplicate Finding shows its original at the bottom of the Finding page in the duplicate chain.

## Cross-Product Visibility

Because Global Component matching crosses Product boundaries, the original Finding in a duplicate chain may live in a Product that the user viewing the duplicate does not have permission to access.

In that case, the Finding is visible and labelled as a duplicate, but the user will not be able to open or navigate to the original. Consider this before enabling Global Component on tools whose Findings are sensitive to Product-level access controls.

## Reverting

To stop using Global Component for a given tool, open its Deduplication Settings and switch the algorithm back to one of the scoped options.

For **Same Tool** Deduplication:

- Hash Code
- Unique ID From Tool
- Unique ID From Tool or Hash Code

For **Cross Tool** Deduplication:

- Hash Code
- Disabled

Changing the algorithm triggers a background recalculation of deduplication hashes for the tool's existing Findings.

docs/content/triage_findings/finding_deduplication/about_deduplication.md

Lines changed: 1 addition & 0 deletions
@@ -145,4 +145,5 @@ Sometimes, Deduplication does not work as expected. Here are some examples of w
 | Reimport closes an old Finding and creates a new one when only the line number changed | Reimport matching uses unstable fields (for example, line number) | <strong>Reimport Deduplication</strong> (prefer stable IDs or stable hash fields) |
 | Multiple Findings are created in the same Test that you believe should be duplicates | Deduplication matching is not configured for that tool or scope | <strong>Same Tool Deduplication</strong> (and consider “Delete Deduplicate Findings” behavior) |
 | Duplicates are created across different tools | Cross-tool matching is disabled or too strict | <strong>Cross Tool Deduplication (Pro only)</strong> (hash-based matching) |
+| The same SCA dependency imported into multiple Products creates separate Findings instead of duplicates | Deduplication is scoped per Product by default | <strong>Global Component Deduplication (Pro only)</strong> ([enable for your SCA tools](/triage_findings/finding_deduplication/pro__global_component_deduplication/)) |
 | Excess duplicates of the same Finding are being created, across Tests | Asset Hierarchy is not set up correctly | [Consider Reimport for continual testing](/triage_findings/finding_deduplication/avoid_excess_duplicates/) |

docs/layouts/_partials/head/script-header.html

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 <!-- Insert scripts NOT needed by stylesheets here -->
 <!-- Start of Reo Javascript -->
 <script type="text/javascript">
-  !function () { var e, t, n; e = "a92cfcfa51eca96", t = function () { Reo.init({ clientID: "a92cfcfa51eca96" }) }, (n = document.createElement("script")).src = "https://static.reo.dev/" + e + "/reo.js", n.async = !0, n.onload = t, document.head.appendChild(n) }();
+  !function(){var e,t,n;e="a92cfcfa51eca96",t=function(){Reo.init({clientID:"a92cfcfa51eca96"})},(n=document.createElement("script")).src="https://static.reo.dev/"+e+"/reo.js",n.defer=!0,n.onload=t,document.head.appendChild(n)}();
 </script>
 <!-- End of Reo Javascript -->
 <script>function initApollo() {
