
Commit 2b7cee8

Merge branch 'dev' into patch-1
2 parents: a33a33c + 6f90561

46 files changed

Lines changed: 142408 additions & 357 deletions


.github/workflows/integration-tests.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -76,7 +76,7 @@ jobs:
           # "tests/import_scanner_test.py",
           # "tests/zap.py",
         ]
-        os: [alpine, debian]
+        os: [debian]
         v3_feature_locations: [true, false]
         exclude:
           # standalone create endpoint page is gone in v3
```

.github/workflows/performance-tests.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -23,13 +23,13 @@ jobs:
         uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           path: built-docker-image
-          pattern: built-docker-image-django-alpine-linux-amd64
+          pattern: built-docker-image-django-debian-linux-amd64
           merge-multiple: true
 
       - name: Load docker images
         timeout-minutes: 10
         run: |
-          docker load -i built-docker-image/django-alpine-linux-amd64_img
+          docker load -i built-docker-image/django-debian-linux-amd64_img
           docker images
 
       - name: Set unit-test mode
@@ -45,7 +45,7 @@ jobs:
             -f docker/docker-compose.override.performance_tests_cicd.yml \
             up -d --no-deps uwsgi
         env:
-          DJANGO_VERSION: alpine
+          DJANGO_VERSION: debian
 
       - name: Run performance tests (auto-update counts)
         timeout-minutes: 15
```

.github/workflows/renovate.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -21,4 +21,4 @@ jobs:
         uses: suzuki-shunsuke/github-action-renovate-config-validator@ee9f69e1f683ed0d08225086482b34fc9abe9300 # v2.1.0
         with:
           strict: "true"
-          validator_version: 43.91.2 # renovate: datasource=github-releases depName=renovatebot/renovate
+          validator_version: 43.102.8 # renovate: datasource=github-releases depName=renovatebot/renovate
```

.github/workflows/rest-framework-tests.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ jobs:
     runs-on: ${{ inputs.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
     strategy:
       matrix:
-        os: [alpine, debian]
+        os: [debian]
 
     steps:
       # Replace slashes so we can use this in filenames
```

docs/content/get_started/open_source/installation.md

Lines changed: 17 additions & 0 deletions
```diff
@@ -18,6 +18,23 @@ See instructions in [DOCKER.md](<https://github.com/DefectDojo/django-DefectDojo
 
 [SaaS link](https://defectdojo.com/platform)
 
+---
+## **Docker Image Variants**
+---
+
+DefectDojo publishes Docker images in multiple variants:
+
+| | AMD64 | ARM64 |
+|---|---|---|
+| **Debian** | ✅ Supported | ⚠️ Unit tested |
+| **Alpine** | ⚠️ Community | ⚠️ Community |
+
+**Debian on AMD64** is the officially supported and tested configuration. All CI tests (unit, integration, and performance) run against this combination.
+
+**Debian on ARM64** is built and covered by unit tests in CI, but integration and performance tests are not run against it.
+
+The **Alpine** variants are built and published but are not covered by any automated testing. Use them at your own risk.
+
 ---
 ## **Options for the brave (not officially supported)**
 ---
```
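The variant matrix above corresponds to separate image tags on Docker Hub. A hedged illustration of pulling a specific variant; the tag names are assumptions, so check the `defectdojo/defectdojo-django` repository on Docker Hub for the tags your release actually publishes:

```shell
# Officially supported variant (Debian, AMD64).
# Tag naming is an assumption -- verify on Docker Hub before relying on it.
docker pull defectdojo/defectdojo-django:latest-debian

# Community-supported Alpine variant: built and published,
# but not covered by automated tests.
docker pull defectdojo/defectdojo-django:latest-alpine
```
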
Lines changed: 11 additions & 0 deletions

```diff
@@ -0,0 +1,11 @@
+---
+title: 'Upgrading to DefectDojo Version 2.56.4'
+toc_hide: true
+weight: -20260319
+description: JFrog Xray API Summary Artifact parser deduplication
+---
+
+## JFrog Xray API Summary Artifact parser deduplication
+Deduplication of JFrog Xray API Summary Artifact findings is improved for newly imported findings.
+
+To apply this on existing data, you need to recompute the hashes for this specific parser [see docs](https://docs.defectdojo.com/triage_findings/finding_deduplication/os__deduplication_tuning/#after-changing-deduplication-settings).
```
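The linked deduplication-tuning doc describes recomputing hash codes with the `dedupe` management command. A hedged sketch of the invocation; the flag names and the exact scan-type string are assumptions, so verify with `./manage.py dedupe --help` on your install:

```shell
# Recompute hash codes for just this parser inside the running uwsgi
# container (flags and scan-type name are assumptions -- verify with
# `./manage.py dedupe --help`).
docker compose exec uwsgi ./manage.py dedupe --hash_code_only --dedupe_sync \
    --parser "JFrog Xray API Summary Artifact Scan"
```
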

docs/content/supported_tools/parsers/file/anchore_grype.md

Lines changed: 28 additions & 0 deletions
```diff
@@ -203,3 +203,31 @@ By default, DefectDojo identifies duplicate Findings using these [hashcode field
 - severity
 - component name
 - component version
+
+### Anchore Grype Detailed
+
+Both scan types accept the same JSON report format. The difference is in how Findings are deduplicated:
+
+- **`Anchore Grype`** — Aggregates all matches for the same CVE, component name, and version into a single Finding, regardless of file path. Deduplication is based on hashcode fields (`title`, `severity`, `component_name`, `component_version`).
+- **`Anchore Grype detailed`** — Creates a separate Finding for each unique file path. Deduplication is based on `unique_id_from_tool`, composed as `{vuln_id}|{component_name}|{component_version}|{file_path}`.
+
+A typical case is a package installed at multiple paths in a container image (e.g., /usr/lib/x86_64-linux-gnu/libc.so.6 and /lib/x86_64-linux-gnu/libc.so.6) — the same CVE would produce one Finding in default mode and two in detailed mode.
+
+**Field mapping:**
+
+| Finding Field | Grype JSON Source |
+|---|---|
+| `title` | `{vulnerability.id} in {artifact.name}:{artifact.version}` |
+| `severity` | `vulnerability.severity` (mapped: `Unknown`/`Negligible` → `Info`) |
+| `description` | `vulnerability.namespace`, `vulnerability.description`, `matchDetails[].matcher`, `artifact.purl` |
+| `component_name` | `artifact.name` |
+| `component_version` | `artifact.version` |
+| `file_path` | `artifact.locations[0].path` |
+| `vuln_id_from_tool` | `vulnerability.id` |
+| `unique_id_from_tool` | `vuln_id\|component_name\|component_version\|file_path` (detailed mode only) |
+| `references` | `vulnerability.dataSource`, `vulnerability.urls`, `relatedVulnerabilities[0].dataSource`, `relatedVulnerabilities[0].urls` |
+| `mitigation` | `vulnerability.fix.versions` |
+| `fix_available` | `true` if `vulnerability.fix.versions` is non-empty |
+| `fix_version` | `vulnerability.fix.versions[0]` (or comma-joined if multiple) |
+| `cvssv3` | `vulnerability.cvss` or `relatedVulnerabilities[0].cvss` |
+| `epss_score` / `epss_percentile` | `vulnerability.epss` or `relatedVulnerabilities[0].epss` |
```
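The two deduplication keys documented above can be sketched in a few lines. This is a minimal illustration, not DefectDojo's actual parser code, and the sample match is hand-written for the example:

```python
import json

# A hand-written, minimal Grype match (illustrative only).
sample = json.loads("""
{
  "vulnerability": {"id": "CVE-2023-1234", "severity": "Negligible"},
  "artifact": {
    "name": "libc6",
    "version": "2.36-9",
    "locations": [{"path": "/usr/lib/x86_64-linux-gnu/libc.so.6"}]
  }
}
""")

vuln_id = sample["vulnerability"]["id"]
name = sample["artifact"]["name"]
version = sample["artifact"]["version"]
file_path = sample["artifact"]["locations"][0]["path"]

# Severity mapping from the table: Unknown/Negligible -> Info.
severity = sample["vulnerability"]["severity"]
if severity in ("Unknown", "Negligible"):
    severity = "Info"

# Default mode dedupes on hashcode fields (title, severity, component
# name/version), so file_path plays no part in the key.
title = f"{vuln_id} in {name}:{version}"

# Detailed mode dedupes on unique_id_from_tool, which includes file_path,
# so the same CVE found at two paths yields two Findings.
unique_id = f"{vuln_id}|{name}|{version}|{file_path}"

print(title)      # CVE-2023-1234 in libc6:2.36-9
print(severity)   # Info
print(unique_id)
```
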

docs/package-lock.json

Lines changed: 9 additions & 9 deletions

dojo/api_v2/serializers.py

Lines changed: 20 additions & 0 deletions
```diff
@@ -3102,6 +3102,26 @@ def validate(self, data):
         return data
 
 
+class CeleryStatusSerializer(serializers.Serializer):
+    worker_status = serializers.BooleanField(read_only=True)
+    broker_status = serializers.BooleanField(read_only=True)
+    queue_length = serializers.IntegerField(allow_null=True, read_only=True)
+    task_time_limit = serializers.IntegerField(allow_null=True, read_only=True)
+    task_soft_time_limit = serializers.IntegerField(allow_null=True, read_only=True)
+    task_default_expires = serializers.IntegerField(allow_null=True, read_only=True)
+
+
+class CeleryQueueTaskDetailSerializer(serializers.Serializer):
+    task_name = serializers.CharField(read_only=True)
+    count = serializers.IntegerField(read_only=True)
+    oldest_position = serializers.IntegerField(read_only=True)
+    newest_position = serializers.IntegerField(read_only=True)
+    oldest_eta = serializers.CharField(allow_null=True, read_only=True)
+    newest_eta = serializers.CharField(allow_null=True, read_only=True)
+    earliest_expires = serializers.CharField(allow_null=True, read_only=True)
+    latest_expires = serializers.CharField(allow_null=True, read_only=True)
+
+
 class FindingNoteSerializer(serializers.Serializer):
     note_id = serializers.IntegerField()
```

dojo/api_v2/views.py

Lines changed: 78 additions & 0 deletions
```diff
@@ -183,9 +183,14 @@
 from dojo.utils import (
     async_delete,
     generate_file_response,
+    get_celery_queue_details,
+    get_celery_queue_length,
+    get_celery_worker_status,
     get_setting,
     get_system_setting,
     process_tag_notifications,
+    purge_celery_queue,
+    purge_celery_queue_by_task_name,
 )
 
 logger = logging.getLogger(__name__)
@@ -3260,6 +3265,79 @@ def get_queryset(self):
         return System_Settings.objects.all().order_by("id")
 
 
+class CeleryViewSet(viewsets.ViewSet):
+    permission_classes = (permissions.IsSuperUser, DjangoModelPermissions)
+    queryset = System_Settings.objects.none()
+
+    @extend_schema(
+        responses=serializers.CeleryStatusSerializer,
+        summary="Get Celery worker and queue status",
+        description=(
+            "Returns Celery worker liveness, pending queue length, and the active task "
+            "timeout/expiry configuration. Uses the Celery control channel (pidbox) for "
+            "worker status so it works correctly even when the task queue is clogged."
+        ),
+    )
+    @action(detail=False, methods=["get"], url_path="status")
+    def status(self, request):
+        queue_length = get_celery_queue_length()
+        data = {
+            "worker_status": get_celery_worker_status(),
+            "broker_status": queue_length is not None,
+            "queue_length": queue_length,
+            "task_time_limit": getattr(settings, "CELERY_TASK_TIME_LIMIT", None),
+            "task_soft_time_limit": getattr(settings, "CELERY_TASK_SOFT_TIME_LIMIT", None),
+            "task_default_expires": getattr(settings, "CELERY_TASK_DEFAULT_EXPIRES", None),
+        }
+        return Response(serializers.CeleryStatusSerializer(data).data)
+
+    @extend_schema(
+        request=None,
+        responses={200: {"type": "object", "properties": {"purged": {"type": "integer"}}}},
+        summary="Purge all pending Celery tasks from the queue",
+        description=(
+            "Removes all pending tasks from the default Celery queue. Tasks already being "
+            "executed by workers are not affected. Note: if deduplication tasks were queued, "
+            "you may need to re-run deduplication manually via `python manage.py dedupe`."
+        ),
+    )
+    @action(detail=False, methods=["post"], url_path="queue/purge")
+    def queue_purge(self, request):
+        purged = purge_celery_queue()
+        return Response({"purged": purged})
+
+    @extend_schema(
+        responses=serializers.CeleryQueueTaskDetailSerializer(many=True),
+        summary="Get per-task breakdown of the Celery queue",
+        description=(
+            "Scans every message in the queue (O(N)) and returns task name, count, and "
+            "oldest/newest queue positions. May be slow for large queues."
+        ),
+    )
+    @action(detail=False, methods=["get"], url_path="queue/details")
+    def queue_details(self, request):
+        details = get_celery_queue_details()
+        if details is None:
+            return Response({"error": "Unable to read queue details."}, status=503)
+        return Response(serializers.CeleryQueueTaskDetailSerializer(details, many=True).data)
+
+    @extend_schema(
+        request={"application/json": {"type": "object", "properties": {"task_name": {"type": "string"}}, "required": ["task_name"]}},
+        responses={200: {"type": "object", "properties": {"purged": {"type": "integer"}}}},
+        summary="Purge all queued tasks with a given task name",
+        description="Removes all pending tasks matching the given task name from the default Celery queue.",
+    )
+    @action(detail=False, methods=["post"], url_path="queue/task/purge")
+    def queue_task_purge(self, request):
+        task_name = request.data.get("task_name", "").strip()
+        if not task_name:
+            return Response({"error": "task_name is required."}, status=400)
+        purged = purge_celery_queue_by_task_name(task_name)
+        if purged is None:
+            return Response({"error": "Unable to purge tasks."}, status=503)
+        return Response({"purged": purged})
+
+
 # Authorization: superuser
 @extend_schema_view(**schema_with_prefetch())
 class NotificationsViewSet(
```
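The selective purge endpoint relies on being able to tell which task a queued broker message belongs to. A hedged sketch of the underlying idea follows; it is not DefectDojo's actual `purge_celery_queue_by_task_name` implementation, and the task names in the fake queue are made up. It assumes Celery's message protocol v2, where the task name travels in the message headers:

```python
import json

def purge_by_task_name(raw_messages, task_name):
    """Drop broker messages whose Celery v2 header names the given task.

    Returns (kept_messages, purged_count). Illustrative only: a real
    implementation drains the broker queue and requeues the kept messages
    instead of returning them.
    """
    kept, purged = [], 0
    for raw in raw_messages:
        msg = json.loads(raw)
        # Celery protocol v2 puts the task name in headers["task"].
        if msg.get("headers", {}).get("task") == task_name:
            purged += 1
        else:
            kept.append(raw)
    return kept, purged

# Fake queue contents; task names are hypothetical.
queue = [
    json.dumps({"headers": {"task": "dojo.tasks.async_dupe_delete"}}),
    json.dumps({"headers": {"task": "dojo.importers.import_scan"}}),
    json.dumps({"headers": {"task": "dojo.tasks.async_dupe_delete"}}),
]
kept, purged = purge_by_task_name(queue, "dojo.tasks.async_dupe_delete")
print(purged)     # 2
print(len(kept))  # 1
```
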
