modules/network-observability-dns-tracking.adoc (+5 -4)
@@ -36,11 +36,12 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - DNSTracking <1>
-      sampling: 1 <2>
+        - DNSTracking
+      sampling: 1
 ----
-<1> You can set the `spec.agent.ebpf.features` parameter list to enable DNS tracking of each network flow in the web console.
-<2> You can set `sampling` to a value of `1` for more accurate metrics and to capture *DNS latency*. For a `sampling` value greater than 1, you can observe flows with *DNS Response Code* and *DNS Id*, and it is unlikely that *DNS Latency* can be observed.
+
+* You can set the `spec.agent.ebpf.features` parameter list to enable DNS tracking of each network flow in the web console.
+* You can set `sampling` to a value of `1` for more accurate metrics and to capture *DNS latency*. For a `sampling` value greater than 1, you can observe flows with *DNS Response Code* and *DNS Id*, and it is unlikely that *DNS Latency* can be observed.
 
 . When you refresh the *Network Traffic* page, there are new DNS representations you can choose to view in the *Overview* and *Traffic Flow* views and new filters you can apply.
 .. Select new DNS choices in *Manage panels* to display graphical visualizations and DNS metrics in the *Overview*.
modules/network-observability-flowcollector-example.adoc (+30 -53)
@@ -20,61 +20,38 @@ metadata:
 spec:
   namespace: netobserv
   deploymentModel: Service
+  networkPolicy:
+    enable: true
   agent:
-    type: eBPF <1>
+    type: eBPF
     ebpf:
-      sampling: 50 <2>
-      logLevel: info
+      sampling: 50
       privileged: false
-      resources:
-        requests:
-          memory: 50Mi
-          cpu: 100m
-        limits:
-          memory: 800Mi
-  processor: <3>
-    logLevel: info
-    resources:
-      requests:
-        memory: 100Mi
-        cpu: 100m
-      limits:
-        memory: 800Mi
-    logTypes: Flows
-    advanced:
-      conversationEndTimeout: 10s
-      conversationHeartbeatInterval: 30s
-  loki: <4>
-    mode: LokiStack <5>
+      features: []
+  processor:
+    addZone: false
+    subnetLabels:
+      openShiftAutoDetect: true
+      customLabels: []
+    consumerReplicas: 3
+  loki:
+    enable: true
+    mode: LokiStack
+    lokiStack:
+      name: loki
+      namespace: netobserv-loki
   consolePlugin:
-    register: true
-    logLevel: info
-    portNaming:
-      enable: true
-      portNames:
-        "3100": loki
-    quickFilters: <6>
-    - name: Applications
-      filter:
-        src_namespace!: 'openshift-,netobserv'
-        dst_namespace!: 'openshift-,netobserv'
-      default: true
-    - name: Infrastructure
-      filter:
-        src_namespace: 'openshift-,netobserv'
-        dst_namespace: 'openshift-,netobserv'
-    - name: Pods network
-      filter:
-        src_kind: 'Pod'
-        dst_kind: 'Pod'
-      default: true
-    - name: Services network
-      filter:
-        dst_kind: 'Service'
+    enable: true
+  exporters: []
 ----
-<1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only {product-title} supported option.
-<2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. By default, eBPF sampling is set to `50`, so a flow has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all flows are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
-<3> The Processor specification `spec.processor.` can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The `spec.processor.logTypes` value is `Flows`. The `spec.processor.advanced` values are `Conversations`, `EndedConversations`, or `ALL`. Storage requirements are highest for `All` and lowest for `EndedConversations`.
-<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install.
-<5> The `LokiStack` mode automatically sets a few configurations: `querierUrl`, `ingesterUrl` and `statusUrl`, `tenantID`, and corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki. And `authToken` is set to `Forward`. You can set these manually using the `Manual` mode.
-<6> The `spec.quickFilters` specification defines filters that show up in the web console. The `Application` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Application` filter shows all traffic that _does not_ originate from, or have a destination to, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.
+
+where:
+
+`spec.agent.type`:: Must be `eBPF`, because eBPF is the only option that {product-title} supports.
+`spec.agent.ebpf.sampling`:: Specifies the sampling interval. By default, eBPF sampling is set to `50`, so a packet has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all packets are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
+`spec.agent.ebpf.privileged`:: Specifies whether the eBPF agent pods run as privileged. Running as privileged is required for several features, such as monitoring non-default networks and tracking packet drops. For security, in accordance with the principle of least privilege, enable it only when you need those features. A warning is displayed if you enable a feature that requires privileged mode without explicitly setting this value to `true`.
+`spec.processor.addZone`:: Injects cloud availability zones into network flows.
+`spec.processor.subnetLabels`:: Specifies a list of custom labels to inject into network flows, based on CIDR matching.
+`spec.processor.consumerReplicas`:: Specifies the number of replicas for the processor pods (`flowlogs-pipeline`). Refer to the "Resource management and performance considerations" section for recommendations based on the cluster size.
+`spec.loki.mode`:: Specifies how to configure the connection to Loki, depending on its installation mode. If you use the install paths described in "Installing the Loki Operator", set the mode to `LokiStack`, and set `spec.loki.lokiStack` to refer to the name and namespace of the installed `LokiStack` resource.
+`spec.loki.lokiStack.namespace`:: Specifies the namespace for the `LokiStack` resource. This value must match the `metadata.namespace` defined in the `LokiStack` custom resource. While this example uses `netobserv-loki`, you can use a different namespace for different components.
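The structure of a `spec.processor.subnetLabels.customLabels` entry is not shown in this diff. As a hedged sketch (the `cidrs` and `name` entry field names are assumptions, not taken from this changeset), a custom subnet label based on CIDR matching might look like this:

```yaml
# Hypothetical FlowCollector fragment: flows with an IP inside the listed
# CIDRs would be enriched with the "internal-services" subnet label.
# The `cidrs` and `name` entry fields are assumptions.
spec:
  processor:
    subnetLabels:
      openShiftAutoDetect: true
      customLabels:
        - cidrs:
            - 10.0.64.0/18
          name: internal-services
```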
modules/network-observability-flowcollector-kafka-config.adoc (+15 -11)
@@ -9,10 +9,10 @@
 [role="_abstract"]
 Configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds.
 
-A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
+You must have a running Kafka instance, and a Kafka topic dedicated to {product-title} Network Observability created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
 
 .Prerequisites
-* Kafka is installed. RedHat supports Kafka with AMQ Streams Operator.
+* You have installed Kafka. Red{nbsp}Hat supports Kafka with the AMQ Streams Operator.
 
 .Procedure
 . In the web console, navigate to *Ecosystem*->*Installed Operators*.
@@ -21,7 +21,7 @@ A Kafka instance needs to be running, and a Kafka topic dedicated to {product-ti
 
 . Select the cluster and then click the *YAML* tab.
 
-. Modify the `FlowCollector` resource for {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
+. Change the `FlowCollector` resource for {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
 +
 .Sample Kafka configuration in `FlowCollector` resource
-<1> Set `spec.deploymentModel` to `Kafka` instead of `Direct` to enable the Kafka deployment model.
-<2> `spec.kafka.address` refers to the Kafka bootstrap server address. You can specify a port if needed, for instance `kafka-cluster-kafka-bootstrap.netobserv:9093` for using TLS on port 9093.
-<3> `spec.kafka.topic` should match the name of a topic created in Kafka.
-<4> `spec.kafka.tls` can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: `netobserv`) and where the eBPF agents are deployed (default: `netobserv-privileged`). It must be referenced with `spec.kafka.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.kafka.tls.userCert`.
+
+where:
+
+`spec.deploymentModel`:: Specifies the deployment model. Set to `Kafka` instead of `Service` to enable the Kafka deployment model.
+`spec.kafka.address`:: Specifies the Kafka bootstrap server address. You can specify a port if needed, for instance `kafka-cluster-kafka-bootstrap.netobserv:9093` for using TLS on port 9093.
+`spec.kafka.topic`:: Specifies the name of the topic to use. It must match the name of a topic created in Kafka.
+`spec.kafka.tls`:: Specifies communication encryption. Use this setting to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret in both namespaces: the namespace where you deploy the `flowlogs-pipeline` processor component (default: `netobserv`) and the namespace where you deploy the eBPF agents (default: `netobserv-privileged`). Reference the certificate by using `spec.kafka.tls.caCert`. When you use mTLS, make the client secrets available in these namespaces as well. You can generate the secrets by using the Red{nbsp}Hat AMQ Streams User Operator. Reference the secrets by using `spec.kafka.tls.userCert`.
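Putting the parameters above together, a minimal Kafka configuration could look like the following sketch. The `apiVersion`, the topic name, and the layout of the `caCert` reference are assumptions; only `deploymentModel`, `kafka.address`, `kafka.topic`, and `kafka.tls` come from the descriptions above.

```yaml
# Hedged sketch of a FlowCollector configured for the Kafka deployment model.
# apiVersion, topic name, and caCert field layout are assumptions.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: Kafka              # Kafka instead of Service
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv:9093"  # TLS on port 9093
    topic: network-flows              # must match a topic created in Kafka
    tls:
      enable: true
      caCert:                         # CA cert must exist in both component namespaces
        type: secret
        name: kafka-cluster-cluster-ca-cert
        certFile: ca.crt
```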
-<1> The installation examples in this documentation use the same namespace, `netobserv`, across all components. You can optionally use a different namespace for the different components.
+
+where:
+
+`metadata.namespace`:: Specifies the namespace for the Loki S3 secret. While this example uses `netobserv-loki`, you can use a different namespace for different components.
+`stringData.access_key_id`:: Specifies the access key ID for the S3 bucket.
+`stringData.access_key_secret`:: Specifies the secret access key for the S3 bucket.
+`stringData.bucketnames`:: Specifies the name of the S3 bucket.
+`stringData.endpoint`:: Specifies the endpoint URL for the S3 service.
+`stringData.region`:: Specifies the AWS region where the bucket is located.
 
 .Verification
 * After you create the secret, you can view it listed under *Workloads*->*Secrets* in the web console.
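The `stringData` fields described above suggest a Secret shaped roughly as follows. This is a sketch with placeholder values; the secret name `loki-s3` is taken from the `LokiStack` example elsewhere in this changeset, and the endpoint and region are illustrative only.

```yaml
# Hedged sketch of the Loki S3 secret; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3
  namespace: netobserv-loki
stringData:
  access_key_id: <AWS_ACCESS_KEY_ID>
  access_key_secret: <AWS_SECRET_ACCESS_KEY>
  bucketnames: <bucket-name>
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
```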
modules/network-observability-lokistack-create.adoc (+10 -9)
@@ -18,40 +18,41 @@ You can deploy a `LokiStack` custom resource (CR) to create a namespace or new p
 . Click *Create LokiStack*.
 . Ensure the following fields are specified in either *Form View* or *YAML view*:
 +
---
 [source,yaml]
 ----
 apiVersion: loki.grafana.com/v1
 kind: LokiStack
 metadata:
   name: loki
-  namespace: netobserv # <1>
+  namespace: netobserv-loki
 spec:
-  size: 1x.small # <2>
+  size: 1x.small
   storage:
     schemas:
     - version: v13
       effectiveDate: '2022-06-01'
     secret:
       name: loki-s3
       type: s3
-  storageClassName: gp3 # <3>
+  storageClassName: gp3
   tenants:
     mode: openshift-network
 ----
-<1> The installation examples in this documentation use the same namespace, `netobserv`, across all components. You can optionally use a different namespace.
-<2> Specify the deployment size. In the {loki-op} 5.8 and later versions, the supported size options for production instances of Loki are `1x.extra-small`, `1x.small`, or `1x.medium`.
+
+where:
+
+`metadata.namespace`:: Specifies the namespace for the `LokiStack` resource. While this example uses `netobserv-loki`, you can use a different namespace for different components.
+`spec.size`:: Specifies the deployment size. In {loki-op} 5.8 and later versions, the supported size options for production instances of Loki are `1x.extra-small`, `1x.small`, or `1x.medium`.
 +
 [IMPORTANT]
 ====
 It is not possible to change the number `1x` for the deployment size.
 ====
-<3> Use a storage class name that is available on the cluster for `ReadWriteOnce` access mode. For best performance, specify a storage class that allocates block storage. You can use `oc get storageclasses` to see what is available on your cluster.
+`spec.storageClassName`:: Specifies a storage class name that is available on the cluster for `ReadWriteOnce` access mode. For best performance, specify a storage class that allocates block storage. Use the `oc get storageclasses` command to see available storage classes on your cluster.
 +
 [IMPORTANT]
 ====
-You must not reuse the same `LokiStack`CR that is used for {logging}.
+You must not reuse the same `LokiStack` custom resource that is used for {logging}.
modules/network-observability-packet-drops.adoc (+7 -4)
@@ -36,11 +36,14 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - PacketDrop <1>
-      privileged: true <2>
+        - PacketDrop
+      privileged: true
 ----
-<1> You can start reporting the packet drops of each network flow by listing the `PacketDrop` parameter in the `spec.agent.ebpf.features` specification list.
-<2> The `spec.agent.ebpf.privileged` specification value must be `true` for packet drop tracking.
+
+where:
+
+`spec.agent.ebpf.features`:: Specifies the features to enable. Include `PacketDrop` to start reporting packet drops for each network flow.
+`spec.agent.ebpf.privileged`:: Specifies whether privileged mode is enabled. Must be set to `true` for packet drop tracking.
 
 .Verification
 * When you refresh the *Network Traffic* page, the *Overview*, *Traffic Flow*, and *Topology* views display new information about packet drops:
modules/network-observability-packet-translation.adoc (+3 -2)
@@ -31,9 +31,10 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - PacketTranslation <1>
+        - PacketTranslation
 ----
-<1> You can start enriching network flows with translated packet information by listing the `PacketTranslation` parameter in the `spec.agent.ebpf.features` specification list.
+
+** You can start enriching network flows with translated packet information by listing the `PacketTranslation` parameter in the `spec.agent.ebpf.features` specification list.
 +
 . Refresh the *Network Traffic* page to filter for information about translated packets:
 .. Filter the network flow data based on *Destination kind: Service*.
modules/network-observability-tcp-flag-syn-flood.adoc (+4 -3)
@@ -61,7 +61,7 @@ metadata:
       message: |-
         {{ $labels.job }}: incoming SYN-flood attack suspected to Host={{ $labels.DstK8S_HostName}}, Namespace={{ $labels.DstK8S_Namespace }}, Resource={{ $labels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports.
       summary: "Incoming SYN-flood"
-      expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 <1>
+      expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300
       for: 15s
       labels:
         severity: warning
@@ -71,14 +71,15 @@ metadata:
       message: |-
         {{ $labels.job }}: outgoing SYN-flood attack suspected from Host={{ $labels.SrcK8S_HostName}}, Namespace={{ $labels.SrcK8S_Namespace }}, Resource={{ $labels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports.
       summary: "Outgoing SYN-flood"
-      expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 <1>
+      expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300
       for: 15s
       labels:
         severity: warning
         app: netobserv
 # ...
 ----
-<1> In this example, the threshold for the alert is `300`; however, you can adapt this value empirically. A threshold that is too low might produce false-positives, and if it's too high it might miss actual attacks.
+
+In this example, the threshold for the alert is `300`; however, you can adapt this value empirically. A threshold that is too low might produce false positives, and a threshold that is too high might miss actual attacks.
 
 .Verification
 . In the web console, click *Manage Columns* in the *Network Traffic* table view and click *TCP flags*.