Commit e589be5

Merge pull request #107644 from gwynnemonahan/OSDOCS-14480
OSDOCS-14480 [NETOBSERV] Update callouts for DITA
2 parents 9f35e9a + 33d6a4a commit e589be5

14 files changed

Lines changed: 135 additions & 120 deletions

modules/network-observability-SRIOV-configuration.adoc

Lines changed: 3 additions & 2 deletions
@@ -34,6 +34,7 @@ spec:
   agent:
     type: eBPF
     ebpf:
-      privileged: true <1>
+      privileged: true
 ----
-<1> The `spec.agent.ebpf.privileged` field value must be set to `true` to enable SR-IOV monitoring.
++
+** The `spec.agent.ebpf.privileged` field value must be set to `true` to enable SR-IOV monitoring.

modules/network-observability-dns-tracking.adoc

Lines changed: 5 additions & 4 deletions
@@ -36,11 +36,12 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - DNSTracking <1>
-      sampling: 1 <2>
+        - DNSTracking
+      sampling: 1
 ----
-<1> You can set the `spec.agent.ebpf.features` parameter list to enable DNS tracking of each network flow in the web console.
-<2> You can set `sampling` to a value of `1` for more accurate metrics and to capture *DNS latency*. For a `sampling` value greater than 1, you can observe flows with *DNS Response Code* and *DNS Id*, and it is unlikely that *DNS Latency* can be observed.
++
+* You can set the `spec.agent.ebpf.features` parameter list to enable DNS tracking of each network flow in the web console.
+* You can set `sampling` to a value of `1` for more accurate metrics and to capture *DNS latency*. For a `sampling` value greater than 1, you can observe flows with *DNS Response Code* and *DNS Id*, and it is unlikely that *DNS Latency* can be observed.
 
 . When you refresh the *Network Traffic* page, there are new DNS representations you can choose to view in the *Overview* and *Traffic Flow* views and new filters you can apply.
 .. Select new DNS choices in *Manage panels* to display graphical visualizations and DNS metrics in the *Overview*.

modules/network-observability-flowcollector-example.adoc

Lines changed: 30 additions & 53 deletions
@@ -20,61 +20,38 @@ metadata:
 spec:
   namespace: netobserv
   deploymentModel: Service
+  networkPolicy:
+    enable: true
   agent:
-    type: eBPF <1>
+    type: eBPF
     ebpf:
-      sampling: 50 <2>
-      logLevel: info
+      sampling: 50
       privileged: false
-      resources:
-        requests:
-          memory: 50Mi
-          cpu: 100m
-        limits:
-          memory: 800Mi
-  processor: <3>
-    logLevel: info
-    resources:
-      requests:
-        memory: 100Mi
-        cpu: 100m
-      limits:
-        memory: 800Mi
-    logTypes: Flows
-    advanced:
-      conversationEndTimeout: 10s
-      conversationHeartbeatInterval: 30s
-  loki: <4>
-    mode: LokiStack <5>
+      features: []
+  processor:
+    addZone: false
+    subnetLabels:
+      openShiftAutoDetect: true
+      customLabels: []
+    consumerReplicas: 3
+  loki:
+    enable: true
+    mode: LokiStack
+    lokiStack:
+      name: loki
+      namespace: netobserv-loki
   consolePlugin:
-    register: true
-    logLevel: info
-    portNaming:
-      enable: true
-      portNames:
-        "3100": loki
-  quickFilters: <6>
-    - name: Applications
-      filter:
-        src_namespace!: 'openshift-,netobserv'
-        dst_namespace!: 'openshift-,netobserv'
-      default: true
-    - name: Infrastructure
-      filter:
-        src_namespace: 'openshift-,netobserv'
-        dst_namespace: 'openshift-,netobserv'
-    - name: Pods network
-      filter:
-        src_kind: 'Pod'
-        dst_kind: 'Pod'
-      default: true
-    - name: Services network
-      filter:
-        dst_kind: 'Service'
+    enable: true
+  exporters: []
 ----
-<1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only {product-title} supported option.
-<2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. By default, eBPF sampling is set to `50`, so a flow has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all flows are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
-<3> The Processor specification `spec.processor.` can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The `spec.processor.logTypes` value is `Flows`. The `spec.processor.advanced` values are `Conversations`, `EndedConversations`, or `ALL`. Storage requirements are highest for `All` and lowest for `EndedConversations`.
-<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install.
-<5> The `LokiStack` mode automatically sets a few configurations: `querierUrl`, `ingesterUrl` and `statusUrl`, `tenantID`, and corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki. And `authToken` is set to `Forward`. You can set these manually using the `Manual` mode.
-<6> The `spec.quickFilters` specification defines filters that show up in the web console. The `Application` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Application` filter shows all traffic that _does not_ originate from, or have a destination to, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.
++
+where:
+
+`spec.agent.type`:: Must be `eBPF`, because eBPF is the only option that {product-title} supports.
+`spec.agent.ebpf.sampling`:: Specifies the sampling interval. By default, eBPF sampling is set to `50`, so a packet has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all packets are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
+`spec.agent.ebpf.privileged`:: Specifies whether the eBPF agent pods run as privileged. Privileged mode is required for several features, such as monitoring non-default networks and tracking packet drops. For security, in accordance with the principle of least privilege, enable it only when you need those features. A warning is displayed if you enable a feature that requires privileged mode without explicitly setting this value to `true`.
+`spec.processor.addZone`:: Specifies whether to inject cloud availability zones in network flows.
+`spec.processor.subnetLabels`:: Specifies a list of customized labels to inject in network flows, based on CIDR matching.
+`spec.processor.consumerReplicas`:: Specifies the number of replicas for the processor pods (`flowlogs-pipeline`). See the "Resource management and performance considerations" section for recommendations based on the cluster size.
+`spec.loki.mode`:: Specifies how to configure the connection to Loki, depending on its installation mode. If you use the install paths described in "Installing the Loki Operator", the mode must be set to `LokiStack`, and `spec.loki.lokiStack` should refer to the name and namespace of the installed `LokiStack` resource.
+`spec.loki.lokiStack.namespace`:: Specifies the namespace for the `LokiStack` resource. This value must match the `metadata.namespace` defined in the `LokiStack` custom resource. While this example uses `netobserv-loki`, you can use a different namespace for different components.
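The 1-in-N behavior documented for `spec.agent.ebpf.sampling` is easy to reason about with a quick back-of-the-envelope calculation. The following is an illustrative Python sketch of that documented semantics, not part of the `FlowCollector` API:

```python
# Illustrative sketch: how the eBPF sampling interval affects the share of
# packets that Network Observability records. Per the docs, a value of 0 or 1
# means every packet is sampled; otherwise each packet has a 1-in-N chance.

def sampled_fraction(sampling: int) -> float:
    """Return the fraction of packets sampled for a given interval."""
    if sampling in (0, 1):
        return 1.0
    return 1.0 / sampling

# With the default interval of 50, roughly 2% of packets are sampled,
# which is why lower values cost more compute, memory, and storage.
print(sampled_fraction(50))  # 0.02
print(sampled_fraction(1))   # 1.0
```

Halving the interval doubles the sampled fraction, so it is worth starting at the default of `50` and tightening only if the resulting metrics are too coarse.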

modules/network-observability-flowcollector-kafka-config.adoc

Lines changed: 15 additions & 11 deletions
@@ -9,10 +9,10 @@
 [role="_abstract"]
 Configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds.
 
-A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
+You must have a running Kafka instance and create a Kafka topic in that instance dedicated to {product-title} Network Observability. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
 
 .Prerequisites
-* Kafka is installed. Red Hat supports Kafka with AMQ Streams Operator.
+* You have installed Kafka. Red{nbsp}Hat supports Kafka with the AMQ Streams Operator.
 
 .Procedure
 . In the web console, navigate to *Ecosystem* -> *Installed Operators*.
@@ -21,7 +21,7 @@ A Kafka instance needs to be running, and a Kafka topic dedicated to {product-ti
 
 . Select the cluster and then click the *YAML* tab.
 
-. Modify the `FlowCollector` resource for {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
+. Change the `FlowCollector` resource for the {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
 +
 .Sample Kafka configuration in `FlowCollector` resource
 [source, yaml]
@@ -31,14 +31,18 @@ kind: FlowCollector
 metadata:
   name: cluster
 spec:
-  deploymentModel: Kafka <1>
+  deploymentModel: Kafka
   kafka:
-    address: "kafka-cluster-kafka-bootstrap.netobserv" <2>
-    topic: network-flows <3>
+    address: "kafka-cluster-kafka-bootstrap.netobserv"
+    topic: network-flows
     tls:
-      enable: false <4>
+      enable: false
 ----
-<1> Set `spec.deploymentModel` to `Kafka` instead of `Direct` to enable the Kafka deployment model.
-<2> `spec.kafka.address` refers to the Kafka bootstrap server address. You can specify a port if needed, for instance `kafka-cluster-kafka-bootstrap.netobserv:9093` for using TLS on port 9093.
-<3> `spec.kafka.topic` should match the name of a topic created in Kafka.
-<4> `spec.kafka.tls` can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: `netobserv`) and where the eBPF agents are deployed (default: `netobserv-privileged`). It must be referenced with `spec.kafka.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.kafka.tls.userCert`.
++
+where:
+
+`spec.deploymentModel`:: Specifies the deployment model. Set to `Kafka` instead of `Service` to enable the Kafka deployment model.
+`spec.kafka.address`:: Specifies the Kafka bootstrap server address. You can specify a port if needed, for instance `kafka-cluster-kafka-bootstrap.netobserv:9093` for using TLS on port 9093.
+`spec.kafka.topic`:: Specifies the name of a topic created in Kafka; the value must match an existing topic name.
+`spec.kafka.tls`:: Specifies communication encryption. Use this setting to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret in both namespaces: the namespace where you deploy the `flowlogs-pipeline` processor component (default: `netobserv`) and the namespace where you deploy the eBPF agents (default: `netobserv-privileged`). Reference the certificate by using `spec.kafka.tls.caCert`. When you use mTLS, make the client secrets available in these namespaces as well. You can generate the secrets by using the Red Hat AMQ Streams User Operator. Reference the secrets by using `spec.kafka.tls.userCert`.

modules/network-observability-includelist-example.adoc

Lines changed: 3 additions & 2 deletions
@@ -34,11 +34,12 @@ spec:
       message: |-
         {{ $labels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ $labels.DstK8S_OwnerType }} {{ $labels.DstK8S_OwnerName }} ({{ $labels.DstK8S_Namespace }}).
       summary: "High incoming traffic."
-      expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace="openshift-ingress"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 <1>
+      expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace="openshift-ingress"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000
       for: 30s
       labels:
         severity: warning
 ----
-<1> The `netobserv_workload_ingress_bytes_total` metric is enabled by default in `spec.processor.metrics.includeList`.
++
+** The `netobserv_workload_ingress_bytes_total` metric is enabled by default in `spec.processor.metrics.includeList`.
 
 . Click *Create* to apply the configuration file to the cluster.
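The `> 10000000` threshold in the `expr` line corresponds to the 10 MBps quoted in the alert message, using decimal megabytes. A quick sanity check of that unit conversion (an illustrative sketch, not part of the alert definition):

```python
# Illustrative check: the PromQL expression compares a per-second byte rate
# against 10000000 bytes/s, which is 10 MB/s using decimal megabytes
# (1 MB = 10**6 bytes). Adjust the target rate to derive other thresholds.

threshold_bytes_per_sec = 10_000_000
megabytes_per_sec = threshold_bytes_per_sec / 10**6
print(megabytes_per_sec)  # 10.0

# Deriving a threshold for a different target, e.g. 25 MBps:
print(25 * 10**6)  # 25000000
```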

modules/network-observability-loki-secret.adoc

Lines changed: 10 additions & 2 deletions
@@ -25,15 +25,23 @@ apiVersion: v1
 kind: Secret
 metadata:
   name: loki-s3
-  namespace: netobserv <1>
+  namespace: netobserv-loki
 stringData:
   access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
   access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
   bucketnames: s3-bucket-name
   endpoint: https://s3.eu-central-1.amazonaws.com
   region: eu-central-1
 ----
-<1> The installation examples in this documentation use the same namespace, `netobserv`, across all components. You can optionally use a different namespace for the different components.
++
+where:
+
+`metadata.namespace`:: Specifies the namespace for the Loki S3 secret. While this example uses `netobserv-loki`, you can use a different namespace for different components.
+`stringData.access_key_id`:: Specifies the access key ID for the S3 bucket.
+`stringData.access_key_secret`:: Specifies the secret access key for the S3 bucket.
+`stringData.bucketnames`:: Specifies the name of the S3 bucket.
+`stringData.endpoint`:: Specifies the endpoint URL for the S3 service.
+`stringData.region`:: Specifies the AWS region where the bucket is located.
 
 .Verification
 * After you create the secret, you can view the secret listed under *Workloads* -> *Secrets* in the web console.

modules/network-observability-lokistack-create.adoc

Lines changed: 10 additions & 9 deletions
@@ -18,40 +18,41 @@ You can deploy a `LokiStack` custom resource (CR) to create a namespace or new p
 . Click *Create LokiStack*.
 . Ensure the following fields are specified in either *Form View* or *YAML view*:
 +
---
 [source,yaml]
 ----
 apiVersion: loki.grafana.com/v1
 kind: LokiStack
 metadata:
   name: loki
-  namespace: netobserv # <1>
+  namespace: netobserv-loki
 spec:
-  size: 1x.small # <2>
+  size: 1x.small
   storage:
     schemas:
     - version: v13
       effectiveDate: '2022-06-01'
     secret:
       name: loki-s3
       type: s3
-  storageClassName: gp3 # <3>
+  storageClassName: gp3
   tenants:
     mode: openshift-network
 ----
-<1> The installation examples in this documentation use the same namespace, `netobserv`, across all components. You can optionally use a different namespace.
-<2> Specify the deployment size. In the {loki-op} 5.8 and later versions, the supported size options for production instances of Loki are `1x.extra-small`, `1x.small`, or `1x.medium`.
++
+where:
+
+`metadata.namespace`:: Specifies the namespace for the `LokiStack` resource. While this example uses `netobserv-loki`, you can use a different namespace for different components.
+`spec.size`:: Specifies the deployment size. In {loki-op} 5.8 and later versions, the supported size options for production instances of Loki are `1x.extra-small`, `1x.small`, or `1x.medium`.
 +
 [IMPORTANT]
 ====
 It is not possible to change the number `1x` for the deployment size.
 ====
-<3> Use a storage class name that is available on the cluster for `ReadWriteOnce` access mode. For best performance, specify a storage class that allocates block storage. You can use `oc get storageclasses` to see what is available on your cluster.
+`spec.storageClassName`:: Specifies a storage class name that is available on the cluster for `ReadWriteOnce` access mode. For best performance, specify a storage class that allocates block storage. Use the `oc get storageclasses` command to see available storage classes on your cluster.
 +
 [IMPORTANT]
 ====
-You must not reuse the same `LokiStack` CR that is used for {logging}.
+You must not reuse the same `LokiStack` custom resource that is used for {logging}.
 ====
--
 
 . Click *Create*.

modules/network-observability-packet-drops.adoc

Lines changed: 7 additions & 4 deletions
@@ -36,11 +36,14 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - PacketDrop <1>
-      privileged: true <2>
+        - PacketDrop
+      privileged: true
 ----
-<1> You can start reporting the packet drops of each network flow by listing the `PacketDrop` parameter in the `spec.agent.ebpf.features` specification list.
-<2> The `spec.agent.ebpf.privileged` specification value must be `true` for packet drop tracking.
++
+where:
+
+`spec.agent.ebpf.features`:: Specifies the features to enable. Include `PacketDrop` to start reporting packet drops for each network flow.
+`spec.agent.ebpf.privileged`:: Specifies whether privileged mode is enabled. Must be set to `true` for packet drop tracking.
 
 .Verification
 * When you refresh the *Network Traffic* page, the *Overview*, *Traffic Flow*, and *Topology* views display new information about packet drops:

modules/network-observability-packet-translation.adoc

Lines changed: 3 additions & 2 deletions
@@ -31,9 +31,10 @@ spec:
     type: eBPF
     ebpf:
       features:
-        - PacketTranslation <1>
+        - PacketTranslation
 ----
-<1> You can start enriching network flows with translated packet information by listing the `PacketTranslation` parameter in the `spec.agent.ebpf.features` specification list.
++
+** You can start enriching network flows with translated packet information by listing the `PacketTranslation` parameter in the `spec.agent.ebpf.features` specification list.
 +
 . Refresh the *Network Traffic* page to filter for information about translated packets:
 .. Filter the network flow data based on *Destination kind: Service*.

modules/network-observability-tcp-flag-syn-flood.adoc

Lines changed: 4 additions & 3 deletions
@@ -61,7 +61,7 @@ metadata:
       message: |-
         {{ $labels.job }}: incoming SYN-flood attack suspected to Host={{ $labels.DstK8S_HostName}}, Namespace={{ $labels.DstK8S_Namespace }}, Resource={{ $labels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports.
       summary: "Incoming SYN-flood"
-      expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 <1>
+      expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300
       for: 15s
       labels:
         severity: warning
@@ -71,14 +71,15 @@ metadata:
       message: |-
         {{ $labels.job }}: outgoing SYN-flood attack suspected from Host={{ $labels.SrcK8S_HostName}}, Namespace={{ $labels.SrcK8S_Namespace }}, Resource={{ $labels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports.
       summary: "Outgoing SYN-flood"
-      expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 <1>
+      expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300
       for: 15s
       labels:
         severity: warning
         app: netobserv
 # ...
 ----
-<1> In this example, the threshold for the alert is `300`; however, you can adapt this value empirically. A threshold that is too low might produce false-positives, and if it's too high it might miss actual attacks.
++
+In this example, the threshold for the alert is `300`; however, you can adapt this value empirically. A threshold that is too low might produce false positives, and a threshold that is too high might miss actual attacks.
 
 .Verification
 . In the web console, click *Manage Columns* in the *Network Traffic* table view and click *TCP flags*.
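As a rough illustration of how the `rate(...[1m]) > 300` comparison in these alert rules behaves, the following sketch approximates PromQL's per-second `rate` as a simple difference quotient over a one-minute window. The counter values are hypothetical, and real PromQL `rate` additionally extrapolates across the window and handles counter resets:

```python
# Hypothetical samples of a SYN-only flow counter (Flags="2") taken 60 s
# apart. A difference quotient approximates PromQL's per-second rate()
# over a 1m window; the alert fires when it stays above 300 for 15 s.

def per_second_rate(count_start: float, count_end: float, window_s: float = 60.0) -> float:
    """Approximate rate() as (end - start) / window; ignores counter resets."""
    return (count_end - count_start) / window_s

THRESHOLD = 300  # flows per second, as in the alert rules above

normal = per_second_rate(1_000, 4_000)       # 50 flows/s -> below threshold
suspicious = per_second_rate(1_000, 25_000)  # 400 flows/s -> above threshold
print(normal > THRESHOLD, suspicious > THRESHOLD)  # False True
```

Running this kind of arithmetic against your own baseline traffic is one way to pick an empirical threshold before tightening or loosening the `300` in the rule.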
