Logging version 6.0 is a major change from earlier releases and is the realization of several longstanding goals.
The following documentation is intended to assist administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new observability API.
We’ve provided an overview of the changes, as well as complete ClusterLogForwarder resource examples for several common use cases.
- No automated upgrade from v5.x to v6.0. The new operator must be installed separately.
- The new ClusterLogForwarder resource uses the new observability API: ClusterLogForwarder.observability.openshift.io replaces both the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources.
- The Cluster Logging Operator no longer manages log storage or visualization of any kind, including the LokiStack resource, Elasticsearch, and Kibana.
- The Cluster Logging Operator has removed support for the Fluentd log collector implementation.
Two logging resources:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
...

are replaced by a single observability resource:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
...

Note: Distinct operators and resources now support the other logging components (e.g. storage, visualization) separately.
Given the numerous combinations in which the logging solution can be configured, there is no automated upgrade provided by the Cluster Logging Operator. Newly created custom resources are required for v6.0, and the new operator is published under a separate channel. The operator can be updated by changing the subscription channel in the console, or by uninstalling the v5 operator and installing the new one.
Note: Manually changing the operator channel to stable-6.0 under the Subscription tab in the console triggers the OLM process, which removes v5 and installs v6.0. After this process, your existing v5 resources continue to run, but they are no longer managed by the operator. These unmanaged resources can be removed once your new resources are ready to be created.
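The channel change described above can also be made from the CLI; as a sketch, assuming the subscription is named cluster-logging in the openshift-logging namespace (verify the actual name with the first command):

```shell
oc get subscription -n openshift-logging
oc patch subscription cluster-logging -n openshift-logging \
  --type=merge -p '{"spec":{"channel":"stable-6.0"}}'
```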
Important: If you use the OCP console to uninstall the v5 operator, you can continue to collect and forward logs as long as you DO NOT check the box "Delete all operand instances for this operator" when uninstalling. This allows your existing collector pods to continue to run until you are ready to remove them.
The Cluster Logging Operator no longer provides a "one click" logging installation; instead, administrators have more granular control over individual components. Administrators must now explicitly deploy an operator for each component (log storage, visualization, and collection).
General Steps:

1. Deploy the Red Hat Loki Operator.
2. Create an instance of LokiStack in the openshift-logging namespace.
3. Deploy the Red Hat Cluster Observability Operator.
4. Create an instance of the UIPlugin resource for visualization in the console.
5. Deploy the Red Hat OpenShift Logging Operator.
6. Create an instance of the new ClusterLogForwarder.observability.openshift.io resource.
Note: Refer to the individual operator documentation for install instructions. A more detailed summary of steps is included in the LokiStack administration doc.
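Deploying each operator amounts to creating an OLM Subscription; a minimal sketch for the Loki Operator, where the package name, channel, and install namespace are assumptions to verify against OperatorHub:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat   # assumed install namespace
spec:
  channel: stable-6.0                     # assumed channel; check OperatorHub
  name: loki-operator                     # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```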
LokiStack is the only managed log storage solution available for this release. It is based upon the loki-operator and has been available in prior releases as the preferred alternative to the managed Elasticsearch offering. The deployment of this solution remains unchanged from previous releases. Read the official product documentation for more information.
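A minimal LokiStack sketch; the size, schema date, object storage secret, and storage class below are illustrative assumptions to adapt to your cluster:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                   # assumed t-shirt size
  storage:
    schemas:
    - version: v13
      effectiveDate: "2024-01-01"  # assumed schema date
    secret:
      name: logging-loki-s3        # assumed object storage secret
      type: s3
  storageClassName: gp3-csi        # assumed storage class
  tenants:
    mode: openshift-logging
```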
Note: To continue using an existing Red Hat managed Elasticsearch deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch resource named 'elasticsearch' in the 'openshift-logging' namespace before removing the ClusterLogging resource named 'instance' in the 'openshift-logging' namespace:

oc patch -n openshift-logging Elasticsearch elasticsearch --type=merge -p '{"metadata": {"ownerReferences": [], "labels": {"pod-template-hash":null}}}'

The OpenShift console UI plugin that provides visualization was moved from the cluster-logging-operator to the cluster-observability-operator. Read the official product documentation for more information.
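The UIPlugin instance mentioned in the general steps can be sketched as follows; the v1alpha1 API version and the LokiStack name are assumptions to confirm against the Cluster Observability Operator documentation:

```yaml
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki   # assumed LokiStack instance name
```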
Note: To continue using an existing Red Hat managed Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Kibana resource named 'kibana' in the 'openshift-logging' namespace before removing the ClusterLogging resource named 'instance' in the 'openshift-logging' namespace:

oc patch -n openshift-logging Kibana kibana --type=merge -p '{"metadata": {"ownerReferences": [], "labels": {"pod-template-hash":null}}}'

Log collection and forwarding configuration is now specified by a new API in the API group observability.openshift.io. The following sections highlight the differences from the old API resource.
Note: Vector is the only supported collector implementation.
Configuration of the management state, collection resource limits and requests, tolerations, and node selection have moved to the new ClusterLogForwarder API.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
spec:
  managementState: Managed
  collection:
    type: vector
    resources:
      limits:
        cpu: 500m
      requests:
        memory: 1Gi
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: logging
      operator: Exists
...

apiVersion: observability.openshift.io/v1 # (1)
kind: ClusterLogForwarder # (2)
metadata:
  name: my-forwarder
spec:
  managementState: Managed
  collector: # (3)
    resources: # (4)
      requests:
        cpu: 500m
        memory: 64Mi
      limits:
        cpu: 6000m
        memory: 1024Mi
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: logging
      operator: Exists
...

Snippet highlights:

1. apiVersion must now be observability.openshift.io/v1
2. kind is now ClusterLogForwarder
3. The collector spec now includes resources, nodeSelector, and tolerations
4. Default values are shown for requests and limits
CPU and memory limits

As with all cluster resources, use the values shown above as a reference point and adjust as necessary. If your pipeline is complex, you may need more collector resources; if your pipeline is more straightforward, you may need less.

- resources.limits describes the maximum amount of compute resources allowed
- resources.requests describes the minimum amount of compute resources required; defaults to the values of limits if not specified
The ClusterLogForwarder now requires a cluster administrator to provide a service account with the correct RBAC permissions. This service account is a required part of the configuration.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
spec:
  serviceAccount:
    name: logging-admin
...

Administrators are required to explicitly grant log collection permissions to the service account referenced in the ClusterLogForwarder. There are three cluster roles that can be bound: collect-application-logs, collect-infrastructure-logs, and collect-audit-logs.
oc adm policy add-cluster-role-to-user collect-application-logs -z logging-admin
oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-admin
Additionally, if collecting audit logs:
oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-admin
If your previous forwarder is deployed in the namespace openshift-logging and named instance, then you’ve likely been using the service account logcollector created by earlier versions of the operator. You can optionally grant the new RBAC permissions to this SA.
Important: To continue using the logcollector service account, you still MUST explicitly grant log collection permissions by creating a ClusterRoleBinding for the necessary roles.
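As a sketch, one such explicit ClusterRoleBinding for the logcollector service account (the binding name is illustrative; create one binding per role):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logcollector-collect-application-logs  # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-application-logs
subjects:
- kind: ServiceAccount
  name: logcollector
  namespace: openshift-logging
```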
The input spec is an optional part of the ClusterLogForwarder spec where administrators can continue to use the pre-defined values of application, infrastructure, and audit to collect those sources. See the Input Spec document for definitions of these values. The spec, otherwise, has largely remained unchanged.
Namespace and container inclusions and exclusions have been simplified and collapsed into a single field:
...
spec:
  inputs:
  - name: app-logs
    type: application
    application:
      namespaces:
      - foo
      - bar
      includes:
      - namespace: my-important
        container: main
      excludes:
      - container: too-verbose
...

...
spec:
  inputs:
  - name: app-logs
    type: application
    application:
      includes:
      - namespace: foo
      - namespace: bar
      - namespace: my-important
        container: main
      excludes:
      - container: too-verbose
...

Note: application, infrastructure, and audit are reserved words and cannot be used as the name when defining an input.
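A custom input must therefore use a non-reserved name, which pipelines then reference via inputRefs. A minimal sketch, where the input, pipeline, and output names are illustrative:

```yaml
spec:
  inputs:
  - name: app-logs           # custom, non-reserved name
    type: application
    application:
      includes:
      - namespace: foo
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - app-logs               # reference the custom input by name
    outputRefs:
    - my-output              # assumes an output defined elsewhere
```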
Input receivers now require explicit configuration of the type and port at the receiver level:
...
spec:
  inputs:
  - name: an-http
    receiver:
      http:
        port: 8443
        format: kubeAPIAudit
  - name: a-syslog
    receiver:
      type: syslog
      syslog:
        port: 9442
...

...
spec:
  inputs:
  - name: an-http
    type: receiver
    receiver:
      type: http
      port: 8443
      http:
        format: kubeAPIAudit
  - name: a-syslog
    type: receiver
    receiver:
      type: syslog
      port: 9442
...

The high-level output spec changes:
- Moves url to each output type spec
- Moves tuning to each output type spec
- Separates TLS from authentication
- Requires explicit configuration of keys and secret/configmap for TLS and authentication
Secrets and TLS configuration are separated into authentication and tls configuration for each output.
They are now explicitly defined instead of relying upon administrators to specify secrets with recognized keys.
Note: The new configuration requires administrators to understand the previously recognized keys in order to continue using existing secrets.
...
spec:
  outputs:
  - name: my-output
    type: http
    http:
      url: https://my-secure-output:8080
    authentication:
      password:
        key: pass
        secretName: my-secret
      username:
        key: user
        secretName: my-secret
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
...

...
spec:
  outputs:
  - name: my-output
    type: http
    http:
      url: https://my-secure-output:8080
    authentication:
      token:
        from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
...

All attributes of pipelines in previous releases have been converted to filters in this release. Individual filters are defined in the filters spec and referenced by a pipeline:
...
spec:
  pipelines:
  - name: app-logs
    detectMultilineErrors: true
    parse: json
    labels:
      foo: bar
...

...
spec:
  filters:
  - name: my-multiline
    type: detectMultilineException
  - name: my-parse
    type: parse
  - name: my-labels
    type: openshiftLabels
    openshiftLabels:
      foo: bar
  pipelines:
  - name: app-logs
    filterRefs:
    - my-multiline
    - my-parse
    - my-labels
...

Note: The drop, prune, and kubeAPIAudit filters remain unchanged.
...
spec:
  filters:
  - name: drop-debug-logs
    type: drop
    drop:
    - test:
      - field: .level
        matches: debug
  - name: prune-fields
    type: prune
    prune:
      in:
      - .kubernetes.labels.foobar
      notIn:
      - .message
  - name: audit-logs
    type: kubeAPIAudit
    kubeAPIAudit:
      omitResponseCodes:
      - 404
      - 409
...

Most validations are now enforced when a resource is created or updated, which provides immediate feedback. This is a departure from previous releases, where all validation occurred after creation and required inspecting the resource status. Some validation still occurs after resource creation for cases where it is not possible to validate at creation or update time.
Instances of ClusterLogForwarder.observability.openshift.io must satisfy the following before the operator will deploy the log collector:

- Resource status conditions: Authorized, Valid, Ready
- Spec validations: Filters, Inputs, Outputs, Pipelines

All must evaluate to status: "True".
...
status:
  conditions:
  - message: "permitted to collect log types: [application]"
    reason: ClusterRoleExists
    status: "True"
    type: observability.openshift.io/Authorized
  - message: ""
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/Valid
  - message: ""
    status: "True"
    type: observability.openshift.io/Ready
  filterConditions:
  - message: filter "my-parse" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidFilter-my-parse
  inputConditions:
  - message: input "application" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidInput-application
  outputConditions:
  - message: output "rh-loki" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidOutput-rh-loki
  pipelineConditions:
  - message: pipeline "app-logs" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidPipeline-app-logs
...

Note: Conditions with a status other than "True" provide information identifying the failure.
...
status:
  conditions:
  - message: insufficient permissions on service account, not authorized to collect 'application' logs
    reason: ClusterRoleMissing
    status: "False"
    type: observability.openshift.io/Authorized
  - message: ""
    reason: ValidationFailure
    status: "False"
    type: Ready
...

The following example forwards application and infrastructure logs to CloudWatch using static AWS access keys:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
spec:
  serviceAccount:
    name: logging-admin
  outputs:
  - name: my-cw
    type: cloudwatch
    cloudwatch:
      groupName: my-cluster-{.log_type||"unknown"}
      region: us-east-1
      authentication:
        type: awsAccessKey
        awsAccessKey:
          keyId:
            secretName: cw-secret
            key: aws_access_key_id
          keySecret:
            secretName: cw-secret
            key: aws_secret_access_key
  pipelines:
  - name: my-cw-logs
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - my-cw
...
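As a sketch, a cw-secret with the key names this output expects can be created with literal values (the placeholders below are illustrative):

```shell
oc create secret generic cw-secret -n openshift-logging \
  --from-literal=aws_access_key_id=<AWS_ACCESS_KEY_ID> \
  --from-literal=aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
```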
For clusters using AWS STS, CloudWatch authentication can instead use an IAM role, with the token taken from the service account:

cloudwatch:
  authentication:
    type: iamRole
    iamRole:
      roleARN:
        secretName: role-for-sts
        key: credentials
      token:
        from: serviceAccount
...

Alternatively, the token can be read from an existing secret:

...
cloudwatch:
  authentication:
    type: iamRole
    iamRole:
      roleARN:
        secretName: role-for-sts
        key: credentials
      token:
        from: secret
        secret:
          key: token
          name: cw-token
...

The following example forwards logs to an in-cluster LokiStack:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
spec:
  serviceAccount:
    name: logging-admin # (1)
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki # (2)
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
      tls:
        ca:
          key: service-ca.crt # (3)
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    outputRefs:
    - default-lokistack
    inputRefs:
    - application
    - infrastructure

1. serviceAccount.name must have permissions to collect AND write to the Loki gateway
2. The lokiStack.target name and namespace must match your Loki instance
3. The TLS key and configMapName can use the existing OpenShift service CA config map
The required cluster roles are, for collection:

- collect-application-logs
- collect-infrastructure-logs
- collect-audit-logs

and for writing to the LokiStack gateway:

- cluster-logging-write-application-logs
- cluster-logging-write-infrastructure-logs
- cluster-logging-write-audit-logs

Bind each role to the service account:

oc adm policy add-cluster-role-to-user <cluster_role> -z logging-admin

Note: The -z flag used above creates a cluster role binding to the service account in the current namespace. Use oc create clusterrolebinding -h for more explicit options when creating bindings.
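As a sketch, the equivalent explicit binding for one of the roles (the binding name is illustrative):

```shell
oc create clusterrolebinding logging-admin-collect-app \
  --clusterrole=collect-application-logs \
  --serviceaccount=openshift-logging:logging-admin
```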
The following example forwards logs to an external Elasticsearch instance with a dynamic index per log type:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
spec:
  serviceAccount:
    name: logging-admin
  outputs:
  - name: es-external
    type: elasticsearch
    elasticsearch:
      url: https://external-es-service:9200
      version: 8
      index: '{.log_type||"nologformat"}-write'
    tls:
      ca:
        key: bundle.crt
        secretName: my-tls-secret
      certificate:
        key: tls.crt
        secretName: my-tls-secret
      key:
        key: tls.key
        secretName: my-tls-secret
  filters:
  - name: my-parse
    type: parse
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    filterRefs:
    - my-parse
    outputRefs:
    - es-external

The index can be a combination of dynamic and static values. Dynamic values are enclosed in curly brackets {} and MUST end with a "quoted" static fallback value separated with ||. For more details, use: oc explain clf.spec.outputs.elasticsearch.index
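The fallback behavior of the index template can be illustrated with a small sketch. This mimics the documented semantics only; it is not the operator's actual implementation:

```python
import re

def render_index(template: str, record: dict) -> str:
    """Resolve '{.field||"fallback"}' segments: use the record's field
    value when present, otherwise the quoted static fallback."""
    def substitute(match: re.Match) -> str:
        field, fallback = match.group(1), match.group(2)
        value = record.get(field.lstrip("."))
        return str(value) if value is not None else fallback
    return re.sub(r'\{(\.[\w.]+)\|\|"([^"]*)"\}', substitute, template)

print(render_index('{.log_type||"nologformat"}-write', {"log_type": "application"}))
# -> application-write
print(render_index('{.log_type||"nologformat"}-write', {}))
# -> nologformat-write
```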
Note: In this example, application logs are written to the 'application-write' index and infrastructure logs to the 'infrastructure-write' index. Previous versions without the index spec would have instead written to 'app-write' and 'infra-write'.
The following example reproduces the previous default behavior of forwarding to the Red Hat managed Elasticsearch by log type:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
spec:
  serviceAccount:
    name: logcollector # (1)
  outputs:
  - name: es-app-output # (2)
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: 'app-write' # (3)
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  - name: es-infra-output # (2)
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: 'infra-write' # (3)
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  - name: es-audit-output # (2)
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: 'audit-write' # (3)
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  pipelines:
  - name: my-app # (4)
    inputRefs:
    - application
    outputRefs:
    - es-app-output
  - name: my-infra # (5)
    inputRefs:
    - infrastructure
    outputRefs:
    - es-infra-output
  - name: my-audit # (6)
    inputRefs:
    - audit
    outputRefs:
    - es-audit-output

1. The service account logcollector must have the correct permissions (see Service Accounts above)
2. es-app-output, es-infra-output, and es-audit-output are the outputs used in pipelines to route logs by log type
3. index must follow the naming scheme app-*, infra-*, or audit-*
4. Pipeline my-app includes application logs and routes them to es-app-output
5. Pipeline my-infra includes infrastructure logs and routes them to es-infra-output
6. Pipeline my-audit includes audit logs and routes them to es-audit-output
Note: To forward logs to the default Red Hat managed Elasticsearch, the index values must be one of app-write, infra-write, or audit-write.
Custom ES indices in v5.9 were achieved via the structuredTypeKey and structuredTypeName options:
...
spec:
  outputs:
  - name: default
    type: elasticsearch
    elasticsearch:
      structuredTypeKey: log_type
      structuredTypeName: unknown
...

...
spec:
  outputs:
  - name: es-output
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: '{.log_type||"unknown"}' # (1)
...

1. index is set to read the field value .log_type and falls back to "unknown" if the field is not found

Note: A string fallback is always required to ensure a valid index.