Commit f9e304b

Merge pull request #110669 from openshift-cherrypick-robot/cherry-pick-109155-to-enterprise-4.22
[enterprise-4.22] OSDOCS-16871-1-abstracts: SCALE-1: Core Scalability Planning and Reso…
2 parents: f12e380 + 092dfa3

19 files changed: 23 additions & 21 deletions

modules/adding-bare-metal-host-to-cluster-using-web-console.adoc

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@
= Adding a bare metal host to the cluster using the web console

[role="_abstract"]
-To integrate physical hardware into your cluster, you can add bare-metal hosts by using the web console. By adding these hosts, you can provision and manage these nodes directly through the web console.
+You can add bare-metal hosts to the cluster by using the web console.

.Prerequisites

@@ -20,13 +20,13 @@ To integrate physical hardware into your cluster, you can add bare-metal hosts b

. Select *Add Host* -> *New with Dialog*.

-. Specify a unique name for the new bare metal host.
+. Specify a unique name for the new bare-metal host.

. Set the *Boot MAC address*.

. Set the *Baseboard Management Console (BMC) Address*.

-. Enter the user credentials for the host's baseboard management controller (BMC).
+. Enter the user credentials for the baseboard management controller (BMC) of the host.

. Select to power on the host after creation, and select *Create*.

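The dialog fields in this procedure correspond to fields on a `BareMetalHost` resource and its BMC credentials secret. The following is a minimal sketch for orientation only, not part of this commit, assuming the `metal3.io/v1alpha1` API and placeholder names, MAC address, BMC address, and credentials:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: worker-2-bmc-secret            # hypothetical secret holding the BMC user credentials
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-2                       # unique name for the new bare-metal host
  namespace: openshift-machine-api
spec:
  online: true                         # power on the host after creation
  bootMACAddress: "00:11:22:33:44:55"  # Boot MAC address
  bmc:
    address: ipmi://192.0.2.10         # Baseboard Management Console (BMC) address
    credentialsName: worker-2-bmc-secret
----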
modules/admin-quota-limits.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Limit ranges in a LimitRange object

[role="_abstract"]
-To define compute resource constraints at the object level, create a `LimitRange` object. By creating this object, you can specify the exact amount of resources that an individual pod, container, image, or persistent volume claim can consume.
+To define compute resource constraints at the object level, create a `LimitRange` object. By creating this object, you can specify the exact amount of resources that an individual pod, container, image, image stream, or persistent volume claim can consume.

All requests to create and modify resources are evaluated against each `LimitRange` object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource.

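As a hedged illustration of the object this abstract describes (the name and all values are invented, not taken from the commit), a `LimitRange` can constrain individual containers and pods like this:

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits            # hypothetical name
spec:
  limits:
  - type: Container                # per-container constraints
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 4Mi
    default:                       # limit applied when a container sets none
      cpu: 300m
      memory: 200Mi
    defaultRequest:                # request applied when a container sets none
      cpu: 200m
      memory: 100Mi
  - type: Pod                      # aggregate constraints across all containers in a pod
    max:
      cpu: "4"
      memory: 2Gi
----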
modules/admin-quota-usage.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Admin quota usage

[role="_abstract"]
-To ensure projects remain within defined constraints, monitor admin quota usage. By tracking the aggregate consumption of compute resources and storage, you can identify when `ResourceQuota` limits are reached or approached.
+To ensure projects remain within defined constraints, monitor admin quota usage. After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics.

Quota enforcement::
+

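One way to inspect the calculated usage statistics that the new abstract mentions is to list and describe the `ResourceQuota` objects in a project. This is a generic example with placeholder names, not part of the commit:

[source,terminal]
----
$ oc get resourcequota -n <project>
$ oc describe resourcequota <quota-name> -n <project>
----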
modules/configure-guest-caching-for-disk.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Configure guest caching for disk

[role="_abstract"]
-To ensure that the guest manages caching instead of the host, configure your disk devices. This setting shifts caching responsibility to the guest operating system, preventing the host from caching disk operations.
+To ensure that the guest manages caching instead of the host, configure your disk devices.

Ensure that the driver element of the disk device includes the `cache="none"` and `io="native"` parameters.

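For orientation, the `driver` element the module refers to sits inside a libvirt domain XML disk definition. The following sketch uses placeholder paths and devices and shows only the two parameters named above:

[source,xml]
----
<disk type='file' device='disk'>
  <!-- cache="none" and io="native" make the guest, not the host, manage caching -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
----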
modules/configuring-huge-pages.adoc

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@
= Configuring huge pages at boot time

[role="_abstract"]
-To ensure nodes in your {product-title} cluster pre-allocate memory for specific workloads, reserve huge pages at boot time. This configuration sets aside memory resources during system startup, offering a distinct alternative to run-time allocation.
+To ensure nodes in your {product-title} cluster pre-allocate memory for specific workloads, reserve huge pages at boot time.

There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot-time allocation of huge pages on specific nodes.

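As a sketch of what boot-time reservation through the Node Tuning Operator can look like (the profile name, node role label, and huge page counts are illustrative assumptions, not taken from the commit):

[source,yaml]
----
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: hugepages                    # hypothetical name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: openshift-node-hugepages
    data: |
      [main]
      summary=Boot time configuration for hugepages
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-hp"   # example node role
    priority: 30
    profile: openshift-node-hugepages
----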
modules/configuring-quota-synchronization-period.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Configuring quota synchronization period

[role="_abstract"]
-To control the synchronization time frame when resources are deleted, configure the `resource-quota-sync-period` setting. This parameter in the `/etc/origin/master/master-config.yaml` file determines how frequently the system updates usage statistics to reflect deleted resources.
+When a set of resources are deleted, the synchronization time frame of resources is determined by the `resource-quota-sync-period` setting in the `/etc/origin/master/master-config.yaml` file. You can change the `resource-quota-sync-period` setting to have the set of resources regenerate in the needed amount of time (in seconds) for the resources to be once again available.

[NOTE]
====

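A hedged sketch of the setting the new abstract describes, assuming the OpenShift 3.x-style `master-config.yaml` layout and an illustrative 10-second value:

[source,yaml]
----
kubernetesMasterConfig:
  controllerArguments:
    resource-quota-sync-period:    # how quickly usage statistics regenerate after deletions
    - "10s"
----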
modules/consuming-huge-pages-resource-using-the-downward-api.adoc

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
= Consuming huge pages resources using the Downward API

[role="_abstract"]
-To inject information about the huge pages resources consumed by a container, use the Downward API. This configuration enables applications to retrieve and use their own memory usage data directly.
+To inject information about the huge pages resources consumed by a container, use the Downward API.

You can inject the resource allocation as environment variables, a volume plugin, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes.

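A minimal sketch of the environment-variable form of this injection (the pod name, image, and sizes are placeholders; it assumes the `requests.hugepages-2Mi` resource field is available to `resourceFieldRef`):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-downward-api-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    env:
    - name: REQUESTS_HUGEPAGES_2MI           # the application reads this environment variable
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: requests.hugepages-2Mi
          divisor: 1Mi
----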
modules/create-perf-profile-workload-partitioning.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Performance profiles and workload partitioning

[role="_abstract"]
-To enable workload partitioning, apply a performance profile. This configuration specifies the isolated and reserved CPUs, ensuring that customer workloads run on dedicated cores without interruption from platform processes.
+To enable workload partitioning, apply a performance profile.

An appropriately configured performance profile specifies the `isolated` and `reserved` CPUs. Create a performance profile by using the Performance Profile Creator (PPC) tool.

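For orientation, the `isolated` and `reserved` fields mentioned above live under `spec.cpu` of a `PerformanceProfile`. This sketch uses an invented name and CPU split rather than output from the PPC tool:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performance-profile   # hypothetical name
spec:
  cpu:
    isolated: "2-31"                  # CPUs dedicated to customer workloads
    reserved: "0-1"                   # CPUs reserved for platform and management processes
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----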
modules/disabling-the-cpuset-cgroup-controller.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Disabling the cpuset cgroup controller

[role="_abstract"]
-To allow the kernel scheduler to freely distribute processes across all available resources, disable the `cpuset` cgroup controller. This configuration prevents the system from enforcing processor affinity constraints, ensuring that tasks can use any available CPU or memory node.
+You can disable the cpuset cgroup controller. Disabling the controller requires a restart of the libvirtd daemon.

[NOTE]
====

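A sketch of one way this host-side change is commonly made, assuming the controller list is managed in `/etc/libvirt/qemu.conf` (the exact list below is illustrative):

[source,text]
----
# /etc/libvirt/qemu.conf: cgroup controllers libvirt should use;
# "cpuset" is omitted so the kernel scheduler can place tasks on any CPU or memory node
cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuacct" ]
----

Restart the libvirtd daemon afterward, for example with `systemctl restart libvirtd`, for the change to take effect.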
modules/enabling-workload-partitioning.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
= Enabling workload partitioning

[role="_abstract"]
-To partition cluster management pods into a specified CPU affinity, enable workload partitioning. This configuration ensures that management pods operate within the reserved CPU limits defined in your Performance Profile, preventing them from consuming resources intended for customer workloads.
+To partition cluster management pods into a specified CPU affinity, enable workload partitioning. This configuration ensures that management pods operate within the reserved CPU limits defined in your Performance Profile.

Consider additional post-installation Operators that use workload partitioning when calculating how many reserved CPU cores to set aside for the platform.

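As a hedged sketch of how workload partitioning is typically switched on at install time (an abbreviated `install-config.yaml` with placeholder cluster values; the `cpuPartitioningMode` field is the key assumption here):

[source,yaml]
----
apiVersion: v1
baseDomain: example.com            # placeholder
metadata:
  name: example-cluster            # placeholder
cpuPartitioningMode: AllNodes      # enables workload partitioning on all nodes at install time
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
----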