Commit 638d0f5

docs: fix typos in docs
Signed-off-by: Paiman <wwk187@haw-hamburg.de>
1 parent 7e1fe36 · commit 638d0f5

5 files changed: 18 additions & 18 deletions

content/en/docs/operating/architecture.md (5 additions & 5 deletions)
@@ -12,8 +12,8 @@ Introducing a new separation of duties can lead to a significant paradigm shift.
 The answer to this question may be influenced by the following aspects:

-* **Are the Cluster Adminsitrators willing to grant permissions to Tenant Owners**?
-  * _You might have a problem with know-how and probably your organisation is not yet pushing Kubernetes itself enough as a key strategic plattform. The key here is enabling Plattform Users through good UX and know-how transfers_
+* **Are the Cluster Administrators willing to grant permissions to Tenant Owners**?
+  * _You might have a problem with know-how and probably your organisation is not yet pushing Kubernetes itself enough as a key strategic platform. The key here is enabling Platform Users through good UX and know-how transfers_

 * **Who is responsible for the deployed workloads within the Tenants?**?
   * _If Platform Administrators are still handling this, a true “shift left” has not yet been achieved._
@@ -58,7 +58,7 @@ Any entity which needs to interact with tenants and their namespaces must be def
 **Every Tenant Owner must be a [Capsule User](#capsule-users)**

-They manage the namespaces within their tenants and perform administrative tasks confined to their tenant boundaries. This delegation allows teams to operate more autonomously while still adhering to organizational policies. Tenant Owners can be used to shift reposnsability of one tenant towards this user group. promoting them to the SPOC of all namespaces within the tenant.
+They manage the namespaces within their tenants and perform administrative tasks confined to their tenant boundaries. This delegation allows teams to operate more autonomously while still adhering to organizational policies. Tenant Owners can be used to shift responsibility of one tenant towards this user group, promoting them to the SPOC of all namespaces within the tenant.

 Tenant Owners can:

@@ -71,11 +71,11 @@ Capsule provides robust tools to strictly enforce tenant boundaries, ensuring th
 ## Layouts

-Let's dicuss different Tenant Layouts which could be used . These are just approaches we have seen, however you might also find a combination of these which fits your use-case.
+Let's discuss different Tenant Layouts which could be used. These are just approaches we have seen, however you might also find a combination of these which fits your use-case.

 ### Tenant As A Service

-With this approach you essentially just provide your Customers with the Tenant on your cluster. The rest is their responsability. This concludes to a shared responsibility model. This can be achieved when also the Tenant Owners are responsible for everything they are provisiong within their Tenant's namespaces.
+With this approach you essentially just provide your Customers with the Tenant on your cluster. The rest is their responsibility. This concludes to a shared responsibility model. This can be achieved when also the Tenant Owners are responsible for everything they are provisioning within their Tenant's namespaces.

 ![Resourcepool Dashboard](/images/content/architecture/layout-taas.drawio.png)

content/en/docs/operating/best-practices/networking.md (1 addition & 1 deletion)
@@ -68,7 +68,7 @@ spec:
 ### Deny Namespace Metadata

-In the above example we allow traffic from namespaces with the label `company.com/system: "true"`. This is meant for Kubernetes Operators to eg. scrape the workloads within a tenant. However without further enforcement any namespace can set this label and therefor gain access to any tenant namespace. To prevent this, we must restrict, who can declare this label on namespaces.
+In the above example we allow traffic from namespaces with the label `company.com/system: "true"`. This is meant for Kubernetes Operators to eg. scrape the workloads within a tenant. However without further enforcement any namespace can set this label and thereby gain access to any tenant namespace. To prevent this, we must restrict who can declare this label on namespaces.
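The allow rule the text refers to sits above this hunk and is not shown here; as a hedged reconstruction of the kind of rule meant (the policy name is hypothetical, only the label comes from the text), it might look like this:

```yaml
# Sketch only: admit ingress exclusively from namespaces carrying
# the trusted system label discussed above.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-system-namespaces   # hypothetical name
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              company.com/system: "true"
```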

 We can deny such labels on tenant basis. So in this scenario every tenant should disallow the use of these labels on namespaces:
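The file's actual example is cut off by the hunk boundary; a minimal sketch of such a tenant-level denial, assuming the v1beta2 `Tenant` field `spec.namespaceOptions.forbiddenLabels` (the field path and tenant name are assumptions, not part of this diff):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar            # hypothetical tenant
spec:
  namespaceOptions:
    forbiddenLabels:
      # Assumption: namespaces in this tenant may not declare the
      # trusted system label themselves.
      denied:
        - company.com/system
```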

content/en/docs/operating/setup/configuration.md (4 additions & 4 deletions)
@@ -5,11 +5,11 @@ description: >
 Understand the Capsule configuration options and how to use them.
 ---

-The configuration for the capsule controller is done via it's dedicated configration Custom Resource. You can explain the configuration options and how to use them:
+The configuration for the capsule controller is done via its dedicated configuration Custom Resource. You can explain the configuration options and how to use them:

 ## CapsuleConfiguration

-The configuration for Capsule is done via it's dedicated configration Custom Resource. You can explain the configuration options and how to use them:
+The configuration for Capsule is done via its dedicated configuration Custom Resource. You can explain the configuration options and how to use them:

 ```shell
 kubectl explain capsuleConfiguration.spec
@@ -74,7 +74,7 @@ manager:
 ```

 ### `nodeMetadata`
-Allows to set the forbidden metadata for the worker nodes that could be patched by a Tenant. This applies only if the Tenant has an active NodeSelector, and the Owner have right to patch their nodes.
+Allows to set the forbidden metadata for the worker nodes that could be patched by a Tenant. This applies only if the Tenant has an active NodeSelector, and the Owners have the right to patch their nodes.

 ```yaml
 manager:
@@ -108,7 +108,7 @@ manager:
 ### `allowServiceAccountPromotion`

-ServiceAccounts within tenant namespaces can be promoted to owners of the given tenant this can be achieved by labeling the serviceaccount and then they are considered owners. This can only be done by other owners of the tenant. However ServiceAccounts which have been promoted to owner can not promote further serviceAccounts.
+ServiceAccounts within tenant namespaces can be promoted to owners of the given tenant; this can be achieved by labeling the ServiceAccount and then they are considered owners. This can only be done by other owners of the tenant. However ServiceAccounts which have been promoted to owner cannot promote further ServiceAccounts.

 [Read More](/docs/tenants/permissions/#serviceaccount-promotion)
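The sibling sections on this page configure options through `manager:` values blocks; a hedged sketch of switching this one on in the same shape (the nesting under `manager.options` is an assumption, only the option name comes from the heading):

```yaml
manager:
  options:
    # Assumption: the toggle sits under manager.options like the
    # neighbouring settings shown elsewhere on this page.
    allowServiceAccountPromotion: true
```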

content/en/docs/resourcepools/_index.md (7 additions & 7 deletions)
@@ -196,8 +196,8 @@ spec:
 This allows users to essentially schedule anything in the namespace:

 ```shell
-NAME                   AGE     REQUEST   LIMIT
-capsule-pool-exmaple   2m47s
+NAME                   AGE     REQUEST   LIMIT
+capsule-pool-example   2m47s
 ```

 To prevent this, you might consider using the [DefaultsZero option](#defaultszero). This option can also be combined with setting other defaults, not part of the `.spec.quota.hard`. Here we are additionally restricting the creation of `persistentvolumeclaims`:
@@ -238,7 +238,7 @@ Options that can be defined on a per-`ResourcePool` basis and influence the gene
 #### OrderedQueue

-When `ResourecePoolClaims` are allocated to a pool, they are placed in a queue. The pool attempts to allocate claims in the order of their [creation timestamps](#priority). However, even if a claim was created earlier, if it requests more resources than are currently available, it will remain in the queue. Meanwhile, a lower-priority claim that fits within the available resources may still be allocated—despite its lower priority.
+When `ResourcePoolClaims` are allocated to a pool, they are placed in a queue. The pool attempts to allocate claims in the order of their [creation timestamps](#priority). However, even if a claim was created earlier, if it requests more resources than are currently available, it will remain in the queue. Meanwhile, a lower-priority claim that fits within the available resources may still be allocated—despite its lower priority.

 Enabling this option enforces strict ordering: claims cannot be skipped, even if they block other claims from being fulfilled due to resource exhaustion. The `CreationTimestamp` is strictly respected, meaning that once a claim is queued, no subsequent claim can bypass it—even if it requires fewer resources.
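To make the ordering concrete, a hedged illustration (names and quantities are hypothetical, and the spec fields are elided since they are not shown in this diff): suppose the pool has 4Gi of memory left.

```yaml
# Older claim, created first, but larger than what is currently free.
apiVersion: capsule.clastix.io/v1beta2
kind: ResourcePoolClaim
metadata:
  name: claim-a        # hypothetical; imagine it requests 6Gi memory
spec: {}
---
# Newer claim that would fit into the free 4Gi.
apiVersion: capsule.clastix.io/v1beta2
kind: ResourcePoolClaim
metadata:
  name: claim-b        # hypothetical; imagine it requests 2Gi memory
spec: {}
```

With the default behaviour, claim-b may bind while claim-a waits; with `OrderedQueue` enabled, claim-b stays queued behind claim-a until the pool can satisfy claim-a.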

@@ -292,7 +292,7 @@ spec:
 ## ResourcePoolClaims

-`ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, it's requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the corresponding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
+`ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, its requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the corresponding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:

 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -345,7 +345,7 @@ kubectl annotate resourcepoolclaim skip-the-line -n solar-prod projectcapsule.d
 #### Immutable

-Once a `ResourcePoolClaim` has successfully claimed resources from a `ResourcePool`, the claim is immutable. This means that the claim cannot be modified or deleted until the resources have been released back to the `ResourcePool`. This means `ResourcePoolClaim` can not be expanded or shrunk, without [releasing](#release).
+Once a `ResourcePoolClaim` has successfully claimed resources from a `ResourcePool`, the claim is immutable. This means that the claim cannot be modified or deleted until the resources have been released back to the `ResourcePool`. This means a `ResourcePoolClaim` cannot be expanded or shrunk without [releasing](#release).

 ### Queue

@@ -648,9 +648,9 @@ Success 🍀
 This part should provide you with a little bit of back story, as to why this implementation was done the way it currently is. Let's start.

-Since the begining of capsule we are struggeling with a concurrency probelm regarding `ResourcesQuotas`, this was already early detected in [Issue 49](https://github.com/projectcapsule/capsule/issues/49). Let's quickly recap what really the problem is with the current `ResourceQuota` centric approach.
+Since the beginning of Capsule we are struggling with a concurrency problem regarding `ResourceQuotas`; this was already early detected in [Issue 49](https://github.com/projectcapsule/capsule/issues/49). Let's quickly recap what really the problem is with the current `ResourceQuota` centric approach.

-With the current `ResourceQuota` with `Scope: Tenant` we encounter the problem, that resourcequotas spread across multiple namespaces refering to one tenant quota can be overprovisioned, if an operation is executed in parallel (eg. total is `services/count: 3`, in each namespace you could then create 3 services, leading to a possible overprovision of hard `* amount-namespaces`). The Problem in this approach is, that we are not doing anything with Webhooks, therefor we rely on the speed of the controller, where this entire construct becomes a matter of luck and racing conditions.
+With the current `ResourceQuota` with `Scope: Tenant` we encounter the problem that resource quotas spread across multiple namespaces referring to one tenant quota can be overprovisioned if an operation is executed in parallel (eg. total is `services/count: 3`, in each namespace you could then create 3 services, leading to a possible overprovision of hard `* amount-namespaces`). The problem in this approach is that we are not doing anything with Webhooks; therefore we rely on the speed of the controller, where this entire construct becomes a matter of luck and racing conditions.

 So, there needs to be change. But times have also changed and we have listened to our users, so the new approach to `ResourceQuotas` should:

content/en/docs/tenants/permissions.md (1 addition & 1 deletion)
@@ -42,7 +42,7 @@ To explain these entries, let's inspect one of them:
 * `kind`: It can be [User](#users), [Group](#groups) or [ServiceAccount](#serviceaccounts)
 * `name`: Is the reference name of the user, group or serviceaccount we want to bind
-* `clusterRoles`: ClusterRoles which are bound for each namespace of teh tenant to the owner. By default, Capsule assigns `admin` and `capsule-namespace-deleter` roles to each owner, but you can customize them as explained in [Owner Roles](#owner-roles) section.
+* `clusterRoles`: ClusterRoles which are bound for each namespace of the tenant to the owner. By default, Capsule assigns `admin` and `capsule-namespace-deleter` roles to each owner, but you can customize them as explained in [Owner Roles](#owner-roles) section.

 With this information available you
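For context on the entry this hunk documents, a minimal sketch of an owners declaration, assuming a `Tenant` named `solar` and a user `alice` (both hypothetical; the field names and default roles come from the bullets above):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar              # hypothetical tenant
spec:
  owners:
    - kind: User           # User, Group or ServiceAccount
      name: alice          # hypothetical subject to bind
      # Defaults per the text above; customizable via Owner Roles.
      clusterRoles:
        - admin
        - capsule-namespace-deleter
```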
