Commit 157f923

DOC-3498: Expand acronyms on first prose use across on-premises pages
Expand 18 acronyms (OCI, JWT, LLM, SSE, TLS, CORS, MCP, NTP, HPA, OTLP, IRSA, ADC, SSR, CSP, SIEM, PII, HA, mTLS) on first prose occurrence per page for readers unfamiliar with the terms.
1 parent b06e673 commit 157f923

10 files changed: 51 additions & 51 deletions

modules/ROOT/pages/tinymceai-on-premises-advanced.adoc

Lines changed: 2 additions & 2 deletions
@@ -150,7 +150,7 @@ The assistant calls the `search_knowledge_base` tool, retrieves the relevant pol
 
 == Multi-tenant SaaS platform
 
-*Use case:* A SaaS platform provides AI writing features to customers. Each customer gets isolated conversations, separate LLM budgets, and per-tenant configuration.
+*Use case:* A SaaS platform provides AI writing features to customers. Each customer gets isolated conversations, separate large language model (LLM) budgets, and per-tenant configuration.
 
 === Architecture
 
@@ -171,7 +171,7 @@ Each environment provides:
 * Customer B -> Environment `env-customer-b`
 * Customer C -> Environment `env-customer-c`
 
-. *Token server generates JWTs with the correct environment:*
+. *Token server generates JSON Web Tokens (JWTs) with the correct environment:*
 +
 .Multi-tenant JWT generation
 [%collapsible]
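The hunk above touches per-environment JWT generation for tenant isolation. As a rough illustration of the shape involved — not the service's documented token schema; the `environment` claim name and secret are assumptions for this sketch — an HS256 token can be built with only the Python standard library:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    # HS256: HMAC-SHA256 over "<b64(header)>.<b64(payload)>".
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Hypothetical per-tenant claim layout; the real claim names come from
# the JWT authentication guide, not from this sketch.
token = sign_jwt(
    {
        "sub": "user-123",
        "environment": "env-customer-b",  # tenant-isolation claim (assumed name)
        "exp": int(time.time()) + 300,
    },
    "demo-shared-secret",
)
```

A production token server would use a maintained JWT library rather than hand-rolling the encoding; the sketch only shows why the token ends up as three dot-separated base64url segments.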

modules/ROOT/pages/tinymceai-on-premises-database.adoc

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@
 :keywords: AI, on-premises, database, MySQL, PostgreSQL, Redis, Docker, Podman, file storage, S3, Azure Blob
 
 This page covers the data layer: the SQL database, Redis, and file storage.
-For container runtimes, reverse proxies, TLS, Kubernetes, and ECS deployment, see the xref:tinymceai-on-premises-production.adoc[Production deployment guide].
+For container runtimes, reverse proxies, Transport Layer Security (TLS), Kubernetes, and ECS deployment, see the xref:tinymceai-on-premises-production.adoc[Production deployment guide].
 
 == Supported versions
 
@@ -439,7 +439,7 @@ docker run --add-host=host.docker.internal:host-gateway ...
 
 == Redis
 
-Every AI service instance must reach Redis. Redis holds session coordination, SSE delivery, and rate-limiting state. A temporary Redis outage degrades streaming but does not destroy persistent data.
+Every AI service instance must reach Redis. Redis holds session coordination, Server-Sent Events (SSE) delivery, and rate-limiting state. A temporary Redis outage degrades streaming but does not destroy persistent data.
 
 === Setup
 
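The failure mode described in the changed line — a Redis outage degrading streaming while persistent data survives — can be illustrated with a toy sketch. The `FakeRedis` class and the fallback value are purely illustrative, not actual service behavior:

```python
class FakeRedis:
    """Stand-in for a Redis client; `up=False` simulates an outage."""

    def __init__(self, up: bool = True):
        self.up = up
        self.store: dict[str, str] = {}

    def get(self, key: str):
        if not self.up:
            raise ConnectionError("redis down")
        return self.store.get(key)

def session_state(redis: FakeRedis, key: str) -> str:
    # Coordination state lives in Redis; documents live in SQL.
    # A Redis failure here degrades the feature instead of losing data.
    try:
        return redis.get(key) or "no-session"
    except ConnectionError:
        return "degraded"  # streaming coordination fails soft
```

The point of the sketch is only the shape of the dependency: Redis errors are caught and mapped to degraded behavior, while nothing persistent is touched.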
modules/ROOT/pages/tinymceai-on-premises-frameworks.adoc

Lines changed: 6 additions & 6 deletions
@@ -7,10 +7,10 @@
 This page covers the *editor-side* configuration that connects TinyMCE to the on-premises AI service. It assumes:
 
 * The AI service is already running. See xref:tinymceai-on-premises-getting-started.adoc[Getting started] for setup instructions.
-* A token endpoint exists that signs JWTs for the AI service. See xref:tinymceai-on-premises-jwt.adoc[JWT authentication] for back-end implementations.
+* A token endpoint exists that signs JSON Web Tokens (JWTs) for the AI service. See xref:tinymceai-on-premises-jwt.adoc[JWT authentication] for back-end implementations.
 * The TinyMCE API key has the AI feature enabled. Retrieve or upgrade a key at https://www.tiny.cloud/my-account/integrate/.
 
-For general framework setup (installing wrappers, component structure, SSR patterns), see the existing integration guides:
+For general framework setup (installing wrappers, component structure, server-side rendering (SSR) patterns), see the existing integration guides:
 
 * xref:react-cloud.adoc[React]
 * xref:vue-cloud.adoc[Vue.js]
@@ -151,7 +151,7 @@ This pattern avoids cookies entirely and works well for cross-origin setups.
 
 == Cross-origin requests to the AI service
 
-When `tinymceai_service_url` points to a different origin from the page (the common production case), the AI service must return CORS headers permitting the editor origin. The service reads the `ALLOWED_ORIGINS` environment variable for this.
+When `tinymceai_service_url` points to a different origin from the page (the common production case), the AI service must return Cross-Origin Resource Sharing (CORS) headers permitting the editor origin. The service reads the `ALLOWED_ORIGINS` environment variable for this.
 
 To verify CORS from a terminal:
 
@@ -167,7 +167,7 @@ The response should include `Access-Control-Allow-Origin: \https://app.yourcompa
 
 
 
-== Content Security Policy
+== Content Security Policy (CSP)
 
 If the application sets a `Content-Security-Policy` header, allow the AI service origin in `connect-src`:
 
@@ -197,7 +197,7 @@ If using the Tiny CDN instead of self-hosted assets, also add `\https://cdn.tiny
 |Confirm the fetch sends the session cookie (`credentials: 'include'`) or `Authorization` header that the back end expects.
 
 |AI responses hang then time out
-|Reverse proxy is buffering SSE
+|Reverse proxy is buffering Server-Sent Events (SSE)
 |Disable proxy buffering. See xref:tinymceai-on-premises-production.adoc[Production deployment].
 
 |Browser console shows a CORS error on `/v1/conversations`
@@ -217,6 +217,6 @@ For other issues, see xref:tinymceai-on-premises-troubleshoo
 
 * xref:tinymceai-on-premises-getting-started.adoc[Getting started]
 * xref:tinymceai-on-premises-jwt.adoc[JWT authentication]
-* xref:tinymceai-on-premises-providers.adoc[LLM providers]
+* xref:tinymceai-on-premises-providers.adoc[large language model (LLM) providers]
 * xref:tinymceai-on-premises-production.adoc[Production deployment]
 * xref:tinymceai-on-premises-troubleshooting.adoc[Troubleshooting]
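The `ALLOWED_ORIGINS` check that the CORS hunk describes boils down to echoing a listed origin back in `Access-Control-Allow-Origin`. A minimal sketch, assuming a comma-separated variable format — the real parsing rules live in the service, not here:

```python
def cors_headers(request_origin: str, allowed_origins: str) -> dict:
    """Return the CORS response header for a request Origin, or nothing.

    `allowed_origins` mimics an ALLOWED_ORIGINS environment variable;
    comma-separated parsing is an assumption for illustration.
    """
    allowed = {o.strip() for o in allowed_origins.split(",") if o.strip()}
    if request_origin in allowed:
        # Echo the specific origin back rather than "*", since the
        # editor may send credentials with its requests.
        return {"Access-Control-Allow-Origin": request_origin}
    return {}
```

An origin not in the list gets no `Access-Control-Allow-Origin` header at all, which is exactly what surfaces in the browser console as a CORS error on `/v1/conversations`.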

modules/ROOT/pages/tinymceai-on-premises-getting-started.adoc

Lines changed: 3 additions & 3 deletions
@@ -238,7 +238,7 @@ Always create environments through the Management Panel UI. Environments created
 
 === Create the token server
 
-The token server signs JWTs for the editor. The Node.js example below is for the demo only; the xref:tinymceai-on-premises-jwt.adoc[JWT authentication] guide contains production-ready endpoints in 8 languages (Node, Django, Flask, Laravel, Rails, .NET, Go, Spring Boot).
+The token server signs JSON Web Tokens (JWTs) for the editor. The Node.js example below is for the demo only; the xref:tinymceai-on-premises-jwt.adoc[JWT authentication] guide contains production-ready endpoints in 8 languages (Node, Django, Flask, Laravel, Rails, .NET, Go, Spring Boot).
 
 Create `package.json`:
 
@@ -351,7 +351,7 @@ npm start
 
 === Open the demo
 
-Open *http://localhost:3000* in a browser. The editor loads with the AI toolbar. Select text and try the AI features. Responses stream in real time from the chosen LLM provider, processed entirely within the local infrastructure.
+Open *http://localhost:3000* in a browser. The editor loads with the AI toolbar. Select text and try the AI features. Responses stream in real time from the chosen large language model (LLM) provider, processed entirely within the local infrastructure.
 
 The TinyMCE AI on-premises service is now running.
 
@@ -423,7 +423,7 @@ event: done
 data: {}
 ----
 
-If the stream emits `event: error`, inspect the `data` payload. Provider errors (invalid API key, IAM denial, model unavailable) ride inside the SSE response. The HTTP status stays 200. See the xref:tinymceai-on-premises-troubleshooting.adoc[LLM provider errors] section in the Troubleshooting guide for details.
+If the stream emits `event: error`, inspect the `data` payload. Provider errors (invalid API key, IAM denial, model unavailable) ride inside the Server-Sent Events (SSE) response. The HTTP status stays 200. See the xref:tinymceai-on-premises-troubleshooting.adoc[LLM provider errors] section in the Troubleshooting guide for details.
 
 A successful round-trip confirms: container health, database connectivity, Redis connectivity, JWT signing, JWT verification, permissions checking, environment registration, LLM provider authentication, and SSE streaming. If problems persist after these checks, focus on the editor configuration next.
 
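The changed line notes that provider errors ride inside the SSE response while the HTTP status stays 200, so a client has to parse the event stream to see them. A minimal parser for the two-field events shown in the hunk (`event:` / `data:`) — not a full SSE implementation:

```python
import json

def parse_sse(raw: str) -> list[tuple[str, str]]:
    """Split an SSE body into (event, data) pairs.

    Handles only the `event:`/`data:` fields visible in the diff above;
    per the SSE format, events are separated by blank lines and the
    event name defaults to "message" when absent.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = "message", ""
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = line[len("data:"):].strip()
        events.append((event, data))
    return events

body = 'event: error\ndata: {"message": "invalid API key"}\n\nevent: done\ndata: {}\n'
for event, data in parse_sse(body):
    if event == "error":
        # The provider error payload arrives despite the 200 status.
        detail = json.loads(data)
```

This is why checking only the HTTP status code is insufficient when scripting against the streaming endpoint.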
modules/ROOT/pages/tinymceai-on-premises-jwt.adoc

Lines changed: 4 additions & 4 deletions
@@ -3,7 +3,7 @@
 :description: JWT authentication for the TinyMCE AI on-premises service using HS256 symmetric signing
 :keywords: AI, on-premises, JWT, authentication, HS256
 
-The on-premises AI service uses *HS256* (HMAC-SHA256, symmetric shared secret) for JWT authentication. This is different from the Tiny Cloud AI service, which uses RS256.
+The on-premises AI service uses *HS256* (HMAC-SHA256, symmetric shared secret) for JSON Web Token (JWT) authentication. This is different from the Tiny Cloud AI service, which uses RS256.
 
 [WARNING]
 --
@@ -186,7 +186,7 @@ Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
 
 === Clock-skew leeway
 
-The service allows up to 60 seconds of clock skew on the `exp` claim. Keep the token server and the AI service synchronized with NTP.
+The service allows up to 60 seconds of clock skew on the `exp` claim. Keep the token server and the AI service synchronized with Network Time Protocol (NTP).
 
 
 
@@ -868,7 +868,7 @@ When debugging, start here. Most "auth failures" reflect wrong claim values rath
 |`allowed: false` on specific endpoints only |Missing the specific permission |Decode token, check the `auth.ai.permissions` array against the table above.
 |Token silently rejected, no decoded error |RS256 signature |Re-sign with HS256.
 |`aud` claim type mismatch |`aud` issued as array instead of string |Some JWT libraries default to array `aud`. Force string.
-|Editor shows "Failed to authenticate" |Token endpoint returned non-JSON, returned `token` as nested object, or CORS blocked the request |Open browser devtools → Network → inspect the response from `/api/ai-token`.
+|Editor shows "Failed to authenticate" |Token endpoint returned non-JSON, returned `token` as nested object, or Cross-Origin Resource Sharing (CORS) blocked the request |Open browser devtools → Network → inspect the response from `/api/ai-token`.
 |===
 
 === Sanity-check a token manually
@@ -906,6 +906,6 @@ Short-lived tokens limit exposure if a token leaks through a browser extension,
 == See also
 
 * xref:tinymceai-on-premises-getting-started.adoc[Getting started] -- end-to-end deployment, including a demo token server
-* xref:tinymceai-on-premises-providers.adoc[LLM providers] -- configuring custom models through `MODELS` and the `ai:models:<provider>:<model-id>` permission syntax
+* xref:tinymceai-on-premises-providers.adoc[large language model (LLM) providers] -- configuring custom models through `MODELS` and the `ai:models:<provider>:<model-id>` permission syntax
 * xref:tinymceai-on-premises-troubleshooting.adoc[Troubleshooting] -- full troubleshooting catalog beyond JWT
 * xref:tinymceai-on-premises-frameworks.adoc[Framework integration] -- editor-side integration patterns for React, Vue, and Angular, including `tinymceai_token_provider` wrappers

modules/ROOT/pages/tinymceai-on-premises-production.adoc

Lines changed: 9 additions & 9 deletions
@@ -17,7 +17,7 @@ The AI service is stateless, persists all state to MySQL/PostgreSQL and Redis, a
 
 == TLS / HTTPS
 
-The AI service does not terminate TLS. Place a reverse proxy in front.
+The AI service does not terminate Transport Layer Security (TLS). Place a reverse proxy in front.
 
 === Nginx example
 
@@ -48,7 +48,7 @@ server {
 
 [IMPORTANT]
 --
-SSE streaming requires `proxy_buffering off`. Without it, AI responses appear to hang until the entire response is generated.
+Server-Sent Events (SSE) streaming requires `proxy_buffering off`. Without it, AI responses appear to hang until the entire response is generated.
 --
 
 === AWS ALB
@@ -383,7 +383,7 @@ spec:
 [cols=",",options="header",]
 |===
 |Service |AWS recommendation
-|Database |RDS for MySQL 8.0 (Multi-AZ for HA)
+|Database |RDS for MySQL 8.0 (Multi-AZ for high availability (HA))
 |Redis |ElastiCache for Redis 7 (cluster mode)
 |Storage |Same-region S3 bucket
 |Load balancer |ALB with `/health` target health check, 300 s idle timeout
@@ -400,16 +400,16 @@ spec:
 |Practice |Implementation
 |Network isolation |Place the AI service in a private subnet; expose only through a load balancer. Restrict database and Redis to the AI service security group.
 |Block panel from the public internet |Restrict `/panel/` to an admin VPN or IP allowlist. The panel manages secrets and access keys.
-|TLS everywhere |Terminate TLS 1.3 at the reverse proxy. Use internal mTLS between the AI service and the data layer where supported.
+|TLS everywhere |Terminate TLS 1.3 at the reverse proxy. Use internal mutual TLS (mTLS) between the AI service and the data layer where supported.
 |Secrets management |Use Vault, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Never store secrets directly in orchestration manifests or commit them to source control.
 |Database encryption at rest |Turn on encryption at rest in the cloud provider console. RDS, Cloud SQL, and Azure Database enable this by default.
 |Redis authentication |Always set `REDIS_PASSWORD` (or use a managed Redis instance with authentication enabled).
 |Container security |Run as non-root, use a read-only filesystem where possible, and drop unnecessary Linux capabilities.
 |Image scanning |Scan `registry.containers.tiny.cloud/ai-service` with Trivy, Snyk, or the registry's built-in scanner.
-|Least-privilege JWTs |Grant only the permissions each user role requires. Avoid full-access tokens in production.
+|Least-privilege JSON Web Tokens (JWTs) |Grant only the permissions each user role requires. Avoid full-access tokens in production.
 |API secret rotation |Periodically create a new access key, add the new key to the configuration, then revoke the old key. The token endpoint reads the secret at request time.
-|Audit logging |Enable `ENABLE_METRIC_LOGS=true` and ship logs to a SIEM.
-|LLM API key rotation |Add the new key to the `PROVIDERS` array, restart the service, then revoke the old key after confirming the new one works.
+|Audit logging |Enable `ENABLE_METRIC_LOGS=true` and ship logs to a Security Information and Event Management (SIEM).
+|Large language model (LLM) API key rotation |Add the new key to the `PROVIDERS` array, restart the service, then revoke the old key after confirming the new one works.
 |===
 
 == Rate limiting
@@ -479,7 +479,7 @@ When enabled, the service writes a structured JSON entry for each request. Key f
 |===
 |Variable |Required |Default |Description
 |`LLM_TELEMETRY_ENABLED` |Yes |`false` |Primary telemetry switch
-|`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` |Yes |- |OTLP endpoint URL
+|`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` |Yes |- |OpenTelemetry Protocol (OTLP) endpoint URL
 |`OTEL_TRACES_SAMPLER_ARG` |No |`1.0` |Sampling rate (0.0 to 1.0)
 |`OTEL_DEBUG` |No |- |Verbose OTLP diagnostic logging
 |===
@@ -632,7 +632,7 @@ These values are approximate and vary with hardware, provider latency, and promp
 |1 to 50 |1 |db.t3.small (or 2 vCPU / 4 GB self-managed) |cache.t3.micro |Development and small teams
 |50 to 500 |2 |db.r6g.large |cache.r6g.large |Small production
 |500 to 5,000 |3 to 5 |db.r6g.xlarge (Multi-AZ) |cache.r6g.xlarge (cluster) |Medium production
-|5,000{plus} |5{plus} (HPA) |db.r6g.2xlarge{plus} |cache.r6g.2xlarge{plus} |Large production; contact Tiny for guidance
+|5,000{plus} |5{plus} (Horizontal Pod Autoscaler (HPA)) |db.r6g.2xlarge{plus} |cache.r6g.2xlarge{plus} |Large production; contact Tiny for guidance
 |===
 
 Starting point for self-managed deployments:
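The API-secret-rotation row in the table above relies on old and new keys overlapping during the cutover. One way such overlap can work — verification tries every configured secret — is sketched below; the multi-key mechanism is an assumption for illustration, not the service's documented implementation:

```python
import hashlib
import hmac

def verify(signing_input: bytes, signature: bytes, secrets: list[str]) -> bool:
    """Accept a signature if any configured secret produces it.

    During rotation the list holds both the old and the new key, so
    tokens signed with either remain valid until the old key is revoked.
    """
    return any(
        hmac.compare_digest(
            hmac.new(s.encode(), signing_input, hashlib.sha256).digest(),
            signature,
        )
        for s in secrets
    )

# A token freshly signed with the new key verifies during the overlap window.
msg = b"header.payload"
sig = hmac.new(b"new-secret", msg, hashlib.sha256).digest()
```

Whatever the actual mechanism, the operational order in the table is the load-bearing part: add the new key, confirm it works, then revoke the old one.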
modules/ROOT/pages/tinymceai-on-premises-providers.adoc

Lines changed: 5 additions & 5 deletions
@@ -6,7 +6,7 @@
 
 
 
-The `PROVIDERS` environment variable tells the AI service how to reach the upstream LLM. The `MODELS` environment variable tells the service which models are exposed to clients and which features each model supports. This page is the definitive reference for both: every supported `type`, every required field, and every known issue encountered in production.
+The `PROVIDERS` environment variable tells the AI service how to reach the upstream large language model (LLM). The `MODELS` environment variable tells the service which models are exposed to clients and which features each model supports. This page is the definitive reference for both: every supported `type`, every required field, and every known issue encountered in production.
 
 Start with the xref:tinymceai-on-premises-getting-started.adoc[Getting Started guide] if the AI service container is not yet running. The following sections assume a running `ai-service` container.
 
@@ -19,7 +19,7 @@ The AI service uses two related environment variables:
 |Variable |Type |What it does
 |`PROVIDERS` |JSON object |Map of provider IDs to provider configurations. Each entry says how to authenticate with one upstream LLM API.
 |`MODELS` |JSON array |List of models exposed to clients. Each model points at a `PROVIDERS` entry and declares which features it can serve.
-|JWT `auth.ai.permissions` |string array |Per-user authorization list. Includes `ai:models:<provider-key>:<model-id>` entries to gate access to individual models.
+|JSON Web Token (JWT) `auth.ai.permissions` |string array |Per-user authorization list. Includes `ai:models:<provider-key>:<model-id>` entries to gate access to individual models.
 |===
 
 The `PROVIDERS` keys are arbitrary identifiers (for example `"openai"`, `"my-bedrock"`, `"team-azure"`). Each value object has a `type` field that picks the implementation:
@@ -378,7 +378,7 @@ Amazon's hosted-model marketplace (Anthropic, Meta, Mistral, Cohere, Amazon Tita
 .Configuration details
 [%collapsible]
 ====
-IMPORTANT: The AI service does *not* use the AWS SDK default credential chain. `AWS_PROFILE`, `~/.aws/credentials`, IRSA, EC2 instance profiles, ECS task roles, and web identity tokens are all ignored. Inline the credentials in the `PROVIDERS` JSON.
+IMPORTANT: The AI service does *not* use the AWS SDK default credential chain. `AWS_PROFILE`, `~/.aws/credentials`, IAM Roles for Service Accounts (IRSA), EC2 instance profiles, ECS task roles, and web identity tokens are all ignored. Inline the credentials in the `PROVIDERS` JSON.
 
 *JSON shape:*
 
@@ -505,7 +505,7 @@ Google's enterprise model surface. Project-scoped, IAM-driven, GCP-billed. Crede
 .Configuration details
 [%collapsible]
 ====
-IMPORTANT: The Vertex adapter ignores ADC, `GOOGLE_APPLICATION_CREDENTIALS`, GKE Workload Identity, and Compute Engine metadata server credentials. Inline either a service-account key or an account-bound API key in the `PROVIDERS` JSON.
+IMPORTANT: The Vertex adapter ignores Application Default Credentials (ADC), `GOOGLE_APPLICATION_CREDENTIALS`, GKE Workload Identity, and Compute Engine metadata server credentials. Inline either a service-account key or an account-bound API key in the `PROVIDERS` JSON.
 
 *JSON shape (service account):*
 
@@ -666,7 +666,7 @@ For any HTTP API that implements the OpenAI Chat Completions interface, includin
 |===
 |Field |Required |Notes
 |`type` |Yes |Literal `"openai-compatible"`
-|`baseUrl` |Yes |*Must include the `/v1` suffix.* Without it, every request fails with a misleading "Not Found" SSE error.
+|`baseUrl` |Yes |*Must include the `/v1` suffix.* Without it, every request fails with a misleading "Not Found" Server-Sent Events (SSE) error.
 |`apiKeys` |No |Sent as `Authorization: Bearer <key>`. Most local runtimes ignore it.
 |`headers` |No |Additional headers such as auth tokens or tenant IDs.
 |===
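The `ai:models:<provider-key>:<model-id>` permission syntax that recurs through this file amounts to a membership check against the token's `auth.ai.permissions` array. A sketch, assuming exact-match semantics (the service's actual matching rules may be richer):

```python
def model_allowed(permissions: list[str], provider_key: str, model_id: str) -> bool:
    """Gate access to one model via the documented permission syntax."""
    return f"ai:models:{provider_key}:{model_id}" in permissions

# Hypothetical permission list; entries mirror the documented syntax.
perms = ["ai:models:openai:gpt-4o", "documents:read"]
openai_ok = model_allowed(perms, "openai", "gpt-4o")       # listed model
bedrock_ok = model_allowed(perms, "my-bedrock", "claude")  # not listed
```

Because the provider key is the arbitrary identifier chosen in `PROVIDERS` (for example `"my-bedrock"`), the permission string must use that same key, not the provider `type`.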