Authentication depends on the RLS mode:
**If RLS is disabled:** Use the project `APIKEY` (already extracted from the connection string). After each `cloudsync_network_init`/`cloudsync_network_init_custom` call, authenticate with:
```sql
SELECT cloudsync_network_set_apikey('<APIKEY>');
```
No tokens are needed. Skip token creation entirely.
**If RLS is enabled:** Create tokens for the test users. Create as many users as needed for the number of concurrent databases (assign 2 databases per user, or 1 per user if NUM_DBS <= 2).
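The user/database assignment above reduces to a small calculation. A minimal sketch in shell, assuming `NUM_DBS` is set by an earlier step (the default here is only illustrative):

```shell
# Derive the number of test users from NUM_DBS:
# 1 database per user when NUM_DBS <= 2, otherwise 2 per user (rounded up).
NUM_DBS="${NUM_DBS:-4}"
if [ "$NUM_DBS" -le 2 ]; then
  NUM_USERS=$NUM_DBS
else
  NUM_USERS=$(( (NUM_DBS + 1) / 2 ))
fi
echo "NUM_USERS=$NUM_USERS"
```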
Save each user's `token` and `userId` from the response. After each `cloudsync_network_init`/`cloudsync_network_init_custom` call, authenticate with:
```sql
SELECT cloudsync_network_set_token('<TOKEN>');
```
**IMPORTANT:** Using a token when RLS is disabled will cause the server to silently reject all sent changes (send appears to succeed but data is not persisted remotely). Always use `cloudsync_network_set_apikey` when RLS is off.
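To avoid that failure mode, the test script can select the auth statement from a single flag. A minimal sketch, assuming `RLS_ENABLED`, `APIKEY`, and `TOKEN` are placeholder variables populated by the earlier setup steps (they are not part of the CloudSync API):

```shell
# Choose the correct authentication statement for the current RLS mode.
# RLS_ENABLED, APIKEY, and TOKEN are assumed to come from earlier steps.
RLS_ENABLED="${RLS_ENABLED:-0}"
APIKEY="${APIKEY:-<APIKEY>}"
TOKEN="${TOKEN:-<TOKEN>}"

if [ "$RLS_ENABLED" = "1" ]; then
  AUTH_SQL="SELECT cloudsync_network_set_token('$TOKEN');"
else
  # RLS off: a token here would make the server silently drop changes.
  AUTH_SQL="SELECT cloudsync_network_set_apikey('$APIKEY');"
fi
printf '%s\n' "$AUTH_SQL"
```

Running `$AUTH_SQL` right after each `cloudsync_network_init` call then does the right thing regardless of mode.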
### Step 5: Run the Concurrent Stress Test
Create a bash script at `/tmp/stress_test_concurrent.sh` that:
2. **Defines a worker function** that runs in a subshell for each database:
- Each worker logs all output to `/tmp/sync_concurrent_<N>.log`
- Each iteration does:
a. **DELETE all rows** → `cloudsync_network_sync(100, 10)`
b. **INSERT <ROWS> rows** (in a single BEGIN/COMMIT transaction) → `cloudsync_network_sync(100, 10)`
c. **UPDATE all rows** → `cloudsync_network_sync(100, 10)`
- Each session must: `.load` the extension, call `cloudsync_network_init()`, `cloudsync_network_set_token()` (if RLS), do the work, call `cloudsync_terminate()`
- Include labeled output lines like `[DB<N>][iter <I>] deleted/inserted/updated, count=<C>` for grep-ability
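The worker described above can be sketched as follows. This is a hedged outline, not the canonical script: the table name `items`, the DB/extension paths, and the `DRY_RUN` switch (which prints the SQL instead of invoking `sqlite3`) are illustrative choices.

```shell
#!/bin/sh
# Sketch of one per-database worker. Adjust table/paths to the real schema.
ROWS="${ROWS:-3}"
ITERS="${ITERS:-1}"
DRY_RUN="${DRY_RUN:-1}"   # 1 = print SQL instead of invoking sqlite3

run_session() {  # $1 = db file, $2 = SQL; each call is a fresh session
  if [ "$DRY_RUN" = "1" ]; then
    printf '%s\n' "$2"
  else
    sqlite3 "$1" ".load ./cloudsync" "$2"
  fi
}

worker() {  # $1 = database index; intended to run in a background subshell
  n=$1
  db="/tmp/stress_db_$n.sqlite"
  log="/tmp/sync_concurrent_$n.log"
  : > "$log"
  i=1
  while [ "$i" -le "$ITERS" ]; do
    for phase in delete insert update; do
      case "$phase" in
        delete) body="DELETE FROM items;"; label=deleted; count=0 ;;
        insert) body="BEGIN;
$(seq 1 "$ROWS" | sed 's/.*/INSERT INTO items(value) VALUES(&);/')
COMMIT;"; label=inserted; count=$ROWS ;;
        update) body="UPDATE items SET value = value + 1;"; label=updated; count=$ROWS ;;
      esac
      run_session "$db" "SELECT cloudsync_network_init();
-- SELECT cloudsync_network_set_token('<TOKEN>');  (only if RLS is enabled)
$body
SELECT cloudsync_network_sync(100, 10);
SELECT cloudsync_terminate();" >> "$log"
      # Labeled, grep-able progress line; count is the expected row count.
      echo "[DB$n][iter $i] $label, count=$count" >> "$log"
    done
    i=$((i + 1))
  done
}
```

Launching `worker <N> &` once per database and then `wait`-ing gives the concurrent load; each worker's output stays isolated in its own `/tmp/sync_concurrent_<N>.log`.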