# HIL Test Rig Architecture

This document defines the recommended hardware-in-the-loop (HIL) test-rig architecture for IX-HapticSight.

At the current repository stage, this is a **planning and evidence-structure document**, not proof that the rig has been built or validated.
Its purpose is to define:

- what a credible HIL rig should contain
- what measurements it should produce
- how those measurements should map back to repository claims
- how future HIL evidence should be packaged

This is the bridge between the current software-first repo and future measured physical evidence.

---
## 1. Purpose

A HIL rig is needed because software benchmarks alone cannot answer the hardest questions about a human-facing interaction system.

Software benchmarks can help show:

- correct decision paths
- correct denial logic
- correct event emission
- correct replay behavior

They cannot, by themselves, show:

- real contact force behavior
- real retreat timing
- real backend response to faults
- real overforce detection latency
- real safe-hold fallback behavior under motion constraints

The HIL architecture exists to close that gap.

---
## 2. Current Repo Status

What the repo already has:

- protocol schemas
- runtime coordination
- structured event logging
- replay helpers
- interface models for:
  - force/torque
  - tactile
  - proximity
  - thermal
- simulated execution adapter
- deterministic benchmark runner
- benchmark reporting layer

What the repo does **not** yet have:

- a real HIL adapter
- calibrated physical test-fixture definitions in code
- measured force traces
- measured retreat timing
- measured thermal trip behavior
- measured actuator abort timing

So this document is intentionally future-facing.

---
## 3. What the HIL Rig Should Prove

A serious HIL rig for IX-HapticSight should help answer questions like:

### Contact control questions

- does the system stay within expected force bounds
- how quickly does force rise and fall
- what peak force actually occurs at contact
- what happens when the requested dwell ends

### Fault behavior questions

- how fast does the system react to overforce
- what happens when a safety trigger occurs during motion
- can retreat begin reliably after fault detection
- when does the system fall back to safe hold instead

### Sensing questions

- are force/torque signals fresh enough
- can tactile signals reveal contact spread or pressure concentration
- does proximity support safer near-contact slowdown
- can thermal signals trigger a clean safety response

### Evidence questions

- can the event log be aligned with measured traces
- can a future reviewer reproduce what happened
- can a requirement be traced to a test and a measured artifact

That is the real value of a HIL rig here.

---
## 4. Recommended Rig Layers

A strong HIL rig for this repo should have five layers.

### Layer A — Structural test fixture

A stable physical fixture that represents the interaction surface or contact target.

Examples:

- compliant shoulder-shaped fixture
- flat pad with controlled compliance
- modular contact block with replaceable layers

Purpose:

- provide repeatable contact geometry
- support multiple material and compliance conditions
- make contact trials comparable

---
### Layer B — Sensing and instrumentation

The rig should include at least some of:

- calibrated load cell or force sensor
- 6-axis force-torque sensor where practical
- tactile patch or contact-pressure sensing surface where possible
- proximity sensing zone near the contact region
- thermal sensing point or surface sensor
- timing source for synchronized timestamps

Purpose:

- convert “it seemed okay” into measured data

---
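The “fresh enough” requirement that recurs in the sensing questions (Section 3) can be made concrete with a freshness gate on every timestamped reading. A minimal sketch, assuming a hypothetical `SensorSample` type and a 50 ms freshness window; neither name nor value comes from the repo:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSample:
    """One timestamped reading from one rig sensor (hypothetical type)."""
    channel: str        # e.g. "force_z", "thermal_surface"
    value: float        # calibrated engineering units
    timestamp_s: float  # rig-synchronized monotonic time, seconds

def is_fresh(sample: SensorSample, now_s: float, max_age_s: float = 0.05) -> bool:
    """A sample is usable only if it is recent; stale data must deny motion.
    Negative ages (clock skew) are rejected too."""
    age = now_s - sample.timestamp_s
    return 0.0 <= age <= max_age_s

# A stale force reading should fail the check and trigger denial upstream.
stale = SensorSample(channel="force_z", value=4.2, timestamp_s=10.00)
print(is_fresh(stale, now_s=10.02))  # → True  (within the 50 ms window)
print(is_fresh(stale, now_s=10.20))  # → False (too old)
```

The window value would come from the calibration conventions Section 12 calls for, not from a hard-coded default.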
### Layer C — Execution backend under test

The rig should exercise a real execution path, not only synthetic function calls.

Possible progression:

1. simulated execution adapter
2. middleware-connected execution adapter
3. physical actuator / controller bridge
4. higher-fidelity motion backend

Purpose:

- gradually increase realism while keeping observability

---
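One way to keep that progression swappable is a shared backend interface, so the simulated adapter and a future physical bridge expose the same calls. A sketch under assumed names; `ExecutionBackend` and `SimulatedBackend` are illustrative, not repo API:

```python
from abc import ABC, abstractmethod

class ExecutionBackend(ABC):
    """Common surface for every stage of the backend progression (illustrative)."""

    @abstractmethod
    def start_motion(self, target: str) -> None: ...

    @abstractmethod
    def abort(self) -> str:
        """Hard stop; returns a status string for the event log."""

class SimulatedBackend(ExecutionBackend):
    """Stage 1: pure-software adapter, usable before any hardware exists."""

    def __init__(self) -> None:
        self.state = "idle"

    def start_motion(self, target: str) -> None:
        self.state = f"moving:{target}"

    def abort(self) -> str:
        self.state = "aborted"
        return "abort_ack"

backend: ExecutionBackend = SimulatedBackend()
backend.start_motion("contact_pad")
print(backend.abort())  # → abort_ack  (same call shape a physical bridge would answer)
```

Because trials only talk to the abstract surface, moving from stage 1 to stage 3 should not change trial code, only the adapter behind it.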
### Layer D — Safety and interrupt path

A HIL rig must explicitly test fault and stop behavior, not just nominal contact.

Examples:

- overforce injection
- artificial stale-signal condition
- simulated sensor drop
- backend refusal
- manual abort input
- safe-hold command path

Purpose:

- prove the system handles bad cases, not just happy paths

---
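Overforce injection from the list above can be sketched as an artificial offset added to the measured force, so the detection path is exercised without truly overloading the fixture. The threshold value and names here are assumptions, not figures from the repo's safety invariants:

```python
# Illustrative overforce check; the limit is a placeholder, not a repo value.
OVERFORCE_LIMIT_N = 15.0

def classify_force(measured_n: float, injected_offset_n: float = 0.0) -> str:
    """Fault injection adds an artificial offset on top of the measured force,
    so the abort path can be triggered safely and repeatably."""
    effective = measured_n + injected_offset_n
    if effective > OVERFORCE_LIMIT_N:
        return "overforce_abort"
    return "nominal"

# Nominal contact stays below the limit; injection pushes it over.
print(classify_force(8.0))                          # → nominal
print(classify_force(8.0, injected_offset_n=10.0))  # → overforce_abort
```

Recording the injected offset alongside the trace is what lets a reviewer later distinguish an injected fault from a real one.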
### Layer E — Evidence capture and packaging

The HIL rig should record artifacts in structured form.

Examples:

- measured force trace
- thermal trace
- event log JSONL
- benchmark scenario ID
- calibration reference
- fault-injection note
- operator note if needed
- run manifest

Purpose:

- make the result reviewable and traceable later

---
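The event-log JSONL artifact in that list is cheap to produce: one JSON object per line, each carrying a synchronized timestamp. A sketch, with event names and field names invented for illustration:

```python
import io
import json

def write_event(stream, event_type: str, t_s: float, **fields) -> None:
    """Append one timestamped event as a single JSON line (JSONL)."""
    record = {"t_s": t_s, "type": event_type, **fields}
    stream.write(json.dumps(record, sort_keys=True) + "\n")

# In a real trial this would be an open file; StringIO keeps the sketch self-contained.
buf = io.StringIO()
write_event(buf, "contact_start", t_s=1.250, force_n=2.1)
write_event(buf, "overforce_detected", t_s=1.380, force_n=16.4)
print(buf.getvalue())
```

One-record-per-line means a truncated run still leaves every completed event parseable, which matters for post-fault review.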
## 5. Recommended Initial Fixture

The first serious HIL fixture should stay narrow.

Recommended v1 fixture:

- one shoulder-support-style contact region
- compliant target surface
- one force measurement path
- one basic retreat path
- one backend stop path

Why narrow:

- smaller scope means stronger evidence sooner
- fewer variables means clearer debugging
- easier alignment with the current repo mission

The repo does **not** need a humanoid rig first.
It needs a **repeatable, instrumented contact rig** first.

---
## 6. Minimum Instrumentation Set

If the HIL rig had to start with the minimum credible measurement stack, I would choose:

1. **Calibrated force measurement**
   - non-negotiable

2. **Timestamped event logging**
   - already fits the repo direction

3. **One controlled actuator or motion backend**
   - even if simple

4. **One hard abort path**
   - to measure stop behavior

5. **One retreat-capable motion path**
   - to measure withdrawal behavior

Everything else is valuable, but those are the minimum “this is becoming real” pieces.

---
## 7. Recommended Optional Instrumentation

As the rig matures, add:

- tactile patch sensing
- short-range proximity sensing
- thermal sensing on the interaction surface
- synchronized video reference for lab-only debugging
- motion capture or encoder-derived pose history
- external timing or trigger channel for fault-injection correlation

These make the evidence richer, but they should not replace the basics.

---
## 8. Example HIL Trial Classes

The first trial catalog should be small and explicit.

### Trial class A — Nominal bounded contact

Goal:

- verify measured force remains within expected limits
- verify dwell behavior
- verify release behavior

### Trial class B — Overforce interruption

Goal:

- trigger the force threshold
- verify abort or retreat behavior
- measure detection-to-response timing
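The detection-to-response timing in trial class B can be read straight off the timestamped event log, with no extra instrumentation. A minimal sketch, assuming hypothetical event-type names:

```python
def detection_to_response_s(events: list) -> float:
    """Latency between the first overforce-detection event and the first
    abort/retreat response event in a timestamped log (event names assumed)."""
    detect = next(e["t_s"] for e in events if e["type"] == "overforce_detected")
    respond = next(e["t_s"] for e in events
                   if e["type"] in ("abort_issued", "retreat_started"))
    return respond - detect

log = [
    {"t_s": 1.380, "type": "overforce_detected"},
    {"t_s": 1.402, "type": "abort_issued"},
]
print(round(detection_to_response_s(log), 3))  # → 0.022
```

This is exactly the “align event log with measured traces” evidence question from Section 3: the same computation run against a measured force trace cross-checks the logged timestamps.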
### Trial class C — Retreat path trial

Goal:

- begin nominal approach/contact
- force a retreat condition
- verify retreat start and completion timing

### Trial class D — Safe-hold fallback trial

Goal:

- make retreat unavailable or unsafe
- verify the safe-hold path is explicit and measurable

### Trial class E — Sensor freshness/fault trial

Goal:

- inject a stale or invalid sensor condition
- verify runtime denial or interruption behavior

These trial classes map directly to the repo’s strongest safety claims.

---
## 9. Recommended Artifact Set Per Trial

Each HIL trial should eventually produce a bundle with at least:

- trial ID
- benchmark/scenario ID if applicable
- rig configuration ID
- calibration reference IDs
- event log path
- measured force trace path
- optional thermal/proximity/tactile trace paths
- expected outcome
- observed outcome
- operator note if needed
- timestamp window
- pass/fail/error result
- reason code

That is the minimum artifact discipline needed to make a future HIL result trustworthy.
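The bundle checklist above maps naturally onto a typed manifest record. A sketch with illustrative field names and example values; the repo has not fixed a manifest schema, so everything here is a placeholder:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional
import json

@dataclass
class TrialManifest:
    """One trial's artifact bundle (hypothetical schema mirroring the checklist)."""
    trial_id: str
    rig_config_id: str
    calibration_refs: List[str]
    event_log_path: str
    force_trace_path: str
    expected_outcome: str
    observed_outcome: str
    result: str                       # "pass" | "fail" | "error"
    reason_code: str
    scenario_id: Optional[str] = None
    operator_note: Optional[str] = None

manifest = TrialManifest(
    trial_id="trial-b-0001",
    rig_config_id="rig-v1",
    calibration_refs=["loadcell-cal-2024-01"],
    event_log_path="runs/trial-b-0001/events.jsonl",
    force_trace_path="runs/trial-b-0001/force.csv",
    expected_outcome="abort_within_limit",
    observed_outcome="abort_within_limit",
    result="pass",
    reason_code="ok",
)
# Serializable to JSON, so the manifest can live next to the traces it indexes.
print(json.dumps(asdict(manifest), sort_keys=True)[:40])
```

Making the manifest a dataclass rather than loose dict keys means a missing field fails at construction time, not at review time.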
---
## 10. Mapping Back to Repo Requirements

The HIL rig should not become a disconnected lab toy.
Its outputs should map back to documented repo structure:

- `docs/safety/invariants.md`
- `docs/safety/requirements_traceability.md`
- `docs/safety/fault_handling.md`
- `docs/safety/retreat_semantics.md`
- `docs/governance/safety_case.md`
- `docs/benchmarks/metrics.md`

Examples:

- an overforce trial maps to force and fault invariants
- a retreat-timing trial maps to retreat semantics
- a safe-hold fallback trial maps to fault-handling expectations
- event-log correlation maps to replay/logging evidence

That traceability is the real point.
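That mapping can be kept machine-checkable rather than living only in prose. A sketch pairing Section 8's trial classes with the repo doc paths listed above; the keys are invented labels, not an established schema:

```python
# Illustrative traceability map: trial class → requirement docs it evidences.
# The doc paths are real repo paths from this document; the keys are sketch names.
TRACEABILITY = {
    "trial_class_B_overforce": [
        "docs/safety/invariants.md",
        "docs/safety/fault_handling.md",
    ],
    "trial_class_C_retreat": ["docs/safety/retreat_semantics.md"],
    "trial_class_D_safe_hold": ["docs/safety/fault_handling.md"],
    "trial_class_E_freshness": ["docs/safety/invariants.md"],
}

def docs_for(trial_class: str) -> list:
    """Return the requirement docs a trial result must be checked against."""
    return TRACEABILITY.get(trial_class, [])

print(docs_for("trial_class_C_retreat"))  # → ['docs/safety/retreat_semantics.md']
```

A table like this lets CI flag any trial class that evidences nothing, and any safety doc that no trial exercises.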
---
## 11. Recommended HIL Data Flow

A future HIL run should ideally look like this:

1. choose an explicit scenario or trial definition
2. load rig configuration and calibration references
3. initialize the runtime service / execution backend
4. run the trial
5. capture:
   - event log
   - measured traces
   - execution backend status
   - timing markers
6. package the artifact bundle
7. compare expected vs observed outcomes
8. update the benchmark/reporting layer where appropriate

This preserves continuity with the current repo architecture.
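The eight steps above can be sketched as a single orchestration function. Every name in it is a stand-in; the repo defines no HIL runner yet, and the trivial "run" stage here is a placeholder for real backend execution:

```python
def run_hil_trial(scenario: dict) -> dict:
    """Drive one trial through the eight-step flow and return its artifact bundle."""
    # Steps 1-2: scenario chosen by caller; load config and calibration refs.
    config = {"rig": scenario["rig_config_id"], "cal": scenario["cal_refs"]}

    # Step 3: stand-in for runtime/backend initialization.
    events = []

    # Step 4-5: run the trial, capturing timestamped events (traces omitted here).
    events.append({"t_s": 0.0, "type": "trial_started"})
    events.append({"t_s": 0.8, "type": "trial_finished",
                   "outcome": scenario["expected"]})  # placeholder outcome

    # Steps 6-7: package the bundle and compare expected vs observed.
    observed = events[-1]["outcome"]
    return {
        "config": config,
        "events": events,
        "expected": scenario["expected"],
        "observed": observed,
        "result": "pass" if observed == scenario["expected"] else "fail",
    }  # Step 8: the returned bundle feeds the reporting layer.

bundle = run_hil_trial({
    "rig_config_id": "rig-v1",
    "cal_refs": ["loadcell-cal-2024-01"],
    "expected": "bounded_contact",
})
print(bundle["result"])  # → pass
```

The point of the sketch is the shape: one entry point per trial, one bundle per run, with comparison done inside the flow rather than by hand afterward.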
---
## 12. Current Gaps Before Real HIL Work

Before real HIL can be credible, the repo still needs:

- calibration conventions
- fault-injection procedure conventions
- artifact manifest conventions
- stronger event-log schema versioning
- clearer trial-ID / bundle-ID conventions
- eventual physical backend integration

That is why this doc belongs before any strong HIL claims.

---
## 13. Review Questions

When evaluating a proposed HIL rig plan, ask:

1. What exact repo claim does this rig test?
2. What is physically measured?
3. How are timestamps aligned?
4. What makes the trial repeatable?
5. What artifact bundle comes out of it?
6. How does the result trace back to repo requirements?

If those answers are weak, the rig design is weak.

---
## 14. Final Rule

A HIL rig should reduce uncertainty about physical behavior.

If it produces impressive-looking demos without traceable measurements, it is not strong enough for this repo.
