Commit 29b3677

Revise README for clarity and detail
Expanded the README to clarify the purpose, features, and structure of the IX-HapticSight project. Updated sections on current status, repository structure, documentation, and quick start instructions.
1 parent 7de9ace commit 29b3677

1 file changed

Lines changed: 289 additions & 66 deletions

File tree

README.md

# IX-HapticSight

**IX-HapticSight** is a safety-first optical-haptic interaction architecture for bounded human-facing robot behavior.

The repository is built around one narrow idea:

> convert perception, consent state, safety state, and bounded contact rules into explicit approach, contact, retreat, and safe-hold behavior that can be inspected, tested, replayed, and benchmarked.

This repo is **not** positioned as a broad “emotion-aware robot” claim, a production deployment stack, or a certified collaborative robot package. It is a **measurement-first, audit-friendly reference architecture** with working code, tests, structured logging, replay helpers, interface abstractions, and deterministic benchmark support.

---

## Current Status

**Current maturity:** strong repository architecture / reference-runtime stage

What the repo currently includes:

- deterministic OHIP protocol core
- backend-agnostic runtime coordination layer
- explicit runtime session and fault models
- structured JSONL event logging
- replay helpers for event streams
- normalized interface models for:
  - force/torque
  - tactile
  - proximity
  - thermal
- execution adapters
  - in-memory simulated execution adapter
- deterministic benchmark runner, scenario catalog, and reporting helpers
- expanded safety, governance, replay, benchmark, and HIL-prep documentation
- unit tests and CI workflow

What it does **not** currently include:

- real hardware integration
- HIL measured data
- certified safety evidence
- production deployment approval
- medical or therapeutic validation
- blanket privacy or compliance claims

That line matters. This repo is strongest when it stays precise.

---

## What the Repository Is Trying to Do

IX-HapticSight is trying to make one difficult boundary explicit:

**when a machine is allowed to approach, touch, withdraw, or stop around a person — and how that decision is made visible and reviewable.**

The repo is built around:

- bounded interaction semantics
- consent-aware contact authorization
- safety-veto authority over convenience behavior
- explicit retreat and safe-hold semantics
- replayable event trails
- scenario-based benchmark evaluation
- traceable evidence growth toward future HIL work
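
The ordering of the first three items can be made concrete with a minimal sketch. All names here are hypothetical illustrations, not the repo's actual API; the real gate logic lives in `src/ohip/` and `src/ohip_runtime/`. The point is only that the safety veto outranks consent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

def gate_contact(safety_state: str, has_explicit_consent: bool) -> Decision:
    """Illustrative contact gate: safety veto first, then consent."""
    # Safety veto outranks everything: RED denies even with consent on file.
    if safety_state == "RED":
        return Decision(False, "safety_veto")
    # Consent is only consulted once safety clears.
    if not has_explicit_consent:
        return Decision(False, "missing_consent")
    return Decision(True, "bounded_contact_allowed")

print(gate_contact("RED", True))    # consent cannot override a RED safety state
print(gate_contact("GREEN", True))
```
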

---

## What the Repository Is Not

This repository is **not**:

- a general social robotics framework
- a claim of human-emotion understanding
- a production manipulator stack
- a guarantee of safe real-world touch
- a substitute for hardware safety engineering
- a substitute for regulatory, institutional, or legal review
- a proof of collaborative-robot certification
- a finished physical system

The right way to read this repo is:

**bounded concept-stage architecture with real code, real tests, real structured artifacts, and explicit evidence limits.**

---

## Repository Structure

### Protocol core
`src/ohip/`

Stable reference-implementation layer for:
- schemas
- consent management
- contact planning
- engagement scheduling
- rest pose generation
- safety gating

### Runtime layer
`src/ohip_runtime/`

Backend-agnostic runtime ownership for:
- interaction session state
- runtime fault models
- coordination requests and decisions
- runtime coordinator
- session store
- configuration wiring
- high-level runtime service

### Interface layer
`src/ohip_interfaces/`

Normalized sensing and execution contracts for:
- signal health and freshness
- force/torque samples
- tactile frames
- proximity frames
- thermal frames
- execution adapter contracts
- simulated execution adapter
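
To make "normalized sample with freshness" concrete, here is a rough sketch of one such contract. The field names and the staleness threshold are assumptions for illustration only; the actual models are defined in `src/ohip_interfaces/`.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ForceTorqueSample:
    """Illustrative normalized wrench sample; field names are assumptions."""
    fx: float
    fy: float
    fz: float     # force components, N
    tx: float
    ty: float
    tz: float     # torque components, N*m
    stamp: float  # capture time on a monotonic clock, s

    def is_fresh(self, now: float, max_age_s: float = 0.05) -> bool:
        # Stale samples should fail health checks instead of silently driving control.
        return 0.0 <= (now - self.stamp) <= max_age_s

sample = ForceTorqueSample(0.1, 0.0, -2.5, 0.0, 0.0, 0.01, stamp=time.monotonic())
print(sample.is_fresh(time.monotonic()))  # fresh immediately after capture
```
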

### Logging and replay
`src/ohip_logging/`

Structured evidence layer for:
- event records
- JSONL event logs
- event recorder
- replay helpers

### Benchmark layer
`src/ohip_bench/`

Deterministic evaluation layer for:
- benchmark models
- benchmark runner
- built-in scenario catalog
- benchmark reporting

### Supporting assets
- `configs/` — force and culture profile configuration
- `docs/` — spec, state machine, safety, governance, replay, benchmark, and HIL-prep docs
- `examples/` — quickstart reference path
- `sim/` — simulation scene assets
- `tests/` — unit and integration-style repository tests
- `.github/workflows/tests.yml` — CI test workflow

---

## Documentation Map

Start here if you want the repo’s architectural story in order:

1. `docs/spec.md`
2. `docs/state_machine.md`
3. `docs/index.md`
4. `ROADMAP.md`
5. `docs/architecture/runtime_overview.md`
6. `docs/safety/invariants.md`
7. `docs/safety/requirements_traceability.md`
8. `docs/governance/safety_case.md`
9. `docs/benchmarks/overview.md`
10. `docs/replay/event_log_schema.md`
11. `docs/hil/test_rig_architecture.md`

If you only want the high-level direction:
- `ROADMAP.md`
- `docs/governance/standards_crosswalk.md`
- `docs/governance/safety_case.md`

---

## Runtime Flow

At the current repository stage, the main runtime story is:

1. create or load an interaction session
2. submit an explicit interaction request
3. evaluate consent
4. evaluate safety
5. build a bounded planning outcome if allowed
6. record the full structured decision trail
7. optionally submit a bounded execution request
8. record execution status, transitions, faults, retreat, or safe-hold behavior
9. replay or benchmark the resulting event trail later

That flow is represented across:
- `src/ohip_runtime/`
- `src/ohip_logging/`
- `src/ohip_interfaces/`
- `src/ohip_bench/`
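
Steps 7 and 8 imply a small execution state machine. The sketch below shows one plausible shape, in which faults always route through retreat into safe-hold rather than jumping straight back to idle. The state and event names are invented for illustration; the real session and fault models live in `src/ohip_runtime/`.

```python
# Illustrative execution transitions; state and event names are invented here.
TRANSITIONS = {
    ("idle", "approach_authorized"): "approaching",
    ("approaching", "contact_reached"): "in_contact",
    ("in_contact", "dwell_complete"): "retreating",
    ("retreating", "retreat_complete"): "idle",
    # Faults route through retreat into safe-hold, never straight back to idle.
    ("approaching", "fault"): "retreating",
    ("in_contact", "fault"): "retreating",
    ("retreating", "fault"): "safe_hold",
    ("safe_hold", "operator_reset"): "idle",
}

def step(state: str, event: str) -> str:
    # Unknown (state, event) pairs fall back to safe_hold rather than guessing.
    return TRANSITIONS.get((state, event), "safe_hold")

# Replaying an event trail reproduces the state history deterministically.
state, trail = "idle", []
for event in ["approach_authorized", "contact_reached", "fault", "fault"]:
    state = step(state, event)
    trail.append((event, state))
print(trail)  # ends in safe_hold after the double fault
```
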

---

## Structured Logging and Replay

A major part of this upgrade is that important behavior is no longer supposed to disappear into console output.

Current logging/replay support includes:

- structured event records
- append-friendly JSONL logs
- request/decision/fault/transition/execution event helpers
- replay loading and slicing
- replay filtering by:
  - session
  - request
  - event kind
  - event range

This matters because a safety-first interaction repo should be explainable **after the fact**, not only impressive in the moment.

Relevant files:
- `src/ohip_logging/events.py`
- `src/ohip_logging/jsonl.py`
- `src/ohip_logging/recorder.py`
- `src/ohip_logging/replay.py`
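
The JSONL idea itself is small enough to sketch with the standard library: one JSON object per line, appended as events happen, read back and filtered later. The field names below are placeholders; the authoritative record schema is `docs/replay/event_log_schema.md`.

```python
import io
import json

# Append-friendly evidence log: one JSON object per line (JSONL).
log = io.StringIO()
events = [
    {"seq": 1, "session": "s1", "kind": "request", "payload": {"target": "handoff"}},
    {"seq": 2, "session": "s1", "kind": "decision", "payload": {"allowed": False}},
    {"seq": 3, "session": "s2", "kind": "request", "payload": {"target": "inspect"}},
]
for event in events:
    log.write(json.dumps(event) + "\n")

# Replay: read the lines back, then slice and filter the trail after the fact.
log.seek(0)
replayed = [json.loads(line) for line in log]
s1_decisions = [
    e for e in replayed if e["session"] == "s1" and e["kind"] == "decision"
]
print(s1_decisions)
```
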

---

## Benchmarking

The repo now includes a deterministic benchmark layer.

Current benchmark support includes:

- explicit scenario definitions
- explicit expectations
- structured observations
- structured benchmark results
- small built-in scenario catalog
- reporting helpers for summaries and pass rates

Current built-in scenarios focus on:
- explicit-consent approval path
- missing-consent denial path
- RED-safety denial path

Relevant files:
- `src/ohip_bench/models.py`
- `src/ohip_bench/runner.py`
- `src/ohip_bench/scenarios.py`
- `src/ohip_bench/reporting.py`

And the reviewer-facing docs:
- `docs/benchmarks/overview.md`
- `docs/benchmarks/scenario_catalog.md`
- `docs/benchmarks/metrics.md`
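
Conceptually, the benchmark layer separates a scenario, its explicit expectation, and the observed result. A minimal sketch of that split, with invented scenario IDs and fields (the real definitions live in `src/ohip_bench/models.py` and `src/ohip_bench/scenarios.py`, and `run_scenario` here is a stand-in, not the actual runner):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """Invented scenario shape: inputs plus an explicit expectation."""
    scenario_id: str
    safety_state: str      # e.g. "GREEN" or "RED"
    has_consent: bool
    expect_allowed: bool

def run_scenario(scenario: Scenario) -> bool:
    # Stand-in for the system under test: RED vetoes, then consent gates.
    observed_allowed = scenario.safety_state != "RED" and scenario.has_consent
    return observed_allowed == scenario.expect_allowed

catalog = [
    Scenario("explicit_consent_approval", "GREEN", True, expect_allowed=True),
    Scenario("missing_consent_denial", "GREEN", False, expect_allowed=False),
    Scenario("red_safety_denial", "RED", True, expect_allowed=False),
]
results = {s.scenario_id: run_scenario(s) for s in catalog}
pass_rate = sum(results.values()) / len(results)
print(f"{pass_rate:.0%} of {len(results)} scenarios passed")  # 100% of 3 scenarios passed
```
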

---

## HIL Preparation

This repo now includes HIL-prep documentation, but not HIL proof.

Current HIL-prep docs define:
- recommended test-rig architecture
- calibration strategy
- fault-injection strategy

These are here so future physical evidence can be:
- bounded
- calibrated
- traceable
- linked back to repo requirements and claims

Relevant docs:
- `docs/hil/test_rig_architecture.md`
- `docs/hil/calibration.md`
- `docs/hil/fault_injection.md`

This is **evidence preparation**, not evidence completion.

---

## Quick Start

### 1. Install
```bash
pip install -r requirements.txt
pip install -e .
```

### 2. Run tests
```bash
pytest -q
```

### 3. Run the quickstart smoke path
```bash
python examples/quickstart.py --scene sim/scenes/basic_room.json --verbose
```

### 4. Inspect the benchmark catalog
```bash
python - <<'PY'
from ohip_bench.scenarios import make_core_catalog
catalog = make_core_catalog()
print([scenario.scenario_id for scenario in catalog])
PY
```

---

## Release Gate

A release should be checked against:

- `CHANGELOG.md`
- `RELEASE_CHECKLIST.md`

That checklist is there to stop the repo from becoming more polished than its evidence supports.

---

## License

This repository is released under the license terms in [`LICENSE`](LICENSE).

Do not rely on shorthand descriptions in old summaries. The authoritative licensing terms are the ones in the actual license file.

---

## Author

Bryce Lovell

---

## Final Positioning

The strongest way to understand IX-HapticSight is this:

It is not trying to prove that robots “understand people.”
It is trying to make human-facing approach, contact, retreat, and safe-hold behavior more bounded, testable, replayable, and auditable.

That is a narrower claim.
It is also the more credible one.
