TrustOps-Env: Content Moderation & Trust & Safety Environment


Core Concept & Problem Description

The TrustOps-Env is an open environment designed to simulate the content moderation pipelines used by major social platforms. The primary objective is to deploy an AI agent capable of detecting harmful content, enforcing policies, and escalating uncertain edge cases, while managing massive scale (millions of posts), legal risk, and false positives.

The Core Problem: While the theoretical design of TrustOps-Env was robust, a major technical roadblock existed: the application was failing to render properly when deployed to HuggingFace Spaces, resulting in a completely blank UI.

Because of its high complexity and ethical/legal risks, the environment was positioned primarily as an observable research tool, so a blank UI meant observers couldn't see the tasks running at all. The system was incorrectly defaulting to a Docker runtime (indicated by a ?docker=true URL parameter) instead of the intended Python environment.

| Challenge | Description | Impact on Researchers / Users |
|---|---|---|
| Blank UI / Rendering Failure | The system defaulted to a hidden Docker runtime instead of Python. | Application failed to load on HuggingFace Spaces. |
| Blocking Execution Code | Hardcoded time.sleep() UI hacks prevented real-time logs from showing. | Observers couldn't see the agent's reasoning steps. |
| Security Risks & Hardcoded Paths | Hardcoded HF API tokens and local absolute paths were pushed to the repo. | Security blocks by GitHub and portability issues. |

Current Situation vs. Desired Outcome

| Current Situation (Broken State) | Desired Outcome (Fixed State) |
|---|---|
| Blank UI rendering on HuggingFace | Clean, visually observable Gradio UI rendering |
| Hidden Dockerfile forcing wrong env | Correct Python environment setup (sdk: gradio) |
| Hardcoded API Tokens triggering blocks | Secure environment variables for API keys |
| Script blocked without showing logs | Real-time logs ([START], [STEP], [END]) displayed |

```mermaid
flowchart TD
A[TrustOps-Env Environment] --> B[AI Developers]
A --> C[Trust & Safety Researchers]
A --> D[Policy Analysts]
```

| Category | Details |
|---|---|
| Geographical Focus | Global / AI Policy Sector |
| Type of Task | Simulated moderation challenges of varying difficulty:<br>• EASY (spam vs. safe)<br>• MEDIUM (platform policy enforcement)<br>• HARD (nuanced, context-dependent content) |
| Data Tracked & Generated | Agent actions (approve, remove, flag, escalate), step count, and reasoning sequences. |
| Current Market Gap | Lack of observable, step-by-step reinforcement learning environments simulating real-world, high-scale trust and safety moderation. |
| Opportunity Identified | A secure, portable research environment capable of real-time logging and reasoning tracking. |

Existing Gap vs TrustOps-Env Improvement

```mermaid
xychart-beta
title "Without System Fixes (Current State)"
x-axis ["UI Observability","Agent Tracking","Security Score","Portability"]
y-axis "Capability Score (%)" 0 --> 100
bar [0, 10, 15, 20]
```
```mermaid
xychart-beta
title "With Proposed Solutions"
x-axis ["UI Observability","Agent Tracking","Security Score","Portability"]
y-axis "Capability Score (%)" 0 --> 100
bar [95, 100, 90, 85]
```

Root Cause Analysis

| Problem Area | Root Cause | Impact on Operations |
|---|---|---|
| Incorrect Runtime | A hidden Dockerfile dictated incorrect environment deployment. | The UI failed to render, breaking HuggingFace deployment entirely. |
| Missing Log Renders | Blocking time.sleep() calls halted the event loop. | Real-time execution outputs were not rendered to the Gradio UI. |
| Environment Security | HF API tokens were hardcoded inside the Python file. | Application triggered GitHub security flags and was unusable. |

Solution Strategy

  1. Runtime Correction: Located and permanently deleted the hidden Dockerfile, forcing HuggingFace into its Python runtime. Updated README metadata to sdk: gradio.
  2. Agent Tracking Enablement: Implemented a wrapper function that captures backend print logs and displays real-time execution steps.
  3. UI Visibility: Removed the blocking time.sleep() hack, allowing the Gradio UI to stream log markers ([START], [STEP], [END]) in real time (see the sketch after this list).
  4. Security & Portability Setup: Removed all hardcoded credentials and replaced them with secure .env variables and dynamic relative paths.
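
Below is a minimal sketch, assuming Gradio is the only dependency, of how steps 2 and 3 fit together: the episode emits its own [START]/[STEP]/[END] lines and a generator-based handler streams them to the UI as they arrive, so nothing blocks the event loop. The function names, sample actions, and UI layout are illustrative, not the project's actual identifiers.

```python
import gradio as gr

def run_moderation_episode():
    """Illustrative stand-in for the real episode loop (names are assumptions)."""
    yield "[START] episode initialised"
    for step, action in enumerate(["flag", "remove", "approve"]):
        yield f"[STEP] step {step}: agent chose '{action}'"
    yield "[END] episode complete"

def stream_logs():
    # Generator handler: every yield re-renders the textbox, so observers
    # see the agent's reasoning steps in real time with no time.sleep() hack.
    lines = []
    for line in run_moderation_episode():
        lines.append(line)
        yield "\n".join(lines)

with gr.Blocks() as demo:
    log_box = gr.Textbox(label="Agent execution log", lines=10)
    gr.Button("Run episode").click(fn=stream_logs, outputs=log_box)

if __name__ == "__main__":
    demo.launch()
```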

System Workflow

```mermaid
flowchart TD
    A([Start: Initialize TrustOps-Env]) --> B[/Load Secure Env Variables .env /]
    B --> C[Boot Python Runtime via HuggingFace Spaces]
    C --> D[Initialize Gradio Event Loop & UI Components]
    
    D --> E{API Health Check}
    E -->|Failed| F[Raise Startup Error / Fallback]
    E -->|Passed| G[Fetch Batch Content Queue]
    
    G --> H{Evaluate Content Complexity Matrix}
    
    H -->|EASY| I[Run Spam Filters & NLP Heuristics]
    H -->|MEDIUM| J[Load Platform Policy Checkpoints]
    H -->|HARD| K[Activate Context-Aware Deep LLM Analysis]
    
    I --> L[Agent Cognitive Processing - START Logging]
    J --> L
    K --> L
    
    L --> M[Compute Safety & Confidence Embeddings]
    M --> N{Agent Autonomous Decision Engine}
    
    N -->|Confidence > 90% & Safe| O[Approve Content ✅]
    N -->|Confidence > 90% & Violation| P[Remove Content 🚫]
    N -->|Confidence Between 50-89%| Q[Flag for Secondary Verification ⚠️]
    N -->|Confidence < 50% / Edge Case| R[Escalate to Human Trust & Safety Team ⬆️]
    
    O --> S[Commit Record to Moderation DB & Assign +0.3/-0.2 Penalty Matrix]
    P --> S
    Q --> S
    R --> S
    
    S --> T[Trigger Wrapper Output]
    T --> U[Stream STEP Reasoning Logic to Gradio UI]
    U --> V([End: Final END Status & Loop Reset])
```

Architecture Flow

```mermaid
flowchart LR
    A[HuggingFace Spaces] --> B[Gradio Real-time UI]
    B --> C[Agent Tracker / Wrapper Function]
    C --> D[Core Moderation Logic]
    D --> E[Environment State & Difficulty Matrix]
    E --> F[Execution Action Log]
```

ER Diagram

```mermaid
erDiagram

ENVIRONMENT {
    string EnvironmentID PK
    string Type "OpenEnv Simulation"
    string Runtime "Python"
    string Framework "Gradio"
    string APIHealthStatus
    string AllowedOrigins
    boolean IsActive
}

AGENT {
    string AgentID PK
    string ModelName
    string ReasoningLog
    int StepCount
    float CurrentReward
    float CognitiveEmbeddingScore
}

MODERATION_TASK {
    string TaskID PK
    string BatchQueueID
    string Difficulty "EASY, MEDIUM, HARD"
    string ComplexityMatrixScore
    string ExpectedAction
    string Type
}

CONTENT {
    string ContentID PK
    string Text
    string SourcePlatform
    boolean HasNuance
    string Language
    string NLPHeuristicFlag
}

ACTION_LOG {
    string LogID PK
    string Stage "START, STEP, END"
    string Action "Approve, Remove, Flag, Escalate"
    float ConfidencePercentage
    datetime Timestamp
    float PenaltyIncurred
}

HUMAN_ESCALATION {
    string TicketID PK
    string TaskID FK
    string AgentID FK
    string TrustSafetyTeamStatus
}

ENVIRONMENT ||--o{ AGENT : "hosts"
ENVIRONMENT ||--o{ MODERATION_TASK : "assigns"
AGENT ||--o{ MODERATION_TASK : "performs"
MODERATION_TASK ||--|| CONTENT : "contains"
AGENT ||--o{ ACTION_LOG : "records"
ACTION_LOG ||--o{ HUMAN_ESCALATION : "triggers"
```

End-to-End Workflow

  1. Initialization: The environment boots by reading secrets strictly from environment variables and resolving all paths relative to the repository, establishing a portable baseline (see the sketch after this list).
  2. Setup: HuggingFace parses the configuration, correctly booting the Python runtime and launching the clean Gradio interface.
  3. Task Delegation: A content moderation task is pulled from the system. It is immediately classified by an internal matrix ranging from EASY to HARD difficulty.
  4. Execution Log (START): Real-time monitoring captures the [START] phase of the task.
  5. Agent Action (STEP): The agent decides to approve, remove, flag, or escalate the content. Each distinct step of its reasoning is forwarded to the UI through the wrapper function as a [STEP] log line, without halting the UI.
  6. Completion (END): Actions are appended to the main moderation log and the task counts are resolved, resulting in the final [END] state.
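
A minimal sketch of the initialization in step 1, assuming python-dotenv is available: secrets are read only from environment variables and paths are resolved relative to the repository, so nothing machine-specific or secret is baked into the code. The HF_TOKEN name matches the deployment phases described later; the data path is purely illustrative.

```python
import os
from pathlib import Path

from dotenv import load_dotenv  # python-dotenv, assumed to be listed in requirements

# Load secrets from a local .env file (never committed to the repository).
load_dotenv()

HF_TOKEN = os.getenv("HF_TOKEN")
if HF_TOKEN is None:
    raise RuntimeError("HF_TOKEN is not set; add it to .env or the Space secrets.")

# Resolve paths relative to this file so the repo is cloneable on any machine.
BASE_DIR = Path(__file__).resolve().parent
TASKS_PATH = BASE_DIR / "data" / "tasks.json"  # illustrative relative path
```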

Hackathon / Deployment Deliverables Summary

  • Eliminated the Docker runtime blocker and established a stable Python/Gradio instance on HuggingFace Spaces.
  • Made the environment observable and secure by overhauling its logging system to render reasoning states in real time.
  • Stabilized and open-sourced an environment capable of testing agents against demanding socio-legal moderation challenges.

Impact

  • Solves the Core Problem: Transforms a completely broken, blind script into a seamlessly executing, fully observable application.
  • Enables Research: Researchers can watch the agent's step-by-step reasoning and track its actions directly in the browser instead of a local terminal.
  • Ensures Portability: With hardcoded absolute paths and secrets purged, the TrustOps-Env project is cloneable and safe to run across different deployment targets.


🔬 Advanced System Design — Previously Undocumented Areas

The sections below document the deeper architectural systems, evaluation mechanics, and risk management strategies that form the complete TrustOps-Env design. These were discussed during project development but had not yet been added to the core concept document.


Reward System & Agent Evaluation Mechanics

The TrustOps-Env uses a structured reward-penalty system to evaluate agent quality. This is what transforms the environment from a simple moderation tool into an actual reinforcement learning research platform.

| Reward / Penalty | Score | Trigger Condition |
|---|---|---|
| Correct Classification | +0.5 | Agent correctly identifies the content type (harmful, safe, borderline). |
| Correct Action | +0.3 | Agent selects the right operational action for the given classification. |
| Reasoning Quality | +0.2 | Agent provides a logically sound, step-by-step reasoning chain. |
| False Negative Penalty | -0.2 | Agent allows harmful content to pass through (most dangerous failure). |
| False Positive Penalty | -0.1 | Agent incorrectly flags or removes safe content (user trust erosion). |

Maximum score per task: +1.0 (correct classification + correct action + high reasoning quality).
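
A small sketch of how the matrix above could be applied per task, mirroring the flowchart below (the action and reasoning bonuses are only reachable after a correct classification); the function and argument names are assumptions.

```python
def score_task(classification_correct: bool,
               action_correct: bool,
               reasoning_quality_high: bool,
               harmful_allowed: bool = False,
               safe_blocked: bool = False) -> float:
    """Apply the reward-penalty matrix to a single moderation task."""
    score = 0.0
    if classification_correct:
        score += 0.5                      # correct classification
        if action_correct:
            score += 0.3                  # correct operational action
            if reasoning_quality_high:
                score += 0.2              # sound, step-by-step reasoning chain
    elif harmful_allowed:
        score -= 0.2                      # false negative: harmful content slipped through
    elif safe_blocked:
        score -= 0.1                      # false positive: safe content removed
    return score

print(f"{score_task(True, True, True):+.1f}")  # maximum score per task: +1.0
```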

```mermaid
flowchart TD
    A([Agent Receives Task]) --> B{Classify Content}
    B -->|Correct| C["+0.5 Reward"]
    B -->|Incorrect - Harmful Allowed| D["-0.2 False Negative"]
    B -->|Incorrect - Safe Blocked| E["-0.1 False Positive"]

    C --> F{Select Action}
    F -->|Correct Action| G["+0.3 Reward"]
    F -->|Wrong Action| H["0.0 No Reward"]

    G --> I{Evaluate Reasoning Chain}
    I -->|High Quality| J["+0.2 Reasoning Bonus"]
    I -->|Low Quality| K["0.0 No Bonus"]

    J --> L([Total Score Computed])
    K --> L
    H --> L
    D --> L
    E --> L
```

Strategic Insight: The asymmetric penalty design (false negatives at -0.2 penalized more heavily than false positives at -0.1) reflects real-world platform priorities: allowing genuinely harmful content through is more damaging than over-moderating. This incentivizes agents to err on the side of caution for ambiguous content, mirroring how leading social platforms calibrate their moderation policies.


Task Difficulty Matrix — EASY / MEDIUM / HARD

Each task in the environment is assigned a difficulty level that determines the complexity of the moderation challenge and the grading criteria applied.

| Difficulty | Content Type | Example Scenario | Agent Challenge |
|---|---|---|---|
| EASY | Clear spam vs. clearly safe content | "BUY CHEAP FOLLOWERS!!!" vs. "Had a great lunch today" | Binary classification; minimal reasoning needed. |
| MEDIUM | Borderline abusive / policy violations | Heated argument using aggressive tone but no slurs; satire that could be misread as threats. | Requires policy knowledge and contextual interpretation. |
| HARD | Nuanced, context-dependent content | Cultural expressions that appear threatening to outsiders; coded language; whistleblower content. | Demands deep contextual reasoning and cultural sensitivity. |

```mermaid
flowchart LR
    subgraph EASY ["🟢 EASY Tasks"]
        E1[Spam Detection]
        E2[Safe Content Clearance]
    end

    subgraph MEDIUM ["🟡 MEDIUM Tasks"]
        M1[Borderline Abusive Content]
        M2[Satire vs Threat Analysis]
        M3[Policy Boundary Testing]
    end

    subgraph HARD ["🔴 HARD Tasks"]
        H1[Cultural Context Nuances]
        H2[Coded Language Detection]
        H3[Whistleblower vs Disinformation]
        H4[Context-Dependent Toxicity]
    end

    EASY --> MEDIUM --> HARD
```

Simplified Policy Assumption: The current simulation operates under a simplified policy framework. This design choice isolates the agent's fundamental enforcement mechanics and reasoning quality from the overwhelming legal/ethical complexity of a full-fledged platform policy. It allows focused research on the core moderation challenge without operational paralysis.


Grading & Evaluation Architecture

The TrustOps-Env employs multiple evaluation layers to assess agent performance beyond simple accuracy metrics.

| Evaluation Layer | Method | Purpose |
|---|---|---|
| Classification Accuracy | Direct label matching | Did the agent correctly identify content as harmful/safe/borderline? |
| Action Correctness | Action-to-policy mapping | Did the agent take the operationally correct action? |
| Reasoning Quality | Embedding similarity scoring | How well does the agent's reasoning chain align with expert reasoning? |
| Toxicity Baseline | HuggingFace toxicity models | Pretrained classifier used as a baseline reference for comparison. |
| Zero-Shot Classification | HuggingFace zero-shot classifiers | Evaluates agent's ability to generalize to unseen content categories. |
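
The two Hugging Face baselines in the table could be wired up with standard transformers pipelines, roughly as sketched below; the specific checkpoints and the sample text are assumptions rather than the grader's fixed choices.

```python
from transformers import pipeline

# Reference baselines the agent is compared against (model choices are illustrative).
toxicity_baseline = pipeline("text-classification", model="unitary/toxic-bert")
zero_shot_baseline = pipeline("zero-shot-classification")  # defaults to an NLI model

sample = "BUY CHEAP FOLLOWERS!!! Click this link now!!!"

print(toxicity_baseline(sample))
print(zero_shot_baseline(sample, candidate_labels=["spam", "harassment", "safe"]))
```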

```mermaid
flowchart TD
    A[Agent Output] --> B[Classification Check]
    A --> C[Action Validation]
    A --> D[Reasoning Chain Extraction]

    B --> E{Correct Label?}
    E -->|Yes| F["+0.5"]
    E -->|No| G["Penalty Applied"]

    C --> H{Policy-Aligned Action?}
    H -->|Yes| I["+0.3"]
    H -->|No| J["0.0"]

    D --> K[Compute Embedding Similarity]
    K --> L{Similarity Score > Threshold?}
    L -->|High| M["+0.2 Reasoning Bonus"]
    L -->|Low| N["0.0"]

    subgraph Baselines ["Reference Baselines"]
        O[HF Toxicity Model]
        P[Zero-Shot Classifier]
        Q[Pretrained Classifier]
    end

    B -.-> O
    B -.-> P
    B -.-> Q
```

Embedding Similarity for Reasoning Quality: The system does not simply check if the agent got the right answer — it evaluates how the agent reasoned. By comparing the agent's reasoning chain against expert-level reasoning embeddings, the grader rewards agents that demonstrate genuine understanding, not just pattern-matched guesswork. This is what makes TrustOps-Env a reinforcement learning research environment, not just a test bench.
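
One plausible realization of this check, sketched with sentence-transformers; the embedding model and the 0.7 threshold are assumptions, not the environment's fixed configuration.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative reasoning-quality grader: compare the agent's reasoning chain
# against an expert reference using embedding cosine similarity.
_model = SentenceTransformer("all-MiniLM-L6-v2")

def reasoning_bonus(agent_reasoning: str, expert_reasoning: str,
                    threshold: float = 0.7) -> float:
    embeddings = _model.encode([agent_reasoning, expert_reasoning], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return 0.2 if similarity >= threshold else 0.0
```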


Agent Observation & State Tracking

The environment maintains a structured observation space that the agent interacts with at every step of the simulation.

```mermaid
erDiagram
OBSERVATION_STATE {
    string moderation_log "Full history of agent actions and reasoning"
    int step_count "Number of steps agent has taken in current episode"
    string current_content "The content item currently being evaluated"
    string task_difficulty "EASY / MEDIUM / HARD"
    float cumulative_reward "Running total of agent's reward score"
}

AGENT_ACTION {
    string action_type "approve / remove / flag / escalate"
    string reasoning_chain "Step-by-step justification for the decision"
    float confidence_score "Agent's self-reported confidence"
}

OBSERVATION_STATE ||--o{ AGENT_ACTION : "triggers"
```

| Observation Field | Type | Description |
|---|---|---|
| moderation_log | string | Running log of all actions taken, reasoning chains, and outcomes. |
| step_count | int | Tracks how many moderation steps the agent has completed in the episode. |
| current_content | string | The text/media currently being presented for moderation. |
| task_difficulty | string | The assigned difficulty level: EASY, MEDIUM, or HARD. |
| cumulative_reward | float | The agent's running reward total, updated after every action. |
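
In code, the observation space and action format could be represented roughly as below; the class names are assumptions, while the field names follow the table above.

```python
from dataclasses import dataclass

@dataclass
class ObservationState:
    """What the environment exposes to the agent at each step."""
    moderation_log: str = ""
    step_count: int = 0
    current_content: str = ""
    task_difficulty: str = "EASY"      # EASY / MEDIUM / HARD
    cumulative_reward: float = 0.0

@dataclass
class AgentAction:
    """What the agent submits back to the environment."""
    action_type: str                   # approve / remove / flag / escalate
    reasoning_chain: str               # step-by-step justification
    confidence_score: float            # agent's self-reported confidence
```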

Edge Case Escalation Pipeline

Escalation is not a failure mode — it is a strategic safety mechanism. When the agent encounters content that is too ambiguous for a confident decision, escalation prevents catastrophic errors (false negatives or false positives) and routes the content for human review.

```mermaid
flowchart TD
    %% Premium Edge-Case Color Variables %%
    classDef default fill:#1E293B,stroke:#94A3B8,stroke-width:2px,color:#F8FAFC;
    classDef safe fill:#065F46,stroke:#10B981,stroke-width:2px,color:#fff;
    classDef danger fill:#7F1D1D,stroke:#EF4444,stroke-width:2px,color:#fff;
    classDef warning fill:#B45309,stroke:#F59E0B,stroke-width:2px,color:#fff;
    classDef routing fill:#4338CA,stroke:#818CF8,stroke-width:2px,color:#fff;
    classDef human fill:#4C1D95,stroke:#A78BFA,stroke-width:2px,color:#fff;

    A([Content Received]) --> B{Agent Confidence Check}
    
    B -->|"High Confidence"| C{Classification Decision}
    C -->|Safe| D[Approve ✅]
    C -->|Harmful| E[Remove 🚫]
    
    B -->|"Low Confidence / Ambiguous"| F{Edge Case Detected}
    F --> G[Flag for Review 🏳️]
    F --> H[Escalate to Human Moderator ⬆️]
    
    G --> I[Content enters Review Queue]
    H --> I
    
    I --> J{Human Moderator Decision}
    J -->|Safe| K[Release Content]
    J -->|Harmful| L[Remove & Log Pattern]
    J -->|Needs Policy Update| M[Route to Policy Team]
    
    D --> N([Log Action & Update State])
    E --> N
    K --> N
    L --> N
    
    %% Apply Styled Classes %%
    class D,K safe;
    class E,L danger;
    class F,G,H warning;
    class I,J,M human;
    class A,N default;
```

When does escalation trigger?

  • HARD tasks with nuanced, context-dependent content where cultural sensitivity is required.
  • MEDIUM tasks where the content sits exactly on a policy boundary (satire vs. genuine threat).
  • Any task where the agent's confidence score falls below the decision threshold.
  • Content involving coded language, whistleblower material, or legally sensitive information.

Reward-Driven Escalation: Because false negatives (-0.2) are penalized more heavily than the absence of a positive reward, the agent is mathematically incentivized to escalate uncertain cases rather than guess. This ensures the system errs toward safety, which mirrors how production-grade moderation pipelines operate at major platforms.
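
A compact sketch of the confidence routing implied by the system workflow above (approve or remove above 90%, flag between 50% and 89%, escalate below 50%); the function name and the boolean flag are assumptions.

```python
def route_decision(confidence: float, is_violation: bool) -> str:
    """Map the agent's confidence to a moderation action per the workflow thresholds."""
    if confidence > 0.90:
        return "remove" if is_violation else "approve"
    if confidence >= 0.50:
        return "flag"       # secondary verification queue
    return "escalate"       # edge case: hand off to the human Trust & Safety team
```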


Legal & Bias Risk Management

The TrustOps-Env operates in one of the most ethically charged domains in AI. The following risks are explicitly tracked and managed:

| Risk Category | Severity | Description | Mitigation Strategy |
|---|---|---|---|
| False Negative (Harmful slip) | 🔴 High | Dangerous content passes through moderation undetected. | Highest penalty (-0.2); escalation mechanism; toxicity baselines as safety nets. |
| False Positive (Over-censor) | 🟡 Medium | Safe content wrongly removed, eroding user trust and potentially causing legal disputes. | Penalty (-0.1); reasoning quality evaluation prevents blind over-moderation. |
| Dataset Bias | 🔴 High | Training data carries demographic, cultural, or linguistic biases that distort agent behavior. | Zero-shot classifiers for generalization; embedding-based evaluation to detect bias patterns. |
| Legal / Regulatory Exposure | 🔴 High | Incorrect moderation decisions leading to liability under various national/international laws. | Simplified policy assumption isolates legal complexity; escalation routes to human reviewers. |
| Ethical Implications | 🟡 Medium | Automated decisions on speech/expression carry inherent ethical weight. | Full reasoning chain transparency; moderation logs available for audit and accountability. |

```mermaid
quadrantChart
    title Risk Assessment Matrix
    x-axis "Low Impact" --> "High Impact"
    y-axis "Low Probability" --> "High Probability"
    quadrant-1 "Critical: Immediate Action"
    quadrant-2 "Monitor: Proactive Defense"
    quadrant-3 "Low Priority"
    quadrant-4 "Watch: Potential Escalation"
    "False Negatives": [0.85, 0.75]
    "Dataset Bias": [0.80, 0.65]
    "Legal Exposure": [0.90, 0.50]
    "False Positives": [0.55, 0.70]
    "Ethical Weight": [0.60, 0.45]
```

Research Use Case Positioning

Because of the high complexity, ethical sensitivity, and difficulty of early monetization, TrustOps-Env is positioned primarily as a strong research use case, not as an immediate commercial SaaS tool.

| Dimension | Assessment |
|---|---|
| Complexity | High — involves nuanced NLP, reinforcement learning, policy simulation, and real-time observability. |
| Monetization Potential | Harder to monetize early — ethical/legal constraints limit commercial deployment without extensive R&D. |
| Research Value | Very High — provides the only observable, step-by-step RL environment for social platform trust & safety. |
| Target Users | AI researchers, Trust & Safety teams, policy analysts, academic institutions. |
| Competitive Advantage | No existing tool provides real-time observable moderation simulation with structured reward mechanics. |

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'pie1': '#1E3A8A', 'pie2': '#4C1D95', 'pie3': '#064E3B', 'pie4': '#78350F', 'pieTitleTextSize': '28px', 'pieTitleTextWeight': 'bold', 'pieLegendTextSize': '20px', 'pieSectionTextSize': '20px', 'textColor': '#FFFFFF'}}}%%
pie title TrustOps-Env Value Distribution
    "Research & Academic Use" : 45
    "Trust & Safety Team Training" : 25
    "Policy Analysis & Testing" : 20
    "Commercial / SaaS Potential" : 10
```

Extended ER Diagram — Full System Model

```mermaid
erDiagram

ENVIRONMENT {
    string EnvironmentID PK
    string Type "OpenEnv Simulation"
    string Runtime "Python"
    string Framework "Gradio"
    string PolicyMode "Simplified"
    string SecurityTokens
    boolean IsDockerHalted
    int APIHealthPingMs
}

AGENT {
    string AgentID PK
    string ModelVersion
    string ReasoningLog
    int StepCount
    float CumulativeReward
    float ConfidenceScore
    boolean NeedsHumanReview
    float SafetyEmbeddingThreshold
}

MODERATION_TASK {
    string TaskID PK
    string BatchID FK
    string Difficulty "EASY, MEDIUM, HARD"
    string Type
    string PolicyContext
    datetime AssignedAt
}

CONTENT {
    string ContentID PK
    string Text
    string MediaURL
    boolean HasNuance
    string CulturalContext
    string LanguageType "direct, coded, satirical"
    float SpamProbability
}

ACTION_LOG {
    string LogID PK
    string TaskID FK
    string Stage "START, STEP, END"
    string Action "Approve, Remove, Flag, Escalate"
    datetime Timestamp
    string ReasoningChain
    float ExecutionTimeMs
    boolean StepStreamedToUI
}

REWARD_RECORD {
    string RewardID PK
    string GraderID FK
    float ClassificationScore "0.0 or 0.5"
    float ActionScore "0.0 or 0.3"
    float ReasoningScore "0.0 to 0.2"
    float PenaltyApplied "0.0, -0.1, or -0.2"
    float TotalScore
    string MatrixJustification
}

ESCALATION_QUEUE {
    string EscalationID PK
    string Reason "Low confidence, edge case, policy gap"
    string Status "Pending, Reviewed, Resolved"
    string ReviewerDecision "Release, Remove, PolicyUpdate"
    datetime EscalatedAt
    string HumanReviewerID
}

GRADER {
    string GraderID PK
    string ToxicityModelRef "HF Toxicity Model"
    string ZeroShotRef "HF Zero-Shot Classifier"
    float EmbeddingSimilarityThreshold
    boolean Active
}

UI_DASHBOARD {
    string SessionID PK
    string ConnectedAgent FK
    int RenderPings
    boolean ActiveWrapperStream
    string ErrorFallbackState
}

ENVIRONMENT ||--o{ AGENT : "hosts"
ENVIRONMENT ||--o{ MODERATION_TASK : "assigns"
ENVIRONMENT ||--|| GRADER : "uses"
ENVIRONMENT ||--|| UI_DASHBOARD : "renders to"
AGENT ||--o{ MODERATION_TASK : "performs"
MODERATION_TASK ||--|| CONTENT : "contains"
AGENT ||--o{ ACTION_LOG : "records"
AGENT ||--o{ REWARD_RECORD : "receives"
AGENT ||--o{ ESCALATION_QUEUE : "triggers"
GRADER ||--o{ REWARD_RECORD : "computes"
ACTION_LOG }|--|| MODERATION_TASK : "relates to"
```

Complete Production Flow — Phase Summary

```mermaid
flowchart LR
    subgraph Phase1 ["Phase 1: Environment Setup"]
        A1[Delete Hidden Dockerfile] --> A2[Set sdk: gradio in config]
        A2 --> A3[Configure Python Runtime]
    end

    subgraph Phase2 ["Phase 2: Security Hardening"]
        B1[Remove Hardcoded API Tokens] --> B2[Implement os.getenv for HF_TOKEN]
        B2 --> B3[Full Repo Secret Scan]
    end

    subgraph Phase3 ["Phase 3: Portability"]
        C1[Replace Absolute Paths] --> C2[Implement Dynamic Relative Paths]
        C2 --> C3[Cross-Machine Validation]
    end

    subgraph Phase4 ["Phase 4: UI & Observability"]
        D1[Remove time.sleep Hack] --> D2[Build Gradio Wrapper Function]
        D2 --> D3["Render START/STEP/END Logs"]
    end

    subgraph Phase5 ["Phase 5: Deployment"]
        E1[Push to GitHub] --> E2[Deploy to HuggingFace Spaces]
        E2 --> E3[Stable Live Environment ✅]
    end

    Phase1 --> Phase2 --> Phase3 --> Phase4 --> Phase5
```