

AutoBotSolutions edited this page May 6, 2026 · 1 revision

---
title: "Aurora AI Framework - Complete API Reference | 132 Endpoints Documentation"
description: "Complete API reference for Aurora AI Framework v1.0.0 with 132 professional endpoints, enhanced monitoring APIs, data validation APIs, and performance optimization features."
keywords: "Aurora AI API, API documentation, REST API, 132 endpoints, monitoring API, data validation API, performance optimization, enterprise AI, machine learning API"
author: "Aurora Development Team"
robots: "index, follow"
canonical: "https://aurora-ai.github.io/docs/API_REFERENCE.md"
---

Aurora AI Framework - Complete API Reference

🌟 Overview

Aurora AI provides a comprehensive set of REST API endpoints across its integrated systems, with enhanced monitoring, intelligent data validation, and optimized performance capabilities. This reference covers all endpoints, including the new enhanced features.

🚀 Current API Server Status

  • Base URL: http://localhost:8081
  • Server Status: Active and Responding
  • Debug Mode: Enabled
  • Health Check: /api/health - Status: 200 OK
  • Interface: Aurora AI Sci-Fi Interface
  • Last Updated: 2026-05-06

📚 Related Documentation: For complete system architecture, see our Architecture Guide. For implementation guidance, check our Integration Guide.

🚀 Quick Start: New to Aurora AI? Start with our Installation Guide and User Guide.

🔧 Developers: Explore our Testing Guide and Troubleshooting Guide for comprehensive development support.

🆕 Enhanced API Features

📊 Advanced Monitoring APIs

🔧 Intelligent Data Validation APIs

🚀 Performance Optimization APIs

🔒 Security & Compliance APIs

📋 API Categories

🏗️ Core Systems (8 Endpoints)

  • /api/status - System health and status
  • /api/health - Health check endpoint
  • /api/training/status - Training pipeline status
  • /api/models - Model repository overview
  • /api/data/validate - Data validation (POST)
  • /api/security/status - Security system status
  • /api/security/encrypt - Data encryption (POST)
  • /api/feedback/status - Feedback system status

📊 Enhanced Data Management (8 Endpoints)

  • /api/data/inventory - Data inventory and metadata
  • /api/data/cleanup - Data cleanup operations (POST)
  • /api/data/backup - Data backup operations (POST)
  • /api/data/metrics - Data analytics and metrics
  • /api/data/validate - ENHANCED Advanced data validation (POST)
  • /api/data/repair - NEW Auto-repair functionality (POST)
  • /api/data/quality - NEW Data quality reporting (GET)
  • /api/data/profile - NEW Comprehensive data profiling (GET)

🔒 Security (2 Endpoints)

  • /api/security/status - Security system status
  • /api/security/encrypt - Data encryption and decryption (POST)

📈 Enhanced Monitoring (8 Endpoints)

  • /api/monitoring/advanced - Advanced monitoring dashboard
  • /api/monitoring/alerts - System alerts and notifications
  • /api/monitoring/performance - Performance metrics and analytics
  • /api/monitoring/metrics - Real-time system metrics
  • /api/monitoring/system - NEW Comprehensive system metrics
  • /api/monitoring/optimize - NEW Resource optimization (POST)
  • /api/monitoring/quality - NEW Data quality monitoring
  • /api/monitoring/health - NEW Enhanced health monitoring

📋 Reports (2 Endpoints)

  • /api/reports/generate - Generate comprehensive reports (POST)
  • /api/reports/list - List available reports

⚙️ Configuration (4 Endpoints)

  • /api/config/current - Current configuration status
  • /api/config/validate - Configuration validation (POST)
  • /api/config/merge - Configuration merging (POST)
  • /api/config/secrets - Secrets management (POST)

🧪 Testing (2 Endpoints)

  • /api/tests/history - Test execution history
  • /api/tests/coverage - Test coverage analysis

📚 Documentation (3 Endpoints)

  • /api/docs/api - API documentation
  • /api/docs/examples - Usage examples
  • /api/docs/architecture - System architecture documentation

🔄 Workflows (2 Endpoints)

  • /api/workflows/create - Create new workflow (POST)
  • /api/workflows/list - List available workflows

💡 Examples (3 Endpoints)

  • /api/examples/quick-test - Quick system test (POST)
  • /api/examples/sample-workflow - Sample workflow execution (POST)
  • /api/examples/tutorials - Tutorial documentation

📝 Logging (4 Endpoints)

  • /api/logs/system - System logs
  • /api/logs/audit - Audit trail logs
  • /api/logs/errors - Error logs
  • /api/logs/summary - Log summary and analytics

🏛️ Core Components (3 Endpoints)

  • /api/core/components - Core component registry
  • /api/core/registry - Component registration and discovery
  • /api/core/utilities - Core utility functions

🤖 Model Repository (4 Endpoints)

  • /api/models/repository - Model repository overview
  • /api/models/version - Model versioning (POST)
  • /api/models/compare - Model comparison (POST)
  • /api/models/deploy - Model deployment (POST)

🔄 Data Pipeline (4 Endpoints)

  • /api/pipeline/status - Pipeline status and health
  • /api/pipeline/execute - Execute pipeline (POST)
  • /api/pipeline/configure - Pipeline configuration (POST)
  • /api/pipeline/metrics - Pipeline performance metrics

🧠 Inference Service (4 Endpoints)

  • /api/inference/status - Inference service status
  • /api/inference/batch - Batch inference (POST)
  • /api/inference/performance - Inference performance analytics
  • /api/inference/scale - Service scaling (POST)

🎭 System Orchestration (4 Endpoints)

  • /api/orchestration/status - Orchestration system status
  • /api/orchestration/execute - Execute orchestration workflow (POST)
  • /api/orchestration/schedule - Schedule orchestration tasks (POST)
  • /api/orchestration/diagnostics - System diagnostics

🔧 Configuration Utilities (4 Endpoints)

  • /api/config/utilities - Configuration utilities overview
  • /api/config/validate - Advanced configuration validation (POST)
  • /api/config/merge - Configuration merging (POST)
  • /api/config/secrets - Secrets management (POST)

🎓 Enhanced Training (4 Endpoints)

  • /api/training/enhanced - Enhanced model training (POST)
  • /api/training/compare - Model algorithm comparison (POST)
  • /api/training/hyperopt - Hyperparameter optimization (POST)
  • /api/training/ensemble - Ensemble model creation (POST)

📊 Monitoring Analytics (3 Endpoints)

  • /api/monitoring/analytics - Advanced monitoring analytics
  • /api/monitoring/predict - Performance prediction (POST)
  • /api/monitoring/benchmark - Performance benchmarking (POST)

⚡ Performance Optimization (3 Endpoints)

  • /api/optimization/analyze - Performance analysis (POST)
  • /api/optimization/execute - Optimization execution (POST)
  • /api/optimization/monitor - Optimization monitoring

🖥️ Resource Management (3 Endpoints)

  • /api/resources/status - Resource status monitoring
  • /api/resources/allocate - Resource allocation (POST)
  • /api/resources/optimize - Resource optimization (POST)

🆕 Enhanced API Endpoints - Detailed Documentation

📊 Enhanced Monitoring APIs

/api/monitoring/system - Comprehensive System Metrics

Method: GET
Description: Returns 15+ system metrics in real time

Response Format:

{
  "timestamp": "2026-05-05T23:50:06.306795",
  "cpu_percent": 45.2,
  "cpu_count": 8,
  "cpu_freq_mhz": 2400.0,
  "memory_percent": 67.8,
  "memory_available_gb": 8.2,
  "memory_used_gb": 16.4,
  "disk_percent": 73.5,
  "disk_free_gb": 45.7,
  "disk_used_gb": 126.8,
  "network_bytes_sent_mb": 1024.5,
  "network_bytes_recv_mb": 2048.3,
  "process_memory_mb": 245.6,
  "process_cpu_percent": 12.3,
  "process_threads": 8
}
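
For readers consuming this payload, the short sketch below (field names taken from the sample response above; the helper name is illustrative) shows how totals can be derived from the paired used/available fields:

```python
def summarize_system_metrics(m):
    """Derive headline totals from a /api/monitoring/system payload."""
    return {
        "memory_total_gb": round(m["memory_used_gb"] + m["memory_available_gb"], 1),
        "disk_total_gb": round(m["disk_used_gb"] + m["disk_free_gb"], 1),
        "network_total_mb": round(m["network_bytes_sent_mb"] + m["network_bytes_recv_mb"], 1),
    }

# Values from the sample response above.
sample = {
    "memory_used_gb": 16.4, "memory_available_gb": 8.2,
    "disk_used_gb": 126.8, "disk_free_gb": 45.7,
    "network_bytes_sent_mb": 1024.5, "network_bytes_recv_mb": 2048.3,
}
print(summarize_system_metrics(sample))
```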

/api/monitoring/optimize - Resource Optimization

Method: POST
Description: Automatically optimizes system resources based on current usage

Request Body:

{
  "optimization_level": "moderate",
  "target_metrics": ["memory", "cpu"],
  "force_cleanup": false
}

Response Format:

{
  "timestamp": "2026-05-05T23:50:06.306795",
  "optimizations_applied": [
    {
      "type": "memory",
      "action": "garbage_collection",
      "description": "Trigger garbage collection to free memory"
    }
  ],
  "metrics_after": {
    "memory_percent": 58.2,
    "process_memory_mb": 198.4
  }
}

/api/monitoring/health - Enhanced Health Monitoring

Method: GET
Description: Provides comprehensive health status with recommendations

Response Format:

{
  "status": "healthy",
  "checks": {
    "cpu": "ok",
    "memory": "warning",
    "disk": "ok",
    "processes": "ok"
  },
  "alerts": [
    {
      "type": "memory",
      "severity": "warning",
      "message": "Memory usage at 78%",
      "recommendation": "Monitor memory usage closely"
    }
  ],
  "recommendations": ["Consider memory optimization in next cycle"]
}
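
A small helper (illustrative, not part of the framework) can group the alerts in this payload by severity for downstream handling:

```python
def alerts_by_severity(payload):
    """Group /api/monitoring/health alerts by their severity level."""
    grouped = {}
    for alert in payload.get("alerts", []):
        grouped.setdefault(alert["severity"], []).append(alert["message"])
    return grouped

# Trimmed sample payload from above.
health = {
    "status": "healthy",
    "alerts": [
        {"type": "memory", "severity": "warning",
         "message": "Memory usage at 78%",
         "recommendation": "Monitor memory usage closely"},
    ],
}
print(alerts_by_severity(health))
```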

🔧 Enhanced Data Validation APIs

/api/data/repair - Auto-Repair Functionality

Method: POST
Description: Automatically detects and repairs common data issues

Request Body:

{
  "data_source": "input.csv",
  "repair_options": {
    "handle_missing": "auto",
    "remove_duplicates": true,
    "cap_outliers": true,
    "drop_high_null_columns": true
  }
}

Response Format:

{
  "timestamp": "2026-05-05T23:50:06.306795",
  "original_shape": [1000, 15],
  "repaired_shape": [995, 14],
  "quality_score": 0.95,
  "repair_log": [
    "Removed 5 duplicate rows",
    "Dropped column 'high_null_col' (85% null values)",
    "Filled missing values in column 'feature_x'"
  ],
  "recommendations": ["Data quality is now excellent"]
}
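
The original_shape/repaired_shape pair makes it easy to quantify what the repair changed. A minimal sketch (the helper name is hypothetical):

```python
def repair_summary(resp):
    """Summarize rows/columns removed by /api/data/repair from its shape fields."""
    rows_before, cols_before = resp["original_shape"]
    rows_after, cols_after = resp["repaired_shape"]
    return {
        "rows_removed": rows_before - rows_after,
        "columns_removed": cols_before - cols_after,
        "quality_score": resp["quality_score"],
    }

# Shape fields from the sample response above.
resp = {"original_shape": [1000, 15], "repaired_shape": [995, 14], "quality_score": 0.95}
print(repair_summary(resp))
```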

/api/data/quality - Data Quality Reporting

Method: GET
Description: Generates a comprehensive data quality report

Response Format:

{
  "timestamp": "2026-05-05T23:50:06.306795",
  "dataset_info": {
    "shape": [1000, 15],
    "memory_usage_mb": 45.2,
    "column_count": 15,
    "row_count": 1000
  },
  "quality_metrics": {
    "completeness": 94.5,
    "uniqueness": 89.2,
    "consistency": 95.0,
    "validity": 92.8
  },
  "column_analysis": {
    "feature1": {
      "dtype": "float64",
      "null_percentage": 2.1,
      "unique_percentage": 78.5
    }
  },
  "recommendations": [
    "Consider data imputation strategies for missing values",
    "High duplicate ratio detected. Consider deduplication"
  ]
}
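
The four quality_metrics can be collapsed into a single headline score. The unweighted mean below is an illustration only; any server-side weighting is not documented here:

```python
def overall_quality(quality_metrics):
    """Unweighted mean of the quality_metrics percentages (illustrative)."""
    return round(sum(quality_metrics.values()) / len(quality_metrics), 1)

# quality_metrics from the sample response above.
metrics = {"completeness": 94.5, "uniqueness": 89.2, "consistency": 95.0, "validity": 92.8}
print(overall_quality(metrics))
```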

/api/data/profile - Comprehensive Data Profiling

Method: GET
Description: Provides detailed statistical profiling of a dataset

Response Format:

{
  "timestamp": "2026-05-05T23:50:06.306795",
  "profile": {
    "numeric_columns": 8,
    "categorical_columns": 4,
    "datetime_columns": 2,
    "text_columns": 1,
    "statistics": {
      "total_cells": 15000,
      "missing_cells": 315,
      "duplicate_rows": 12
    },
    "data_types": {
      "int64": 3,
      "float64": 5,
      "object": 5,
      "datetime64[ns]": 2
    }
  }
}
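
Ratios such as the missing-cell percentage follow directly from the statistics block. A sketch (helper name illustrative):

```python
def missing_pct(stats):
    """Missing cells as a percentage of total cells in the profile statistics."""
    return round(100 * stats["missing_cells"] / stats["total_cells"], 1)

# statistics block from the sample response above.
stats = {"total_cells": 15000, "missing_cells": 315, "duplicate_rows": 12}
print(missing_pct(stats))
```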

📚 Enhanced Python API

ModelMonitor Class - Enhanced Methods

_collect_system_metrics()

from modules.monitoring import ModelMonitor

monitor = ModelMonitor()
metrics = monitor._collect_system_metrics()
print(f"CPU: {metrics['cpu_percent']}%")
print(f"Memory: {metrics['memory_percent']}%")

optimize_resources()

optimization = monitor.optimize_resources()
for opt in optimization['optimizations_applied']:
    print(f"Applied: {opt['description']}")

enhanced_alerting()

# Alert thresholds are automatically monitored
# CPU >80% warning, >90% critical
# Memory >80% warning, >90% critical
# Disk >85% warning, >95% critical
# Process Memory >1GB warning
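
The documented thresholds can also be applied client-side. The sketch below mirrors the numbers above; it is not the framework's internal implementation:

```python
# Warning/critical thresholds as documented above (percent utilization).
THRESHOLDS = {
    "cpu_percent": (80, 90),
    "memory_percent": (80, 90),
    "disk_percent": (85, 95),
}

def classify(metric, value):
    """Return 'ok', 'warning', or 'critical' for one metric reading."""
    warn, crit = THRESHOLDS[metric]
    if value > crit:
        return "critical"
    if value > warn:
        return "warning"
    return "ok"

print(classify("cpu_percent", 85))
```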

DataValidator Class - Enhanced Methods

validate_and_repair_data()

validator = DataValidator()
clean_data, results = validator.validate_and_repair_data(raw_data)
print(f"Quality improved from {results['original_quality']} to {results['quality_score']}")

get_data_quality_report()

report = validator.get_data_quality_report(data)
print(f"Completeness: {report['quality_metrics']['completeness']}%")
for rec in report['recommendations']:
    print(f"Recommendation: {rec}")

NumpyJSONEncoder Class

json.dumps() with numpy support

import json

from modules.monitoring import NumpyJSONEncoder
import numpy as np

data_with_numpy = {
    'numpy_array': np.array([1, 2, 3]),
    'numpy_float': np.float64(3.14159),
    'regular_data': {'key': 'value'}
}

json_str = json.dumps(data_with_numpy, cls=NumpyJSONEncoder)
# No more Float64DType serialization errors!
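
For readers without the framework installed, an equivalent encoder can be sketched in a few lines. This is an illustration of the technique, not Aurora's own source:

```python
import json
import numpy as np

class NumpyJSONEncoder(json.JSONEncoder):
    """JSONEncoder that converts NumPy scalars and arrays to built-in types."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

payload = {"numpy_array": np.array([1, 2, 3]), "numpy_float": np.float64(3.14159)}
print(json.dumps(payload, cls=NumpyJSONEncoder))
```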

🚀 Performance Enhancements

Metrics Collection Improvements

  • Speed: Metrics collection interval reduced from 1.0s to 0.1s
  • Coverage: 15+ metrics vs previous basic monitoring
  • Accuracy: Process-level tracking included
  • Storage: Intelligent history management

Resource Optimization Features

  • Memory Cleanup: Automatic when >500MB usage
  • History Management: Reduces to 50 entries when needed
  • Garbage Collection: Triggered on high memory usage
  • CPU Optimization: Monitors frequency and load

Data Validation Enhancements

  • Auto-Repair: Handles missing values, duplicates, outliers
  • Quality Scoring: Comprehensive quality assessment
  • Smart Recommendations: Context-aware improvement suggestions
  • Statistical Analysis: Deep data profiling capabilities

🧪 Integration Testing (3 Endpoints)

  • /api/integration/test - Integration testing (POST)
  • /api/integration/validate - System validation (POST)
  • /api/integration/benchmark - Integration benchmarking (POST)

🔍 Data Validation (3 Endpoints)

  • /api/validation/schema - Schema validation (POST)
  • /api/validation/quality - Data quality assessment (POST)
  • /api/validation/statistical - Statistical validation (POST)

🚀 API Usage Examples

System Status Check

curl -X GET "http://localhost:8081/api/status"

Data Validation

curl -X POST "http://localhost:8081/api/data/validate" \
  -H "Content-Type: application/json" \
  -d '{"data": {"field1": "value1", "field2": "value2"}}'

Enhanced Model Training

curl -X POST "http://localhost:8081/api/training/enhanced" \
  -H "Content-Type: application/json" \
  -d '{"algorithm": "RandomForest", "optimization": true}'

Performance Optimization

curl -X POST "http://localhost:8081/api/optimization/analyze" \
  -H "Content-Type: application/json" \
  -d '{"scope": "full_system", "depth": "comprehensive"}'

Resource Management

curl -X POST "http://localhost:8081/api/resources/allocate" \
  -H "Content-Type: application/json" \
  -d '{"type": "application", "application": "Aurora AI Framework"}'

📋 Request/Response Formats

Standard Response Format

{
  "status": "SUCCESS|COMPLETED|FAILED",
  "message": "Human-readable message",
  "data": {
    // Response data specific to endpoint
  },
  "quantum_signature": "AURORA-SIGNATURE-TIMESTAMP"
}
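
Client code typically checks status before touching data. A minimal unwrapping helper (the name and error type are illustrative):

```python
def unwrap(response):
    """Return the data field of a standard Aurora response; raise on FAILED."""
    if response.get("status") == "FAILED":
        raise RuntimeError(response.get("message", "request failed"))
    return response.get("data", {})

print(unwrap({"status": "SUCCESS", "message": "ok", "data": {"model": "v2"}}))
```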

Error Response Format

{
  "error": "ERROR_CODE",
  "message": "Detailed error description",
  "details": {
    // Additional error details
  }
}

🔐 Authentication & Security

  • Authentication: All endpoints support JWT token authentication
  • Authorization: Role-based access control (RBAC)
  • Encryption: Quantum-grade encryption for sensitive data
  • Audit Trail: Complete audit logging for all operations

📊 Rate Limiting

  • Standard Endpoints: 1000 requests/minute
  • Heavy Operations: 100 requests/minute
  • Batch Operations: 50 requests/minute

🎯 Best Practices

  1. Error Handling: Always check response status codes
  2. Retry Logic: Implement exponential backoff for failed requests
  3. Pagination: Use pagination for large datasets
  4. Caching: Cache frequently accessed data
  5. Monitoring: Monitor API usage and performance
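
Practices 1 and 2 can be combined in a small retry helper. This is a sketch (function names are illustrative), not part of the Aurora client:

```python
import random
import time

def with_backoff(call, retries=5, base_delay=0.5):
    """Retry a callable with exponential backoff plus a little jitter."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage (HTTP call shown as a comment to keep the sketch self-contained):
# result = with_backoff(lambda: requests.get("http://localhost:8081/api/status").json())
```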

📞 Support

For API support and troubleshooting, refer to the Troubleshooting Guide.


Aurora AI API Reference
132 Professional Endpoints • Enterprise-Grade Security • 100% System Reliability
