# GraspingDemo

A complete robotic grasping system with a web interface, computer vision, and motion planning for the SO-101 6-DOF robot arm.

## Features
- Web Control Interface - Browser-based robot control with 3D visualization
- Computer Vision - RealSense depth camera integration with point cloud processing
- Grasp Detection - ThinkGrasp AI-powered grasp pose prediction
- Motion Planning - Collision-free trajectory planning with IK solver
- Hand-Eye Calibration - Camera-robot coordinate transformation
- LeRobot Integration - Learning from demonstration capabilities
## Quick Start

```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Connect hardware
#    - SO-101 robot to a USB port (usually /dev/ttyACM0 or /dev/ttyUSB0)
#    - Intel RealSense camera to a USB 3.0 port

# 3. Launch the web interface
cd web_control
python app.py

# 4. Open a browser to http://localhost:5000
```

## Project Structure

```
GraspingDemo/
├── web_control/          # Web interface and APIs
│   ├── app.py            # Flask server with all endpoints
│   ├── templates/        # HTML interfaces
│   ├── static/           # CSS, JS, 3D models
│   └── *.py              # Camera, visualization, planning modules
├── so101_grasp/          # Core robot control library
│   ├── robot/            # Kinematics, motion planning, client
│   ├── vision/           # Camera and point cloud processing
│   ├── api/              # ThinkGrasp integration
│   └── utils/            # Configuration and transforms
├── examples/             # Usage examples
├── scripts/              # Utility scripts
├── lerobot/              # LeRobot teleoperation
└── captures/             # Saved point clouds and trajectories
```
## Requirements

- Python 3.8+
- Ubuntu 20.04/22.04/24.04 or macOS
- Intel RealSense D435/D435i/D415 camera
- SO-101 robot with Feetech servos
## Installation

```bash
# Clone the repository
git clone <repository>
cd GraspingDemo

# Install Python packages
pip install flask numpy opencv-python pyrealsense2 plotly open3d
pip install feetech-servo-sdk dynamixel-sdk
pip install torch torchvision  # for AI features

# Grant access to the robot's serial port
sudo chmod a+rw /dev/ttyACM0  # Linux
# On macOS the port appears as /dev/tty.usbmodem*
```

## Usage

Start the web server:

```bash
cd web_control
python app.py
# Server runs on http://localhost:5000
```
### Web Interfaces

- **Main Control** (http://localhost:5000)
  - Connect/disconnect the robot
  - Enable/disable motors
  - Home position control
  - Trajectory recording/replay
- **Modern Interface** (http://localhost:5000/modern)
  - Cartesian control (X, Y, Z position)
  - Gripper rotation
  - Inverse kinematics
  - 3D robot visualization
- **Unified Interface** (http://localhost:5000/unified)
  - Camera view with point cloud
  - Grasp detection and execution
  - Combined robot and vision control
- **Camera Interface** (http://localhost:5000/camera)
  - RGB/depth streaming
  - Point cloud capture
  - Hand-eye calibration
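The Cartesian control relies on the inverse-kinematics solver in `so101_grasp/robot/`. As an illustration of the idea only (not the SO-101's actual 6-DOF solver), here is a minimal two-link planar IK sketch; the link lengths `l1`/`l2` are made-up placeholder values:

```python
import math

# Two-link planar IK sketch (elbow-down solution via the law of cosines).
# Illustrative only: the real SO-101 solver handles 6 DOF with joint limits.
def ik_2link(x, y, l1=0.12, l2=0.14):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)  # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=0.12, l2=0.14):
    # forward kinematics, used to sanity-check an IK solution
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
```

A quick round trip, `fk_2link(*ik_2link(0.2, 0.05))`, should recover approximately `(0.2, 0.05)`.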
### API Endpoints

Robot control:

- `POST /api/connect` - Connect to the robot
- `POST /api/disconnect` - Disconnect the robot
- `GET /api/status` - Get robot status
- `POST /api/enable_torque` - Enable motors
- `POST /api/home` - Move to the home position
- `POST /api/move_to_position` - Move joints
- `POST /api/cartesian_move` - Move to an XYZ position
- `POST /api/gripper/<open|close>` - Control the gripper

Camera:

- `POST /api/camera/connect` - Connect the camera
- `GET /api/camera/rgb` - Get the RGB stream
- `GET /api/camera/depth` - Get the depth stream
- `GET /api/camera/pointcloud` - Get the point cloud
- `POST /api/camera/capture` - Save a capture

Grasping:

- `POST /api/grasp/detect` - Detect grasp poses
- `POST /api/grasp/execute` - Execute a grasp
- `POST /api/grasp/visualize` - Visualize grasps

Trajectories:

- `POST /api/record/start` - Start recording
- `POST /api/record/keyframe` - Add a keyframe
- `POST /api/record/stop` - Stop and save
- `GET /api/trajectories` - List saved trajectories
- `POST /api/replay` - Replay a trajectory
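These endpoints can be driven from any HTTP client. A minimal sketch using only the standard library; the JSON payload field names (e.g. `x`/`y`/`z` for `cartesian_move`) are assumptions about what `web_control/app.py` expects, so check the server code:

```python
# Thin client for the endpoints above. Payload field names are
# assumptions, not taken from app.py itself.
import json
import urllib.request

BASE = "http://localhost:5000"

def build_request(path, payload=None):
    """GET when payload is None, otherwise POST with a JSON body."""
    if payload is None:
        return urllib.request.Request(BASE + path, method="GET")
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call(path, payload=None):
    with urllib.request.urlopen(build_request(path, payload)) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    call("/api/connect", {})                # connect the robot
    call("/api/enable_torque", {})          # power the motors
    call("/api/cartesian_move", {"x": 0.20, "y": 0.0, "z": 0.15})
    print(call("/api/status"))              # current robot state
```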
### Scripts and Examples

```bash
# Test the robot connection
python scripts/test_connection.py

# Calibrate the robot
python scripts/calibrate_robot.py --port /dev/ttyACM0

# Test robot movement
python examples/basic_control.py

# Record and replay trajectories
python examples/keyframe_recorder.py

# Test kinematics
python examples/test_kinematics.py

# Capture a point cloud
python so101_grasp/tools/capture_pointcloud.py

# Test the camera
python scripts/test_realsense.py
```

## Configuration

Edit the configuration files in `config/`:
- `robot_config.yaml` - Robot parameters and joint limits
- `camera_config.yaml` - Camera settings
- `grasp_config.yaml` - Grasp planning parameters
Or use environment variables:
```bash
export ROBOT_PORT=/dev/ttyACM0
export CAMERA_WIDTH=640
export CAMERA_HEIGHT=480
```

## Grasp Detection

The system uses ThinkGrasp for AI-powered grasp detection:
```python
# Automatic in the web interface, or manual:
from so101_grasp.api import grasp_predictor

# Get a grasp from a point cloud
result = grasp_predictor.predict(points, colors, masks)
grasp_pose = result['grasp_pose']
confidence = result['confidence']
```

## Troubleshooting

**Robot port not found:**

```bash
# Find the robot port
ls /dev/tty* | grep -E "(ACM|USB)"
# or
python -m lerobot.find_port
```

**Camera not detected:**

```bash
# Test with the RealSense viewer
realsense-viewer

# Check the USB 3.0 connection
lsusb | grep Intel
```

**Permission denied on the serial port:**

```bash
# Linux
sudo usermod -a -G dialout $USER
sudo chmod a+rw /dev/ttyACM0
# Log out and log in again
```

**Robot unresponsive:**

```bash
# Release all motors
python scripts/tools/disable_torque.py

# Reset to the home position
python examples/basic_control.py
```

## Testing

```bash
python -m pytest tests/
```

## Development

To add a new feature:

- Add an endpoint in `web_control/app.py`
- Add the UI in `web_control/templates/`
- Add robot control logic in `so101_grasp/robot/`
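The first step might look like the sketch below. The route name `/api/wave` and its payload are hypothetical; in `app.py` the handler would delegate to `so101_grasp` rather than return a stub:

```python
# Hypothetical new endpoint for web_control/app.py; route name,
# payload, and handler body are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/wave", methods=["POST"])
def wave():
    """Stub handler: in app.py this would call into so101_grasp.robot."""
    cycles = int(request.get_json().get("cycles", 1))
    return jsonify({"ok": True, "cycles": cycles})
```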
## Architecture

- Web Layer: Flask routes and WebSocket handlers
- API Layer: RESTful endpoints for all operations
- Control Layer: Robot kinematics and motion planning
- Vision Layer: Camera and point cloud processing
- AI Layer: ThinkGrasp integration for grasp detection
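The layering above can be sketched in plain Python: each layer calls only the one below it, which is what keeps the Flask routes thin. All class and method names here are illustrative, not the repository's actual API:

```python
# Illustrative layering sketch; names are hypothetical, not the repo's API.
class VisionLayer:
    def capture_pointcloud(self):
        # would read from the RealSense camera
        return [(0.20, 0.00, 0.15)]

class AILayer:
    def detect_grasp(self, points):
        # would call ThinkGrasp; here, just pick the first point
        return {"grasp_pose": points[0], "confidence": 0.9}

class ControlLayer:
    def move_to(self, pose):
        # would run IK + motion planning, then drive the servos
        return {"ok": True, "target": pose}

class APILayer:
    """What a Flask route would delegate to."""
    def __init__(self):
        self.vision = VisionLayer()
        self.ai = AILayer()
        self.control = ControlLayer()

    def execute_grasp(self):
        points = self.vision.capture_pointcloud()
        grasp = self.ai.detect_grasp(points)
        return self.control.move_to(grasp["grasp_pose"])
```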
## License

Apache License 2.0