Quick answers to common questions about Intel® Edge Developer Kits.
- Getting Started
- Hardware Compatibility
- Installation Issues
- Performance & Optimization
- Use Cases & Applications
- Troubleshooting
Q: I'm new to AI development. Where should I start?
A: Follow this learning path:
- Complete our Getting Started Guide
- Try the OpenWebUI + Ollama use case (10 minutes)
- Read AI Development Basics for concepts
- Join our community discussions to ask questions
Q: What hardware do I need?
A: The scripts work with many Intel® platforms, but we recommend:
- Students/Beginners: Any Intel® Core™ Ultra system with integrated graphics
- Professionals: Intel® Core™ Ultra + Intel® Arc™ GPU (B580 or better)
- Researchers: High-end CPU + Intel® Arc™ B60 Pro for maximum performance
Check our Hardware Selection Guide for detailed recommendations.
Q: How long does installation take?
A:
- Quick install: 15-30 minutes (automatic)
- With reboot: Add 5-10 minutes for system restart
- Full validation: 45-60 minutes including use case testing
Q: How do I know if my hardware is supported?
A: Check the compatibility matrix. Supported platforms include:
- Intel® Core™ Ultra (Series 1 & 2)
- Intel® Arc™ Graphics (A-Series, B-Series)
- Intel® 14th Gen Core™ processors
- Intel® Core™ N-series
Q: Will older Intel® hardware work?
A: Older hardware may work with reduced functionality:
- 10th-12th Gen Intel® CPUs: Basic functionality, no NPU features
- Older Intel® GPUs: Limited AI acceleration
- Very old systems: May require manual driver installation
For best results, use hardware from our validated list.
Q: The installer reports a kernel version error. How do I fix it?
A: You need the HWE (Hardware Enablement) kernel:

```bash
sudo apt install linux-generic-hwe-24.04
sudo reboot
# Then rerun the installer
```

Q: The installer fails with a permission error. What's wrong?
A: Make sure you're running with sudo:

```bash
sudo ./main_installer.sh
```

Not just `./main_installer.sh`.
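If you're unsure whether your current shell has root privileges, a quick check (a minimal sketch, nothing Intel-specific):

```bash
# Sketch: confirm the installer would run with root privileges.
if [ "$(id -u)" -eq 0 ]; then
  echo "running as root"
else
  echo "not root - prefix the installer with sudo"
fi
```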
Q: My GPU isn't being detected. What should I try?
A: Try these steps:
- Reboot your system
- Enable Resizable BAR in BIOS (for Arc GPUs)
- Run the GPU installer separately:

```bash
sudo ./gpu_installer.sh
```

- Check detection:

```bash
lspci | grep -i vga
```
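As a quick sanity check, the detection steps above can be combined into a short script that degrades gracefully when no GPU is visible (a sketch; device paths vary by system):

```bash
# Sketch: check GPU visibility at the PCI and driver levels.
lspci 2>/dev/null | grep -iE 'vga|display' || echo "no display device visible to lspci"
ls /dev/dri 2>/dev/null || echo "no /dev/dri render nodes found"
```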
Q: I get "permission denied" errors when using Docker. How do I fix them?
A: Add your user to the docker group:

```bash
sudo usermod -aG docker $USER
# Log out and log back in
```

Q: Why is AI inference slower than expected?
A: Check these optimization steps:
- GPU Memory: Ensure you have enough VRAM for your model
- Resizable BAR: Enable in BIOS for Intel Arc GPUs
- Model Size: Try smaller model variants for faster inference
- Device Selection: Verify workloads are using GPU, not CPU
Q: How do I know whether the NPU is being used?
A: NPU usage depends on the application:
- Check detection:

```bash
ls /dev/intel-npu*
```

- Monitor usage: Some apps show device utilization in logs
- Configure manually: Set environment variables like `STT_DEVICE=NPU` in docker-compose files
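As an illustration, a docker-compose override pinning a service to the NPU might look like this (the service name is hypothetical; `STT_DEVICE` is the variable mentioned above — check each use case's compose file for the real names):

```yaml
# Hypothetical override - the service name depends on the use case.
services:
  speech-to-text:
    environment:
      - STT_DEVICE=NPU
```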
Q: Can I run multiple AI models at the same time?
A: Yes, but consider:
- Memory limits: Each model needs GPU/system memory
- Performance impact: Multiple models compete for resources
- Multi-GPU setups: Distribute models across different GPUs
Q: Which use case should I try first?
A: Based on your interests:
- AI Beginner: OpenWebUI + Ollama - Like ChatGPT but local
- Computer Vision: AI Video Analytics - Analyze video content
- Enterprise AI: RAG Toolkit - Build knowledge systems
Q: Can I customize the use cases?
A: Absolutely! The use cases are starting points:
- Code: All source code is available and modifiable
- Models: Swap AI models for different capabilities
- Configuration: Adjust settings via environment variables
- Integration: Use as components in larger applications
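Configuration overrides are typically applied by exporting variables before starting the stack. A minimal sketch (the variable name is hypothetical; check each use case's documentation for the real ones):

```bash
# Hypothetical variable - consult the use case's README for actual names.
export MODEL_NAME="llama3.2:1b"
echo "starting with model: $MODEL_NAME"
# docker compose up -d   # then launch the stack
```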
Q: How do I prepare a use case for production?
A: Consider these steps:
- Security: Review and harden configurations
- Monitoring: Add logging and health checks
- Scaling: Use Kubernetes or similar orchestration
- Updates: Establish model and software update processes
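For the monitoring step, Docker Compose healthchecks are one lightweight option (a sketch; the service name and endpoint are illustrative, not from any specific use case):

```yaml
services:
  app:                      # illustrative service name
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]  # hypothetical endpoint
      interval: 30s
      timeout: 5s
      retries: 3
```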
Q: Where can I find logs?
A: Check these locations:
- Installer logs: Terminal output during installation
- System logs: The `/var/log/` directory
- Docker logs:

```bash
docker logs [container_name]
```

- Application logs: Usually in the use case directory
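The locations above can be polled with one short script (a sketch; it skips any source that isn't available, and the container name is a placeholder):

```bash
# Sketch: gather recent logs from common sources, skipping unavailable ones.
journalctl -u docker --no-pager -n 20 2>/dev/null || echo "journalctl unavailable"
docker logs --tail 20 my_container 2>/dev/null || echo "container 'my_container' not found"
ls /var/log 2>/dev/null | head -n 10
```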
Q: A use case won't start. What should I check?
A: Common fixes:
- Check ports: Ensure required ports aren't in use
- Docker status: Verify Docker is running:

```bash
systemctl status docker
```

- Permissions: Check file permissions in the use case directory
- Resources: Ensure enough disk space and memory
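Port conflicts are the most common of these. A quick way to check whether a port is already bound (a sketch; 8080 is a placeholder, substitute your use case's port):

```bash
PORT=8080   # placeholder - use your use case's actual port
if ss -ltn 2>/dev/null | grep -q ":$PORT "; then
  echo "port $PORT is already in use"
else
  echo "port $PORT appears free"
fi
```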
Q: Model downloads keep failing. What can I do?
A: Try these solutions:
- Internet connection: Verify network connectivity
- Disk space: Ensure enough space for model files (often several GB)
- Permissions: Check that Docker can write to volume mounts
- Alternative sources: Some models have mirror download locations
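Before retrying a large download, it's worth confirming free space on the target filesystem (a sketch; the 10 GB threshold is an arbitrary example, and large models can need much more):

```bash
NEED_GB=10   # illustrative threshold
AVAIL_GB=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
if [ "${AVAIL_GB:-0}" -ge "$NEED_GB" ]; then
  echo "OK: ${AVAIL_GB}G free on /"
else
  echo "warning: only ${AVAIL_GB:-0}G free on /"
fi
```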
Q: How do I get the best performance?
A: Optimization checklist:
- ✅ Latest drivers installed
- ✅ Resizable BAR enabled (Intel Arc GPUs)
- ✅ Adequate cooling (check for thermal throttling)
- ✅ Power settings optimized for performance
- ✅ Models running on GPU, not CPU fallback
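One quick way to verify the last checklist item: confirm an Intel graphics kernel driver (i915, or the newer xe) is actually loaded (a sketch):

```bash
# Sketch: check for a loaded Intel GPU kernel driver.
if lsmod 2>/dev/null | grep -qE '^(i915|xe) '; then
  echo "Intel GPU driver loaded"
else
  echo "no Intel GPU driver detected - workloads may fall back to CPU"
fi
```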
Q: I found a bug. How do I report it?
A:
- Report it: GitHub Issues
- Include: System info, error messages, steps to reproduce

Q: Where can I get help?
A:
- Community: GitHub Discussions
- Documentation: Troubleshooting Guide

Q: How can I contribute?
A:
- Improvements: Submit pull requests with fixes or enhancements
- Documentation: Help improve guides and examples
- Use Cases: Share your custom implementations
Didn't find your answer? Ask in our community discussions - we're here to help!