An end-to-end AI-powered drone system for early forest fire detection and autonomous response
Ho Chi Minh City University of Technology and Engineering (HCMUTE) - Graduation Thesis 2025
- Problem Statement
- Solution Overview
- Key Technical Highlights
- System Architecture
- Experimental Results
- Hardware Components
- Software Components
- Installation & Usage
- API Reference
- Demo Video
- Authors
- Acknowledgments
Forest fires cause devastating environmental damage, loss of wildlife habitats, and threaten human communities. Traditional fire detection methods rely on:
- Satellite imaging: Low temporal resolution (hours of delay)
- Fixed cameras/sensors: Limited coverage area
- Human patrols: Expensive, dangerous, and inefficient
Challenge: How can we detect forest fires in real-time with precise geolocation and immediate alerts to enable rapid response?
We developed an autonomous UAV-based fire detection system that combines:
- Edge AI Processing: YOLOv11 models optimized with TensorRT FP16, running directly on NVIDIA Jetson Nano onboard the drone
- Two-Stage Cascaded Detection: Smoke detection (early warning) followed by fire confirmation (reduces false positives)
- Real-Time Data Fusion: AI detection results + GPS coordinates + drone telemetry = geo-tagged alerts
- Autonomous Response: Automatic mission pause (LOITER mode) when smoke detected, allowing operator to assess the situation
- Multi-Channel Alerts: Instant Telegram notifications with images and Google Maps location links
- Converted YOLOv11 models from ONNX to TensorRT engines with FP16 precision
- Achieved 10+ FPS inference on Jetson Nano 4GB (power-constrained edge device)
- Custom TensorRT inference wrapper with CUDA stream management for asynchronous processing
Frame Input -> [Stage 1: Smoke Model] -> Smoke Detected?
    |
    +-- NO  -> Continue monitoring
    |
    +-- YES -> [Stage 2: Fire Model] -> Fire Confirmed?
                   |
                   +-- NO  -> SMOKE WARNING
                   |
                   +-- YES -> FIRE ALERT
- Stage 1 (Smoke): Continuous inference at full FPS, lightweight model (416x416 input)
- Stage 2 (Fire): Triggered only when smoke detected, higher precision model (640x640 input)
- Benefits: Reduces computational load, minimizes false positives, enables early warning
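The cascade logic above can be sketched in plain Python. This is a minimal illustration with stubbed detector callables; the real pipeline runs the TensorRT engines, and the threshold values here are placeholders:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Detection:
    label: str         # "smoke" or "fire"
    confidence: float  # 0.0 - 1.0

def two_stage_detect(frame,
                     smoke_model: Callable,
                     fire_model: Callable,
                     smoke_thresh: float = 0.5,
                     fire_thresh: float = 0.5) -> Optional[str]:
    """Run the cascaded pipeline on one frame.

    Stage 1 (smoke) runs on every frame; Stage 2 (fire) runs only
    when smoke is found, which keeps average compute load low.
    Returns "FIRE_ALERT", "SMOKE_WARNING", or None.
    """
    smoke = smoke_model(frame)  # lightweight 416x416 model
    if smoke is None or smoke.confidence < smoke_thresh:
        return None             # nothing seen: keep monitoring
    fire = fire_model(frame)    # heavier 640x640 model, on demand
    if fire is not None and fire.confidence >= fire_thresh:
        return "FIRE_ALERT"
    return "SMOKE_WARNING"

# Example with stubbed models:
smoke_stub = lambda f: Detection("smoke", 0.8)
fire_stub = lambda f: Detection("fire", 0.9)
print(two_stage_detect(None, smoke_stub, fire_stub))  # FIRE_ALERT
```

Because Stage 2 is gated on Stage 1, the expensive fire model contributes nothing to per-frame cost during normal monitoring.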
- Direct communication with Pixhawk 6C flight controller via pymavlink
- Autonomous mode switching (GUIDED -> LOITER -> GUIDED) based on AI detection
- Mission planning with waypoint actions (Takeoff, Land, RTL, Loiter, Delay)
- Real-time telemetry streaming (GPS, altitude, attitude, battery, speed)
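A mission with waypoint actions can be modeled as below. This is an illustrative data structure only; the actual GCS encodes these as MAV_CMD mission items via pymavlink, and the coordinates shown are placeholders:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float
    action: str = "WAYPOINT"  # or TAKEOFF, LAND, RTL, LOITER, DELAY
    param_s: float = 0.0      # e.g. loiter/delay duration in seconds

@dataclass
class Mission:
    waypoints: List[Waypoint] = field(default_factory=list)

    def add(self, wp: Waypoint) -> "Mission":
        self.waypoints.append(wp)
        return self

# A small survey leg: take off, fly out, loiter 15 s, return.
m = (Mission()
     .add(Waypoint(10.8506, 106.7719, 0, "TAKEOFF"))
     .add(Waypoint(10.8510, 106.7725, 30))
     .add(Waypoint(10.8510, 106.7725, 30, "LOITER", 15))
     .add(Waypoint(10.8506, 106.7719, 0, "RTL")))
print(len(m.waypoints))  # 4
```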
- Background worker thread for sending alerts without blocking inference
- Rate limiting to prevent alert spam (configurable cooldown)
- Image attachment with bounding box overlays and detection confidence
- GPS coordinates with clickable Google Maps links
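The worker pattern behind these bullets can be sketched as follows. The send function is stubbed; the real system calls the Telegram Bot API with an image attachment, and the cooldown value is illustrative:

```python
import queue
import threading
import time

def maps_link(lat: float, lon: float) -> str:
    """Clickable Google Maps link for an alert message."""
    return f"https://www.google.com/maps?q={lat:.6f},{lon:.6f}"

class AlertWorker:
    """Background thread that drains an alert queue so the
    inference loop never blocks on network I/O."""

    def __init__(self, send_fn, cooldown_s: float = 30.0):
        self.send_fn = send_fn        # e.g. a Telegram send call
        self.cooldown_s = cooldown_s  # rate limit between alerts
        self.q = queue.Queue()
        self._last_sent = time.monotonic() - cooldown_s
        self._t = threading.Thread(target=self._run, daemon=True)
        self._t.start()

    def submit(self, message: str):
        """Called from the inference loop; returns immediately."""
        self.q.put(message)

    def _run(self):
        while True:
            msg = self.q.get()
            now = time.monotonic()
            if now - self._last_sent < self.cooldown_s:
                continue  # still in cooldown: drop to prevent spam
            self.send_fn(msg)
            self._last_sent = now

sent = []
w = AlertWorker(sent.append, cooldown_s=60.0)
w.submit("SMOKE " + maps_link(10.85, 106.77))
w.submit("SMOKE again")  # arrives inside cooldown, dropped
time.sleep(0.5)
print(sent)
```

Dropping (rather than queuing) alerts during the cooldown keeps the operator's chat readable when smoke stays in frame for many consecutive detections.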
- Main Process: Smoke detection + RTSP/MJPEG streaming
- Fire Worker Process: Separate CUDA context for fire confirmation (avoids GPU memory conflicts)
- Flask Server Thread: Web interface and API endpoints
- Telegram Worker Thread: Non-blocking alert delivery
| Subsystem | Components | Connection |
|---|---|---|
| Vision | Pi Camera V2 (IMX219) -> Jetson Nano | MIPI CSI-2 |
| Flight Control | GPS M10 -> Pixhawk 6C -> Air Telemetry | UART |
| Power | 4S LiPo -> PM02 -> PDB | DC 14.8V |
| Propulsion | Pixhawk -> ESC 40A x4 -> Motors | PWM |
| Ground Station | Laptop -> Ground Telemetry Radio | USB |
Jetson Nano (Onboard - Python 3.6.9):
- jetson_rtsp_server_v2.py: GStreamer-based RTSP server with H.264 hardware encoding
- jetson_yolo11_two_stage_mjpeg_server_v5_4_telegram.py: Two-stage AI detection pipeline
Ground Station (Windows/Linux - Python 3.11):
webgcs_loiter.py: Flask + SocketIO web-based Ground Control Station with mission planning
When smoke is detected during an autonomous mission:
- Detection: Jetson Nano detects smoke with confidence above threshold
- Alert: System sends immediate Telegram alert with GPS location
- Pause: GCS commands Pixhawk to enter LOITER mode via MAVLink
- Hold Position: Drone hovers at current location for operator inspection
- Resume/RTL: Operator can resume mission or trigger Return-To-Launch
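The pause/resume flow above amounts to a small state machine. In this sketch, `send_mode` stands in for the MAVLink set-mode command the GCS actually issues through pymavlink:

```python
class MissionSupervisor:
    """Minimal sketch of the smoke-pause logic on the GCS side."""

    def __init__(self, send_mode):
        self.send_mode = send_mode  # callable issuing the mode change
        self.mode = "GUIDED"
        self.paused = False
        self.pause_location = None

    def on_smoke(self, lat: float, lon: float):
        """Smoke above threshold -> hold position for inspection."""
        if not self.paused:
            self.send_mode("LOITER")
            self.mode = "LOITER"
            self.paused = True
            self.pause_location = (lat, lon)

    def resume(self):
        """Operator decided to continue the mission."""
        if self.paused:
            self.send_mode("GUIDED")
            self.mode = "GUIDED"
            self.paused = False

    def rtl(self):
        """Operator aborts: Return-To-Launch."""
        self.send_mode("RTL")
        self.mode = "RTL"
        self.paused = False

modes = []
s = MissionSupervisor(modes.append)
s.on_smoke(10.85, 106.77)
s.on_smoke(10.85, 106.77)  # repeated detection: no second command
s.resume()
print(modes)  # ['LOITER', 'GUIDED']
```

Guarding on `self.paused` means a smoke plume that triggers on every frame produces exactly one LOITER command, not a stream of redundant mode changes.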
The complete drone system with all components integrated and ready for field testing.
When smoke is detected, the system immediately sends a Telegram alert with:
- Detection confidence percentage
- Bounding box visualization
- Timestamp
- GPS coordinates (when available)
The Web GCS displays real-time detection status with live video feed and telemetry data.
When fire is confirmed (Stage 2), an urgent alert is sent with higher priority notification.
The Web GCS shows fire confirmation with captured snapshots and detection history.
Alert Features:
- Immediate notification when smoke/fire detected
- GPS coordinates with Google Maps link
- Detection frame with bounding boxes
- Rate limiting to prevent alert spam
| Component | Specification | Purpose |
|---|---|---|
| NVIDIA Jetson Nano | 4GB RAM, Maxwell GPU (128 CUDA cores) | Edge AI inference |
| Pi Camera V2 | IMX219, 8MP, MIPI CSI-2 | Video capture |
| Pixhawk 6C | STM32H7, ArduPilot Copter 4.x | Flight control |
| GPS M10 | u-blox M10, 10Hz update | Position tracking |
| 433MHz Telemetry | Air + Ground pair | MAVLink communication |
| 4S LiPo Battery | 14.8V, 5000mAh | Power supply |
| ESC 40A x4 | 3-Phase brushless | Motor control |
Jetson Nano:
- Python 3.6.9
- TensorRT 8.x (pre-installed with JetPack)
- PyCUDA
- OpenCV 4.x
- Flask
- GStreamer (RTSP server)
Ground Station:
- Python 3.11+
- Flask + Flask-SocketIO
- pymavlink
- requests
.
├── jetson_rtsp_server_v2.py # RTSP H.264 streaming server
├── jetson_yolo11_two_stage_mjpeg_server_v5_4_telegram.py # AI detection pipeline
├── webgcs_loiter.py # Web Ground Control Station
├── stage1_smoke.onnx # Smoke detection model (ONNX)
├── stage2_fire.onnx # Fire detection model (ONNX)
├── requirements.txt # Python dependencies
└── docs/images/ # Architecture diagrams
git clone https://github.com/khangle2101/Real-Time-Fire-Smoke-Detection-Drone.git
cd Real-Time-Fire-Smoke-Detection-Drone
# Install system dependencies
sudo apt-get update
sudo apt-get install -y gstreamer1.0-rtsp gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad python3-gi python3-gst-1.0
# Install Python packages
pip3 install flask opencv-python numpy requests
# Convert models to TensorRT (on Jetson Nano)
trtexec --onnx=stage1_smoke.onnx --saveEngine=stage1_smoke.engine --fp16
trtexec --onnx=stage2_fire.onnx --saveEngine=stage2_fire.engine --fp16
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/macOS
# venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
On Jetson Nano:
# Terminal 1: Start RTSP Server
python3 jetson_rtsp_server_v2.py --width 1280 --height 720 --fps 10
# Terminal 2: Start AI Detection Server
python3 jetson_yolo11_two_stage_mjpeg_server_v5_4_telegram.py \
--smoke-engine stage1_smoke.engine \
--fire-engine stage2_fire.engine \
--rtsp rtsp://127.0.0.1:8554/fire \
--telegram-token "YOUR_BOT_TOKEN" \
    --telegram-chat "YOUR_CHAT_ID"
On Ground Station:
python webgcs_loiter.py

| Service | URL | Description |
|---|---|---|
| Web GCS | http://localhost:5000 | Drone control & mission planning |
| MJPEG Stream | http://<JETSON_IP>:5002/video_feed | Live detection video |
| Detection API | http://<JETSON_IP>:5002/api/status | JSON status endpoint |
Returns current detection status with smoke/fire confidence, bounding box count, and timestamps.
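A client can consume this endpoint with a few lines of stdlib Python. The field names in this sketch (`smoke_detected`, `smoke_conf`, `fire_confirmed`, `fire_conf`) are illustrative assumptions, not the exact response schema:

```python
import json

def summarize_status(payload: str) -> str:
    """Turn a /api/status JSON body into a one-line summary.
    Field names are illustrative, not the exact schema."""
    s = json.loads(payload)
    if s.get("fire_confirmed"):
        return f"FIRE ({s['fire_conf']:.0%})"
    if s.get("smoke_detected"):
        return f"smoke ({s['smoke_conf']:.0%})"
    return "clear"

example = '{"smoke_detected": true, "smoke_conf": 0.82, "fire_confirmed": false}'
print(summarize_status(example))  # smoke (82%)
```

In practice the payload would be fetched with `urllib.request.urlopen("http://<JETSON_IP>:5002/api/status")` and passed to a parser like this.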
MJPEG video stream with real-time detection overlays.
Fire detection snapshots (n = 0, 1, 2) with bounding box annotations.
Create a new mission with waypoints and actions.
Start mission execution with automatic waypoint navigation.
Resume mission after smoke detection pause.
Get current smoke pause status and location.
Click to watch the full demonstration video
Graduation Thesis Project - HCMUTE 2025
| Name | Role | Contribution |
|---|---|---|
| Le Hoang Khang | Team Leader | System architecture, AI pipeline, Edge optimization, Web GCS, Hardware integration, Drone assembly, Flight testing |
| Nguyen Viet Khue | Member | Flight testing, Web GCS |
- Ultralytics - YOLOv11 object detection
- NVIDIA - TensorRT & Jetson Nano platform
- ArduPilot - Open-source autopilot firmware
- pymavlink - MAVLink Python library
- Flask - Python web framework
This project is distributed under the MIT License. See LICENSE file for more information.
Built for forest fire prevention and environmental protection