Team: Code Enforcers
Detects7 is a web + API demo and research prototype for detecting seven safety-related object classes. The underlying model was trained on the Falcon synthetic dataset, and the app is deployed with a FastAPI backend and a Vite + React frontend. The project is designed to be simple to run locally and to serve as a foundation for further experiments.
| Name | GitHub | Role |
|---|---|---|
| Akash Kumar | XynaxDev | Project lead — directs model development, tunes experiments and hyperparameters, and manages API integration for deployment. |
| Lavnish | lavn1sh | Frontend & integration engineer — builds the web interface and connects the ML outputs to the UI and backend. |
| Nishtha | niishthaaaaaa | Model engineer — runs training experiments, prepares and curates the dataset, and iterates on model performance. |
| Himanshi | Himanshi1531 | Research lead — surveys literature, recommends improvements, and helps shape experiment design. |
Table: Code Enforcers — team roles and responsibilities.
Final metrics are taken from `ml/exp12/results.csv` (final epoch = 150).
| Metric | Value |
|---|---|
| Precision | 0.91082 |
| Recall | 0.67942 |
| mAP@50 | 0.75223 |
| mAP@50:95 | 0.60106 |
| Training epochs | 150 |
| Best checkpoint | ml/exp12/weights/best.pt (also exported in models/) |
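For context, the precision and recall above imply an F1 score of about 0.778 at the final epoch; the arithmetic:

```python
# F1 implied by the final-epoch precision and recall in the table above.
precision = 0.91082
recall = 0.67942

f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # → F1 = 0.7783
```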
The `evaluate.py` script generates `evaluation_summary.json` and a confusion matrix for an official summary.
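The key names inside `evaluation_summary.json` are not documented here, so the snippet below is only an illustrative sketch that assumes a flat metric-to-value mapping; in real use the raw string would come from `Path("evaluation_summary.json").read_text()` instead of the inline example:

```python
import json

# Hypothetical summary shape; the real keys come from whatever evaluate.py writes.
raw = '{"precision": 0.91082, "recall": 0.67942, "map50": 0.75223, "map50_95": 0.60106}'
summary = json.loads(raw)

for metric, value in summary.items():
    print(f"{metric}: {value:.5f}")
```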
| Path | What it contains |
|---|---|
| `backend/` | FastAPI backend (see `backend/app/` for `main.py`, `model_loader.py`, `utils.py`, `config.py`) |
| `frontend/` | Vite + React UI (`src/` contains `App.jsx`, `main.jsx`, components, styles) |
| `ml/` | Training and evaluation code, dataset config and experiments (`yolo_params.yaml`, `train_yolo.py`, `evaluate.py`, `predict_user.py`, `exp12/`) |
| `models/` | Deployment artifacts (`best_model_backup.pt`, `best_model_backup.onnx`) |
| `local_run.py` | Convenience runner for local development |
| `space/` | Python virtual environment (optional) |
| `README.md` | This file |
Use the project virtual environment in `space/` or create a fresh one.

- Activate the environment and install dependencies (PowerShell example):

```powershell
& D:\detects7\space\Scripts\Activate.ps1
pip install -r requirements.txt
cd ml
```

- Train (example using `ml/exp12/args.yaml`):

```powershell
python train_yolo.py --data yolo_params.yaml --epochs 150 --imgsz 768 --batch 8
```

- Evaluate (produces `evaluation_summary.json`):

```powershell
python evaluate.py
```

- Interactive prediction (image / video):

```powershell
python predict_user.py
```

- Backend (FastAPI) — example:

```powershell
cd backend
& ..\space\Scripts\Activate.ps1
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

- Frontend:

```powershell
cd frontend
npm install
npm run dev
# then open http://localhost:5173
```

| Artifact | Location |
|---|---|
| Best checkpoint | ml/exp12/weights/best.pt |
| Last checkpoint | ml/exp12/weights/last.pt |
| Exported deployment copies | models/best_model_backup.pt, models/best_model_backup.onnx |
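As a convenience, the artifact locations above can be checked before deployment; the helper below is a small sketch using only the standard library (paths copied from the table, with the repo root assumed to be the working directory):

```python
from pathlib import Path

# Expected deployment artifacts, as listed in the table above.
ARTIFACTS = [
    "ml/exp12/weights/best.pt",
    "ml/exp12/weights/last.pt",
    "models/best_model_backup.pt",
    "models/best_model_backup.onnx",
]

def missing_artifacts(root: str = ".") -> list[str]:
    """Return the artifact paths that do not exist as files under `root`."""
    return [p for p in ARTIFACTS if not (Path(root) / p).is_file()]

if __name__ == "__main__":
    missing = missing_artifacts()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All deployment artifacts present.")
```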
Tip: keep large model files out of Git history — use Git LFS or release assets for big binaries.
- We initially explored RTDETR but switched to Ultralytics YOLOv8 because it gave better detection performance for this dataset.
- Training specifics (see `ml/exp12/args.yaml`): AdamW optimizer, `lr0=0.001`, `imgsz=768`, `batch=8`, `epochs=150`. Augmentations and other settings were tuned in the experiment.
- `evaluate.py` runs the Ultralytics evaluation suite and produces a confusion matrix and a JSON summary.
- Dataset: Falcon synthetic dataset — https://falcon.duality.ai/
- Model backbone: Ultralytics YOLOv8
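For orientation, a dataset config in the general shape that `train_yolo.py --data yolo_params.yaml` consumes might look like the sketch below. Every path here is an assumption for illustration; the authoritative file is `ml/yolo_params.yaml`, and the seven class names are intentionally omitted.

```yaml
# Illustrative sketch only; real values live in ml/yolo_params.yaml.
path: datasets/falcon   # assumed dataset root
train: images/train     # assumed train split location
val: images/val         # assumed val split location
nc: 7                   # seven safety-related classes
# names: the seven class labels are defined in the project's yolo_params.yaml
```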
Thank you 💌 — Detects7 team