Version and test flow #35
Conversation
Pull request overview
This PR introduces a comprehensive testing infrastructure and version compatibility improvements for PredAI. The changes enable automated testing with CI/CD integration while addressing PyTorch 2.6+ compatibility issues.
Key changes:
- Added comprehensive unit tests for Prophet predictions with sine wave and incrementing sensor data
- Implemented CI/CD workflow with GitHub Actions for automated testing
- Updated PyTorch compatibility handling to support version 2.6+ with monkey-patched torch.load behavior
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| test_predai.py | New comprehensive unit test suite with mocked HAInterface and synthetic data generation |
| setup.csh | Setup script for creating virtual environment (has shell syntax issues) |
| run_all | Bash script to execute unit tests (has logic issue with error handling) |
| predai/rootfs/predai.py | Added PyTorch 2.6+ compatibility fixes and proper main guard |
| predai/requirements.txt | Removed version pinning from neuralprophet dependency |
| .gitignore | Added test artifacts and PyTorch Lightning log directories |
| .github/workflows/test.yml | New CI workflow for automated testing on PR and push events |
```python
# Fix for PyTorch 2.6+ weights_only=True default - must be done BEFORE importing predai
try:
    import torch
    _original_torch_load = torch.load
    def _patched_torch_load(*args, **kwargs):
        if 'weights_only' not in kwargs:
            kwargs['weights_only'] = False
        return _original_torch_load(*args, **kwargs)
    torch.load = _patched_torch_load
except (ImportError, AttributeError):
    pass
```
Copilot AI (Dec 30, 2025):
The monkey-patching of torch.load is duplicated in both test_predai.py (lines 9-19) and predai.py (lines 46-53). This creates maintenance overhead and potential inconsistency. Consider consolidating this fix into a single location, preferably in predai.py since it's the main module, and have the test file simply import from there.
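As an illustration of the consolidation, the patch can live in one place and be made idempotent, so importing it from both the main module and the test file never double-wraps the loader. This is a minimal sketch with hypothetical names: `fake_torch` stands in for the real torch module, and `apply_torch_load_patch` is not an existing function in the PR.

```python
import types

# `fake_torch` stands in for the real torch module (assumption for the demo);
# its load() just reports what weights_only value it received.
fake_torch = types.SimpleNamespace()
fake_torch.load = lambda *args, **kwargs: kwargs.get('weights_only', 'unset')

def apply_torch_load_patch(torch_mod):
    """Idempotently default weights_only=False on torch_mod.load."""
    if getattr(torch_mod.load, '_predai_patched', False):
        return  # already patched; importing again is a no-op
    original = torch_mod.load
    def patched(*args, **kwargs):
        if 'weights_only' not in kwargs:
            kwargs['weights_only'] = False
        return original(*args, **kwargs)
    patched._predai_patched = True
    torch_mod.load = patched

apply_torch_load_patch(fake_torch)
apply_torch_load_patch(fake_torch)  # second call (e.g. from the test) does nothing

print(fake_torch.load())                    # False (default injected)
print(fake_torch.load(weights_only=True))   # True (explicit value preserved)
```

With this shape, `predai.py` applies the patch at import time and `test_predai.py` simply imports `predai`, keeping one source of truth for the workaround.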
```python
# Generate one data point per hour
for i in range(hours):
    # Create sine wave: one complete cycle per 24 hours
    value = offset + amplitude * np.sin(2 * np.pi * i / 24)

    timestamp = current_time.strftime("%Y-%m-%dT%H:%M:%S%z")
    data.append({
        "state": str(value),
        "last_updated": timestamp
    })
    current_time += timedelta(hours=1)
```
Copilot AI (Dec 30, 2025):
The function `generate_sine_wave_data` takes an `hours` parameter and emits one data point per hour, yet the test sets a `period` attribute of 30 (minutes). This mismatch between the generation frequency (hourly) and the expected sampling period (30 minutes) could lead to unexpected behavior in the Prophet model, which expects data at 30-minute intervals.
```diff
-# Generate one data point per hour
-for i in range(hours):
-    # Create sine wave: one complete cycle per 24 hours
-    value = offset + amplitude * np.sin(2 * np.pi * i / 24)
-    timestamp = current_time.strftime("%Y-%m-%dT%H:%M:%S%z")
-    data.append({
-        "state": str(value),
-        "last_updated": timestamp
-    })
-    current_time += timedelta(hours=1)
+# Determine sampling interval in minutes from the test's period (default 30 minutes)
+interval_minutes = getattr(self, "period", 60)
+if 60 % interval_minutes != 0:
+    # Fallback to hourly data if the period does not divide evenly into an hour
+    interval_minutes = 60
+# Number of samples per hour and total samples for the requested duration
+samples_per_hour = 60 // interval_minutes
+total_samples = hours * samples_per_hour
+# Generate one complete sine cycle per 24 hours, sampled every `interval_minutes`
+samples_per_day = 24 * samples_per_hour
+for i in range(total_samples):
+    # Create sine wave: one complete cycle per 24 hours
+    value = offset + amplitude * np.sin(2 * np.pi * i / samples_per_day)
+    timestamp = current_time.strftime("%Y-%m-%dT%H:%M:%S%z")
+    data.append({
+        "state": str(value),
+        "last_updated": timestamp
+    })
+    current_time += timedelta(minutes=interval_minutes)
```
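As a quick sanity check on the sampling arithmetic (a standalone sketch mirroring the suggested defaults, not the actual test harness): sampling every 30 minutes gives 2 samples per hour, so 24 hours of data is 48 points and one full sine cycle spans exactly one day.

```python
import numpy as np
from datetime import datetime, timedelta

# Illustrative parameters: 30-minute sampling over 24 hours of data
interval_minutes = 30
hours = 24
samples_per_hour = 60 // interval_minutes      # 2 samples per hour
total_samples = hours * samples_per_hour       # 48 samples for one day
samples_per_day = 24 * samples_per_hour        # one full sine cycle = 48 samples

current_time = datetime(2025, 1, 1)
values = [np.sin(2 * np.pi * i / samples_per_day) for i in range(total_samples)]
timestamps = [current_time + timedelta(minutes=interval_minutes * i)
              for i in range(total_samples)]

print(total_samples)                           # 48
print(round(float(values[samples_per_day // 4]), 6))  # quarter cycle: 1.0
```

At the hourly rate the original code used, the same day would contain only 24 points, half of what a 30-minute `period` implies.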
```sh
python3 -m venv venv

echo "Activating virtual environment..."
source venv/bin/activate
```
Copilot AI (Dec 30, 2025):
This is a csh script (per the `.csh` extension), but `venv/bin/activate` is the bash/sh activation script and will not work when sourced from csh/tcsh. Use `source venv/bin/activate.csh`, the csh-specific activation script that `venv` generates, instead.
```diff
-source venv/bin/activate
+source venv/bin/activate.csh
```
```sh
# Setup script for PredAI test environment
# Creates a virtual environment and installs dependencies

echo "Creating Python virtual environment..."
python3 -m venv venv

echo "Activating virtual environment..."
source venv/bin/activate

echo "Installing dependencies..."
python -m pip install --upgrade pip
python -m pip install -r predai/requirements.txt

echo ""
echo "Setup complete!"
echo "To activate the environment manually, run:"
echo "  source venv/bin/activate.csh"
echo ""
echo "To run tests, execute:"
echo "  ./run_all"
```
Copilot AI (Dec 30, 2025):
This file has a `.csh` extension, but its contents are plain bash/sh: in particular, `source venv/bin/activate` loads the bash activation script, which fails under csh/tcsh. Either rename the file to `setup.sh` and add a proper shebang (`#!/bin/bash`), or rewrite it in csh syntax and source `venv/bin/activate.csh` instead.
```sh
python -m unittest test_predai.py -v

# Check test result
if [ $? -eq 0 ]; then
    echo ""
    echo "================================"
    echo "✅ All tests passed successfully!"
else
    echo ""
    echo "================================"
    echo "❌ Some tests failed"
    exit 1
fi
```
Copilot AI (Dec 30, 2025):
The `set -e` flag on line 4 causes the script to exit immediately when any command fails, so if the `unittest` command on line 21 fails, the script exits before this conditional block is reached and the failure branch (lines 24-33) never executes. Either remove `set -e`, or run the tests as the `if` condition itself (`if python -m unittest test_predai.py -v; then ...`), which both suppresses the immediate exit and checks the status directly. Note that appending `|| true` would not help: it masks the failure, so `$?` would always be 0.
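The failure mode is easy to reproduce with a minimal sketch (two hypothetical three-line scripts run through `sh` via `subprocess`, not the actual `run_all` contents):

```python
import subprocess

# Miniature of run_all's bug: with `set -e`, a failing command (`false`)
# aborts the script before the status check is ever reached.
script_with_set_e = """
set -e
false
if [ $? -eq 0 ]; then echo PASS; else echo FAIL; fi
"""
r1 = subprocess.run(["sh", "-c", script_with_set_e],
                    capture_output=True, text=True)
print(repr(r1.stdout.strip()))  # '' - the script exited at `false`

# Using the command directly as the `if` condition sidesteps `set -e`,
# so the failure branch runs as intended.
script_fixed = """
set -e
if false; then echo PASS; else echo FAIL; fi
"""
r2 = subprocess.run(["sh", "-c", script_fixed],
                    capture_output=True, text=True)
print(r2.stdout.strip())  # FAIL
```

The same reasoning applies to the workflow step discussed below: a command used as an `if` condition is exempt from `set -e`'s immediate exit.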
```yaml
    python -m unittest test_predai.py -v
- name: Test summary
  if: always()
  run: |
    if [ $? -eq 0 ]; then
```
Copilot AI (Dec 30, 2025):
Similar to the issue in the `run_all` script, this conditional references `$?` in a separate `run` step, where it no longer carries the exit status of the `unittest` command: each workflow step runs in a fresh shell, so `$?` starts at 0 regardless of whether the tests passed.
```diff
     python -m unittest test_predai.py -v
 - name: Test summary
   if: always()
   run: |
-    if [ $? -eq 0 ]; then
+    if python -m unittest test_predai.py -v; then
```
```diff
@@ -0,0 +1,258 @@
+import unittest
+from unittest.mock import AsyncMock, MagicMock, patch
```
Copilot AI (Dec 30, 2025):
Import of 'MagicMock' is not used.
Import of 'patch' is not used.
```diff
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock
```
```python
# Add the rootfs directory to the path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'predai', 'rootfs'))

from predai import HAInterface, Prophet, timestr_to_datetime
```
Copilot AI (Dec 30, 2025):
Import of 'timestr_to_datetime' is not used.
```diff
-from predai import HAInterface, Prophet, timestr_to_datetime
+from predai import HAInterface, Prophet
```
```python
            kwargs['weights_only'] = False
        return _original_torch_load(*args, **kwargs)
    torch.load = _patched_torch_load
except (ImportError, AttributeError):
```
Copilot AI (Dec 30, 2025):
'except' clause does nothing but pass and there is no explanatory comment.
```diff
 except (ImportError, AttributeError):
+    # Torch is not installed or does not expose torch.load as expected; continue without patching.
```
```diff
@@ -1,4 +1,4 @@
-neuralprophet==0.9.0
+neuralprophet
```
Copilot AI (Dec 30, 2025):
The change from a pinned neuralprophet==0.9.0 to an unpinned neuralprophet introduces a supply-chain risk: future installs will always pull the latest version from PyPI, which could be compromised or contain breaking changes and will run with the same privileges as your test/build environment. An attacker who gains control over the neuralprophet package (or its latest release) could execute arbitrary code during pip install -r predai/requirements.txt in CI or developer environments. To reduce this risk, pin neuralprophet to a specific version (or a carefully managed range) and update it intentionally after review.
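A sketch of the pinned file (0.9.0 is simply the previously pinned version, shown as an example; pin whichever release has actually been tested and reviewed):

```
# predai/requirements.txt
neuralprophet==0.9.0
```

Updating the pin then becomes a deliberate, reviewable change rather than an implicit upgrade on every CI run.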
No description provided.