Changes from all commits (67 commits)
daf2054
chore: correct `newRelease` on different fork (#79)
spotandjake Apr 24, 2025
14e4559
feat: node 20.19.1 patch (#83)
dashrath-ky May 15, 2025
377c55d
feat: node 22.15.1 patch (#82)
dashrath-ky May 15, 2025
b8ab3ba
feat: add shas for node v20.19.1 and v22.15.1
robertsLando May 15, 2025
f7d12b7
Release 3.5.22
robertsLando May 15, 2025
4218a96
fix: correct syntax in release tag determination logic
robertsLando May 16, 2025
7275f0d
fix: update missing SHAS
robertsLando May 17, 2025
585635b
Release 3.5.23
robertsLando May 17, 2025
95e9d04
feat: add v20.19.2 patch (#84)
github-actions[bot] May 17, 2025
772a7a9
feat: add v22.16.0 patch (#85)
github-actions[bot] May 23, 2025
65f9590
feat: add v20.19.3 patch (#90)
github-actions[bot] Jun 24, 2025
33afe2a
feat: add v22.17.0 patch (#91)
github-actions[bot] Jun 25, 2025
e56a438
feat: add v20.19.4 patch (#92)
github-actions[bot] Jul 16, 2025
f82c20f
feat: add v22.17.1 patch (#93)
github-actions[bot] Jul 16, 2025
5baf78e
fix: improved error when downloading from GitHub fails
pilotkid Jul 17, 2025
6c56cdf
fix: update ARMv7 compilation flags for improved floating-point suppo…
robertsLando Jul 18, 2025
a4535d1
feat: update expected SHAs for Node.js versions 20.19.4 and 22.17.1
robertsLando Jul 19, 2025
d231c84
Release 3.5.24
robertsLando Jul 19, 2025
3fb4350
feat: enhance patch application process with AI conflict resolution
robertsLando Aug 28, 2025
9f7460c
fix: ensure error handling in patch creation step
robertsLando Aug 28, 2025
081dc1a
refactor: streamline OpenAI API call and remove manual resolution fil…
robertsLando Aug 28, 2025
2651325
fix: improve error handling in OpenAI API call and ensure proper resp…
robertsLando Aug 28, 2025
9650436
fix: streamline output handling for conflict resolution in GitHub Act…
robertsLando Aug 28, 2025
3c378db
fix: add prompt logging for OpenAI API call in conflict resolution
robertsLando Aug 28, 2025
5b53026
fix: enhance conflict resolution by improving hunk parsing and contex…
robertsLando Aug 28, 2025
fd0e2f4
fix: improve error logging for OpenAI API call by removing redundant …
robertsLando Aug 28, 2025
f88b149
fix: update system message to include JS expertise for OpenAI API con…
robertsLando Aug 28, 2025
11940cd
fix: reduce max_tokens for OpenAI API call to optimize response size
robertsLando Aug 28, 2025
2080163
fix: update OpenAI API call to use new endpoint and model, enhance pr…
robertsLando Aug 28, 2025
4f720b5
fix: improve error logging for OpenAI API call by removing stderr red…
robertsLando Aug 28, 2025
799fbc1
fix: update OpenAI API call to use responses API
robertsLando Aug 28, 2025
23ba5a2
fix: update response handling in OpenAI API call to correctly access …
robertsLando Aug 28, 2025
e7241d0
fix: add resolution output handling for patch conflicts in Node.js wo…
robertsLando Aug 28, 2025
206ea93
fix: remove unnecessary markdown formatting for resolution output in …
robertsLando Aug 28, 2025
7f35fce
fix: streamline PR creation message by removing redundant patch statu…
robertsLando Aug 28, 2025
c41ab07
fix: add logging for resolved content in OpenAI API response
robertsLando Aug 28, 2025
45e2576
fix: enhance logging format for resolved content in OpenAI API response
robertsLando Aug 29, 2025
7898d82
fix: streamline resolution output handling for PR creation
robertsLando Aug 29, 2025
03677a1
feat: add v22.19.0 patch (#106)
github-actions[bot] Aug 29, 2025
78d158c
feat: add v20.19.5 patch (#109)
github-actions[bot] Sep 4, 2025
a4bdb16
feat: node 24 support and attempt some Node 22 fixes (by @faulpeltz) …
faulpeltz Sep 17, 2025
38a22f9
fix: macOS x64 build timeout by using Intel runners and 4-core paral…
nrranjithnr Sep 25, 2025
65f6dea
feat: add support for Node.js versions 20.19.5, 22.19.0, and 24.8.0 i…
robertsLando Sep 26, 2025
123f4d0
Release 3.5.25
robertsLando Sep 26, 2025
39131f5
feat: add v24.9.0 patch (#115)
github-actions[bot] Sep 29, 2025
08d19cd
fix: bump tar-fs from 2.1.1 to 3.1.1 to fix security vulnerabilities …
Copilot Oct 2, 2025
3128ce5
Release 3.5.26
robertsLando Oct 3, 2025
d3d6a81
feat: allow linuxstatic builds for ppc64 architecture (#118)
nrranjithnr Oct 6, 2025
39afdc1
3.5.27
robertsLando Oct 6, 2025
370fbdb
Release 3.5.28
robertsLando Oct 6, 2025
2f66258
chore: switch to .release-it.json configuration (#119)
robertsLando Oct 7, 2025
bd96eb5
build: use jammy to build linux.cross (#123) by @faulpeltz
faulpeltz Oct 13, 2025
fc9d107
feat: add v22.20.0 patch (#114)
github-actions[bot] Oct 14, 2025
4ae27c5
feat: add v24.10.0 patch (#120)
github-actions[bot] Oct 14, 2025
358c7b3
feat: update expected SHAs for Node.js versions 22.20.0 and 24.10.0
robertsLando Oct 15, 2025
d1da523
Release 3.5.29
robertsLando Oct 15, 2025
cda5244
feat: enhance shas parsing logic in update workflow
robertsLando Oct 20, 2025
79fd995
feat: add v22.21.0 patch (#125)
github-actions[bot] Oct 21, 2025
61f032f
feat: add v22.21.1 patch (#126)
github-actions[bot] Oct 29, 2025
6f8ffd2
feat: add v24.11.0 patch (#127)
github-actions[bot] Oct 29, 2025
54188cb
feat: add workflow_call inputs for expected sha256sums
robertsLando Oct 29, 2025
ba6149d
fix: update expected shas (#128)
github-actions[bot] Oct 29, 2025
24fd7cf
Release 3.5.30
robertsLando Oct 29, 2025
7a9180c
feat: add v24.11.1 patch (#129)
github-actions[bot] Nov 12, 2025
330c578
Merge tag 'v3.5.30' into spotandjake/3.5.30
spotandjake Nov 13, 2025
0808cce
feat: add v20.19.6 patch (#133)
github-actions[bot] Nov 26, 2025
0b7da24
Merge branch 'yao-pkg:main' into spotandjake/3.5.30
spotandjake Nov 26, 2025
280 changes: 280 additions & 0 deletions .github/scripts/openai_resolver.py
@@ -0,0 +1,280 @@
#!/usr/bin/env python3
import sys
import json
import os
import re
from urllib.request import Request, urlopen


def parse_reject_file(reject_content):
"""Parse reject file to extract hunk information"""
hunks = []
lines = reject_content.split("\n")

current_hunk = None
for line in lines:
# Match hunk header: @@ -start,count +start,count @@
hunk_match = re.match(r"@@\s*-(\d+)(?:,(\d+))?\s*\+(\d+)(?:,(\d+))?\s*@@", line)
if hunk_match:
old_start = int(hunk_match.group(1))
old_count = int(hunk_match.group(2)) if hunk_match.group(2) else 1
new_start = int(hunk_match.group(3))
new_count = int(hunk_match.group(4)) if hunk_match.group(4) else 1

current_hunk = {
"old_start": old_start,
"old_count": old_count,
"new_start": new_start,
"new_count": new_count,
"lines": [],
}
hunks.append(current_hunk)
elif current_hunk is not None and (
line.startswith(" ") or line.startswith("-") or line.startswith("+")
):
current_hunk["lines"].append(line)

return hunks


def extract_file_context(file_content, hunks, context_lines=5):
"""Extract relevant sections from file based on hunks"""
if not file_content:
return []

file_lines = file_content.split("\n")
extracted_sections = []

for hunk in hunks:
# Calculate the range of lines to extract with extra context
start_line = max(
0, hunk["old_start"] - context_lines - 1
) # -1 for 0-based indexing
end_line = min(
len(file_lines), hunk["old_start"] + hunk["old_count"] + context_lines - 1
)

# Extract the section
section_lines = file_lines[start_line:end_line]
section = {
"start_line_num": start_line + 1, # Convert back to 1-based for display
"end_line_num": end_line,
"content": "\n".join(section_lines),
"hunk": hunk,
}
extracted_sections.append(section)

return extracted_sections


def apply_fixed_sections(original_content, fixed_sections):
"""Apply fixed sections back to the original file"""
if not original_content:
return fixed_sections[0]["content"] if fixed_sections else ""

file_lines = original_content.split("\n")

# Sort sections by start line (descending) to apply from bottom to top
sorted_sections = sorted(
fixed_sections, key=lambda x: x["start_line_num"], reverse=True
)

for section in sorted_sections:
start_idx = section["start_line_num"] - 1 # Convert to 0-based indexing
end_idx = section["end_line_num"]

# Replace the section
fixed_lines = section["content"].split("\n")
file_lines[start_idx:end_idx] = fixed_lines

return "\n".join(file_lines)


def call_openai_api(prompt, api_key, model="gpt-4.1-mini"):
"""Call OpenAI Responses API to resolve patch conflicts"""
url = "https://api.openai.com/v1/responses"

headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

data = {
"model": model,
"input": f"""You are an expert developer helping to resolve Git patch conflicts.

{prompt}

Instructions:
- Return ONLY the corrected code section
- Do not add explanations or markdown formatting
- Preserve exact line structure and formatting
- Apply the changes from the rejected hunk to the current file section

Corrected code section:""",
"temperature": 0,
}

try:
req = Request(url, data=json.dumps(data).encode("utf-8"), headers=headers)
with urlopen(req) as response:
result = json.loads(response.read().decode("utf-8"))
if "output" in result and len(result["output"]) > 0:
response_text = result["output"][0]["content"][0]["text"].strip()
# Clean up any potential formatting artifacts
response_text = response_text.replace("```", "").strip()
# Note: log only the content; the filename is not in scope here.
print(f"RESOLVED CONTENT:\n```\n{response_text}\n```")
return response_text
else:
return None
except Exception as e:
print(f"OpenAI API error: {e}")
return None


def resolve_conflict(reject_file, original_file, api_key):
"""Resolve a single conflict using OpenAI with context extraction"""

# Read reject file content
try:
with open(reject_file, "r", encoding="utf-8", errors="ignore") as f:
reject_content = f.read()
except Exception as e:
print(f"Error reading reject file {reject_file}: {e}", file=sys.stderr)
return False

# Read current file content
current_content = ""
if os.path.exists(original_file):
try:
with open(original_file, "r", encoding="utf-8", errors="ignore") as f:
current_content = f.read()
except Exception as e:
print(f"Error reading original file {original_file}: {e}", file=sys.stderr)

# Parse reject file to extract hunks
hunks = parse_reject_file(reject_content)
if not hunks:
print(f"No valid hunks found in {reject_file}", file=sys.stderr)
return False

# Extract relevant file sections
file_sections = extract_file_context(current_content, hunks)

fixed_sections = []

# Process each section
for i, section in enumerate(file_sections):
hunk = section["hunk"]

# Create prompt for this specific section
prompt = f"""I have a Git patch that failed to apply. Here's the specific section that needs to be fixed:

REJECTED PATCH HUNK:
```
@@ -{hunk['old_start']},{hunk['old_count']} +{hunk['new_start']},{hunk['new_count']} @@
{chr(10).join(hunk['lines'])}
```

CURRENT FILE SECTION (lines {section['start_line_num']}-{section['end_line_num']}):
```
{section['content']}
```

Please apply the intended changes from the rejected hunk to this file section. Return ONLY the corrected file section content, preserving the exact line structure and formatting. Do not add explanations or markdown formatting."""

print(
f"Processing section {i+1}/{len(file_sections)} for {original_file}...",
file=sys.stderr,
)

print(f"Prompt for OpenAI API:\n{prompt}")

resolved_content = call_openai_api(prompt, api_key)

if resolved_content:
fixed_sections.append(
{
"start_line_num": section["start_line_num"],
"end_line_num": section["end_line_num"],
"content": resolved_content,
}
)
else:
print(f"❌ Failed to get resolution from OpenAI for section {i+1}")
return False

# Apply all fixed sections back to the original file
try:
final_content = apply_fixed_sections(current_content, fixed_sections)
with open(original_file, "w", encoding="utf-8") as f:
f.write(final_content)
print(f"✅ Successfully resolved {original_file}")
return True
except Exception as e:
print(
f"Error writing resolved content to {original_file}: {e}", file=sys.stderr
)
return False


if __name__ == "__main__":
if len(sys.argv) < 3:
print(
"Usage: openai_resolver.py <reject_files_dir> <api_key>",
file=sys.stderr,
)
sys.exit(1)

reject_dir = sys.argv[1]
api_key = sys.argv[2]

# Find all .rej files
reject_files = []
for root, dirs, files in os.walk(reject_dir):
for file in files:
if file.endswith(".rej"):
reject_files.append(os.path.join(root, file))

if not reject_files:
print("No reject files found")
sys.exit(0)

conflicts_resolved = 0
total_conflicts = len(reject_files)
failed_files = []

print(f"Found {total_conflicts} reject files to process")

for reject_file in reject_files:
original_file = reject_file[:-4] # Remove .rej extension
print(f"Processing: {reject_file} -> {original_file}")

if resolve_conflict(reject_file, original_file, api_key):
conflicts_resolved += 1
else:
print(f"Failed to resolve {original_file}")
failed_files.append(original_file)
break

# Clean up reject file
try:
os.remove(reject_file)
except OSError:
pass

# Output results for GitHub Actions
print(f"CONFLICTS_RESOLVED={conflicts_resolved}")
print(f"TOTAL_CONFLICTS={total_conflicts}")
print(f"HAS_UNRESOLVED={len(failed_files) > 0}")

if failed_files:
print(f"FAILED_FILES={' '.join(failed_files)}")

print(
f"Resolution summary: {conflicts_resolved}/{total_conflicts} conflicts resolved"
)

# Exit with appropriate code
sys.exit(0 if conflicts_resolved == total_conflicts else 1)
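As a sanity check on the script above, the hunk-header regex (the same pattern `parse_reject_file` uses) can be exercised in isolation. A minimal sketch; the sample hunk strings are invented for illustration and the `parse_header` helper is not part of the PR:

```python
import re

# Same hunk-header pattern as parse_reject_file; per unified-diff
# convention, an omitted count defaults to 1.
HUNK_RE = re.compile(r"@@\s*-(\d+)(?:,(\d+))?\s*\+(\d+)(?:,(\d+))?\s*@@")

def parse_header(line):
    # Return the parsed hunk ranges, or None if the line is not a hunk header.
    m = HUNK_RE.match(line)
    if not m:
        return None
    return {
        "old_start": int(m.group(1)),
        "old_count": int(m.group(2)) if m.group(2) else 1,
        "new_start": int(m.group(3)),
        "new_count": int(m.group(4)) if m.group(4) else 1,
    }

print(parse_header("@@ -42,3 +42,5 @@"))
print(parse_header("@@ -7 +7 @@"))  # omitted counts default to 1
```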
38 changes: 22 additions & 16 deletions .github/workflows/build-all.yml
@@ -26,6 +26,8 @@ jobs:
actions: write
runs-on: ubuntu-latest
needs: [build-alpine, build-linux, build-linuxstatic, build-macos, build-windows]
outputs:
shas: ${{ steps.generate_sha_file.outputs.shas_txt }}
steps:
- run: echo Is making new release? '${{ inputs.newRelease }}'
- name: Checkout
@@ -41,11 +43,15 @@
id: get_previous_release
run: |
PREV_RELEASE=$(gh release list --limit 1 --exclude-drafts --exclude-pre-releases | cut -f3)
echo "Using release ${PREV_RELEASE}"
echo "prev_release=${PREV_RELEASE}" >> $GITHUB_OUTPUT
if [ -z "$PREV_RELEASE" ]; then
echo "No Previous Release"
else
echo "Using release ${PREV_RELEASE}"
echo "prev_release=${PREV_RELEASE}" >> $GITHUB_OUTPUT
fi

- name: Get previously released artifacts
if: ${{ inputs.newRelease }}
if: ${{ inputs.newRelease && steps.get_previous_release.outputs.prev_release != '' }}
env:
GITHUB_TOKEN: ${{ github.token }}
run: |
@@ -55,7 +61,7 @@
run: mkdir artifact-binaries && mkdir artifact-shas

- name: Copy previous release artifacts to artifact folders
if: ${{ inputs.newRelease }}
if: ${{ inputs.newRelease && steps.get_previous_release.outputs.prev_release != '' }}
run: |
pushd release-artifacts
pwd
@@ -88,11 +94,14 @@
echo "### SHAs of produced and carried forward binaries by this workflow" >> $GITHUB_STEP_SUMMARY
echo " - $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
sha_output_file=${GITHUB_SHA}_${RANDOM}_shas.txt
echo "sha_output_file=${sha_output_file}" >> $GITHUB_OUTPUT
cat artifact-shas/*.sha256sum > ${sha_output_file}
cat artifact-shas/*.sha256sum >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY

# Store SHAs in output variable using multiline format
echo "shas_txt<<EOF" >> $GITHUB_OUTPUT
cat artifact-shas/*.sha256sum >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT

# Get a random string of characters to represent the EOF delimiter
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
echo "sha_summary<<$EOF" >> $GITHUB_ENV
@@ -101,7 +110,7 @@

- name: Determine release tag to upload assets to and draft type
run: |
if [[ "${{ inputs.newRelease }}" == "false" ]]; then
if [[ "${{ inputs.newRelease && steps.get_previous_release.outputs.prev_release != '' }}" == "false" ]]; then
echo "use_release_tag=${{ steps.get_previous_release.outputs.prev_release }}" >> $GITHUB_ENV
echo "create_draft=false" >> $GITHUB_ENV
else
@@ -126,11 +135,8 @@
- name: Add release url to summary
run: echo "Release created/updated at ${{ steps.create_release.outputs.url }}" >> $GITHUB_STEP_SUMMARY

- name: Create PR for expected shas update
run: |
# trigger the update-expected.yml workflow with the latest shas
curl -X POST \
-H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/${{ github.repository }}/actions/workflows/update-expected.yml/dispatches \
-d '{"ref":"main","inputs":{"shas":"$(cat ${{ steps.generate_sha_file.outputs.sha_output_file }})"}}'
update-expected-shas:
needs: collect-artifacts
uses: ./.github/workflows/update-expected.yml
with:
shas: ${{ needs.collect-artifacts.outputs.shas }}
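The `shas_txt<<EOF` block in the `collect-artifacts` step uses GitHub Actions' multiline-output syntax (`name<<DELIMITER` … `DELIMITER` appended to `$GITHUB_OUTPUT`). A minimal sketch of how that format round-trips; the file path, SHA values, and helper names below are illustrative, not from the workflow:

```python
import tempfile

def write_multiline_output(path, name, value, delimiter="EOF"):
    # Append a value using GITHUB_OUTPUT's name<<DELIMITER ... DELIMITER format.
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{name}<<{delimiter}\n{value}\n{delimiter}\n")

def read_multiline_output(path, name, delimiter="EOF"):
    # Simplified reader for the sketch: locate the block and return its body.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    header = f"{name}<<{delimiter}\n"
    start = text.index(header) + len(header)
    end = text.index(f"\n{delimiter}\n", start)
    return text[start:end]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    out_path = tmp.name
write_multiline_output(out_path, "shas_txt", "abc123  node-v22\ndef456  node-v24")
print(read_multiline_output(out_path, "shas_txt"))
```

Note the workflow's later `sha_summary` step generates a random delimiter (`dd if=/dev/urandom | base64`) instead of a fixed `EOF`, which avoids breakage if the value itself ever contains the delimiter line.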