
DO NOT MERGE: Output validator plus across swap (for updated types only)#1649

Draft
0xDEnYO wants to merge 110 commits into main from output-validator-plus-across-swap

Conversation

@0xDEnYO
Contributor

@0xDEnYO 0xDEnYO commented Mar 13, 2026

Which Jira task belongs to this PR?

Why did I implement it this way?

Checklist before requesting a review

Checklist for reviewer (DO NOT DEPLOY any contracts BEFORE CHECKING THIS!!!)

  • I have checked that any arbitrary calls to external contracts are validated and/or restricted
  • I have checked that any privileged calls (i.e. storage modifications) are validated and/or restricted
  • I have ensured that any new contracts have had AT A MINIMUM 1 preliminary audit conducted by <company/auditor>

0xDEnYO and others added 30 commits August 20, 2025 14:45
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@lifi-action-bot lifi-action-bot marked this pull request as draft March 13, 2026 01:03
@coderabbitai
Contributor

coderabbitai Bot commented Mar 13, 2026

Walkthrough

Adds AcrossV4SwapFacet and OutputValidator with deployment scripts, configs, interfaces, tests, demo tooling, and helper updates; introduces backendSigner in global config, moves NEAR intents config into global.json, hardens deployment helpers, and adds extensive EIP-712 test utilities and signature handling.

Changes

Cohort / File(s) Summary
Facet & Periphery Code
src/Facets/AcrossV4SwapFacet.sol, src/Periphery/OutputValidator.sol, src/Errors/GenericErrors.sol
New AcrossV4SwapFacet (multi-target bridging, EIP-712 signature gating, slippage handling) and OutputValidator contracts added; InvalidSignature error introduced.
Interfaces
src/Interfaces/IAcrossSpokePoolV4.sol, src/Interfaces/ISpokePoolPeriphery.sol, src/Interfaces/ISponsoredOFTSrcPeriphery.sol, src/Interfaces/ISponsoredCCTPSrcPeriphery.sol
Added new interfaces and structs to model Across V4 periphery/CCTP/sponsored flows and deposit param types used by the facet.
Deployment Scripts & Resources
script/deploy/facets/DeployAcrossV4SwapFacet.s.sol, script/deploy/facets/DeployOutputValidator.s.sol, script/deploy/facets/UpdateAcrossV4SwapFacet.s.sol, script/deploy/zksync/..., script/deploy/resources/deployRequirements.json, script/deploy/deploySingleContract.sh
New deploy/update scripts for AcrossV4SwapFacet and OutputValidator (including zksync variants); deployRequirements updated; deploySingleContract.sh call-site now passes CREATE3 factory address.
Configs & Deployments
config/acrossV4Swap.json, config/global.json, config/nearintents.json (deleted), .gitignore, deployments/*.{json}, deployments/_deployments_log_file.json
Added per-network Across config, added backendSigner to global.json, removed nearintents.json, unignored config/whitelist.staging.json, registered OutputValidator/AcrossV4SwapFacet and RelayerCelerIM periphery in deployment JSONs, and added OutputValidator entry to deployments log.
Helper Scripts & CLI Demo
script/helperFunctions.sh, script/demoScripts/demoAcrossV4Swap.ts, script/tasks/temp/checkPauserWalletPerNetwork.ts
Hardened JSON merges and diamond update logic, adjusted getContractAddressFromSalt usage, improved doesAddressContainBytecode checks; added TypeScript demo for AcrossV4 calldata, optional broadcasting; minor formatting changes.
Test Utilities & EIP-712 Helpers
test/solidity/utils/TestEIP712.sol, test/solidity/utils/TestAcrossV4SwapBackendSig.sol, test/solidity/utils/TestNearIntentsBackendSig.sol
New EIP-712 test helpers and backend-signature utilities to generate and sign Across and NEAR payloads for tests.
Tests
test/solidity/Facets/Across/..., test/solidity/Periphery/OutputValidator.t.sol, test/solidity/Facets/NEARIntentsFacet.t.sol
Extensive new test suites exercising AcrossV4SwapFacet (main, sponsored, refund flows), OutputValidator tests, and NEARIntentsFacet test refactor to use shared backend-sig helpers.
Docs & Rules
.cursor/rules/100-solidity-basics.mdc, .cursor/rules/102-facets.mdc, docs/AcrossV4SwapFacet.md, docs/OutputValidator.md
Added licensing prohibition against mirroring legacy Unlicense headers for new .sol files; extended facet guidance for opaque calldata and backend signature requirements; added comprehensive docs for AcrossV4SwapFacet and OutputValidator.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested labels

AuditRequired

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning — The pull request description is a template with unchecked checkboxes and empty required fields (Linear/Jira task, implementation rationale). Critical reviewer checklist items are blank, and no actual context about why changes were made or their implications is provided. Resolution: complete the description by filling in the Linear/Jira task and the rationale for the implementation approach, check all relevant checkboxes, and provide specific details about arbitrary call validation, privileged operations, and audit status before reviewers proceed.
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check ✅ Passed — The title mentions 'Output validator plus across swap', which aligns with the major additions (OutputValidator.sol, AcrossV4SwapFacet.sol, etc.), but the 'DO NOT MERGE' and '(for updated types only)' qualifiers suggest incomplete or draft status.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment thread src/Facets/AcrossV4SwapFacet.sol Dismissed

// If this is the positive-slippage path, adjust the amounts after decoding but before validation.
if (_preSwapAmount != 0) {
uint256 outputAmountMultiplier = (swapAndDepositData

Check warning

Code scanning / Olympix Integrated Security

Performing integer division before multiplication can lead to unnecessary loss of precision. Please note: this detector flags the division, the associated multiplication may be remote! Check function and variable chains to confirm. For more information, visit: http://detectors.olympixdevsectools.com/article/web3-vulnerability/faulty-div Medium

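The flagged pattern is easy to show with plain integer arithmetic. Below is a hedged Python sketch with made-up numbers, not the facet's actual Solidity:

```python
# Illustrative only: integer division truncates, so dividing before
# multiplying can zero out (or understate) the result.
BPS = 10_000            # basis-points denominator, a common on-chain convention
amount = 1_333
rate = 9_995            # 99.95% expressed in basis points

div_first = (amount // BPS) * rate   # 0 * 9995 = 0 (precision lost entirely)
mul_first = (amount * rate) // BPS   # 13_323_335 // 10_000 = 1332

print(div_first, mul_first)  # → 0 1332
```

Multiplying before dividing keeps the full product until the single final truncation, which is usually the safe ordering as long as the intermediate product cannot overflow (in Solidity 0.8+ an overflow would revert rather than wrap).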
Comment thread src/Facets/AcrossV4SwapFacet.sol Dismissed
bool isNative = LibAsset.isNativeAsset(_bridgeData.sendingAssetId);
if (!isNative) {
address decodedInputToken = address(
uint160(uint256(params.inputToken))

Check notice

Code scanning / Olympix Integrated Security

Performing a narrowing downcast may result in silent overflow due to bit truncation. For more information, visit: http://detectors.olympixdevsectools.com/article/web3-vulnerability/unsafe-downcast Low

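The downcast in question, address(uint160(uint256(params.inputToken))), keeps only the low 160 bits of the bytes32 word. A hedged Python sketch of the truncation (the address constant is borrowed from this page purely for illustration):

```python
# Emulate Solidity's uint160 downcast: keep only the low 160 bits.
UINT160_MASK = (1 << 160) - 1

clean = 0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2   # zero-padded bytes32 word
dirty = clean | (0xDEADBEEF << 160)                  # junk in the high 96 bits

assert clean & UINT160_MASK == clean   # well-formed word decodes intact
assert dirty & UINT160_MASK == clean   # high bits dropped silently, no error
```

If the high bytes can be meaningful (e.g. a non-EVM token identifier), a guard that reverts when `word >> 160 != 0` before casting avoids the silent loss.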
Comment thread src/Facets/AcrossV4SwapFacet.sol Dismissed
@lifi-action-bot
Collaborator

Test Coverage Report

Line Coverage: 87.44% (3168 / 3623 lines)
Function Coverage: 90.67% (496 / 547 functions)
Branch Coverage: 67.82% (546 / 805 branches)
Test coverage (87.44%) is above min threshold (83%). Check passed.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deployments/arbitrum.diamond.staging.json (1)

208-223: ⚠️ Potential issue | 🟡 Minor

Duplicate facet key detected.

The address 0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2 appears twice in the Facets map (lines 208-211 and 220-223), both with AcrossV4SwapFacet. JSON parsers will silently keep only the last occurrence, which could lead to unexpected behavior during deployment tooling.

Remove one of the duplicate entries.

🔧 Suggested fix

Remove lines 220-223:

       "0xBdE21104479e1373a20F487D3f6ffDED62574FA7": {
         "Name": "PolymerCCTPFacet",
         "Version": "2.0.0"
-      },
-      "0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2": {
-        "Name": "AcrossV4SwapFacet",
-        "Version": "1.0.0"
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deployments/arbitrum.diamond.staging.json` around lines 208 - 223, The Facets
map contains a duplicate key "0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2" for
AcrossV4SwapFacet; remove the redundant object for that address so the Facets
map only contains a single entry for
"0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2" (keep one AcrossV4SwapFacet record
and delete the other) to avoid silent overrides by the JSON parser.
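The "parsers silently keep only the last occurrence" behavior, and one way tooling could surface it instead, sketched in Python (hedged; the repo's deployment scripts are bash/jq, so this is only an illustration of the failure mode):

```python
import json

# Default behavior: the later duplicate silently replaces the earlier one.
raw = (
    '{"0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2": {"Name": "AcrossV4SwapFacet", "Version": "1.0.0"},'
    ' "0xC31f575546A12Bc9dEf107CE19a242cd351BaDe2": {"Name": "AcrossV4SwapFacet", "Version": "1.0.0"}}'
)
assert len(json.loads(raw)) == 1  # only one entry survives, with no warning

# Tooling can reject duplicates instead of masking them:
def reject_duplicates(pairs):
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError(f"duplicate facet key: {key}")
        obj[key] = value
    return obj

try:
    json.loads(raw, object_pairs_hook=reject_duplicates)
except ValueError as err:
    print(err)
```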
🧹 Nitpick comments (6)
script/deploy/zksync/utils/ScriptBase.sol (1)

98-98: Keep new parameter naming consistent with repo convention.

Line 98 should use an underscore-prefixed parameter name to match the Solidity naming rule used across scripts/contracts.

As per coding guidelines: "Function parameters must use leading underscore prefix (e.g., _amount)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/deploy/zksync/utils/ScriptBase.sol` at line 98, The parameter name
allowZeroAddress violates the repo's leading-underscore convention; rename the
parameter to _allowZeroAddress (update the function signature where
allowZeroAddress appears in ScriptBase.sol) and replace all references inside
the function body and any callers to use _allowZeroAddress so naming is
consistent with the rule used across contracts.
docs/AcrossV4SwapFacet.md (1)

66-72: Minor documentation correction: SwapData is an array.

The text says "a SwapData _swapData parameter" but the actual signature shows LibSwap.SwapData[] calldata _swapData (an array).

Suggested fix
 ## Swap Data
 
-Some methods accept a `SwapData _swapData` parameter.
+Some methods accept a `SwapData[] _swapData` parameter.
 
-Swapping is performed by a swap-specific library that expects an array of calldata that can be run on various DEXs (i.e. Uniswap) to make one or multiple swaps before performing another action.
+Swapping is performed by a swap-specific library that expects an array of swap instructions that can be run on various DEXs (i.e. Uniswap) to make one or multiple swaps before performing another action.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/AcrossV4SwapFacet.md` around lines 66 - 72, Update the docs to correctly
state that SwapData is passed as an array: change references that say "a
`SwapData _swapData` parameter" to indicate the parameter is `LibSwap.SwapData[]
calldata _swapData` (an array of SwapData) and adjust surrounding wording to
reflect multiple swap entries where applicable (refer to SwapData and
LibSwap.SwapData[] calldata _swapData in the function signatures).
src/Interfaces/ISpokePoolPeriphery.sol (1)

4-7: Add NatSpec reference to the upstream Across Protocol contract.

As per coding guidelines for interfaces that shadow external protocols: add a short NatSpec note with a link or identifier referencing the original Across Protocol SpokePoolPeriphery contract.

Suggested addition
 /// `@title` ISpokePoolPeriphery
 /// `@notice` Interface for interacting with Across Protocol SpokePoolPeriphery
+/// `@dev` Based on Across Protocol's SpokePoolPeriphery contract
+///      https://github.com/across-protocol/contracts/blob/master/contracts/SpokePoolPeriphery.sol
 /// `@author` LI.FI (https://li.fi)
 /// `@custom`:version 1.0.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Interfaces/ISpokePoolPeriphery.sol` around lines 4 - 7, Add a NatSpec
reference to the upstream Across Protocol SpokePoolPeriphery contract at the top
of the ISpokePoolPeriphery interface: update the contract header comments in
ISpokePoolPeriphery.sol to include a short `@notice/`@dev or `@custom`:see tag with
the original contract name and a URL or identifier pointing to the upstream
Across Protocol SpokePoolPeriphery (e.g., repository or Etherscan address), so
readers can trace this interface back to the source implementation.
script/deploy/facets/DeployOutputValidator.s.sol (1)

1-39: Move this deploy script out of script/deploy/facets/.

OutputValidator is not a facet, so placing its deploy script under the non-facet deploy path will keep structure consistent and easier to navigate.

As per coding guidelines: “Place deployment scripts in script/deploy/facets/ for facet deployments or script/deploy/ for other contracts.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/deploy/facets/DeployOutputValidator.s.sol` around lines 1 - 39, Move
this deploy script out of the facets folder: relocate the DeployScript contract
file (containing DeployScript, run(), getConstructorArgs(), and the
OutputValidator deployment logic) from script/deploy/facets/ to script/deploy/
so it lives with non-facet deployment scripts; update any import or relative
root path usage if necessary and ensure the contract name DeployScript and its
getConstructorArgs() still compile and reference the correct config path and
OutputValidator type after moving.
test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.SponsoredRefunds.t.sol (1)

38-38: Prefer TestBaseFacet for this facet suite.

This is a facet test, but it inherits from TestBase, so it bypasses the repo's standard facet-test hooks/overrides. Using TestBaseFacet keeps the setup aligned with the other facet suites.

As per coding guidelines "For facet tests, inherit from TestBaseFacet rather than TestBase, since standard facet functions need to be overridden".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.SponsoredRefunds.t.sol`
at line 38, The test contract AcrossV4SwapFacetSponsoredRefundsTest currently
inherits from TestBase but should inherit from TestBaseFacet to ensure
facet-specific hooks/overrides run; update the contract declaration to extend
TestBaseFacet (replace TestBase with TestBaseFacet) and ensure any required
facet setup/overrides provided by TestBaseFacet are implemented/adjusted in this
contract (preserve the contract name AcrossV4SwapFacetSponsoredRefundsTest and
its existing setup logic).
test/solidity/Periphery/OutputValidator.t.sol (1)

30-34: Unused event declaration.

The TokensWithdrawn event is declared but never emitted in any test. If it's not needed, consider removing it to reduce noise. If it's intended for future use or mirroring the contract's event, adding a comment would clarify.

Proposed fix
-    event TokensWithdrawn(
-        address assetId,
-        address payable receiver,
-        uint256 amount
-    );
+    // Note: TokensWithdrawn event from OutputValidator can be checked with vm.expectEmit if needed
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/solidity/Periphery/OutputValidator.t.sol` around lines 30 - 34, The
declared event TokensWithdrawn is never emitted or asserted in the tests; either
remove the TokensWithdrawn event declaration to eliminate dead code, or if it’s
meant to mirror the production contract or be used later, add a one-line comment
above TokensWithdrawn explaining its purpose (e.g., "kept for parity with
contract / future tests") or update the relevant test(s) to emit/assert the
event (use TokensWithdrawn in the test assertions). Ensure the chosen change is
limited to the test file where TokensWithdrawn is declared.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@script/demoScripts/demoAcrossV4Swap.ts`:
- Around line 149-153: The demo forces every bytes32 finalRecipient through
bytes32ToEvmAddress which rejects non-EVM destinations; change the logic to stop
unconditionally decoding bytes32 into an EVM address: remove or guard uses of
bytes32ToEvmAddress when constructing finalRecipient (and the similar spots
referenced around the other uses) and instead pass the raw bytes32 value through
for non-EVM paths or only decode when you know the recipient is an EVM address;
update any validation to accept a 0x-prefixed 66-byte bytes32 as-is and only
call getAddress/bytes32ToEvmAddress when a clear EVM target is required. Ensure
bytes32ToEvmAddress remains available for explicit EVM-only conversions but do
not use it as the default for sponsored finalRecipient handling.
- Around line 683-691: The send path funds and approves the wrong assets for
swapAndStart: when ENTRYPOINT === 'swapAndStart' use the swapData array entries
(ISwapData) to determine deposits and msg.value for swaps with
requiresDeposit=true instead of always using
bridgeData.sendingAssetId/minAmount; update the logic around swapData parsing
and the send/deposit calculations (references: ENTRYPOINT, swapData, ISwapData,
requiresDeposit, bridgeData.sendingAssetId, bridgeData.minAmount, msgValueWei)
so that for each swap-and-bridge flow you (1) add required native ETH value
equal to the swap input amount when the swap input is native, (2) approve and
fund ERC20 inputs based on swapData.fromAsset/ fromAmount for entries with
requiresDeposit=true, and (3) preserve existing bridge funding logic for bridge
output tokens (bridgeData.isNative refers to bridge output type) so
native-output bridges still receive msgValue for the bridge while swap inputs
are separately funded from swapData.

In `@script/helperFunctions.sh`:
- Around line 4956-4994: The merge step can overwrite existing diamond sections
with empty results if a background collector failed; before building MERGED (the
jq merge using DIAMOND_NAME, FACETS_TMP, PERIPHERY_TMP), validate the temp files
(FACETS_TMP and PERIPHERY_TMP) are present and contain valid, non-empty JSON
(use jq -e '.' and check content is not an empty object/marker) and if either
check fails, abort the refresh: log a warning/error, return non-zero and do not
write to DIAMOND_FILE (avoid falling back to creating empty Facets/Periphery);
update the block that creates MERGED and the failure branch that currently falls
back to jq -n so it instead bails out when temp validation fails.

In `@src/Interfaces/ISponsoredCCTPSrcPeriphery.sol`:
- Around line 11-38: The SponsoredCCTPQuote struct in this file is
ABI-incompatible with upstream; update the struct definition (used by
depositForBurn) to include the two missing fields in the exact order and types
used upstream by inserting uint32 destinationDex and uint8 accountCreationMode
immediately after finalToken (i.e., before executionMode and actionData) so the
field ordering matches Across' SponsoredCCTPInterface.SponsoredCCTPQuote
exactly; ensure any local code that constructs or reads this struct uses the new
layout.

In `@test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.t.sol`:
- Around line 1178-1185: The test wrongly uses
vm.expectRevert(InvalidCallData.selector) before a low-level (bool success, ) =
address(standaloneFacet).call(encoded); because vm.expectRevert cannot catch
reverts from low-level .call(); either remove the misleading
vm.expectRevert(InvalidCallData.selector) line and keep the assertFalse(success,
...) check, or replace the low-level .call with a direct/ABI-invoked call to the
facet method so vm.expectRevert(InvalidCallData.selector) can intercept the
revert; adjust references to encoded, standaloneFacet.call, vm.expectRevert, and
assertFalse accordingly.

In `@test/solidity/Periphery/OutputValidator.t.sol`:
- Around line 158-190: The test function
test_validateOutputERC20LessThanExpected currently claims "should revert" but
asserts behavior for a no-revert path; update it to match intended behavior: if
OutputValidator.validateERC20Output is supposed to silently do nothing when
actual < expected, rename the test to something like
test_validateOutputERC20LessThanExpected_NoExcessTransferred, remove the "should
revert" comment and keep the current assertions; otherwise, if it should revert,
rename to a revert-style name (e.g.,
testRevert_validateOutputERC20LessThanExpected), add vm.expectRevert before
calling OutputValidator.validateERC20Output and adjust assertions accordingly;
ensure references to the test name and the call to validateERC20Output are
updated to keep semantics clear.
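The temp-file validation suggested for script/helperFunctions.sh above could look roughly like this (a hedged bash sketch; the function name validate_json is made up, and FACETS_TMP/PERIPHERY_TMP mirror the variables named in the review):

```shell
# Reject a temp file unless it exists and holds a non-empty JSON object,
# so a failed background collector cannot wipe existing diamond sections.
validate_json() {
  local file="$1"
  if [ ! -s "$file" ]; then
    echo "ERROR: $file is missing or empty - aborting diamond refresh" >&2
    return 1
  fi
  if ! jq -e 'type == "object" and length > 0' "$file" > /dev/null 2>&1; then
    echo "ERROR: $file is not a non-empty JSON object - aborting diamond refresh" >&2
    return 1
  fi
}

# In the merge step, bail out instead of falling back to an empty object:
#   validate_json "$FACETS_TMP" && validate_json "$PERIPHERY_TMP" || return 1
```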

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 62ea22d5-1126-4d14-92fd-030225b649ef

📥 Commits

Reviewing files that changed from the base of the PR and between ecef43c and 79812a9.

📒 Files selected for processing (41)
  • .cursor/rules/100-solidity-basics.mdc
  • .cursor/rules/102-facets.mdc
  • .gitignore
  • config/acrossV4Swap.json
  • config/global.json
  • config/nearintents.json
  • deployments/_deployments_log_file.json
  • deployments/arbitrum.diamond.staging.json
  • deployments/arbitrum.staging.json
  • deployments/optimism.diamond.staging.json
  • deployments/optimism.staging.json
  • docs/AcrossV4SwapFacet.md
  • docs/OutputValidator.md
  • script/demoScripts/demoAcrossV4Swap.ts
  • script/deploy/deploySingleContract.sh
  • script/deploy/facets/DeployAcrossV4SwapFacet.s.sol
  • script/deploy/facets/DeployNEARIntentsFacet.s.sol
  • script/deploy/facets/DeployOutputValidator.s.sol
  • script/deploy/facets/UpdateAcrossV4SwapFacet.s.sol
  • script/deploy/resources/deployRequirements.json
  • script/deploy/zksync/DeployAcrossV4SwapFacet.zksync.s.sol
  • script/deploy/zksync/DeployOutputValidator.zksync.s.sol
  • script/deploy/zksync/UpdateAcrossV4SwapFacet.zksync.s.sol
  • script/deploy/zksync/utils/ScriptBase.sol
  • script/helperFunctions.sh
  • script/tasks/temp/checkPauserWalletPerNetwork.ts
  • src/Errors/GenericErrors.sol
  • src/Facets/AcrossV4SwapFacet.sol
  • src/Interfaces/IAcrossSpokePoolV4.sol
  • src/Interfaces/ISpokePoolPeriphery.sol
  • src/Interfaces/ISponsoredCCTPSrcPeriphery.sol
  • src/Interfaces/ISponsoredOFTSrcPeriphery.sol
  • src/Periphery/OutputValidator.sol
  • test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.Sponsored.t.sol
  • test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.SponsoredRefunds.t.sol
  • test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.t.sol
  • test/solidity/Facets/NEARIntentsFacet.t.sol
  • test/solidity/Periphery/OutputValidator.t.sol
  • test/solidity/utils/TestAcrossV4SwapBackendSig.sol
  • test/solidity/utils/TestEIP712.sol
  • test/solidity/utils/TestNearIntentsBackendSig.sol
💤 Files with no reviewable changes (2)
  • config/nearintents.json
  • .gitignore

Comment on lines +149 to +153
const bytes32ToEvmAddress = (b32: Hex): Address => {
// bytes32(uint256(uint160(addr))) => addr in last 20 bytes
if (b32.length !== 66) throw new Error('Invalid bytes32 length')
return getAddress(`0x${b32.slice(26)}`) as Address
}
Contributor


⚠️ Potential issue | 🟠 Major

Don't force sponsored finalRecipient through an EVM-address decoder.

These checks assume every bytes32 recipient is a zero-padded EVM address. That makes the demo reject valid sponsored quotes targeting non-EVM destinations, which is one of the new Across V4 paths this PR is adding support for.

Based on learnings "Solana bytes32 addresses should use solanaAddressToBytes32(deriveSolanaAddress(privateKey)) for contract/struct fields expecting bytes32."

Also applies to: 423-425, 568-576, 623-627
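One way to implement the guarded decoding described above, as a hedged TypeScript sketch (helper names like normalizeRecipient are illustrative and not from the PR):

```typescript
type Hex = `0x${string}`

// An EVM address packed into bytes32 has its upper 12 bytes zeroed.
const isZeroPaddedEvmAddress = (b32: Hex): boolean =>
  b32.length === 66 && /^0x0{24}[0-9a-f]{40}$/i.test(b32)

// Decode to a 20-byte address only when the word is clearly an EVM
// address; otherwise pass the raw bytes32 through untouched for
// non-EVM destinations (e.g. Solana recipients).
const normalizeRecipient = (b32: Hex): Hex =>
  isZeroPaddedEvmAddress(b32) ? (`0x${b32.slice(26)}` as Hex) : b32

console.log(
  normalizeRecipient(
    '0x000000000000000000000000c31f575546a12bc9def107ce19a242cd351bade2'
  )
) // → 0xc31f575546a12bc9def107ce19a242cd351bade2
```

Checksumming via getAddress would then be applied only on the EVM branch, keeping bytes32ToEvmAddress available for explicit EVM-only conversions as the review suggests.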


Comment on lines +683 to +691
let swapData: ISwapData[] = []
if (ENTRYPOINT === 'swapAndStart') {
const swapDataJson = arg('--swapDataJson')
if (!swapDataJson)
throw new Error('swapAndStart requires --swapDataJson <path>')
const raw = await readFile(swapDataJson, 'utf8')
// Expect direct array of SwapData-compatible objects.
swapData = JSON.parse(raw) as ISwapData[]
}
Contributor


⚠️ Potential issue | 🟠 Major

The send path is funding and approving the wrong assets for swapAndStart.

Lines 717-742 always use bridgeData.sendingAssetId/minAmount, but swapAndStart deposits come from the swapData entries with requiresDeposit=true. Lines 761-782 also forward only msgValueWei, so native direct bridges and native-input swaps are underfunded.

🐛 Suggested shape for the fix
+  const requiredDeposits =
+    ENTRYPOINT === 'swapAndStart'
+      ? swapData.filter((swap) => swap.requiresDeposit)
+      : [
+          {
+            sendingAssetId: bridgeData.sendingAssetId,
+            fromAmount: bridgeData.minAmount,
+          },
+        ]
+
+  const txValue =
+    msgValueWei +
+    requiredDeposits.reduce(
+      (sum, deposit) =>
+        getAddress(deposit.sendingAssetId) === zeroAddress
+          ? sum + deposit.fromAmount
+          : sum,
+      0n
+    )
+
-  consola.info(`value=${msgValueWei.toString()}`)
+  consola.info(`value=${txValue.toString()}`)
 
-  const isNativeAsset = getAddress(bridgeData.sendingAssetId) === zeroAddress
-
-  const tokenContract = isNativeAsset
-    ? null
-    : getContract({
-        address: bridgeData.sendingAssetId,
-        abi: erc20Abi,
-        client: { public: publicClient, wallet: walletClient },
-      })
-
-  await ensureBalance(
-    isNativeAsset ? zeroAddress : tokenContract,
-    walletAddress,
-    bridgeData.minAmount,
-    publicClient
-  )
-
-  if (!isNativeAsset && tokenContract) {
-    await ensureAllowance(
-      tokenContract,
-      walletAddress,
-      lifiDiamondAddress as string,
-      bridgeData.minAmount,
-      publicClient
-    )
-  }
+  for (const deposit of requiredDeposits) {
+    const isNativeAsset = getAddress(deposit.sendingAssetId) === zeroAddress
+    const tokenContract = isNativeAsset
+      ? null
+      : getContract({
+          address: deposit.sendingAssetId,
+          abi: erc20Abi,
+          client: { public: publicClient, wallet: walletClient },
+        })
+
+    await ensureBalance(
+      isNativeAsset ? zeroAddress : tokenContract,
+      walletAddress,
+      deposit.fromAmount,
+      publicClient
+    )
+
+    if (!isNativeAsset && tokenContract) {
+      await ensureAllowance(
+        tokenContract,
+        walletAddress,
+        lifiDiamondAddress as string,
+        deposit.fromAmount,
+        publicClient
+      )
+    }
+  }
 
-              { value: msgValueWei }
+              { value: txValue }
 
-              { value: msgValueWei }
+              { value: txValue }
Based on learnings "In EcoFacet test setup, the isNative parameter in swap-and-bridge flows refers to the bridge token type (output), not the swap input type. The ERC20 branch (isNative=false) can still swap from native ETH to ERC20 tokens, requiring msg.value = swapData[0].fromAmount for the ETH input to the swap."

Also applies to: 706-710, 717-742, 761-782
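The deposit-aggregation rule above can be sketched standalone. The `IDeposit` shape, `totalTxValue` helper, and sample addresses below are simplified stand-ins for the demo script's types, not its actual API:

```typescript
// Simplified stand-ins for the demo script's types (hypothetical, for illustration)
const zeroAddress = '0x0000000000000000000000000000000000000000'

interface IDeposit {
  sendingAssetId: string
  fromAmount: bigint
}

// Only native-asset deposits add to msg.value; ERC20 inputs are funded via approvals.
function totalTxValue(msgValueWei: bigint, deposits: IDeposit[]): bigint {
  return deposits.reduce(
    (sum, deposit) =>
      deposit.sendingAssetId.toLowerCase() === zeroAddress
        ? sum + deposit.fromAmount
        : sum,
    msgValueWei
  )
}

const deposits: IDeposit[] = [
  // native swap input: must be forwarded as msg.value
  { sendingAssetId: zeroAddress, fromAmount: 10n },
  // ERC20 swap input: funded via approve, not msg.value
  { sendingAssetId: '0x00000000000000000000000000000000000000aa', fromAmount: 500n },
]

console.log(totalTxValue(3n, deposits)) // 13n
```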

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/demoScripts/demoAcrossV4Swap.ts` around lines 683 - 691, The send path
funds and approves the wrong assets for swapAndStart: when ENTRYPOINT ===
'swapAndStart' use the swapData array entries (ISwapData) to determine deposits
and msg.value for swaps with requiresDeposit=true instead of always using
bridgeData.sendingAssetId/minAmount; update the logic around swapData parsing
and the send/deposit calculations (references: ENTRYPOINT, swapData, ISwapData,
requiresDeposit, bridgeData.sendingAssetId, bridgeData.minAmount, msgValueWei)
so that for each swap-and-bridge flow you (1) add required native ETH value
equal to the swap input amount when the swap input is native, (2) approve and
fund ERC20 inputs based on swapData.fromAsset/ fromAmount for entries with
requiresDeposit=true, and (3) preserve existing bridge funding logic for bridge
output tokens (bridgeData.isNative refers to bridge output type) so
native-output bridges still receive msgValue for the bridge while swap inputs
are separately funded from swapData.

Comment thread: script/helperFunctions.sh
Comment on lines +4956 to +4994
# ensure diamond file exists and is valid JSON
if [[ ! -e $DIAMOND_FILE ]]; then
  echo "{}" >"$DIAMOND_FILE"
else
  # validate existing file is valid JSON, reset if corrupted
  if ! jq -e . "$DIAMOND_FILE" >/dev/null 2>&1; then
    warning "[$NETWORK] Existing diamond file is corrupted, resetting to empty object"
    echo "{}" >"$DIAMOND_FILE"
  fi
fi

# merge facets and periphery into diamond file atomically
local MERGED
-MERGED=$(jq -r --arg diamond_name "$DIAMOND_NAME" --slurpfile facets "$FACETS_TMP" --slurpfile periphery "$PERIPHERY_TMP" '
+MERGED=$(jq --arg diamond_name "$DIAMOND_NAME" --slurpfile facets "$FACETS_TMP" --slurpfile periphery "$PERIPHERY_TMP" '
   .[$diamond_name] = (.[$diamond_name] // {}) |
   .[$diamond_name].Facets = $facets[0] |
   .[$diamond_name].Periphery = $periphery[0]
-' "$DIAMOND_FILE" || cat "$DIAMOND_FILE")
-printf %s "$MERGED" >"$DIAMOND_FILE"
+' "$DIAMOND_FILE" 2>/dev/null)

# if merge failed, create fresh structure from temp files
if [[ $? -ne 0 || -z "$MERGED" ]]; then
  warning "[$NETWORK] Merge failed, creating fresh diamond structure"
  MERGED=$(jq -n --arg diamond_name "$DIAMOND_NAME" --slurpfile facets "$FACETS_TMP" --slurpfile periphery "$PERIPHERY_TMP" '
    {
      ($diamond_name): {
        Facets: $facets[0],
        Periphery: $periphery[0]
      }
    }
  ' 2>/dev/null)
fi

# write merged JSON with proper formatting
if [[ -n "$MERGED" ]]; then
  echo "$MERGED" | jq . >"$DIAMOND_FILE"
else
  error "[$NETWORK] Failed to generate valid diamond JSON"
  return 1
fi

⚠️ Potential issue | 🟠 Major

Abort the refresh instead of writing empty sections after a worker failure.

This block persists whatever ended up in the temp files, and the surrounding flow normalizes missing facet/periphery output to {}. If either background collector fails (e.g. on a transient RPC/Mongo issue), that section of the existing diamond log is erased instead of preserved.

🐛 Suggested guard
-  if [[ -n "$PID_PERIPHERY" ]]; then wait "$PID_PERIPHERY"; fi
-  if [[ -n "$PID_FACETS" ]]; then wait "$PID_FACETS"; fi
+  local PERIPHERY_STATUS=0
+  local FACETS_STATUS=0
+  if [[ -n "$PID_PERIPHERY" ]]; then
+    wait "$PID_PERIPHERY" || PERIPHERY_STATUS=$?
+  fi
+  if [[ -n "$PID_FACETS" ]]; then
+    wait "$PID_FACETS" || FACETS_STATUS=$?
+  fi
+  if [[ $PERIPHERY_STATUS -ne 0 || $FACETS_STATUS -ne 0 ]]; then
+    error "[$NETWORK] Failed to collect current diamond state; refusing to overwrite $DIAMOND_FILE"
+    rm -rf "$TEMP_DIR"
+    return 1
+  fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/helperFunctions.sh` around lines 4956 - 4994, The merge step can
overwrite existing diamond sections with empty results if a background collector
failed; before building MERGED (the jq merge using DIAMOND_NAME, FACETS_TMP,
PERIPHERY_TMP), validate the temp files (FACETS_TMP and PERIPHERY_TMP) are
present and contain valid, non-empty JSON (use jq -e '.' and check content is
not an empty object/marker) and if either check fails, abort the refresh: log a
warning/error, return non-zero and do not write to DIAMOND_FILE (avoid falling
back to creating empty Facets/Periphery); update the block that creates MERGED
and the failure branch that currently falls back to jq -n so it instead bails
out when temp validation fails.

Comment on lines +11 to +38
/// @notice Params used to create a sponsored CCTP quote and deposit for burn
/// @dev Mirrors `SponsoredCCTPInterface.SponsoredCCTPQuote` (Across)
struct SponsoredCCTPQuote {
    uint32 sourceDomain;
    uint32 destinationDomain;
    bytes32 mintRecipient;
    uint256 amount;
    bytes32 burnToken;
    bytes32 destinationCaller;
    uint256 maxFee;
    uint32 minFinalityThreshold;
    bytes32 nonce;
    uint256 deadline;
    uint256 maxBpsToSponsor;
    uint256 maxUserSlippageBps;
    bytes32 finalRecipient;
    bytes32 finalToken;
    uint8 executionMode;
    bytes actionData;
}

/// @notice Deposits tokens for burn via CCTP
/// @param quote The quote that contains the data for the deposit
/// @param signature The signature of the quote
function depositForBurn(
    SponsoredCCTPQuote calldata quote,
    bytes calldata signature
) external;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

LOCAL_FILE="$(fd 'ISponsoredCCTPSrcPeriphery.sol' | head -n 1)"

echo "== Local interface signature summary =="
sed -n '1,240p' "$LOCAL_FILE" | rg -n \
'struct SponsoredCCTPQuote|function depositForBurn|sourceDomain|destinationDomain|mintRecipient|amount|burnToken|destinationCaller|maxFee|minFinalityThreshold|nonce|deadline|maxBpsToSponsor|maxUserSlippageBps|finalRecipient|finalToken|executionMode|actionData'

echo
echo "== Upstream Across SponsoredCCTPInterface summary =="
curl -fsSL \
'https://raw.githubusercontent.com/across-protocol/contracts/master/contracts/periphery/mintburn/sponsored-cctp/SponsoredCCTPInterface.sol' \
| rg -n \
'struct SponsoredCCTPQuote|function depositForBurn|sourceDomain|destinationDomain|mintRecipient|amount|burnToken|destinationCaller|maxFee|minFinalityThreshold|nonce|deadline|maxBpsToSponsor|maxUserSlippageBps|finalRecipient|finalToken|executionMode|actionData'

Repository: lifinance/contracts

Length of output: 811


🌐 Web query:

Across Protocol SponsoredCCTPInterface contract GitHub repository

💡 Result:

The SponsoredCCTPInterface contract is in the Across Protocol contracts repository (across-protocol/contracts), at:

  • contracts/interfaces/SponsoredCCTPInterface.sol (github.com)


🏁 Script executed:

# Try alternative GitHub paths for Across SponsoredCCTPInterface
echo "=== Trying alternative Across repository paths ==="

# Attempt 1: Try develop branch instead of master
curl -fsSL \
'https://raw.githubusercontent.com/across-protocol/contracts/develop/contracts/periphery/mintburn/sponsored-cctp/SponsoredCCTPInterface.sol' \
-w "\n%{http_code}\n" 2>&1 | head -20

echo "---"

# Attempt 2: Check if repo structure differs
curl -fsSL \
'https://api.github.com/repos/across-protocol/contracts/contents/contracts/periphery/mintburn/sponsored-cctp' \
2>&1 | head -30

Repository: lifinance/contracts

Length of output: 2604


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Fetching correct upstream Across SponsoredCCTPInterface ==="
curl -fsSL \
'https://raw.githubusercontent.com/across-protocol/contracts/master/contracts/interfaces/SponsoredCCTPInterface.sol' \
> /tmp/upstream_interface.sol

echo "File fetched successfully. Extracting struct and function signature..."
echo

echo "=== Upstream SponsoredCCTPInterface struct and function ==="
sed -n '1,300p' /tmp/upstream_interface.sol | grep -A 50 'struct SponsoredCCTPQuote' | head -30

echo
echo "=== Upstream depositForBurn function signature ==="
sed -n '1,300p' /tmp/upstream_interface.sol | grep -A 3 'function depositForBurn'

echo
echo "=== Full local interface for comparison ==="
fd 'ISponsoredCCTPSrcPeriphery.sol' -x cat {} | sed -n '1,100p'

Repository: lifinance/contracts

Length of output: 1888


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Complete upstream SponsoredCCTPInterface ==="
curl -fsSL 'https://raw.githubusercontent.com/across-protocol/contracts/master/contracts/interfaces/SponsoredCCTPInterface.sol' | cat -n

echo
echo "=== Local ISponsoredCCTPSrcPeriphery.sol ==="
fd 'ISponsoredCCTPSrcPeriphery.sol' -x cat -n {}

Repository: lifinance/contracts

Length of output: 5909


ABI mismatch with upstream Across: struct is missing two fields.

The local SponsoredCCTPQuote struct diverges from the upstream Across SponsoredCCTPInterface.SponsoredCCTPQuote:

Missing fields (after finalToken):

  • uint32 destinationDex
  • uint8 accountCreationMode

Local struct ends with executionMode and actionData, but upstream requires destinationDex and accountCreationMode before those two fields. This struct layout mismatch will break ABI compatibility; encoded calldata from upstream tools or contracts will fail to decode correctly.

Update the struct to match upstream field order and types exactly.
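The breakage is mechanical: static struct fields ABI-encode as positional 32-byte words, so inserting fields shifts every later word. A toy sketch of the shift (the numeric values are arbitrary, and dynamic fields like actionData are ignored here):

```typescript
// Encode values as concatenated 32-byte (64 hex char) words, mimicking how
// static struct fields are laid out head-to-tail in ABI encoding.
const word = (v: bigint): string => v.toString(16).padStart(64, '0')
const encodeWords = (values: bigint[]): string => values.map(word).join('')

// ...finalToken, executionMode (local layout, two fields missing)
const oldLayout = encodeWords([1n, 2n, 7n])
// ...finalToken, destinationDex, accountCreationMode, executionMode (upstream layout)
const newLayout = encodeWords([1n, 2n, 42n, 0n, 7n])

// A decoder expecting the upstream layout reads word 2 as destinationDex:
const slot2 = (hex: string): bigint => BigInt('0x' + hex.slice(128, 192))
console.log(slot2(newLayout)) // 42n
console.log(slot2(oldLayout)) // 7n  -> executionMode's value misread as destinationDex
```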

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Interfaces/ISponsoredCCTPSrcPeriphery.sol` around lines 11 - 38, The
SponsoredCCTPQuote struct in this file is ABI-incompatible with upstream; update
the struct definition (used by depositForBurn) to include the two missing fields
in the exact order and types used upstream by inserting uint32 destinationDex
and uint8 accountCreationMode immediately after finalToken (i.e., before
executionMode and actionData) so the field ordering matches Across'
SponsoredCCTPInterface.SponsoredCCTPQuote exactly; ensure any local code that
constructs or reads this struct uses the new layout.

Comment on lines +1178 to +1185
        );

        vm.expectRevert(InvalidCallData.selector);
        (bool success, ) = address(standaloneFacet).call(encoded);
        assertFalse(success, "expected InvalidCallData revert");

        vm.stopPrank();
    }

⚠️ Potential issue | 🟡 Minor

vm.expectRevert does not catch low-level .call() reverts.

The vm.expectRevert(InvalidCallData.selector) on line 1180 won't intercept the revert from the low-level .call() on line 1181. The assertion assertFalse(success, ...) on line 1182 is the actual check. The vm.expectRevert here has no effect and is misleading.

Proposed fix
-        vm.expectRevert(InvalidCallData.selector);
         (bool success, ) = address(standaloneFacet).call(encoded);
         assertFalse(success, "expected InvalidCallData revert");
+        // Note: vm.expectRevert doesn't work with low-level .call(); 
+        // revert reason is checked manually above in the conditional block
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/solidity/Facets/Across/SwapAPI/AcrossV4SwapFacet.t.sol` around lines
1178 - 1185, The test wrongly uses vm.expectRevert(InvalidCallData.selector)
before a low-level (bool success, ) = address(standaloneFacet).call(encoded);
because vm.expectRevert cannot catch reverts from low-level .call(); either
remove the misleading vm.expectRevert(InvalidCallData.selector) line and keep
the assertFalse(success, ...) check, or replace the low-level .call with a
direct/ABI-invoked call to the facet method so
vm.expectRevert(InvalidCallData.selector) can intercept the revert; adjust
references to encoded, standaloneFacet.call, vm.expectRevert, and assertFalse
accordingly.

Comment on lines +158 to +190
    function test_validateOutputERC20LessThanExpected() public {
        vm.startPrank(address(diamond));

        // Arrange - test case where actual amount is less than expected (should revert)
        uint256 expectedAmount = 1000 ether;
        uint256 actualAmount = 500 ether; // Less than expected

        // Fund diamond with tokens
        deal(address(dai), address(diamond), actualAmount);

        // Approve OutputValidator to spend tokens from diamond
        dai.approve(address(outputValidator), actualAmount);

        outputValidator.validateERC20Output(
            address(dai),
            expectedAmount,
            validationWallet
        );

        // Assert
        assertEq(
            dai.balanceOf(validationWallet),
            0,
            "No excess tokens should be transferred"
        );
        assertEq(
            dai.balanceOf(address(diamond)),
            actualAmount,
            "Should keep actual amount"
        );

        vm.stopPrank();
    }

⚠️ Potential issue | 🟡 Minor

Test name/comment mismatch with actual behavior.

The test is named test_validateOutputERC20LessThanExpected and the comment on line 161 says "should revert", but the test does not use vm.expectRevert. Instead, it asserts that no excess is transferred and the diamond keeps the actual amount.

If the OutputValidator is designed to not revert when actual < expected (just do nothing), the test name and comment are misleading. Consider:

  1. Renaming to test_validateOutputERC20LessThanExpected_NoExcessTransferred
  2. Removing the "should revert" comment
  3. Or if it should revert, prefix with testRevert_ and add vm.expectRevert
Proposed fix (if no-revert is intended)
-    function test_validateOutputERC20LessThanExpected() public {
+    function test_validateOutputERC20LessThanExpected_NoExcessTransferred() public {
         vm.startPrank(address(diamond));

-        // Arrange - test case where actual amount is less than expected (should revert)
+        // Arrange - test case where actual amount is less than expected (no excess to transfer)
         uint256 expectedAmount = 1000 ether;
         uint256 actualAmount = 500 ether; // Less than expected
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/solidity/Periphery/OutputValidator.t.sol` around lines 158 - 190, The
test function test_validateOutputERC20LessThanExpected currently claims "should
revert" but asserts behavior for a no-revert path; update it to match intended
behavior: if OutputValidator.validateERC20Output is supposed to silently do
nothing when actual < expected, rename the test to something like
test_validateOutputERC20LessThanExpected_NoExcessTransferred, remove the "should
revert" comment and keep the current assertions; otherwise, if it should revert,
rename to a revert-style name (e.g.,
testRevert_validateOutputERC20LessThanExpected), add vm.expectRevert before
calling OutputValidator.validateERC20Output and adjust assertions accordingly;
ensure references to the test name and the call to validateERC20Output are
updated to keep semantics clear.

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (4)
script/helperFunctions.sh (4)

3951-3998: Remove commented-out dead code.

The entire extractDeployedAddressFromRawReturnData function is commented out (lines 3951-3998) while an active implementation exists immediately below (lines 3999-4045). Commented-out code adds noise and should be removed since the history is preserved in version control.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/helperFunctions.sh` around lines 3951 - 3998, Remove the entire
commented-out dead implementation of extractDeployedAddressFromRawReturnData
(the large block where the function is fully commented out) so only the active
implementation remains; ensure you delete the commented block that references
MAX_WAITING_TIME_FOR_BLOCKCHAIN_SYNC and doesAddressContainBytecode to avoid
duplicate/noise and leave the functional extractDeployedAddressFromRawReturnData
function intact.

4563-4566: ShellCheck SC2155: Declare and assign separately to catch mktemp failures.

Static analysis flags that combining declaration and command substitution can mask non-zero exit codes. If mktemp fails, the error would be silently ignored.

♻️ Suggested fix (optional)
   local STDOUT_LOG
   local STDERR_LOG
-  STDOUT_LOG="$(mktemp)"
-  STDERR_LOG="$(mktemp)"
+  STDOUT_LOG=$(mktemp) || { error "Failed to create temp file"; return 1; }
+  STDERR_LOG=$(mktemp) || { error "Failed to create temp file"; rm -f "$STDOUT_LOG"; return 1; }

Given the repository's preference for simplicity over strict ShellCheck compliance in controlled environments, this is a low-priority refinement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/helperFunctions.sh` around lines 4563 - 4566, The assignment of
STDOUT_LOG and STDERR_LOG uses combined declaration with mktemp which can hide
mktemp failures; change to declare the variables first (e.g., STDOUT_LOG= and
STDERR_LOG=), then assign using separate mktemp calls and check each command's
exit status (or test the variable and handle failure) so that if mktemp fails
you log/exit or fallback appropriately; update the sites referencing STDOUT_LOG
and STDERR_LOG accordingly to rely on the validated values.

936-941: Non-atomic file write may truncate diamond file on jq failure.

If jq . fails (e.g., malformed $result), the redirection >"$DIAMOND_FILE" will truncate the file before the error is detected. Consider writing to a temp file first, then moving atomically:

🛡️ Suggested fix for atomic write
     # write properly formatted JSON
     if [[ -n "$result" ]]; then
-      echo "$result" | jq . >"$DIAMOND_FILE"
+      local TEMP_DIAMOND
+      TEMP_DIAMOND=$(mktemp)
+      if echo "$result" | jq . >"$TEMP_DIAMOND" 2>/dev/null; then
+        mv "$TEMP_DIAMOND" "$DIAMOND_FILE"
+      else
+        rm -f "$TEMP_DIAMOND"
+        error "[$NETWORK] Failed to format diamond JSON"
+        return 1
+      fi
     else
       error "[$NETWORK] Failed to generate valid diamond JSON"
       return 1
     fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/helperFunctions.sh` around lines 936 - 941, The current write to
DIAMOND_FILE uses direct redirection of jq output which can truncate
DIAMOND_FILE if jq fails; instead, pipe "$result" into jq and write to a
temporary file (e.g., "${DIAMOND_FILE}.tmp"), check jq's exit status before
replacing, and then atomically move the temp into DIAMOND_FILE (mv) only on
success; reference the existing variables and commands (result, jq,
DIAMOND_FILE) and ensure the failure branch still logs the error and returns 1
without touching the original DIAMOND_FILE.

1086-1092: Same atomicity concern applies here.

The write at line 1088 has the same potential truncation issue as in saveDiamondFacets. Apply the same temp-file-then-move pattern for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@script/helperFunctions.sh` around lines 1086 - 1092, The write to
$DIAMOND_FILE from the $result jq output is not atomic and can truncate the
target; change it to write to a temporary file (e.g., using mktemp or
"${DIAMOND_FILE}.tmp.$$"), run echo "$result" | jq . > "$tmpfile", check jq
success, then mv the temp file to "$DIAMOND_FILE" to ensure atomic replace; keep
the same error handling (use the existing error "[$NETWORK] Failed to generate
valid diamond JSON" and return 1 on failure) and reference the variables/result
handling in this block to mirror the temp-file-then-move pattern used by
saveDiamondFacets.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 7af5b100-3250-41e3-bb9a-5f50273467cc

📥 Commits

Reviewing files that changed from the base of the PR and between 79812a9 and 3f6b8ba.

📒 Files selected for processing (5)
  • .gitignore
  • deployments/arbitrum.diamond.staging.json
  • deployments/arbitrum.staging.json
  • script/deploy/deploySingleContract.sh
  • script/helperFunctions.sh
💤 Files with no reviewable changes (1)
  • .gitignore
✅ Files skipped from review due to trivial changes (1)
  • deployments/arbitrum.staging.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • script/deploy/deploySingleContract.sh
