Conversation
📝 Walkthrough

A new pytest module adds fixtures that skip when observability tracing is configured and that install an OPA policy. A test issues authorised and unauthorised requests and asserts that requests blocked by Authorino do not increment the backend counter beyond the expected single increment.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (1)
testsuite/tests/singlecluster/authorino/test_backend_not_reached.py (1)
**15-17**: Use a consistent chained `.get()` pattern for safer nested attribute access.

The current code mixes `.get()` with attribute access (`.observability`). The codebase already has an established pattern in `testsuite/tests/singlecluster/tracing/conftest.py` that chains `.get()` calls safely.

♻️ Proposed refactor using a consistent access pattern:

```diff
-    if kuadrant.model.spec.get("observability") is not None:
-        if kuadrant.model.spec.observability.get("tracing") is not None:
-            pytest.skip("This test does not verify properly with tracing enabled")
+    tracing_spec = kuadrant.model.spec.get("observability", {}).get("tracing")
+    if tracing_spec is not None:
+        pytest.skip("This test does not verify properly with tracing enabled")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/authorino/test_backend_not_reached.py` around lines 15 - 17, The nested attribute access mixes .get() with attribute lookup; change to a consistent chained .get() pattern: use kuadrant.model.spec.get("observability") instead of kuadrant.model.spec.observability and then call .get("tracing") on that result so both checks use .get() (e.g., kuadrant.model.spec.get("observability") and then .get("tracing")), mirroring the safe pattern used in tracing/conftest.py.
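For illustration, the chained `.get()` idiom behaves safely on plain dicts (a minimal sketch; the real `kuadrant.model.spec` is a model wrapper, so this is only an analogy):

```python
spec = {"observability": {"tracing": {"defaultEndpoint": "http://jaeger:4317"}}}

# Each .get() falls back to {}, so a missing level yields None instead of raising.
tracing = spec.get("observability", {}).get("tracing")
assert tracing is not None

empty = {}
assert empty.get("observability", {}).get("tracing") is None
```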
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 525d4a97-f094-4e5f-b6bc-5d2864fdb7d5
📒 Files selected for processing (1)
testsuite/tests/singlecluster/authorino/test_backend_not_reached.py
Force-pushed from 388b003 to ad05e30
Signed-off-by: Alex Zgabur <azgabur@redhat.com>
Force-pushed from ad05e30 to 70ab97a
Not sure why trying to run the debugger on this file fails for me. I think its location might be the issue.
Hmm, I didn't have a problem with the debugger. Be sure to check where you have set the "working directory"; I remember that with it set wrongly, the imports were not working correctly.
```python
def authorization(authorization):
    """
    Adds OPA policy that accepts all requests that contain `header`.
    Adds unauthorized message as it adds processing delay, increasing the chance for the bug to appear.
    """
    authorization.identity.clear_all()
    authorization.authorization.add_opa_policy("opa", rego_allow_header("key", "value"))
    authorization.responses.set_unauthorized(DenyResponse(body=Value("Custom response")))
    return authorization
```
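The `rego_allow_header` helper itself is not shown in this diff. A hedged sketch of what such a helper might generate, assuming it simply templates a Rego rule over the request headers (the testsuite's actual output may differ):

```python
def rego_allow_header(key, value):
    """Hypothetical sketch: build a Rego policy that allows only
    requests carrying the header `key` with value `value`."""
    return (
        f'allow {{ input.context.request.http.headers["{key}"] == "{value}" }}'
    )

policy = rego_allow_header("key", "value")
```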
From what I understood, all types of failures could leak. I think we could also have a rate_limit variant for this.
Indeed, I agree we should have a test for RLP or TRLP as well; it can be done in a next PR, I will create an issue.
```python
before_counter = client.get("/counter", headers={"key": "value"}).json()["counter"]
client.get_many("/counter", 10)
after_counter = client.get("/counter", headers={"key": "value"}).json()["counter"]
assert after_counter == before_counter + 1
```
Thinking about the rate_limit variant, it might be better to expose the counter on a different path, for example `/counter/metrics`, that is not rate limited, and get the counter from there instead. What do you reckon?
That is another way to do it, yes. I went with simplicity over complexity when implementing the httpbin endpoint, but if you think it should be like that, I can re-implement it.
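The suggested shape can be sketched roughly as follows (the `/counter/metrics` path is hypothetical, and a stub stands in for the real testsuite HTTP client):

```python
class StubClient:
    """Stand-in for the testsuite HTTP client, for illustration only."""
    def __init__(self):
        self._hits = 0

    def get(self, path):
        if path == "/counter":          # the rate-limited route: each GET counts
            self._hits += 1
        return {"counter": self._hits}  # "/counter/metrics" reads without counting

client = StubClient()
before = client.get("/counter/metrics")["counter"]
for _ in range(3):                      # requests against the policed route
    client.get("/counter")
after = client.get("/counter/metrics")["counter"]
assert after == before + 3              # reading the metrics path consumed no limit
```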
```python
tracing_spec = kuadrant.model.spec.get("observability", {}).get("tracing")
if tracing_spec is not None and tracing_spec.get("defaultEndpoint") is not None:
    pytest.skip("This test does not verify properly with tracing enabled")
```
I am not sure I understand why tracing is an issue here. By default we enable tracing on all test clusters; if we keep it like that, this test won't ever run unless we disable/enable tracing for tests that require it.
As this is an issue with a race condition, the additional processing from tracing was causing delays where the race condition window was already closed. To properly make the issue appear, tracing must be turned off.
Again, I agree we enable tracing by default and this is the only test requiring tracing disabled, so it is a problem. It could be solved by expanding the testing matrix, or by making this a disruptive test which would disable/enable tracing. I'm not much for the second option. But what do you think?
Not too sure either... But with tracing enabled, is the issue harder to reproduce or not reproducible at all? Expanding the matrix would re-run the testsuite one more time, just with tracing enabled, I reckon?
I am even considering that maybe tracing/observability should rather be disabled by default and tested separately.
Bottom line is that this should never happen, with or without tracing.
```python
    Do unauthorized requests which should not increment the counter.
    Lastly check the counter if it contains expected value.
    """
    before_counter = client.get("/counter", headers={"key": "value"}).json()["counter"]
```
You might not need to add the custom endpoint to httpbin; we have a `MockserverBackend` class with a `retrieve_requests` method which should be doing the same thing.
Also, from a quick search, specialized counting endpoints for this exist directly in mockserver's API: https://www.mock-server.com/mock_server/verification.html
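The verification API linked above can be driven with a small JSON payload; a hedged sketch of building one (the endpoint shape follows the mock-server.com docs, while host/port and helper names here are assumptions):

```python
import json

def verification_payload(path, exact):
    """Build a MockServer verification body asserting `path` was hit
    exactly `exact` times. The payload would be PUT to
    http://<mockserver-host>/mockserver/verify (host is an assumption);
    MockServer answers 202 when the verification holds."""
    return json.dumps({
        "httpRequest": {"path": path},
        "times": {"atLeast": exact, "atMost": exact},
    })

payload = verification_payload("/anything", 1)
```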
```python
    Lastly check the counter if it contains expected value.
    """
    before_counter = client.get("/counter", headers={"key": "value"}).json()["counter"]
    client.get_many("/counter", 10)
```
```diff
-    client.get_many("/counter", 10)
+    client.get_many("/counter", 3)
```

Does it make sense to decrease the number of API calls for this test?
```python
"""Test that checks if requests that are blocked by Authorino do not reach the backend"""
```
It might be helpful for traceability to link the bug issue Kuadrant/wasm-shim#321 directly in the test file, somewhere inside a docstring or a `pytest.mark.issue` decorator.
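A hedged sketch of attaching the link (the `issue` mark name follows the suggestion above; whether the testsuite registers such a marker is an assumption):

```python
import pytest

# Hypothetical: attach the upstream bug link directly to the test for traceability.
@pytest.mark.issue("https://github.com/Kuadrant/wasm-shim/issues/321")
def test_backend_not_reached():
    """Covers the race condition fixed by Kuadrant/wasm-shim#321."""
```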
Description
Implements test for bug fixed by Kuadrant/wasm-shim#321
The bug is a race condition where the request gets sent to the backend even when it is rejected by a policy and the client sees the rejected status code.
Changes
Adds 1 test
Adds 1 endpoint to Httpbin https://gitlab.cee.redhat.com/kuadrant-qe/httpbin/-/commit/f231b78970b6351ca9a58270ffeaa16ad8b3ebe0
Due to the recent build, the `latest` tag on clusters will take some time to propagate, because Deployments created by the testsuite use the pull strategy `PullIfNotPresent`. I can change it in this PR if needed, or change the `latest` tag to `0.2` in settings.

Verification
On a cluster with tracing disabled (not observability), run:
The bug was fixed in Kuadrant v1.4.3, so you can verify that the test fails on v1.4.2 and passes on v1.4.3-rc2.