## What changes are proposed in this pull request?
**WHAT**
- Extend the retry function with a new `max_attempt` parameter so that
clients can stop retrying and fail after a fixed number of attempts
- Remove 500 from the FilesExt retryable status codes
- Add a new config option to set the number of retry attempts for FilesExt
- Update the FilesExt retry logic to fail after the configured number of attempts

**WHY**
- 500 errors shouldn't be retried
- FilesExt should always prioritize falling back to an alternative upload
method over retrying, to avoid regressions
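The retry behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the SDK's actual implementation: the names `retried`, `retriable`, `max_attempts`, and `timeout_seconds` are invented for the sketch, and only the semantics (stop on either the attempt limit or the timeout, whichever comes first, and never retry a 500) come from this PR.

```python
import time

class ApiError(Exception):
    """Stand-in for an HTTP error carrying a status code."""
    def __init__(self, status_code):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

def retried(func, retriable=(429, 503), max_attempts=3, timeout_seconds=300,
            clock=time.monotonic, sleep=time.sleep):
    """Call func until it succeeds, a non-retriable error occurs, the attempt
    limit is reached, or the timeout expires -- whichever comes first."""
    deadline = clock() + timeout_seconds
    attempt = 0
    while True:
        attempt += 1
        try:
            return func()
        except ApiError as err:
            # 500 is deliberately absent from `retriable`: it fails fast
            # (or lets the caller fall back) instead of being retried.
            if err.status_code not in retriable:
                raise
            if attempt >= max_attempts or clock() >= deadline:
                raise TimeoutError(f"giving up after {attempt} attempts") from err
            sleep(1)
```

With `max_attempts=3`, a function that keeps returning 503 is called exactly three times before `TimeoutError` is raised, while a single 500 is re-raised immediately with no retry at all.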
## How is this tested?
Unit tests were updated to reflect the change.
---------
Co-authored-by: Parth Bansal <parth.bansal@databricks.com>
**NEXT_CHANGELOG.md** (4 additions, 0 deletions)
```diff
@@ -4,10 +4,14 @@
 ### New Features and Improvements
 
+* FilesExt retry logic now respects a retry count limit in addition to the time-based timeout. Operations will stop retrying when either the retry count (`experimental_files_ext_cloud_api_max_retries`, default: 3) or timeout (`retry_timeout_seconds`) is exceeded, whichever comes first. This provides faster feedback when APIs are consistently unavailable.
+
 ### Security
 
 ### Bug Fixes
 
+* FilesExt no longer retries on 500 (Internal Server Error) responses. These errors now fail immediately or fall back to alternative upload methods as appropriate.
```
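The "fallback over retry" behavior in the Bug Fixes entry above can be illustrated with a small sketch. `upload_with_fallback`, `cloud_upload`, and `api_upload` are hypothetical names, not the SDK's API; the point is only that a 500 switches to the fallback path immediately instead of entering a retry loop.

```python
class ApiError(Exception):
    """Stand-in for an HTTP error carrying a status code."""
    def __init__(self, status_code):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

def upload_with_fallback(cloud_upload, api_upload):
    """Try the primary (cloud) upload once; on a 500, fall back immediately."""
    try:
        return cloud_upload()
    except ApiError as err:
        if err.status_code == 500:
            # No retry on 500: switch to the alternative upload method.
            return api_upload()
        raise
```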