Assume I want to support really big file uploads (e.g. multiple gigabytes), but only for certain users. This runs into multiple problems with the current API:
- You can only limit the size of the whole stream or of whole fields, but not the size of the headers, so I think there should be a way to set a size limit for all headers in `SizeLimit`. This would allow limiting only the headers and leaving the body size limit to the application, which is what my use case needs.
- Maybe there should be a way to set a general limit for "text fields" (i.e. fields without a `file_name`) and a separate limit for "file fields" (i.e. fields with a `file_name`). Currently, this is supposed to be done with `SizeLimit::per_field()`, but that is not really viable for all use cases.
- Maybe it should be possible (I don't know how hard this is to implement) to change the size limit after constructing the `Multipart` struct.
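As a minimal sketch of what the first two points could look like, here is a hypothetical `SizeLimits` type (not multer's actual API; the field names `headers`, `text_field`, and `file_field` are my invention) that separates a header cap from per-field caps keyed on whether the part carries a `file_name`:

```rust
/// Hypothetical limit set, sketched for this proposal; multer's real
/// `SizeLimit` currently only offers whole-stream and per-field limits.
#[derive(Clone, Copy)]
struct SizeLimits {
    /// Proposed: cap on the total size of all part headers.
    headers: u64,
    /// Proposed: default cap for fields without a `file_name`.
    text_field: u64,
    /// Proposed: default cap for fields with a `file_name`.
    file_field: u64,
}

impl SizeLimits {
    /// Pick the limit that applies to a field, based on whether
    /// the part's Content-Disposition carries a `file_name`.
    fn for_field(&self, has_file_name: bool) -> u64 {
        if has_file_name { self.file_field } else { self.text_field }
    }
}

fn main() {
    let limits = SizeLimits {
        headers: 8 * 1024,                   // 8 KiB for all headers combined
        text_field: 64 * 1024,               // 64 KiB per text field
        file_field: 4 * 1024 * 1024 * 1024,  // 4 GiB per file field
    };
    assert_eq!(limits.for_field(false), 64 * 1024);
    assert_eq!(limits.for_field(true), 4 * 1024 * 1024 * 1024);
    assert_eq!(limits.headers, 8 * 1024);
    println!("ok");
}
```

With a shape like this, an application could set only `headers` and enforce body limits itself while streaming the fields.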
I think implementing the first point (allowing headers alone to be limited) should be sufficient to handle most use cases.
The last two points come from the observation that many web frameworks in Rust use some notion of "extractors". In axum, for example, you define "handler functions" whose arguments implement `FromRequest`: https://docs.rs/axum/latest/axum/extract/multipart/struct.Multipart.html#example. In that example, if you make a request to `/upload`, the `Multipart` (which is a wrapper around `multer::Multipart`) is constructed by axum before `upload()` is called.
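For readers unfamiliar with the pattern, the linked axum example boils down to roughly the following (adapted from the axum docs; it requires axum with the `multipart` feature and a tokio runtime, so it is a sketch rather than a complete program). The point is that by the time `upload()` runs, the `Multipart` already exists, so any size limits baked into its construction can no longer depend on which user is making the request:

```rust
use axum::extract::Multipart;

// axum constructs the `Multipart` extractor from the request body
// before this handler is invoked, so per-user limits would have to
// be changeable afterwards (the third point above).
async fn upload(mut multipart: Multipart) {
    while let Some(field) = multipart.next_field().await.unwrap() {
        let name = field.name().unwrap_or_default().to_string();
        let data = field.bytes().await.unwrap();
        println!("field `{name}` is {} bytes", data.len());
    }
}
```

This is why the ability to adjust limits on an already-constructed `Multipart` would help extractor-based frameworks.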