
More flexible constraints #65

@tp971

Description


Assume I want to support really big file uploads (e.g. multiple gigabytes), but only for certain users. This runs into multiple problems with the current API:

  1. You can only limit the size of the whole stream or of whole fields, but not the size of the headers, so I think there should be a way to set a size limit for all headers in SizeLimit. This would allow limiting only the headers and leaving the body size limit to the application, which is what my use case needs.
  2. Maybe there should be a way to set a general limit for "text fields" (i.e. fields without a file_name) and a separate limit for "file fields" (i.e. fields with a file_name). Currently this is supposed to be done with SizeLimit::per_field(), but a single per-field limit is not really viable for all use cases.
  3. Maybe it should be possible (I don't know how hard this would be to implement) to change the size limit after the Multipart struct has been constructed.
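To illustrate the second point, here is a minimal, purely hypothetical sketch (FieldLimits and limit_for are illustrative names, not multer API) of choosing a per-part limit based on whether the part carries a file name:

```rust
/// Hypothetical sketch of point 2: separate limits for text and file
/// fields, selected per part. Not part of multer's current API.
struct FieldLimits {
    text_limit: u64, // applies to parts without a file_name
    file_limit: u64, // applies to parts with a file_name
}

impl FieldLimits {
    fn limit_for(&self, file_name: Option<&str>) -> u64 {
        if file_name.is_some() {
            self.file_limit
        } else {
            self.text_limit
        }
    }
}

fn main() {
    let limits = FieldLimits {
        text_limit: 64 * 1024,               // 64 KiB for plain fields
        file_limit: 5 * 1024 * 1024 * 1024,  // 5 GiB for uploads
    };
    assert_eq!(limits.limit_for(Some("video.mp4")), 5 * 1024 * 1024 * 1024);
    assert_eq!(limits.limit_for(None), 64 * 1024);
}
```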

I think implementing the first point (making it possible to limit only the headers) should be sufficient to cover most use cases.
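A header-only limit could be enforced before any body bytes are buffered. The following is a self-contained sketch, not multer code: header_len and MAX_HEADER_BYTES are assumed names, and the function simply looks for the CRLFCRLF that terminates a part's header block within a byte budget:

```rust
// Hypothetical sketch of point 1: cap the size of a part's header block.
// Names here (MAX_HEADER_BYTES, header_len) are illustrative, not multer API.

const MAX_HEADER_BYTES: usize = 8 * 1024;

/// Returns the length of the header block (including the terminating
/// CRLFCRLF) if it ends within `limit` bytes, or `None` if the headers
/// are unterminated or exceed the limit.
fn header_len(part: &[u8], limit: usize) -> Option<usize> {
    let window = &part[..part.len().min(limit)];
    window
        .windows(4)
        .position(|w| w == b"\r\n\r\n")
        .map(|pos| pos + 4)
}

fn main() {
    let part = b"Content-Disposition: form-data; name=\"file\"\r\n\r\nbody...";
    // Headers fit comfortably within the default budget:
    assert_eq!(header_len(part, MAX_HEADER_BYTES), Some(47));
    // With a tiny budget, the part is rejected before reading the body:
    assert_eq!(header_len(part, 10), None);
}
```

The point of such a check is that the application can then apply its own, user-dependent body limit while the parser still protects itself against unbounded header growth.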

The last two points come from the observation that many Rust web frameworks build their APIs around some notion of "extractors". In axum, for example, you define "handler functions" whose arguments implement FromRequest: https://docs.rs/axum/latest/axum/extract/multipart/struct.Multipart.html#example In that example, when a request hits /upload, the Multipart (which is a wrapper around multer::Multipart) is constructed by axum before upload() is called, so the handler never gets a chance to set constraints up front.
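One way the third point could work in that extractor scenario is a shared, mutable limit handle that the stream consults while parsing. This is only a sketch under assumed names (SharedLimit is not multer API); it shows the handler raising the limit mid-request once the user is known:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

/// Hypothetical sketch of point 3: a limit shared between the application
/// and the multipart stream, so the limit can change after the Multipart
/// equivalent has been constructed. `SharedLimit` is an illustrative name.
#[derive(Clone)]
struct SharedLimit(Arc<AtomicU64>);

impl SharedLimit {
    fn new(bytes: u64) -> Self {
        SharedLimit(Arc::new(AtomicU64::new(bytes)))
    }

    /// Called by the application, possibly mid-stream (e.g. after auth).
    fn set(&self, bytes: u64) {
        self.0.store(bytes, Ordering::Relaxed);
    }

    /// Called by the parser before accepting more body bytes.
    fn allows(&self, read_so_far: u64) -> bool {
        read_so_far <= self.0.load(Ordering::Relaxed)
    }
}

fn main() {
    let limit = SharedLimit::new(1024 * 1024);      // default: 1 MiB
    assert!(!limit.allows(10 * 1024 * 1024));       // large upload rejected
    limit.set(5 * 1024 * 1024 * 1024);              // privileged user: 5 GiB
    assert!(limit.allows(10 * 1024 * 1024));        // now accepted
}
```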
