Your documentation says:
- If your application ensures that an upload is always accessed by only one client, you can enable sticky sessions for your load balancer. In this case, the HTTP requests for the client will always be routed to the same tus server that can handle the concurrency correctly.
My question is about concurrency. When a client is uploading a file, are the chunks sent to the server sequentially or not? My understanding is that each PATCH response returns the new offset, which the client uses to build the next chunk. If the chunks are sent sequentially, why do we need to turn sticky sessions on? Where does the concurrency happen in a single-client, multiple-tus-backend-server scenario?

We are using one common storage backend, Azure Blob Storage in our case, so all servers can write to the same storage, but each client comes with its own file handle, which is accessible only to that client from the browser's local storage. Could a client send multiple chunks at the same time, in parallel PATCH requests, by any chance?
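To make clear what I mean by "sequential", here is a minimal sketch of my mental model of the PATCH/offset handshake. Everything here is hypothetical and illustrative (the function names and the in-memory `fake_patch` "server" are my own, not the tus client API), but it shows why I would expect only one PATCH in flight per upload:

```python
def upload_sequentially(data: bytes, chunk_size: int, patch):
    """patch(offset, chunk) plays the server's role for one PATCH request:
    it receives a chunk that must start at the current upload offset and
    returns the new Upload-Offset from its response."""
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]
        # The next chunk's start position is only known once this
        # response arrives, so the requests cannot overlap.
        offset = patch(offset, chunk)
    return offset

# Fake in-memory "server" writing to one shared storage buffer:
storage = bytearray()

def fake_patch(offset, chunk):
    # My understanding: tus rejects a PATCH whose offset does not
    # match the server's current offset for the upload.
    assert offset == len(storage)
    storage.extend(chunk)
    return len(storage)

final_offset = upload_sequentially(b"hello tus world", 4, fake_patch)
```

If this model is right, a single client never has two concurrent PATCHes for the same upload, which is the part I am trying to reconcile with the sticky-session recommendation.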