Question about Upload-Offset requirements

Hi –

Quick question: is a tus server implementation allowed to impose any restrictions on the Upload-Offsets that PATCH requests can start with, or no?

Reason for the question: I’m thinking about implementing the tus protocol for a secure file storage server. However, in such a server, one might want to store things in blocks/chunks with fixed boundaries, particularly if one is storing an encrypted format where a given byte of the encrypted chunk depends on plaintext bytes arbitrarily far back in the same chunk. In such cases, you can’t easily begin writing to the middle of a chunk. (You can, however, easily finish writing in the middle of a chunk, given a streaming cipher or some way of padding out the chunk.)
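
To make that concrete, here's a toy Python sketch of what I mean (the "cipher" is just a stand-in for any stream whose state depends on every plaintext byte so far in the chunk; all names and sizes are illustrative):

```python
CHUNK = 4  # tiny chunk size so the example is easy to trace; think 64MB

class ChainedChunkWriter:
    """Toy model: each chunk is its own encryption stream, and the
    stream's state depends on every plaintext byte so far in that chunk."""
    def __init__(self):
        self.out = bytearray()  # stands in for the disk
        self.state = 0          # in-RAM cipher state for the open chunk

    def write(self, start, data):
        if start != len(self.out):
            # We only hold cipher state for the current end of the file.
            # Any other starting point must be a chunk boundary, where a
            # fresh stream (fresh state) begins.
            if start % CHUNK != 0 or start > len(self.out):
                raise ValueError("writes must start at EOF or a chunk boundary")
            del self.out[start:]
            self.state = 0
        for b in data:
            if len(self.out) % CHUNK == 0:
                self.state = 0                    # boundary: new stream
            self.out.append(b ^ self.state)       # keystream depends on...
            self.state = (self.state * 31 + b + 1) % 256  # ...all prior bytes

w = ChainedChunkWriter()
w.write(0, b"abcdefg")  # like the 67MB PATCH: spills past one boundary
w.write(4, b"efgXYZ")   # fine: back up to the chunk boundary (the "64MB")
# w.write(5, b"fgXYZ")  # raises: mid-chunk state isn't recoverable
```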

This part of the spec makes me think that perhaps tus isn’t designed for such storage scenarios:

The Client SHOULD send all the remaining bytes of an upload in a single PATCH request, but MAY also use multiple small requests successively for scenarios where this is desirable. One example for these situations is when the Checksum extension is used.

The Server MUST acknowledge successful PATCH requests with the 204 No Content status. It MUST include the Upload-Offset header containing the new offset. The new offset MUST be the sum of the offset before the PATCH request and the number of bytes received and processed or stored during the current PATCH request.

For concreteness, let’s suppose the server has storage broken into 64MB chunks. Suppose the client starts a PATCH with an Upload-Offset of 0, and successfully sends 67MB of data to the server in that PATCH. If I’m reading the spec correctly, the server is required to respond with an Upload-Offset of 67MB, even if the closest offset at which it could accept a subsequent PATCH is actually 64MB. Question: is the client then allowed to assume the next PATCH can start at the Upload-Offset returned by the previous PATCH (which I’m guessing is the intent), which in this case would be 67MB? If so, this would seem to rule out server implementations where all writes must start at specific boundaries.
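
In code, the exchange I'm worried about would look roughly like this (hypothetical upload URL; plain `requests` rather than a real tus client, just for illustration):

```python
import requests

URL = "https://example.com/files/abc123"  # hypothetical upload URL
data = b"\0" * (67 * 1024 * 1024)         # the 67MB body

r = requests.patch(URL, data=data, headers={
    "Tus-Resumable": "1.0.0",
    "Upload-Offset": "0",
    "Content-Type": "application/offset+octet-stream",
})
assert r.status_code == 204
next_offset = int(r.headers["Upload-Offset"])
# If the server "processed or stored" all 67MB, it seems it MUST return
# 67MB here -- and presumably the client may start the next PATCH there,
# even though the server could only accept a write starting at 64MB.
```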

I might have thought “Well, if that’s not allowed, then I guess I could just store things temporarily in arbitrary-sized chunks corresponding to the arbitrarily-sized PATCH requests sent by the client, and then when we’re ‘done’ with the upload, we could decrypt and re-encrypt everything into some other, less arbitrary format.” But that requires a lot of unnecessary overhead, and more importantly I don’t see any way for the server ever to know for sure that the client is actually ‘done’ and won’t send any more PATCH requests.

I might also have thought “OK, fine! Every time a PATCH comes in, I’ll decrypt the last previously existing partial chunk, re-encrypt its contents to recover the cipher state, and then, instead of closing the encryption stream, append the start of the new PATCH’s contents to it.” However, I haven’t yet noticed in the spec any minimum size that the client is allowed to break PATCH requests into, and if the storage chunks are significantly larger than the incoming PATCH requests, the overhead becomes massive: the k-th PATCH within a chunk forces you to re-process roughly k × (patch size) bytes of prefix, which works out to a total slowdown factor of O(chunk size / patch size).
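
Back-of-envelope, with made-up numbers:

```python
# Illustrative arithmetic for the re-encryption overhead:
chunk, patch = 64 * 2**20, 1 * 2**20      # 64MB chunks, 1MB PATCHes
patches_per_chunk = chunk // patch
# The k-th PATCH within a chunk re-processes k*patch bytes of prefix:
reprocessed = sum(k * patch for k in range(1, patches_per_chunk + 1))
print(reprocessed // chunk)  # ~patches_per_chunk/2: a ~32x slowdown here
```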

Am I missing something and/or confused, or is it perhaps a bad idea to try using tus in situations where appends starting at arbitrary offsets are nontrivial?

Thanks,

– Scott

Hi Scott,

thanks for the detailed explanation. I still have problems understanding why exactly one might want to encrypt a file in blocks rather than in one combined file, but let me try to answer your main question first:

No, in this case the server can just return 64MB as the new offset. The part of the specification you quoted contains a statement for exactly this situation: “The new offset MUST be the sum of the offset before the PATCH request and the number of bytes received and processed or stored during the current PATCH request.”
If you receive 67MB but only store 64MB, your offset will be increased by 64MB and the server is not required to return an offset of 67MB.
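
A rough sketch of what such a server could do (all names here are made up, this is not from any real implementation):

```python
CHUNK = 64 * 1024 * 1024

def handle_patch(upload, request_offset, body):
    if request_offset != upload.offset:
        return 409, {}  # Conflict: offsets must match
    # Keep only whole chunks; the 3MB tail simply isn't "stored".
    end = request_offset + len(body)
    stored_end = (end // CHUNK) * CHUNK
    upload.store(body[:stored_end - request_offset])  # hypothetical sink
    upload.offset = stored_end
    # 67MB received at offset 0 -> Upload-Offset: 64MB in the response.
    # (The tail of the *final* PATCH would still need flushing once the
    # upload's total length is known -- more on that below.)
    return 204, {"Upload-Offset": str(upload.offset)}
```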

I think the rest of your questions are based on a wrong assumption here, so I would like to hear from you whether this improves the situation. Feel free to ask me more if you want.

Hi Marius –

Thanks for the response! Hmmm…let me see if I can clarify my concerns a bit.

I still have problems understanding why exactly one might want to encrypt a file in blocks rather than in one combined file

Suppose you have 67 MB of data, along with an encryption key. You tell me to encrypt that 67 MB of data with the key and save it to disk. I do so, and tell you “No problem. All 67 MB done!”

Now suppose you say “Oops! Actually, that’s not the end of the file. There’s another 10 MB of data at the end, for a total of 77 MB. Can you add this to the encrypted file, please?”

One possible human-ish response to this request would go something like, “ARGH! Why did you not tell me that to start with?! I’ve thrown out all the internal state I had in RAM that’s required by the encryption filter. I cannot possibly add this chunk of data to the end of a single continuous encrypted datastream without recovering the exact internal state that the encryption filter had at the end of the previous 67 MB, and I cannot possibly do that without decrypting all of the previous 67 MB and re-encrypting it! ARRRRGH!”

So, if you want to be able to upload incremental pieces of data and encrypt them with a stream-oriented cipher before they ever hit the disk, I think you basically have these options:

Option (1): Save the encryption filter’s internal state until you know for sure you’re not going to receive any more data to append.

Unfortunately, as far as I can tell, the tus protocol provides no clean mechanism for the client to say “I’m definitely done giving you data now”, which seems to me like it might be a significant oversight. (It’s a lot like having a file API without “close”.) So, there’s no clean way to take this approach.

Option (2): Break the single encryption stream into chunks where each N megabytes of plaintext data goes into its own separate encrypted stream (probably sitting in its own file). This way we don’t have to start over from scratch every time we get additional data. Let’s say the first PATCH request for 67 MB gets split into two files: foo.mp4.chunk00001 for the first 64 MB, and foo.mp4.chunk00002 for the final 3 MB. Then, when we get the second PATCH request for megabytes 68-77, we can respond in one of two ways (“2A” and “2B”):

Option (2A): Upon receiving another PATCH request, we ask the client to back up and start sending at the most recent chunk boundary. For example, if the chunk size is 64 MB, and the client sends one PATCH for 67 MB and then tries sending a second one for 10 MB, we might try asking the client to start sending the second PATCH at an Upload-Offset of 64 MB. (This is not great, but notably better than “Uhh, sorry, you’re gonna have to start over from Upload-Offset: 0.”)

Unfortunately, as far as I can tell the tus spec makes this awkward:

The part of the specification you quoted contains a statement for exactly this situation: “The new offset MUST be the sum of the offset before the PATCH request and the number of bytes received and processed or stored during the current PATCH request.”
If you receive 67MB but only store 64MB, your offset will be increased by 64MB and the server is not required to return an offset of 67MB.

The issue is that, because of the way the protocol is laid out (as I understand it – again, I may be missing something), the server can’t know whether the first PATCH was the final one or not. So, when you asked me to encrypt and store that first chunk of 67MB, I actually did so with no problem, told you so, and returned an Upload-Offset of 67MB. It’s only later, when you try adding another PATCH I wasn’t anticipating, that the server wants to say “Actually, if you want to do that, then I’m gonna need you to back up to an Upload-Offset of 64MB.”

But the spec doesn’t allow for that: either (1) all the data in the first 67 MB PATCH was “processed or stored”, in which case the server MUST respond with an Upload-Offset of 67 MB for the first PATCH and then (as I interpret it) allow the client to start at 67 MB for the next PATCH; or (2) the server MUST respond to the first PATCH with “Upload-Offset: 64 MB. Hah! You gave me 67 MB, but I only actually saved 64 MB, and dropped the other 3 MB on the floor!” In this case, even if the client didn’t have another PATCH request for data after the initial 67 MB, it’s forced to send the last 3 MB (megabytes 65–67) all over again. Which would be… suboptimal.

Option (2B): Upon receiving the second PATCH request for megabytes 68–77, the server decrypts megabytes 65–67 in foo.mp4.chunk00002, re-encrypts them to recover the encryption filter’s internal state, and then encrypts megabytes 68–77 and appends them to foo.mp4.chunk00002.
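
Concretely, the recovery step would look something like this (same toy cipher as in my earlier sketch):

```python
def reopen_partial_chunk(ciphertext):
    """Replay an existing partial chunk to recover the cipher state so
    new plaintext can be appended to the same stream. Costs time
    proportional to the partial chunk's size -- on *every* new PATCH."""
    state = 0
    for c in ciphertext:
        b = c ^ state                      # decrypt one byte
        state = (state * 31 + b + 1) % 256
    return state                           # live state at end of the chunk
```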

Unfortunately, as I mentioned before, tus makes this approach somewhat hazardous as well: nothing stops the client from making lots of tiny PATCH requests that are much smaller than our chunk size, in which case the cost of incrementally decrypting and re-encrypting each partial chunk over and over again before the next chunk boundary is reached becomes prohibitively expensive.

(EDIT: I should probably mention Option (3) again: instead of using fixed-size chunks, store one encrypted chunk for each PATCH request that the client sends you. This solves most of the problems above, but it’s still kinda kludgy: the chunk size is determined by the client without any knowledge of what the per-chunk overhead is, and you’re essentially stuck with that indefinitely because there’s no way for the client to say “I’m all done sending updates now!” to let the server know that now would be a good time to clean things up into one contiguous file.

In general, some huge fraction of my concerns about tus boils down to: “Am I missing something, or is there no way for the server to know for sure that it’s done receiving everything for a particular file? If not, how is that not a serious limitation on its usability?”)

Did that help make my concerns more clear?

Thanks again!

– Scott

Hi Scott, thanks for your detailed explanation, it helped me a lot!

This is not entirely true. Normally the client tells the server the entire file size when it creates an upload (using the Upload-Length header). Based on this information, the server knows exactly how much data is still missing and whether an upload is already finished. So you could, in fact, store the encryption state and delete it once the upload is completed.
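
As a sketch (illustrative names, not from any real tus server):

```python
def handle_patch(upload, request_offset, body):
    upload.append(body)                    # hypothetical storage call
    upload.offset += len(body)
    if upload.offset == upload.length:     # Upload-Length from creation
        upload.flush_cipher_state()        # pad/close the last chunk
        upload.delete_saved_state()        # nothing more will arrive
    return 204, {"Upload-Offset": str(upload.offset)}
```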

I added “normally” since the tus protocol also allows the client to omit this length information if the length is not known in advance. However, this is a rather rare use case and is only needed for streaming, e.g. uploading a video while it’s being recorded.

Most clients can be configured to only send a certain amount of data in a single PATCH request, to handle exactly this situation. For example, you can tell the client to send at most 64MB per request and it will then split your 67MB upload into two requests (one of 64MB and one of 3MB). The option is rarely used but could be beneficial in your situation.
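
For illustration, a client that splits an upload into fixed-size PATCH requests could look like this (plain `requests` instead of a real tus client; real clients expose an equivalent maximum-chunk-size option):

```python
import requests

def upload(url, data, chunk=64 * 1024 * 1024):
    offset = 0
    while offset < len(data):
        r = requests.patch(url, data=data[offset:offset + chunk], headers={
            "Tus-Resumable": "1.0.0",
            "Upload-Offset": str(offset),
            "Content-Type": "application/offset+octet-stream",
        })
        r.raise_for_status()
        offset = int(r.headers["Upload-Offset"])  # trust the server's offset
```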

As I mentioned just above, there is a way to know when an upload is done, since the upload length is known in advance :slight_smile:

I hope this helps to answer some of your questions!

Hi Marius –

Thanks! OK, that makes sense.

Actually, now that I stare at the documentation a little longer, I see the following in the section on the “Creation” protocol extension (which I’ll admit I was mostly ignoring because it presumably wasn’t part of the “core” protocol I could confidently rely on clients implementing):

POST

The Client MUST send a POST request against a known upload creation URL to request a new upload resource. The request MUST include one of the following headers:

a) Upload-Length to indicate the size of an entire upload in bytes.

b) Upload-Defer-Length: 1 if upload size is not known at the time. Once it is known the Client MUST set the Upload-Length header in the next PATCH request. Once set the length MUST NOT be changed. As long as the length of the upload is not known, the Server MUST set Upload-Defer-Length: 1 in all responses to HEAD requests.

This would also let us know when initially-unknown-length uploads are finished.
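
In other words (hypothetical endpoint; headers per my reading of the extension):

```python
import requests

# Create the upload without knowing its size yet:
create = requests.post("https://example.com/files", headers={
    "Tus-Resumable": "1.0.0",
    "Upload-Defer-Length": "1",
})
url = create.headers["Location"]

# ...intermediate PATCHes as data trickles in...

# Once the total size is finally known (at latest, on the final PATCH),
# set Upload-Length -- after which it MUST NOT change:
requests.patch(url, data=b"0123456789", headers={
    "Tus-Resumable": "1.0.0",
    "Upload-Offset": "0",
    "Upload-Length": "10",
    "Content-Type": "application/offset+octet-stream",
})
```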

I think it might be worth considering moving this stuff (much of which has to do with PATCH requests rather than the POST used in the extension) into the core protocol for all PATCH requests, since it solves a lot of potential issues. (Or, if it’s actually intended to be in the core protocol, make the documentation at Resumable upload protocol 1.0.x | tus.io a little more explicit about that… the use of Upload-Length in the core protocol is currently left somewhat to the imagination. :slight_smile: )

Thanks again!

– Scott

Hi Scott, I am always happy to help!

Correct, that’s right!

The Creation extension is deliberately not part of the core specification. The reason is that for some services it is better not to let the end user create uploads on their own; instead, they have a proprietary API for doing so. The situations where this is used or necessary are rather rare, but one good example is Vimeo’s API: there you cannot use tus’ POST requests to create an upload but have to go through their API for uploading a video. In their situation it makes sense and makes integration easier, so we prefer to keep it as an optional extension.

That being said, all server and client implementations (including the official ones, of course) support the Creation extension, so you can basically use it everywhere. But I agree with you that we could make it more apparent in our documentation.

Anyway, does all of this answer your questions about uploading to storage with a fixed block size, or are there still open problems?

Hi Marius –

The Creation extension is deliberately not part of the core specification.

I didn’t mean to suggest moving the entire Creation-using-POST extension into the core specification, but rather just the requirement that either Upload-Length or Upload-Defer-Length MUST always be included in all PATCH requests, and that in the latter case the client MUST set Upload-Length (and then never change it) as soon as it’s finally known (which would presumably always include the final PATCH request).

The reason is that for some services it is better not to let the end user create uploads on their own; instead, they have a proprietary API for doing so. The situations where this is used or necessary are rather rare, but one good example is Vimeo’s API: there you cannot use tus’ POST requests to create an upload but have to go through their API for uploading a video. In their situation it makes sense and makes integration easier, so we prefer to keep it as an optional extension.

I see…that makes sense.

That being said, all server and client implementations (including the official ones, of course) support the Creation extension, so you can basically use it everywhere.

Great!

Anyway, does all of this answer your questions about uploading to storage with a fixed block size, or are there still open problems?

At this point I think it’s probably possible to make it work for my application. Maybe I’ll play with Uppy a bit to see how suitable it would be client-side for the sorts of things I have in mind.

Thanks a lot!

Best wishes,

– Scott