How to ensure file creation in the target Storage during the upload process

Hi,

I'm new to tus and have been trying to implement an upload module in an existing ASP.NET MVC web project running on a Raspberry Pi server. I am configuring tus-js-client to allow uploading via the web browser to the server, where the upload is handled with the tusdotnet library. Currently my upload process goes like this: it usually creates three files on the server named after the generated file id, containing the upload length, the metadata, and the file itself. But I noticed that the size of the file only grows once the onProgress event on the client has finished transferring the file, which leads me to the conclusion that by default there are two processes: first the tus client sends the file to the server, and second, the tus client sends another request to build the file by accessing the upload URL with the file id generated by the first process. This is not ideal for my requirement.

My expectation is that there should be only one process: as the tus client transfers, the file size on the server should automatically grow in accordance with the transfer progress. From the wiki explanation here, I concluded that the two processes can be combined by setting the option uploadDataDuringCreation to true, but to do that I need to ensure that my server side supports the creation-with-upload extension. So I have two questions for now (a rough sketch of what I'm planning on the client side follows the questions):

  1. Will setting uploadDataDuringCreation in the tus-js-client options allow the server to immediately receive and grow the file during the onProgress sequence? If not, is there any solution for this, or at least a way to capture the event in between onProgress and onSuccess? From what I can see, the file building happens between those two events.

  2. How do I set up my tusdotnet server to support the creation-with-upload extension?
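For context, what I'm planning to try on the client side looks roughly like this (a simplified sketch; the endpoint path and metadata keys mirror my setup, everything else is illustrative):

    import * as tus from "tus-js-client";

    // Simplified sketch of the planned client setup.
    // The endpoint and metadata keys mirror my setup; the rest is illustrative.
    function startUpload(file: File) {
      const upload = new tus.Upload(file, {
        endpoint: "/api/tus/upload/",
        // The option from the wiki: send file data together with the
        // creation POST request (requires the creation-with-upload extension).
        uploadDataDuringCreation: true,
        metadata: {
          filename: file.name,
          filetype: file.type,
          filesize: String(file.size),
        },
        onProgress: (bytesUploaded, bytesTotal) => {
          console.log(`progress: ${((bytesUploaded / bytesTotal) * 100).toFixed(1)}%`);
        },
        onSuccess: () => {
          console.log("upload finished:", upload.url);
        },
        onError: (error) => {
          console.error("upload failed:", error);
        },
      });

      upload.start();
    }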

Thanks.

Guruh.

I managed to capture one transaction; I've attached both the request and the response.

Request

    POST
    Content-Length: 147989263
    Content-Type: application/offset+octet-stream
    Tus-Resumable: 1.0.0
    Upload-Length: 147989263
    Upload-Metadata: filename c25hcGVzZXJ2ZXItdjUuMi4yMy4xMjA0LUgyLjIuemlw,filetype YXBwbGljYXRpb24veC16aXAtY29tcHJlc3NlZA==,filesize MTQ3OTg5MjYz

Response

    HTTP/1.1 201 Created
    Content-Length: 0
    Connection: keep-alive
    Location: /api/tus/upload/7e6cbda546f0476799c700cbb7cd8429
    Access-Control-Allow-Origin: *
    Access-Control-Expose-Headers: Location,Tus-Resumable,Tus-Version,Tus-Extension,Tus-Max-Size,Tus-Checksum-Algorithm,Upload-Length,Upload-Offset,Upload-Metadata,Upload-Checksum,Upload-Concat,Upload-Expires
    Tus-Resumable: 1.0.0
    Upload-Offset: 147989263

Looking at both the request and the response and checking them against the documentation, I can safely assume that my server supports the creation-with-upload extension. But now I observe that the server does not seem to create the chunks at all. (I do delete them in onSuccess once the upload has reached the server.) It also appears that the upload process still gets stuck for quite a long time after the onProgress event completes, before continuing with the onSuccess method. How can I prevent this from happening? In other words, how can I immediately receive the file, and see its size grow on the server, during the onProgress event?

Your intuition is correct: tus-js-client will first send an empty POST request to create a server-side upload resource (which also announces the file length, etc.). Once the resource is created, the client sends a PATCH request to transfer the actual file content.

This process ensures that the client knows early whether the server will accept the file or not. In addition, the client always has an upload URL it can use to resume the upload, which is not the case if you enable uploadDataDuringCreation: if the upload fails with uploadDataDuringCreation enabled, tus-js-client is not able to resume. I would recommend enabling it only if you have a very compelling reason to do so.
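To illustrate the resumability point: with the default flow, a previously interrupted upload can be picked up again roughly like this (a minimal sketch; the endpoint and retry values are illustrative):

    import * as tus from "tus-js-client";

    // Minimal sketch: the default two-request flow stores the upload URL,
    // so an interrupted transfer can continue where it left off.
    function uploadWithResume(file: File) {
      const upload = new tus.Upload(file, {
        endpoint: "/api/tus/upload/",        // illustrative endpoint
        retryDelays: [0, 1000, 3000, 5000],  // retry transient failures
        metadata: { filename: file.name, filetype: file.type },
        onError: (error) => console.error("upload failed:", error),
        onSuccess: () => console.log("upload finished:", upload.url),
      });

      // Check whether a previous attempt for this file exists and, if so,
      // continue from its stored upload URL instead of starting over.
      upload.findPreviousUploads().then((previousUploads) => {
        if (previousUploads.length > 0) {
          upload.resumeFromPreviousUpload(previousUploads[0]);
        }
        upload.start();
      });
    }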

Thank you for your suggestion!

I did try enabling the uploadDataDuringCreation option. My expectation was that I would see the file size on the server increase in accordance with the upload progress, so that the moment my upload percentage reaches 100%, the file on the server has also reached its full size. What actually happened was that the progress went from 0 to 100% first, and only after that did the chunk slowly grow in size on the server; it is kind of delayed. The only difference I see with uploadDataDuringCreation enabled is that the chunk file is created later, after the upload progress has completed, whereas with it disabled (the default) the chunk file is created earlier but with a size of 0. And yes, there is no PATCH request, since the file transfer happens during the POST request. But I thought there shouldn't be any delay in filling the file on the server.

So is there a way to configure tus so that the file on the server grows in accordance with the upload percentage, without this delayed process of filling the file size on the server afterwards?

There will always be some delay because tus-js-client reports the upload progress as it sends data into the network. The receiving end may experience delays until the data is read from the network and written to disk.
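If you need a signal that is closer to what the server has actually acknowledged, one option could be the onChunkComplete callback, which fires once the server has responded to a request. A minimal sketch, assuming a finite chunkSize so the upload is split into several requests:

    import * as tus from "tus-js-client";

    // Sketch: compare network-level progress (onProgress) with progress the
    // server has acknowledged by responding to each request (onChunkComplete).
    function uploadWithAcknowledgedProgress(file: File) {
      const upload = new tus.Upload(file, {
        endpoint: "/api/tus/upload/",  // illustrative endpoint
        chunkSize: 5 * 1024 * 1024,    // finite chunk size => several requests
        // Fired as data is handed to the network (or to a buffering proxy).
        onProgress: (bytesSent, bytesTotal) => {
          console.log("sent to network:", bytesSent, "/", bytesTotal);
        },
        // Fired once the server has responded to a request, so bytesAccepted
        // reflects data the tus server has actually received.
        onChunkComplete: (_chunkSize, bytesAccepted, bytesTotal) => {
          console.log("acknowledged by server:", bytesAccepted, "/", bytesTotal);
        },
        onSuccess: () => console.log("done"),
      });

      upload.start();
    }

With a proxy that buffers the request, onProgress can reach 100% well before onChunkComplete catches up, which would match the delay described above.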

Are you using an intermediate proxy?

We can access the Raspberry Pi device over a noip DDNS server and also over a TCP/IP tunneling service from a vendor.

When we upload using the TCP/IP tunneling, the connection often gets lost while the server builds the file (the delayed process) after the percentage has completed. Using the DDNS, we managed to send the file to the server, but the connection and the building process were also slow, though not as slow as when we use the tunneling.

I fail to understand what your actual problem or question is. Could you repeat that? And maybe include a demo video? Or is it working for you now?

I apologize; it looks like I wasn't explaining things very clearly before.

So yes, we do have an intermediate proxy. Basically, our Raspberry Pi server can be accessed remotely in two ways: one is using the DDNS protocol via a noip server (you can search for "noip ddns"; sorry, I cannot post more than two links because of the new-user limitation here), and the other is a TCP/IP tunneling service. I have provided a diagram to picture the situation.

Through these two methods, I'm trying to send a file of approximately 150 MB to our Raspberry Pi system through the proxy environment, whether it is the DDNS or the TCP/IP tunneling. Both ways I am experiencing connection issues: the upload progress moves slowly, and after that, the process of reading the file from the network so it can be written to disk is also slow. It is even worse over the TCP/IP tunneling, because I often get disconnected at that point.

So my question is: is there a way to configure tus, either on the client side using tus-js-client or on the server side using tusdotnet, to optimize performance when sending a file over an intermediate proxy?

Also, is there a way to capture the process of the file being read from the network and written to disk? From what I can see, that process is not captured by either onProgress or onSuccess in tus-js-client.

Thanks for the additional details. If you are using a proxy, we always recommend disabling request buffering, as it can cause the exact symptoms you are describing: the upload is first fully buffered by the proxy before being relayed to the server.

You can read more about setting up a proxy at tusd/docs/faq.md at main · tus/tusd · GitHub. Although this documentation is written for tusd, it applies to other tus servers as well.

Hi Marius, first of all, thank you for the suggestion, and sorry for the late reply. I tested your suggestion and was able to stream the file during the upload process. However, I have read that disabling both proxy buffering and proxy request buffering can have adverse effects, especially on slow network connections. In my actual production scenarios, there could be many cases where the user has a slow network connection. Is there any workaround that would let me use tus without disabling the proxy buffering on the proxy?

And a second question: I am currently using tusdotnet to handle the implementation on the server, along with tus-js-client on the client side. My expectation is that after the client has transferred all the chunk files into the directory, tus should have the functionality to consolidate them into a single file. Is there any example of this I could follow?

The linked post talks about response buffering, which is different from request buffering. For tus, you should disable request buffering, but you can leave response buffering on. The two can easily be confused, but they are very different.

Not sure what you mean here. The tus server should give you one file containing all data per upload. Is that not the case?

> The linked post talks about response buffering, which is different from request buffering. For tus, you should disable request buffering, but you can leave response buffering on. The two can easily be confused, but they are very different.

Alright, I will look into this further, but first I just want to confirm: are you saying that in nginx.conf we can set it up like this, instead of following the sample?

        # Disable request buffering only
        proxy_request_buffering  off;
        #proxy_buffering         off;  # we did not need to include this line at all

And by doing this, are there any known side effects I should note?

> Not sure what you mean here. The tus server should give you one file containing all data per upload. Is that not the case?

Let's say I am transferring an ubuntu.iso file. On the server end I receive the file, but not with the same name and format; instead it is a file with no extension, named after the id from the generated tus URL, along with some metadata and chunk information files.

How can I turn that file back into the same ubuntu.iso on the server end?

No side effects that I am aware of.

I don't use tusdotnet myself, so I cannot give you concrete instructions, but these links might be helpful:

Alright, I have read all the links. For now, what I can conclude, in short, is that I have to rely on the metadata to manually restore the file's name and extension on the server end (correct me if I'm wrong). I think that is what I am currently doing, though initially I thought the transfer process would automatically re-create the file, with its name and extension, on the server end.

Thank you for all the help so far, Marius. Should anything else come up, I will add more to this discussion or probably open another topic.

All the best!

Glad I could help you!