I need to upload large files (around 200GB each, though that may be the size before zipping) to AWS S3 from Africa, where the internet connection can be unreliable.
Looking for some advice on the best way to go about this (I am mostly a backend guy with enough frontend knowledge to get things working).
So my current approach is: I have a Django backend app, where I initiate the S3 multipart upload and generate the presigned URLs. I pass these to the frontend, where I split the file and upload the chunks. I then make a call to the backend to complete the upload. This lets me keep the AWS client credentials secure on the backend.
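For context, the backend side looks roughly like this. It's a simplified sketch, not my actual code: the bucket, key, and part-size values are placeholders, and `s3` is a boto3 S3 client created with the backend-only credentials. One detail that matters at 200GB: S3 allows at most 10,000 parts of 5 MiB–5 GiB each, so parts need to be at least ~20 MiB.

```python
import math

# S3 multipart limits: each part 5 MiB-5 GiB, at most 10,000 parts per upload.
MIN_PART_SIZE = 5 * 1024 * 1024
MAX_PARTS = 10_000


def plan_parts(file_size: int, part_size: int) -> list[tuple[int, int, int]]:
    """Return (part_number, start_byte, end_byte_exclusive) for each chunk.

    Part numbers are 1-based, matching S3's PartNumber parameter.
    """
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the S3 minimum of 5 MiB")
    n_parts = math.ceil(file_size / part_size)
    if n_parts > MAX_PARTS:
        raise ValueError("more than 10,000 parts; increase the part size")
    return [
        (i + 1, i * part_size, min((i + 1) * part_size, file_size))
        for i in range(n_parts)
    ]


def start_upload(s3, bucket: str, key: str, file_size: int, part_size: int):
    """Initiate the multipart upload and presign one upload_part URL per part.

    `s3` is a boto3 S3 client; the credentials never leave the backend.
    """
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
    urls = {
        part_number: s3.generate_presigned_url(
            "upload_part",
            Params={"Bucket": bucket, "Key": key,
                    "UploadId": upload_id, "PartNumber": part_number},
            ExpiresIn=24 * 3600,  # long expiry, since the upload may take days
        )
        for part_number, _, _ in plan_parts(file_size, part_size)
    }
    return upload_id, urls
```

The frontend then PUTs each byte range to its presigned URL and records the `ETag` response header per part.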
It works fine while the internet connection is up, but when I disconnect my internet and try to resume, it gets a bit messy.
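From what I've read, the clean way to resume is to ask S3 which parts already landed (the ListParts API) and re-upload only the missing ones, then complete with the collected ETags. A rough sketch of what I'm aiming for (again, `s3` is an injected boto3 client and the names are placeholders, not my real code):

```python
def resume_state(s3, bucket: str, key: str, upload_id: str,
                 all_parts: list[int]):
    """Ask S3 which parts it already has, so the client can skip them.

    Returns (etags_so_far, parts_still_missing). ListParts is paginated
    (at most 1,000 parts per page), hence the paginator.
    """
    uploaded: dict[int, str] = {}
    for page in s3.get_paginator("list_parts").paginate(
        Bucket=bucket, Key=key, UploadId=upload_id
    ):
        for part in page.get("Parts", []):
            uploaded[part["PartNumber"]] = part["ETag"]
    missing = [n for n in all_parts if n not in uploaded]
    return uploaded, missing


def complete(s3, bucket: str, key: str, upload_id: str,
             etags: dict[int, str]):
    """Finish the multipart upload once every part has an ETag."""
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": [
            {"PartNumber": n, "ETag": etags[n]} for n in sorted(etags)
        ]},
    )
```

My frontend-side bookkeeping of which chunks succeeded is the part that falls apart on disconnect, which is why Uppy caught my eye.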
So I came across Uppy and it seems like it might be useful.
Anyway, looking at the S3 section of the Uppy docs, it seems to do all the steps in the frontend.
If I already have the presigned URLs, what do I need to do, given that I only really need the resume functionality from Uppy? Do I need to use the S3 client plugins? Is this the right approach or the wrong one, and why?
If someone can give me a rough overview and point me to the right part of the docs, I would be most grateful.