Hi!
Probably an easy issue, but having a hard time figuring it out:
I want to upload files to a specific folder in my S3 bucket.
The folder should be defined on the client side
I am using Dashboard + Companion to upload my files
It all works perfectly, but it uploads to the root folder of my bucket
I tried using meta and headers, but nothing worked.
How can I send a value which I can access in the server? I am assuming it is something I will access via the req parameter here:
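For reference, here is a minimal sketch of how that could work, assuming Companion's `s3.getKey` option with the `(req, filename, metadata)` signature (newer Companion releases changed this signature, so check your version's docs). The `folder` meta field is an invented name for this example:

```javascript
// A minimal sketch, assuming Companion's s3.getKey option with the
// (req, filename, metadata) signature - check your Companion version.
// `folder` is a hypothetical meta field chosen for this example.

// Kept standalone so the key-building logic is easy to see:
function getKey (req, filename, metadata) {
  const folder = (metadata && metadata.folder) || 'uploads'
  // Strip path separators so a malicious meta value cannot escape the prefix
  const safe = String(folder).replace(/[\/\\]/g, '-')
  return `${safe}/${filename}`
}

// Wire it into the Companion options (standalone Companion shape):
// companion.app({
//   s3: { key, secret, bucket, region, getKey },
//   /* ...server, filePath, secret, etc. */
// })

// On the client, attach the folder to every file before uploading:
// uppy.setMeta({ folder: 'user-123' })
```

The metadata argument is the `meta` object Uppy sends along with each file, so anything you set client-side with `setMeta` should be available here.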
I just ran a test with an assembly and it puts it in an obscurely numbered bucket.
We'd like to pass through a username from the WordPress website and have that folder created on S3, and the files stored in that user-named folder in the bucket. Is this possible with uppy -> transloadit -> S3?
It is, yes. By default we pick unique paths based on a hash so that we don't risk overwriting any files if two users both upload avatar.jpg (or a single user might do so, but the second time it's an updated version). You can change the path to use the original file's basename, meta information, and also arbitrary payloads from your side. For instance, if you also submit (hidden) form fields, you could use ${fields.userId} in the /s3/store Robot's path parameter.
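As a sketch, a Template step along those lines might look like the following. The credentials name is a placeholder for your own Template Credentials, and ${file.url_name} is the sanitized original filename (the default path uses ${unique_prefix} to stay collision-safe):

```json
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "credentials": "YOUR_S3_CREDENTIALS",
      "path": "${fields.userId}/${file.url_name}"
    }
  }
}
```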
Could you clarify what happens in the case of duplicate uploads to the same bucket and/or folder?
It's implied in your answer here that they just get overwritten?
Are there any options for auto-creating file versions with simple numeric numbering added on?
If we autonumbered, we'd first have to see what files are there, then increment a number, but we do everything in parallel from multiple machines and most storage platforms do not support locking, so we'd still run the risk of collisions. That's why we use hashes; they are safe (enough).
If you want simple incrementing filenames, and you can guarantee handing out unique numbers on your side, you could pass this number to us and we'd use it in the filepath. Still, yes, if you mess up, we'd overwrite any existing file.
Going back to this use case: ${fields.*} is "the form fields submitted together with the upload."
For example, ${fields.myvar} would contain a value of 1 for a form with a <input type="hidden" name="myvar" value="1" /> field.
It seems the only way to trigger the form getting captured is via a form submission event, even in the case of a single hidden field to pass through as a username to be used as a folder within a bucket, i.e.:
or "path": "${fields.myvar}/${file.name}", using the form example above.
But if we're using Dashboard, there's no practical way to do a form submission that I can see, especially if you have autoload set for Dashboard. Are there any ways to programmatically send through custom fields (which we'd be filling via PHP)?
You could trigger a form programmatically, but it's still unclear to me whether this works with Dashboard, as there are no examples to go on. It's a bit fuzzy what's going on under the hood.
I think you could either consider using the Form plugin to add those fields as meta upon submission, or you could use the "fields" parameter in the Assembly Instructions to populate them programmatically (overriding a Template's Assembly Instructions with just those, for instance). That will also make the fields available as fields.myvar.
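To make the second option concrete, here is a hedged sketch of passing fields programmatically. Older @uppy/transloadit releases exposed a top-level `fields` option (newer ones nest this under `assemblyOptions`), so check your version; the key, template ID, and `myvar` value below are all placeholders:

```javascript
// Sketch of the Transloadit plugin options, assuming the top-level
// `fields` option from older @uppy/transloadit releases (newer ones
// nest it under `assemblyOptions`). All values are placeholders.
const transloaditOptions = {
  params: {
    auth: { key: 'YOUR_TRANSLOADIT_KEY' },
    template_id: 'YOUR_TEMPLATE_ID',
  },
  // Filled server-side (e.g. echoed into the page by PHP);
  // available as ${fields.myvar} inside the Template's path parameter.
  fields: { myvar: 'user-123' },
}

// Then register the plugin:
// uppy.use(Transloadit, transloaditOptions)
```

This avoids any form submission event entirely, which matters if Dashboard is driving the upload.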
No problem, there's a lot to learn, as there are multiple ways to configure things.
I'll think about doc changes once I get through this project.
One more question though!
Dashboard says "Ability to pause/resume or cancel (depending on uploader plugin) individual or all files" https://uppy.io/docs/dashboard/
but when using the Transloadit plugin to upload multiple files dragged onto the Dashboard with autostart, you can only cancel all of them, even with limit = 1. There is no way to cancel any individual file while it is in the uploading state.
Is it possible to cancel individual uploads in this manner?
This is because all files are handled in a single Assembly, and the only sure-fire way we have to prevent a file from "making it to the internet" (which may be very important to an end user who catches a mistake on their end) is to cancel the entire Assembly.
If this is not what you want, and your use case allows it, we could look into making it so that you spawn one Assembly per uploaded file. I'd have to check with the Uppy team to see if I'm right about this, though, and whether it is already possible.
Am I going in the right direction here or did I misunderstand some part of your question?
Let me try again, just to clarify that we're talking about the same thing.
Currently what happens when we drag a bunch of files into the Dashboard is they all start uploading, but you only have the ability to cancel the whole group of files via the pause/cancel control on the progress bar.
Given our use case, where the file group could be up to 14 GB with individual files up to 2 GB, if the user, for example, discovers partway through that they added one wrong file, they have to cancel the whole group. What would be more ideal is the ability to cancel individual files in this case, not just the whole group. For example, you can pause any individual file, but it appears you can't cancel them individually.
It does, yes, the problem is they are all handled in a single Assembly, and Assemblies can only be cancelled fully, not partially. So what we do currently is we abort the entire Assembly.
Possible solutions:
If the file is still uploading, we could inform the Assembly that expected_tus_uploads = expected_tus_uploads - 1. This would need to be implemented in Uppy and our API; we don't have it yet. And it could lead to weird scenarios where the Assembly required one image and one audio file to be merged into a slideshow, validation passed, and now you are removing the audio in flight. This (and other, edgier cases) makes it a tricky problem that will take longer to implement carefully on the Transloadit side.
If your use case allows it, start one Assembly per file. Then the current constraint of "we can only abort an entire Assembly" no longer matters, because each Assembly only ever holds one file. This could perhaps already be achieved with existing Uppy/Transloadit releases, but I would have to ask the team to be sure.
The files are just educational assets of differing file types. They're functionally independent, but parts of a larger educational package, if that makes sense: art texture assets, 3D files, video tutorials, etc.
At the moment processing is minimal; we're really just passing them on to S3 for final storage for now.
Option 2 looks like it could fit the use case if it can be implemented. I would imagine in that case Dashboard just displays an X in the top right (for example) if you wanted to abort that individual file upload. If you could let me know whether it's possible with the current releases, and how it might be configured, that would be great, thanks.
I think it would improve the upload UX. Unfortunately users do make mistakes, and this is a viable use case, I think.
I just spoke to someone on the Uppy team and they said that you can control the Assembly parameters per file, and that if the parameters end up being different, a new Assembly per file is created.
In this case we're changing fields based on file metadata, so one Assembly per file should be created:
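A sketch of what that might look like, assuming the `getAssemblyOptions` option documented for @uppy/transloadit at the time (newer releases renamed this to `assemblyOptions`); the key and template ID are placeholders:

```javascript
// Per-file Assembly options: because the fields differ per file,
// Uppy should spawn a separate Assembly for each file, which makes
// each file cancellable on its own. All values are placeholders.
const getAssemblyOptions = (file) => ({
  params: {
    auth: { key: 'YOUR_TRANSLOADIT_KEY' },
    template_id: 'YOUR_TEMPLATE_ID',
  },
  // Varies per file, so each file gets its own Assembly
  fields: { filename: file.name },
})

// Then register the plugin:
// uppy.use(Transloadit, { getAssemblyOptions })
```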
Hi, I am not understanding what the solution is here. Maybe it's because I'm new and not really understanding what an "Assembly" is, or what RoboDog is.
To put it simply, let's just talk in code.
Here, I have a simple HTML page going w/ a TUS server listening on localhost:4020 w/ the S3 credentials applied:
As the OP mentioned, when I upload, it stores the file in the S3 bucket's root path. I would like to specify a specific folder on the client side (preferably).
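Since this setup goes through a plain tus server rather than Companion, one hedged option: the tus protocol carries the client's meta fields in the Upload-Metadata header as comma-separated `key base64value` pairs, so if you control the server you can decode that and build the S3 key yourself. Uppy's Tus plugin sends the file's meta along (set with `uppy.setMeta({ folder: ... })`; `folder` is an invented field name here). A decoding sketch for the server side:

```javascript
// Server-side sketch (Node): decode a tus Upload-Metadata header, e.g.
//   Upload-Metadata: folder dXNlci0xMjM=,filename YS5qcGc=
// Per the tus 1.0 spec, each pair is "key base64value"; a key may
// also appear with no value at all.
function parseUploadMetadata (header) {
  const meta = {}
  for (const pair of header.split(',')) {
    const [key, b64] = pair.trim().split(' ')
    meta[key] = b64 ? Buffer.from(b64, 'base64').toString('utf8') : ''
  }
  return meta
}

// You could then use meta.folder when constructing the S3 object key,
// much like the Companion getKey approach mentioned earlier in the thread.
```

Note that a stock tus server will ignore unknown metadata; you need a hook or custom key-naming logic in whichever tus server implementation you run for this to take effect.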