Upload to specific folder in S3

Hi!
Probably an easy issue, but having a hard time figuring it out:

  • I want to upload files to a specific folder in my S3 bucket.
  • The folder should be defined on the client side
  • I am using Dashboard + Companion to upload my files
  • It all works perfectly, but it uploads to the root folder of my bucket

I tried using meta and headers, but nothing is working.
How can I send a value which I can access in the server? I am assuming it is something I will access via the req parameter here:

providerOptions: {
  s3: {
    getKey: (req, filename) => filename,
    key: "{key}",
    secret: "{secret}",
    bucket: "{bucket}",
    region: "us-east-1"
  }
},
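One avenue worth exploring (an assumption on my part, based on Companion's `getKey` also receiving the upload's metadata as a third argument in recent Companion releases): the client sends a folder name as Uppy metadata, and `getKey` prefixes the key with it. A sketch, where `folder` is a hypothetical meta key, not a built-in option:

```javascript
// Sketch of a getKey for Companion's s3 options, assuming the
// (req, filename, metadata) signature of recent Companion releases.
// `folder` is a hypothetical meta field set on the client via uppy.setMeta().
const getKey = (req, filename, metadata) => {
  const folder = (metadata && metadata.folder) || ''
  // Allow only safe characters so a client cannot escape the intended prefix
  const safeFolder = folder.replace(/[^a-zA-Z0-9_-]/g, '')
  return safeFolder ? safeFolder + '/' + filename : filename
}

// It would then be plugged in as:
//   providerOptions: { s3: { getKey, key: "...", secret: "...", bucket: "...", region: "..." } }
```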

correct? But how?

Any tips would be immensely appreciated!!
Thank you,
Best,
Daniel


Any news on this one? I also want to upload to a specific folder in my S3 bucket.

Me three.

I just ran a test with an assembly and it puts it in an obscurely numbered bucket.

We’d like to pass through a username from the WordPress website and have that folder created on S3, and the files stored in that user named folder in the bucket. Is this possible with uppy -> transloadit -> S3?

It is yes, by default we pick unique paths based on a hash so that we don’t risk overwriting any files if two users both upload avatar.jpg (or, a single user might do so but the second time it’s an updated version). You can change the path to use the original file’s basename, meta information, and also arbitrary payloads from your side. For instance, if you also submit (hidden) form fields, you could use ${fields.userId} in the /s3/store Robot’s path parameter.
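As a sketch of what such Assembly Instructions could look like (the step name, credentials name, and `userId` field are placeholders, following the `${fields.*}` convention described above):

```json
{
  "steps": {
    "store": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_S3_CREDENTIALS",
      "path": "${fields.userId}/${file.name}"
    }
  }
}
```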

More on this can be found in the docs.

Do let me know if you have more questions!

Great, ty.


I’m sure I’ll have some soon, thanks!


Could you clarify what happens in the case of duplicate uploads to the same bucket and/or folder?
It’s implied in your answer here that they just get overwritten?
Are there any options for auto-creating file versions with simple numeric numbering added on?

If we autonumbered, we’d first have to see what files are there, then increment a number, but we do everything in parallel from multiple machines and most storage platforms do not support locking so we’d still run the risk of collisions. That’s why we use hashes, they are safe (enough).

If you want simple incrementing filenames, and you can guarantee handing out unique numbers on your side, you could pass this number to us and we’d use it in the filepath. Still, yes, if you mess up, we’d overwrite any existing file.

ok, thanks.

Going back to this use case.
${fields.*} The form fields submitted together with the upload.
For example, ${fields.myvar} would contain a value of 1 for a form with a
<input type="hidden" name="myvar" value="1" /> field.

It seems the only way to trigger the form getting captured is via a form submission event, even in the case of a single hidden field to pass through as a username to be used as a folder within a bucket ie:

"exported": {
  "use": ":original",
  "robot": "/s3/store",
  "credentials": "fns3test",
  "path": "${fields.username}/${file.name}",
  "result": "true"
}

or "path": "${fields.myvar}/${file.name}", using the form example above.

But if we’re using Dashboard there’s no practical way to do a form submission I see, especially if you have autoload set for Dashboard. Are there any ways to programmatically send through custom fields (which we’d be filling via PHP)?

You could trigger a form programmatically but it’s still unclear to me if this works with Dashboard as there’s no examples to go on. It’s a bit fuzzy as to what’s going on under the hood.

I think you could either consider using the Form plugin to add those fields as meta upon submission, or you could use the “fields” parameter in the Assembly Instructions to populate them programmatically (overriding a Template’s Assembly Instructions with just those, for instance). That will also make the fields available as fields.myvar.
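A sketch of the second option (the key, Template ID, and `username` value are all placeholders; this assumes the Transloadit plugin's `fields` option, and the value could be rendered into the page by PHP):

```javascript
// Sketch: populating Assembly fields programmatically via the
// Transloadit plugin. All values below are placeholders.
uppy.use(Uppy.Transloadit, {
  params: {
    auth: { key: 'YOUR_TRANSLOADIT_KEY' },
    template_id: 'YOUR_TEMPLATE_ID'
  },
  // Becomes available in the Template as ${fields.username}
  fields: {
    username: 'daniel'
  },
  waitForEncoding: true
})
```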

Does that help?

Yes, thanks - figured it out using fields param in the transloadit config.

Excellent! Sorry about the bumpy ride, if you have improvements for our docs I’d love to see them, and here if you have more questions.

No problem, there’s a lot to learn as there’s multiple ways to configure things.
I’ll think about doc changes once I get through this project.

One more question though!

Dashboard says “Ability to pause/resume or cancel (depending on uploader plugin) individual or all files” https://uppy.io/docs/dashboard/

but when using the Transloadit plugin to upload multiple files dragged onto the Dashboard with auto-start, you can only cancel all of them, even with limit = 1. There is no way to cancel any individual file while it is in the upload-progress state.

Is it possible to cancel individual uploads in this manner?

This is because all files are handled in a single Assembly, and the only sure-fire way we have to prevent a file from ‘making it to the internet’ (which may be very important to an end user who catches a mistake on their end) is to cancel the entire Assembly.

If this is not what you want, and your use case allows it, we could look into making it so that you spawn one Assembly per uploaded file. I’d have to check with the Uppy team to see if I’m right about this though, and whether it is already possible.

Am I going in the right direction here or did I misunderstand some part of your question?


Let me try again just to clarify we’re talking about the same thing.

Currently what happens when we drag a bunch of files into the Dashboard is they all start uploading, but you only have the ability to cancel the whole group of files via the pause/cancel control on the progress bar.

Given that in our use case a file group could be up to 14 GB, with individual files up to 2 GB, if the user discovers partway through that they added one wrong file, they have to cancel the whole group. What would be more ideal is to be able to cancel individual files in this case, not just the whole group. You can pause any individual file, for example, but you can’t appear to cancel them individually.

Does that make more sense?

It does, yes, the problem is they are all handled in a single Assembly, and Assemblies can only be cancelled fully, not partially. So what we do currently is we abort the entire Assembly.

Possible solutions:

  • If the file is still uploading, we could inform the Assembly that expected_tus_uploads = expected_tus_uploads - 1. This would need to be implemented in Uppy + our API; we don’t have it yet. But it could lead to weird scenarios where the Assembly required one image and one audio file to be merged into a slideshow, validation passed, and now you are removing the audio in-flight. This (and other edgier cases) make it a tricky problem that will take longer to implement carefully on the Transloadit side.
  • If your use case allows it, start one Assembly per file. Then the current constraint of “we can only abort an entire Assembly” no longer matters, because each Assembly only ever holds one file. This could perhaps be achieved already with existing Uppy/Transloadit releases, but I would have to ask the team to be sure.

Please let me know if this makes sense?

PS really liking that you’re making us (re)think (some things)!

Yes, that makes sense.

The files are just educational assets of differing file types. They’re functionally independent, but parts of a larger educational package, if that makes sense, e.g. art texture assets, 3D files, video tutorials, etc.

At the moment processing is minimal, we’re really just passing them onto S3 for final storage for now.

Option 2 looks like it could fit the use case if it can be implemented. I would imagine in that case Dashboard just displays an X in the top right (for example) if you wanted to abort that individual file upload. If you could let me know if it’s possible with the current releases, and how it might be configured to do so that would be great, ty.

I think it would improve the upload UX. Unfortunately users do make mistakes, and this is a viable use case, I think.

I just spoke to someone on the Uppy team and they said that you can control the Assembly parameters per file, and that if the parameters end up being different, a new Assembly per file is created.

In this case we’re changing fields based on file metadata, so one Assembly per file should be created:

uppy.use(Transloadit, {
    getAssemblyOptions: (file, options) => {
        // derive a per-file value so the options differ per file,
        // which causes one Assembly to be created per file
        options.fields.path = file.name
        return {
            params: options.params,
            signature: options.signature,
            fields: options.fields
        }
    }
})

Ok, thanks. I’ll give that a shot.

Hi, I am not understanding what the solution is here. Maybe it’s because I’m new and not really understanding what “Assembly” is, or what RoboDog is.

To put it simply, let’s just talk in code :)
Here, I have a simple HTML page going, with a tus server listening on localhost:4020 and the S3 credentials applied:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Uppy</title>
    <link href="https://transloadit.edgly.net/releases/uppy/v1.6.0/uppy.min.css" rel="stylesheet">
  </head>
  <body>
    <div id="drag-drop-area"></div>

    <script src="https://transloadit.edgly.net/releases/uppy/v1.6.0/uppy.min.js"></script>
    <script>
      var uppy = Uppy.Core()
        .use(Uppy.Dashboard, {
          inline: true,
          target: '#drag-drop-area'
        })
        .use(Uppy.AwsS3Multipart, {
          limit: 0,
          companionUrl: 'http://localhost:3020/'
        })

      uppy.on('complete', (result) => {
        console.log('Upload complete! We’ve uploaded these files:', result.successful)
      })
    </script>
  </body>
</html>

As the OP mentioned, when I upload it stores the file to S3 bucket’s root path. I would like to specify a specific folder at the client side (preferably).

How do we go about this?
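Not an official answer, but one sketch that follows from the earlier posts in this thread: send the folder as Uppy metadata on the client, and have Companion’s `getKey` (server side) read it from the metadata argument. Everything named `folder` here is an assumption, not a built-in option, and this assumes recent Companion releases pass metadata to `getKey`:

```javascript
// Client side (sketch): attach a folder name as metadata to every upload.
// `folder` is an arbitrary meta key, not a built-in Uppy option.
uppy.setMeta({ folder: 'invoices' })

// Server side (sketch): in Companion's s3 options, assuming getKey
// receives (req, filename, metadata):
//
//   getKey: (req, filename, metadata) =>
//     metadata && metadata.folder
//       ? metadata.folder + '/' + filename
//       : filename
```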