Hi there,
I have an Uppy/tus Node.js application with S3 integration.
Every time a user starts a new upload, a request is sent to the server, which responds with a unique file ID in the form of an upload URL. These upload URLs are then stored by the frontend and sent to a metadata DB for storage.
We have the concept of datasets in this metadata DB, so we'd like users to be able to upload the same file, should they wish, to different datasets.
The issue is that whilst I can create random unique IDs for every uploaded file (using a namingFunction), there is some low-level communication going on between Uppy and tus which means my endpoint isn't being hit for repeat uploads of the same file. I haven't figured out exactly how it knows; I'm assuming it builds a hash or fingerprint from some of the file's metadata.
I presume this ties in with the resumable-uploads feature as well, in that forcing a new upload every time would render the resumable functionality pointless.
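To illustrate what I mean, here is a sketch of the kind of client-side fingerprint I suspect is involved: the tus client seems to key previously created uploads on file metadata plus the endpoint, so re-uploading the "same" file resumes instead of hitting my creation endpoint. The exact fields below are my assumption, not the library's actual implementation; `datasetFingerprint` and its `datasetId` option are hypothetical.

```javascript
// Assumed shape of a tus-style fingerprint: file metadata + endpoint.
function fingerprint(file, options) {
  return ['tus', file.name, file.type, file.size, file.lastModified, options.endpoint].join('-')
}

// Mixing a dataset ID into the fingerprint would make the same file
// look unique per dataset (datasetId is a hypothetical option):
function datasetFingerprint(file, options) {
  return [fingerprint(file, options), options.datasetId].join('-')
}
```

Something like this per-dataset variant is roughly the behaviour I'm after.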
Is there any way around this?
Thanks
This is an example of my working server:
const crypto = require('node:crypto')
const { Server } = require('@tus/server')
const { S3Store } = require('@tus/s3-store')

const datafileS3Store = new S3Store({
  s3ClientConfig: {
    bucket: "test",
    region: 'eu-west-2',
    endpoint: process.env.S3_ENDPOINT,
    credentials: {
      accessKeyId: "test",
      secretAccessKey: "test"
    }
  }
})

/**
 * init the server and set the callbacks; these are blocking calls, EVENTS are not
 */
const tusDatafileServer = new Server({
  respectForwardedHeaders: true,
  path: '/upload',
  datastore: datafileS3Store,
  namingFunction: () => {
    // Generate a 32-character hexadecimal ID (16 random bytes -> 32 hex chars)
    return crypto.randomBytes(16).toString('hex')
  },
  // callback provided by tus, gets called for each upload creation
  async onUploadCreate (request, reply, upload) {
    logger.info(process.env.S3_ENDPOINT)
    try {
      // do some logic
      logger.info(` uploading : id:${upload.id}, size:${upload.size}`)
    } catch (err) {
      logger.error(` Error in onUploadCreate: ${err}`)
      throw err
    }
    logger.info(` reply code: ${reply.statusCode}`)
    return reply
  }
})