SeaweedFS as data store

Has anyone used SeaweedFS as a data store for a TUS server?

I want to use the TUS protocol to enable users to upload big files to my object store, so I thought of using the tusd (Go) package and implementing its datastore interface.
But I have some questions:

  1. Which approach do you think is more suitable/easier to implement? Should I store the upload on the local file system until it’s finished and then move it to SeaweedFS, or should I store the chunks directly in SeaweedFS? → I think the second approach is better, but I’m not sure, because the datastore interface has a ConcatUploads function, and to implement it I would have to move all the chunks I stored into the new, finished file and delete the originals. Or am I wrong about what this function does? (I’ve sketched the relevant interfaces after this list.)
  2. Is there more detailed info on how to implement a custom datastore than the docs on the Go interface?
  3. Is there a collection of custom/community datastore implementations? Maybe somebody else has already implemented SeaweedFS as a datastore, or if not, maybe somebody is interested in my code as a plug-and-play solution.
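
For context, this is roughly what the relevant interfaces in tusd’s pkg/handler look like (trimmed from the tusd v1 godoc; older versions differ, and FileInfo plus the other optional extension interfaces are omitted):

```go
// Trimmed from github.com/tus/tusd/pkg/handler (tusd v1);
// FileInfo and the other optional extension interfaces are omitted.

// DataStore is the minimum every data store must implement.
type DataStore interface {
	NewUpload(ctx context.Context, info FileInfo) (Upload, error)
	GetUpload(ctx context.Context, id string) (Upload, error)
}

// Upload covers the operations on a single upload.
type Upload interface {
	WriteChunk(ctx context.Context, offset int64, src io.Reader) (int64, error)
	GetInfo(ctx context.Context) (FileInfo, error)
	GetReader(ctx context.Context) (io.Reader, error)
	FinishUpload(ctx context.Context) error
}

// ConcaterDataStore is only needed for the concatenation extension:
// ConcatUploads writes the finished partial uploads into this upload.
type ConcaterDataStore interface {
	AsConcatableUpload(upload Upload) ConcatableUpload
}

type ConcatableUpload interface {
	ConcatUploads(ctx context.Context, partialUploads []Upload) error
}
```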

Sorry if the questions are a bit nooby, but I’m new to TUS…
regards

Hello and welcome!

To my knowledge, nobody has implemented anything directly for SeaweedFS. But since it also exposes an S3 API, I think you could just use tusd’s s3store and point it at your endpoint using --s3-endpoint.
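
For example, running the tusd binary with something like `tusd -s3-bucket=my-uploads -s3-endpoint=http://localhost:8333` should work (8333 is the default port of SeaweedFS’s S3 gateway; the bucket name is made up). If you embed tusd as a library instead, a minimal sketch could look like this (untested against SeaweedFS, so treat the endpoint, region and bucket as placeholders):

```go
package main

import (
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/tus/tusd/pkg/handler"
	"github.com/tus/tusd/pkg/s3store"
)

func main() {
	// Point the AWS SDK at SeaweedFS's S3 gateway instead of AWS.
	// Credentials are read from the usual environment variables
	// (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY); the region is
	// required by the SDK even though SeaweedFS ignores it.
	s3Config := aws.NewConfig().
		WithEndpoint("http://localhost:8333").
		WithRegion("us-east-1").
		WithS3ForcePathStyle(true)

	store := s3store.New("my-uploads", s3.New(session.Must(session.NewSession(s3Config))))

	composer := handler.NewStoreComposer()
	store.UseIn(composer)

	h, err := handler.NewHandler(handler.Config{
		BasePath:      "/files/",
		StoreComposer: composer,
	})
	if err != nil {
		log.Fatal(err)
	}

	http.Handle("/files/", http.StripPrefix("/files/", h))
	log.Fatal(http.ListenAndServe(":1080", nil))
}
```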

The first approach is easier because you do not have to implement a data store at all. You can just let tusd save to disk and then use the post-finish hook to move the finished uploads from disk to SeaweedFS. If that turns out to be too inefficient, or you just want a more distributed and less disk-dependent solution, you can upload to SeaweedFS directly.
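
As a rough sketch of that hook approach: tusd’s file hooks pass the upload info as JSON on stdin, so you could compile something like the program below and place the binary as `post-finish` in the directory you pass to tusd via -hooks-dir. The JSON field names are from the tusd v1 hook docs, and the filer address localhost:8888 (the filer default) plus the /uploads/ target path are assumptions, so adjust them for your setup:

```go
// post-finish hook: reads the upload info tusd passes on stdin and copies
// the finished file from disk into SeaweedFS through the filer HTTP API.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

// Only the stdin fields this sketch needs; tusd sends more
// (see the hooks documentation for the full JSON structure).
type hookEvent struct {
	Upload struct {
		ID      string
		Storage struct {
			Path string // where tusd's filestore wrote the finished upload
		}
	}
}

func main() {
	var ev hookEvent
	if err := json.NewDecoder(os.Stdin).Decode(&ev); err != nil {
		fmt.Fprintln(os.Stderr, "decode hook input:", err)
		os.Exit(1)
	}

	f, err := os.Open(ev.Upload.Storage.Path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open upload:", err)
		os.Exit(1)
	}
	defer f.Close()

	// Stream the file as a multipart POST so large uploads are not
	// buffered in memory.
	pr, pw := io.Pipe()
	mw := multipart.NewWriter(pw)
	go func() {
		part, err := mw.CreateFormFile("file", ev.Upload.ID)
		if err != nil {
			pw.CloseWithError(err)
			return
		}
		if _, err := io.Copy(part, f); err != nil {
			pw.CloseWithError(err)
			return
		}
		pw.CloseWithError(mw.Close())
	}()

	resp, err := http.Post("http://localhost:8888/uploads/"+ev.Upload.ID,
		mw.FormDataContentType(), pr)
	if err != nil {
		fmt.Fprintln(os.Stderr, "filer upload:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	if resp.StatusCode >= 300 {
		fmt.Fprintln(os.Stderr, "filer returned", resp.Status)
		os.Exit(1)
	}
}
```

As far as I know, post-finish runs after the response has already been sent, so a non-zero exit here only shows up in tusd’s logs and will not fail the upload for the client.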

Unfortunately not right now, no.

We don’t have such a list, but we should definitely collect them somewhere. Let me know if you have implemented something and we can add it to the documentation 🙂

I hope this helps already!