Upload folder
functionality, or do you perhaps drag & drop?
As an alternative/workaround you can use Sync with the Copy mode: select the local folder as the source path and the destination folder at the remote end.
In principle this should allow you to upload that folder before we dig into improving the memory usage issues of the "Upload folder" functionality.
Concurrent workers
setting mentioned recently: it only applies to the number of multipart uploads within a single file upload.
Separately, based on the number of files to upload, S3Drive spun up 30 workers to process the upload list. (We don't have a setting to limit this yet, but we may happily include one in a future release.)
That's maybe too much in general, as it means there may be up to 60 requests going in parallel, that is 30 files multiplied by two workers (this assumes each file is over the Start threshold defined above; for files below that, only a single worker is used).
Regardless of that setting, I still find it hard to believe that 60 requests would require 32GB of memory; there must be some issue with something in our app.
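A setting to cap the number of files uploading at once, as mentioned above, could be as simple as a semaphore around the per-file upload. Here's a minimal Python sketch, not S3Drive's actual code; upload_file is a hypothetical placeholder and the cap of 4 is arbitrary:
```python
import asyncio

# Hypothetical cap on how many files upload at once, independent of the
# per-file multipart "Concurrent workers" setting.
MAX_CONCURRENT_FILES = 4

async def upload_file(path: str) -> None:
    # Placeholder for the real per-file upload (which may itself use
    # several multipart workers). Here we just simulate some I/O.
    await asyncio.sleep(0.1)
    print(f"uploaded {path}")

async def upload_folder(paths: list[str]) -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENT_FILES)

    async def bounded(path: str) -> None:
        async with sem:  # at most MAX_CONCURRENT_FILES enter this block at once
            await upload_file(path)

    await asyncio.gather(*(bounded(p) for p in paths))

if __name__ == "__main__":
    asyncio.run(upload_folder([f"file_{i}.bin" for i in range(30)]))
```
With a cap like this, the per-file multipart workers would still apply on top, so the total number of parallel requests stays bounded at roughly the cap multiplied by the workers per file.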
UPDATE: Ah wait, a big part size might be the culprit. With the current design, each part must fit in memory.
If there are 30 file uploads with 2 workers each, that gives a worst case of 30 x 2 x 100MB of memory usage, which is still "only" 6GB though.
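For comparison, this is how the same trade-off is usually exposed in a general-purpose S3 client: the part (chunk) size times the per-file concurrency bounds what one file upload buffers in memory, and multiplying by the number of files in flight gives the worst case. A sketch using boto3's TransferConfig, for illustration only rather than S3Drive's implementation; the bucket and file paths are placeholders:
```python
import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024

# Smaller parts and fewer per-file workers keep the upload buffer small:
# worst case per file is roughly multipart_chunksize * max_concurrency.
config = TransferConfig(
    multipart_threshold=8 * MB,  # files below this go up as a single request
    multipart_chunksize=8 * MB,  # each part is buffered before it is sent
    max_concurrency=2,           # parts uploaded in parallel per file
)

s3 = boto3.client("s3")
s3.upload_file("local/big_file.bin", "my-bucket", "remote/big_file.bin", Config=config)

# Worst-case buffer estimate for a whole folder upload (numbers from above):
files_in_flight = 30
workers_per_file = 2
part_size = 100 * MB
print(f"~{files_in_flight * workers_per_file * part_size / 1024**3:.1f} GiB buffered")
```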
It sounds like we may need to dig into this deeper.
It would be interesting to know if the workaround of creating a Sync entry helps, as this would confirm that something is in fact inherently wrong with our Folder upload logic. (edited)
Concurrent workers
setting might need a tooltip or a better explanation of what it does. As I understood it before your message, I thought it was the maximum number of upload connections globally to S3.
Nonetheless, I would be happy to see a concurrent file upload limit in a later release, if that's not too complicated to implement (as I see you have a large number of features being worked on at the moment).
C: SSD thrashes hard, hinting that the system tries to swap to disk.
mount
(and memory usage stays minimal this way, though upload speed isn't as fast),
I have a feeling this may be linked to the memory usage issue. I don't know how it is implemented in S3Drive, but I feel like building the list, rendering it and moving things between threads may cause some memory overhead.
sync
workaround in copy mode from local to remote, and the memory usage stayed minimal that way. That makes me think the problem might be linked to the UI (not 100% certain, but it's confirmed on my part that the bug happens only with a folder upload using the native S3Drive method).