S3Drive
Community / support / Cannot distribute chunks over several remotes in union
Avatar
Hi. I'm using your iOS app + rclone. It seems that the iOS app cannot distribute chunks across the remotes included in the union. You can see my config in the screenshot.

Info about the remotes before the issue:

PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 23.155 GiB  Free: 36.844 GiB  Trashed: 8.369 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 3.370 MiB  Free: 14.997 GiB  Trashed: 3.370 MiB  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 1.283 GiB  Free: 13.717 GiB  Trashed: 7.020 MiB  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 14.890 GiB  Free: 112.204 MiB  Trashed: 4.441 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 6.979 GiB  Free: 8.021 GiB  Trashed: 3.918 GiB  Other: 0 B

When I try to upload a large file (8.65 GB), the app reports that a quota has been exceeded, although there is free space for it. The chunk size is 128 MB.

Let's check how the original rclone handles large files. I copied a 17.4 GB file:

PS C:\Users\user> rclone copy F:\video.mkv cloud-chunker:

You can see that rclone spreads the file's chunks over 3 remotes:

PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 40.606 GiB  Free: 19.394 GiB  Trashed: 8.369 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 8.577 GiB  Free: 6.423 GiB  Trashed: 3.370 MiB  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 8.535 GiB  Free: 6.465 GiB  Trashed: 7.020 MiB  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 14.890 GiB  Free: 112.204 MiB  Trashed: 4.441 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 8.604 GiB  Free: 6.396 GiB  Trashed: 3.918 GiB  Other: 0 B
Avatar
Hi @Tom, any update?
Avatar
Hi @tothesky, sorry, not yet. We've picked that up but haven't had a chance to give it a go. I should have an update over the next few days. (edited)
Avatar
OK, I'll wait 🫡 (edited)
Avatar
Hi again @tothesky, could you by any chance upload your chunker and union configuration, in either JSON or INI format? And what rclone --version do you run locally?
Avatar
[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:

[cloud-chunker]
type = chunker
remote = cloud-crypto:
chunk_size = 128Mi

[cloud-crypto]
type = crypt
remote = cloud:
password =
password2 =

rclone v1.69.1
- os/version: Microsoft Windows 11 Pro 24H2 (64 bit)
- os/kernel: 10.0.26100.3915 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.24.0
- go/linking: static
- go/tags: cmount
Avatar
Thanks, we will look into that.

a) By the way, is your config order intentional? Currently: chunker -> crypt -> union -> [gdrive1: gdrive2: gdrive3: gdrive4:]. I'm certainly not familiar enough with chunker and union to assess it, but I've seen configurations where there is an individual chunker for each back-end origin and union distributes across them. Without crypt that would look like: union -> chunker -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:]

b) Did you explicitly omit filepath encryption in crypt? filename_encryption = standard, directory_name_encryption = true

c) Is crypt in the middle of the chain intentional? Isn't crypt at the top the preferred option in most cases?

Having said that, your config should behave consistently between the Rclone CLI and S3Drive, so we're going to check that out regardless of my comments above. (edited)
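For illustration, a minimal sketch of the per-back-end chunker layout described in point a), assuming the four existing gdrive.* remotes; the chunker_gdrive* section names are only examples and are not taken from the actual config:

[cloud]
type = union
upstreams = chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:

[chunker_gdrive1]
type = chunker
remote = gdrive.1:
chunk_size = 128Mi

[chunker_gdrive2]
type = chunker
remote = gdrive.2:
chunk_size = 128Mi

(and likewise for gdrive.3 and gdrive.4)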
Avatar
a/c) The order is intentional, right. I think the suggested union -> chunker -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:] is really union -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:], where each chunker_gdrive* has its own gdrive*. If I've misunderstood you, please correct me. That scheme does the same as mine, so why should I use it when mine is easier to use? 😄 Besides, that scheme would make me add a new chunker for every new back-end remote. Mine doesn't have this drawback, and I can easily scale the cloud.

Putting crypt after chunker has some advantages. The obvious crypt -> chunker first encrypts the large file and then cuts it into chunks, which creates files like abracadabra.part1, abracadabra.part2 and so on. I don't like that; I don't want to expose information about the sequence of chunks. chunker -> crypt first cuts the file and then encrypts the chunks, filenames included, so there is no .part* information on the back-ends.

b) Right, so rclone should take the default values according to its docs: filename_encryption: standard, directory_name_encryption: true (encrypt directory names).
Avatar
Avatar
tothesky
Hi. I'm using your iOS app + rclone. It seems that the iOS app cannot distribute chunks across the remotes included in the union. […]
Can you please check whether the app reports the storage stats correctly in the drawer menu for your cloud-crypto?
Avatar
Do you mean exactly cloud-crypto, not cloud-chunker?
Avatar
Avatar
tothesky
Do you mean exactly cloud-crypto, not cloud-chunker?
Ah, sorry, you're right. I meant the one that you write to directly. Technically they shouldn't differ at all, or at least not by much. (edited)
Avatar
And I don't want to use up my profiles limit like last time 😄

The app reports 64.42 GB total, 20.71 GB used, 43.71 GB free, whereas rclone reports:

rclone about cloud-chunker:
Total: 60 GiB  Used: 19.291 GiB  Free: 40.708 GiB  Trashed: 4.501 GiB  Other: 197.915 KiB

BTW, 64 GB is impossible, because each gdrive provides 15 GB. Ah, maybe you are adding the trash to it.
Avatar
That looks OK-ish. Rclone reports in GiB, we report in GB, so there is a difference. Some providers decided to display the GB unit but use the GiB value, as GiB isn't really used much outside of tech/IT environments. In other words, when the general public sees GB they actually expect GiB, even though this is technically incorrect. We've tried to be correct (hopefully).

UPDATE: Perhaps in that particular place it makes more sense to display GiB, but then should we follow the same pattern everywhere, including file listings? (edited)
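As a quick cross-check of the two figures above: 60 GiB is 60 × 1024³ = 64,424,509,440 bytes ≈ 64.42 GB, so rclone's total and the 64.42 GB shown by the app describe the same amount of storage, just in different units.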
Avatar
You mean that your GB is a marketing gigabyte that equals 1000 MB?
Avatar
Avatar
tothesky
You mean that your GB is a marketing gigabyte that equals 1000 MB?
Our GB equals a real GB, as displayed e.g. in the Ubuntu file manager. G stands for giga, which is a power-of-10 multiplier, not a power of 2. I believe that for many cloud products, when they advertise 1 GB they actually provide more: 1 GiB ≈ 1.07374 GB (edited)
9:35 PM
I am still trying to get my head around your issue. How fast is your upload bandwidth, by the way?

My unproven hypothesis is that for some reason, when uploading with S3Drive, you've managed to "exceed" one of the Google Drive account quotas before the https://rclone.org/union/#union-cache-time (120 s) had a chance to reorder them. By default when uploading, the create = epmfs policy is used, which tries to upload to the storage with the most free space. I am wondering whether rclone re-selects the storage with the most free space every 120 s, or whether it updates the stats/counters on the fly as it uploads. (edited)
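If the 120 s usage cache were the culprit, one way to test that hypothesis would be to shorten it on the union remote. A minimal sketch, based on the union section posted above (cache_time is the union backend's usage/free-space cache in seconds; the value 10 is only an example):

[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:
cache_time = 10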
Avatar
So it's 10^x times some number and not 1024 times some number. Hmm, it seemed to me that Ubuntu has an "honest" GB representation, i.e. GiB, like Windows/Android/iOS. For me it would be better to have a GiB representation (for files too), but that's just my opinion.
Avatar
Avatar
tothesky
So it's 10^x times some number and not 1024 times some number. […]
We'll probably make the default unit configurable. (edited)
Avatar
How fast is your upload bandwidth by the way?
I saw 150 Mbit/s max while uploading with the original rclone, and about 15 MB/s max when using the app.
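For context on those two figures: 150 Mbit/s ÷ 8 ≈ 18.75 MB/s, so the app's ~15 MB/s is broadly in the same range as the CLI rather than an order of magnitude slower.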
Avatar
Fair enough, we'll need to set up a test environment and play with the uploads. Perhaps there is something that prevents the rclone embedded in S3Drive from properly updating the "upload stats", so instead of the union selecting the next least used storage, it exceeds the one that was initially selected. Still not sure about that one either. Your initial gdrive.1 has 14.997 GiB free, so an 8.65 GB file upload shouldn't fail even if reordering hadn't worked and it all ended up on that drive. (edited)
Avatar
Yes, it also seemed to me that the app stubbornly accesses only one remote, ignoring the others in the union.
Avatar
Avatar
tothesky
So it's 10^x times some number and not 1024 times some number. […]
I am getting confused. I believe Ubuntu has an honest GB representation (correct according to the technical definition, not marketing). Windows, on the other hand, uses GiB values but displays GB. I don't know about other platforms, but I checked the size of a 1.58 MB photo on iOS and it shows 1 MB, so it uses yet another kind of magic 🙂 Attached screenshots from Ubuntu and iOS. Forget about that, I am being silly: iOS shows the HEIC, whereas it was transcoded to JPEG before it landed on my Ubuntu, so it's not a true test really. Anyway, I will try to get back to you regarding the other issues as soon as we find a solution, hopefully next week. (edited)
Avatar
As for me, GB as 1000 MB was invented by cunning people who wanted to sell a 476 GiB disk drive as 512 GB, but we are decent guys 😄
Avatar
More logs with timestamps for you. The upload ran for 7 minutes until it failed, and the app used only the third remote.

PS C:\Users\user> # initial (before uploading)
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 14.790 GiB  Free: 45.209 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 0 B  Free: 15 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 1.277 GiB  Free: 13.723 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 10.452 GiB  Free: 4.548 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 3.061 GiB  Free: 11.939 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> # opening the app, starting uploading
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:18:11
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:20:13
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 15.916 GiB  Free: 44.084 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 0 B  Free: 15 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 1.277 GiB  Free: 13.723 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 11.828 GiB  Free: 3.172 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 3.061 GiB  Free: 11.939 GiB  Trashed: 0 B  Other: 0 B
10:37 PM
The text is too big for Discord, so I'm splitting it.
10:37 PM
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:22:23
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 17.666 GiB  Free: 42.334 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 0 B  Free: 15 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 1.277 GiB  Free: 13.723 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 13.703 GiB  Free: 1.297 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 3.061 GiB  Free: 11.939 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:24:01
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 19.041 GiB  Free: 40.958 GiB  Trashed: 0 B  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 14.703 GiB  Free: 303.693 MiB  Trashed: 0 B  Other: 197.915 KiB
10:38 PM
PS C:\Users\user> # after fail
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:25:07
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB  Used: 19.291 GiB  Free: 40.708 GiB  Trashed: 4.501 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB  Used: 0 B  Free: 15 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB  Used: 1.277 GiB  Free: 13.723 GiB  Trashed: 0 B  Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB  Used: 14.953 GiB  Free: 47.630 MiB  Trashed: 4.501 GiB  Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB  Used: 3.061 GiB  Free: 11.939 GiB  Trashed: 0 B  Other: 0 B
Avatar
Thanks for your input so far. We've tried running this config on iOS:

[chunker]
type = chunker
remote = crypt_union:
chunk_size = 1Mi

[crypt_union]
type = crypt
filename_encryption = standard
password = FQtZ1IPes8CpH88R9HzcP1GlmT06dQ
remote = union:
suffix = none
directory_name_encryption = true
filename_encoding = base64

[union]
type = union
upstreams = google1: google2:

[google1]
type = drive
client_id =
client_secret =
token =

[google2]
type = drive
client_id =
client_secret =
token =

where we wrote a file of a certain size to the chunker. From the start, both google1 and google2 were filling up evenly, which is the expected behaviour. At this stage we're not quite sure where the issue might be. (edited)
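A way to repeat that kind of check from the CLI, as a minimal sketch against the test config above (the file size and name are arbitrary examples):

rclone test makefile 5G test.bin
rclone copy test.bin chunker: -P
rclone about google1:
rclone about google2: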
Avatar
I used the Media backup feature for syncing. Did you do the same?
Avatar
Avatar
tothesky
I used the Media backup feature for syncing. Did you do the same?
We used the standard file upload, where we selected a file from the iOS Downloads folder. We haven't tried media backup just yet.
Avatar
I have an update. When I set upload transfers to 1, the media backup was able to spread the file over 2 remotes. The previous value was 4.
Avatar
Avatar
tothesky
I have an update. When I set upload transfers to 1, the media backup was able to spread the file over 2 remotes. The previous value was 4.
That's indeed interesting. Does the issue still apply to Media backup only? When you say upload transfers, do you mean the slider setting in the profile settings?
Avatar
It's indeed interesting, does the issue still apply to Media Backup only?
Maybe, I don't know. I use Media backup only for upload.
When you say Upload transfers do you mean the slider setting in the Profile settings?
Yes.
Avatar
@Tom the issue persists. This time it spread chunks over 2 remotes, but the free space ran out and it failed, so it could not fill the third remote although that one had enough space. Any news? Did you test Media backup?
Avatar
I have an update 😄 I manually set create_policy = epmfs for the union in the .ini. That let rclone use the first remote, which is completely empty. So I will continue to test how it goes when the first remote is not enough for a large file. BTW, epmfs is the default policy according to the rclone documentation, so does the app set a different policy?
Avatar
Avatar
tothesky
@Tom the issue persists. This time it spread chunks over 2 remotes, but the free space ran out and it failed. […]
Sorry, we haven't had a chance yet; hopefully it's something we can get to next week. We'd love to get it working. We've already learned something about union and combine, thanks to your issue 🙂
Avatar
Avatar
tothesky
I have an update 😄 I manually set create_policy = epmfs for the union in the .ini. […]
That's interesting. We don't set anything explicitly regarding create_policy; we rely on the defaults.
Avatar
I have an update, hopefully the last one, because the app works well now. I figured out that the mfs policy is better than epmfs for me, because epmfs expects the path (directory) to already exist on the remotes that rclone will use for upload. That's why the 4th remote could not be used: nothing had created the path there yet. The new policy solved it. Now upload works as expected.
👍 1
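For reference, the union section from the earlier config with this change would look roughly like the following (only the create_policy line is new; everything else is as posted before):

[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:
create_policy = mfs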
Avatar
Avatar
tothesky
I have an update, hopefully the last one, because the app works well now. I figured out that the mfs policy is better than epmfs for me. […]
Thank you for this update! It's always good to learn more about Rclone, and it's very much a relief to us that there wasn't in fact an issue with S3Drive's behaviour. It seems the issue affected media backup more than other functionality because it writes to the same "Auto-upload" folder, so with epmfs it sticks to the single back-end where this path already existed, whereas with mfs it distributes data evenly. If you have any other issues, ideas, or unique use cases, I'd love to hear about them.