

PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 23.155 GiB
Free: 36.844 GiB
Trashed: 8.369 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 3.370 MiB
Free: 14.997 GiB
Trashed: 3.370 MiB
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 1.283 GiB
Free: 13.717 GiB
Trashed: 7.020 MiB
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 14.890 GiB
Free: 112.204 MiB
Trashed: 4.441 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 6.979 GiB
Free: 8.021 GiB
Trashed: 3.918 GiB
Other: 0 B
When I try to upload a large file (8.65 GB), the app reports that the quota has been exceeded, although there is enough free space for it.
The chunk size is 128 MB.
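For a sense of scale, the chunk arithmetic can be sketched like this (assuming the 8.65 GB figure is decimal gigabytes and the chunk size is 128 MiB, matching the chunk_size = 128Mi setting in the config further down):

```shell
# Rough chunk count for the failing upload (assumptions: decimal GB, 128 MiB chunks).
file_bytes=8650000000                  # 8.65 GB (decimal)
chunk_bytes=$((128 * 1024 * 1024))     # 128 MiB = 134217728 bytes
chunks=$(( (file_bytes + chunk_bytes - 1) / chunk_bytes ))   # ceiling division
echo "$chunks chunks"                  # about 65 chunks of up to 128 MiB each
```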
Let's check how the original rclone handles large files.
I copied a 17.4 GB file.
PS C:\Users\user> rclone copy F:\video.mkv cloud-chunker:
You can see that rclone spreads file chunks over 3 remotes.
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 40.606 GiB
Free: 19.394 GiB
Trashed: 8.369 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 8.577 GiB
Free: 6.423 GiB
Trashed: 3.370 MiB
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 8.535 GiB
Free: 6.465 GiB
Trashed: 7.020 MiB
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 14.890 GiB
Free: 112.204 MiB
Trashed: 4.441 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 8.604 GiB
Free: 6.396 GiB
Trashed: 3.918 GiB
Other: 0 B



Could you share your chunker and union configuration, either in JSON or INI format?
What's the rclone --version that you run locally?
[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:
[cloud-chunker]
type = chunker
remote = cloud-crypto:
chunk_size = 128Mi
[cloud-crypto]
type = crypt
remote = cloud:
password =
password2 =
rclone v1.69.1
- os/version: Microsoft Windows 11 Pro 24H2 (64 bit)
- os/kernel: 10.0.26100.3915 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.24.0
- go/linking: static
- go/tags: cmount
chunker -> crypt -> union -> [gdrive1: gdrive2: gdrive3: gdrive4:]
I'm certainly not familiar enough with chunker and union to assess this, but I've seen configurations where there is an individual chunker for each back-end origin and union distributes across them.
Without crypt that would look like: union -> chunker -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:]
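For illustration, that per-backend layout could look roughly like this in rclone.conf (the chunker_gdrive* remote names are hypothetical, and this is a sketch rather than a recommendation):

```ini
[union]
type = union
upstreams = chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:

[chunker_gdrive1]
type = chunker
remote = gdrive1:
chunk_size = 128Mi

# ...and likewise chunker_gdrive2..4, each pointing at its own gdrive remote.
```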
b) Did you explicitly omit filepath encryption in crypt?
filename_encryption = standard
directory_name_encryption = true
c) Is crypt in the middle of the chain intentional? Isn't crypt at the top the preferred setup in most cases?
Having said that, your config should behave consistently between the Rclone CLI and S3Drive, so we're going to check that out regardless of my comment above.
union -> chunker -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:] is like union -> [chunker_gdrive1: chunker_gdrive2: chunker_gdrive3: chunker_gdrive4:], where each chunker_gdrive* has its own gdrive*. If I've misunderstood you, please correct me.
That scheme does the same as mine, so why should I use it when mine is easier to use? And crypt after chunker has some advantages.
The obvious crypt -> chunker order first encrypts the large file and then cuts it into chunks, which creates files like abracadabra.part1, abracadabra.part2 and so on. I don't like that: I don't want to reveal information about the sequence of chunks.
So chunker -> crypt first cuts the file and then encrypts the chunks, filenames included, so no .part* information ends up on the back-ends.
b) Right, so rclone should take the default values according to its documentation:
filename_encryption: standard
directory_name_encryption: true (Encrypt directory names)

cloud-crypto, not cloud-chunker?

cloud-crypto, not cloud-chunker? 
The app shows 64.42 GB total, 20.71 GB used, 43.71 GB free,
whereas rclone reports:
rclone about cloud-chunker:
Total: 60 GiB
Used: 19.291 GiB
Free: 40.708 GiB
Trashed: 4.501 GiB
Other: 197.915 KiB
BTW, 64 GB is impossible, because each gdrive provides 15 GB.
Aah, maybe you are summing it with the trash.
rclone reports GiB, we report in GB, so there is a difference.
Some providers decided to display the GB unit but use the GiB value, since GiB isn't really used much outside tech/IT environments. In other words, when the general public sees GB, they actually expect GiB, even though this is technically incorrect. We've tried to be correct (hopefully).
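In fact the two figures agree once the units are converted: the app's 64.42 GB is exactly rclone's 60 GiB expressed in decimal units. A quick check:

```shell
# 60 GiB (rclone's unit) converted to decimal GB (the app's unit)
awk 'BEGIN { printf "60 GiB = %.2f GB\n", 60 * 1024^3 / 1e9 }'
# prints: 60 GiB = 64.42 GB
```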
UPDATE:
Perhaps in that particular place it makes more sense to display GiB, but then shall we follow the same pattern everywhere, including file listings?
GB is a marketing gigabyte that equals to 1000 MB?

GB is a marketing gigabyte that equals to 1000 MB? GB equals a real GB, as displayed e.g. by the Ubuntu file manager. G stands for giga, which is a base-10 multiplier (10^9), not a power of 2.
I believe that for many cloud products, when they advertise 1 GB they actually provide more: 1 GiB ≈ 1.07374 GB.
The create = epmfs rule is used, which tries to upload to the storage with the most free space. I am wondering whether rclone selects the storage with the most free space every 120s, or updates the stats/counters on the fly as it uploads.
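On the 120s question: I believe the union backend caches the usage figures that its policies consult, and the cache_time option (in seconds, default 120 according to the rclone union docs) controls how often they are refreshed. A sketch, reusing the remote names from the config above:

```ini
[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:
# How long usage/free-space stats are cached for the *fs policies (default 120s).
cache_time = 30
```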
So 1000 × some number, and not 1024 × some number.
hmm, it seemed to me that Ubuntu shows an honest GB representation as GiB, like Windows/Android/iOS do
for me it would be better to have the GiB representation (for files too)
but that's my opinion


instead of union selecting the next least used storage, it exceeds the one that was initially selected.
Still not sure about that one either. Your initial gdrive.1 has 14.997 GiB free, so an 8.65 GB file upload shouldn't fail even if the reordering hasn't worked and it all ended up on that drive.
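As a quick unit check (decimal GB vs GiB), 8.65 GB is only about 8.06 GiB, comfortably below gdrive.1's 14.997 GiB of free space:

```shell
# Convert the 8.65 GB file size to GiB to compare against gdrive.1's free space.
awk 'BEGIN { printf "8.65 GB = %.2f GiB\n", 8.65e9 / 1024^3 }'
# prints: 8.65 GB = 8.06 GiB
```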


PS C:\Users\user> # initial (before uploading)
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 14.790 GiB
Free: 45.209 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 0 B
Free: 15 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 1.277 GiB
Free: 13.723 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 10.452 GiB
Free: 4.548 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 3.061 GiB
Free: 11.939 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> # opening the app, starting uploading
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:18:11
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:20:13
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 15.916 GiB
Free: 44.084 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 0 B
Free: 15 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 1.277 GiB
Free: 13.723 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 11.828 GiB
Free: 3.172 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 3.061 GiB
Free: 11.939 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:22:23
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 17.666 GiB
Free: 42.334 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 0 B
Free: 15 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 1.277 GiB
Free: 13.723 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 13.703 GiB
Free: 1.297 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 3.061 GiB
Free: 11.939 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> # intermediate measure
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:24:01
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 19.041 GiB
Free: 40.958 GiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 14.703 GiB
Free: 303.693 MiB
Trashed: 0 B
Other: 197.915 KiB
PS C:\Users\user> # after fail
PS C:\Users\user> (Get-Date).ToString("HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
02:25:07
PS C:\Users\user> rclone about cloud-chunker:
Total: 60 GiB
Used: 19.291 GiB
Free: 40.708 GiB
Trashed: 4.501 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.1:
Total: 15 GiB
Used: 0 B
Free: 15 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.2:
Total: 15 GiB
Used: 1.277 GiB
Free: 13.723 GiB
Trashed: 0 B
Other: 0 B
PS C:\Users\user> rclone about gdrive.3:
Total: 15 GiB
Used: 14.953 GiB
Free: 47.630 MiB
Trashed: 4.501 GiB
Other: 197.915 KiB
PS C:\Users\user> rclone about gdrive.4:
Total: 15 GiB
Used: 3.061 GiB
Free: 11.939 GiB
Trashed: 0 B
Other: 0 B
[chunker]
type = chunker
remote = crypt_union:
chunk_size = 1Mi
[crypt_union]
type = crypt
filename_encryption = standard
password = FQtZ1IPes8CpH88R9HzcP1GlmT06dQ
remote = union:
suffix = none
directory_name_encryption = true
filename_encoding = base64
[union]
type = union
upstreams = google1: google2:
[google1]
type = drive
client_id =
client_secret =
token =
[google2]
type = drive
client_id =
client_secret =
token =
where we've written a file of a certain size to chunker:.
From the start both google1 and google2 were filling up evenly, which is the expected behaviour.
At this stage I'm not quite sure where the issue might be.

I set create_policy = epmfs for union in the .ini.
That let rclone use the first remote, which is completely empty.
So I will continue to test how it goes when the first remote is not big enough for a large file.
BTW, epmfs is the default policy according to the rclone documentation. So does the app set another policy?

union and combine, thanks to your issue 

I set create_policy = epmfs for union in the .ini. That let rclone use the first remote, which is completely empty. So I will continue to test how it goes when the first remote is not big enough for a large file. BTW, epmfs is the default policy according to the rclone documentation. So does the app set another policy?
We don't set create_policy explicitly, we rely on the defaults.
The mfs policy works better for me than epmfs, because epmfs expects the path (directory) to already exist on the remotes that rclone will use for the upload. That's why the 4th remote could not be used: nobody had created the path there. The new policy solved it; the upload now works as expected.
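Based on the description above, the working union section would then look something like this (a sketch; create_policy is the only change relative to the earlier config):

```ini
[cloud]
type = union
upstreams = gdrive.1: gdrive.2: gdrive.3: gdrive.4:
# mfs = "most free space": considers all upstreams, even ones where the target
# directory doesn't exist yet (unlike the default epmfs = "existing path, most free space").
create_policy = mfs
```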

The mfs policy works better for me than epmfs, because epmfs expects the path (directory) to already exist on the remotes that rclone will use for the upload. That's why the 4th remote could not be used: nobody had created the path there. The new policy solved it; the upload now works as expected. With epmfs it sticks to a single back-end where the path already existed, whereas with mfs it distributes the data evenly.
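An alternative to changing the policy would be to pre-create the directory on every upstream, so that epmfs sees an "existing path" everywhere (the Backups path below is hypothetical):

```shell
# Pre-create a directory on each upstream. This prints the commands for review;
# drop the 'echo' to actually run them (requires rclone and the configured remotes).
for r in gdrive.1 gdrive.2 gdrive.3 gdrive.4; do
  echo rclone mkdir "$r:Backups"
done
```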
If you have any other issues, ideas, or unique use cases, I’d love to hear about them.