Guild icon
S3Drive
Community / off-topic
For any conversation that doesn't fit in the #general channel.
Avatar
Avatar
otzibrod
Hi Tom 🙂 OMG, finally found the problem! The timezone name returned in France contained an unsupported character, "(". I could only reproduce the error while located in France; it worked everywhere else. Gosh, sorry for disturbing you, and thanks a lot again for your help
No worries, I've suggested timezone, because one of our users had issues with SSL certificates (and it was also Windows 10): https://discord.com/channels/1069654792902815845/1364399116288917577/1367444257756942406 More on that: https://security.stackexchange.com/questions/72866/what-role-does-clock-synchronization-play-in-ssl-communcation (edited)
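To illustrate the clock/SSL connection mentioned above: certificate validation compares the client's clock against the certificate's validity window, so a skewed (or otherwise wrong) clock can make a perfectly valid certificate look "not yet valid" or "expired". A minimal, self-contained sketch (the dates are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

def cert_time_valid(not_before: datetime, not_after: datetime,
                    local_clock: datetime) -> bool:
    """A certificate only verifies if the client's clock falls inside
    its validity window; a skewed clock makes a valid cert fail."""
    return not_before <= local_clock <= not_after

# Hypothetical certificate valid for all of 2025.
not_before = datetime(2025, 1, 1, tzinfo=timezone.utc)
not_after = datetime(2026, 1, 1, tzinfo=timezone.utc)
now = datetime(2025, 6, 1, tzinfo=timezone.utc)

print(cert_time_valid(not_before, not_after, now))                        # True: correct clock
print(cert_time_valid(not_before, not_after, now - timedelta(days=365)))  # False: clock a year behind
```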
Avatar
Hey, just wanted to say I'm super glad you developed the app. It's awesome! Now that I've tested it with a 1-month subscription, I'll switch to either a yearly subscription or a lifetime license 🙂 Thanks a lot! The only annoyance I feel is a UX one: if I have 1000 photos in one directory and I'm browsing through them, and I open a photo in the middle (e.g., the 500th), then go back to the listing, it puts me back at the very beginning, so I have to scroll down again to where I was. (This is something I really don't like (CZ: it's just a terrible hassle).) (edited)
❤️ 1
Avatar
Avatar
Pietron
Hey, just wanted to say I'm super glad you developed the app. It's awesome! Now that I've tested it with a 1-month subscription, I'll switch to either a yearly subscription or a lifetime license 🙂 Thanks a lot! The only annoyance I feel is a UX one: if I have 1000 photos in one directory and I'm browsing through them, and I open a photo in the middle (e.g., the 500th), then go back to the listing, it puts me back at the very beginning, so I have to scroll down again to where I was. (This is something I really don't like (CZ: it's just a terrible hassle).) (edited)
Thanks for your kind words.
The only annoyance I feel is related to the UX: if I have 1000 photos in one directory and I'm browsing through them, and I open a photo in the middle (e.g., the 500th), then go back to the listing, it puts me back at the very beginning
We're actively working on this issue. It will be resolved in one of the future releases, as mentioned here: https://discord.com/channels/1069654792902815845/1069654792902815848/1400434567080443904
(edited)
👍 2
Avatar
Avatar
Tom
Thanks for your kind words.
The only annoyance I feel is related to the UX: if I have 1000 photos in one directory and I'm browsing through them, and I open a photo in the middle (e.g., the 500th), then go back to the listing, it puts me back at the very beginning
We're actively working on this issue. It will be resolved in one of the future releases, as mentioned here: https://discord.com/channels/1069654792902815845/1069654792902815848/1400434567080443904
(edited)
Yes, please solve this, as it is very annoying. Other than that, the app is working great
Avatar
After some trial and error I made a script that installs all the needed dependencies, or walks you through setting them up. It uses your existing S3Drive profile (it lets you choose one) to set up a V2-compatible mounted drive. By default it retains the cache for a year with a cache size of 150 GB; you can customize the script to suit your needs. Please review it (and any script) before running it on your personal machine. I found it works faster than the drive mounted by the S3Drive client, and it also gives you the ability to modify your cache size and location. This is not and will not be maintained, and I won't be able to provide support for it; I just wanted to make it available for anyone else struggling with large file storage and caching like I was, until some of that is ironed out in the S3Drive client. Thank you @Tom for allowing me to post this!
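For anyone who wants the gist without the full script, the core of such a setup is a single rclone mount with a persistent VFS cache. This is a hypothetical sketch only (the remote name "s3drive:" and mount point are placeholders, not the poster's actual script):

```shell
#!/bin/sh
# Sketch: mount an existing rclone/S3Drive remote with a large,
# long-lived VFS cache. Adjust remote name, mount point, and sizes.
rclone mount s3drive: /mnt/s3drive \
  --vfs-cache-mode full \
  --vfs-cache-max-size 150G \
  --vfs-cache-max-age 8760h \
  --daemon
```

The cache flags mirror the defaults described above: roughly a year of retention (8760h) and a 150 GB cap.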
👍 1
Avatar
Hello @Tom, I'm Kateryna, a Flutter dev. We spoke a couple of months ago about Flutter on PC; you were super helpful and actually solved my issue (thanks a million again!). I'm reaching out once more, again about Flutter on PC (always tricky…). This time it's with the Microsoft Store. I saw that S3Drive is listed there too, so I thought you might know. My issue:
  • New releases upload fine.
  • New users can install the latest version.
... But existing users only see "Installed", with no Update button, so they can't update. Have you run into this before, or do you know what could cause it? PS: I know about the certification route, but I'd prefer to avoid paying €400/year for a certificate if I can 🙂 Thanks a lot in advance.
Avatar
@Tom I have an issue with Dropbox. The setup is simple:

[dropbox_base]
type = dropbox
token = <token>

[dropbox_store]
type = alias
remote = dropbox_base:Store

[dropbox_crypt]
type = crypt
directory_name_encryption = true
filename_encoding = base64
filename_encryption = standard
password = <password>
remote = dropbox_store:
suffix = none
cipher_version = 2

I am able to upload a file to dropbox_base (including directly into the Store folder), but it doesn't work for either dropbox_store or dropbox_crypt. I am getting this error (per screenshot): "Exception: upload failed: batcher is shutting down". I was able to manually upload the same file using rclone copy to all three targets. When I copied the file to dropbox_crypt with rclone, the file appeared in S3Drive under dropbox_crypt with an encrypted file name instead of the original name. I cleared everything out, used cipher_version = 1, repeated the process, and nothing changed. The file is just test.txt with the contents "test". I am using the Windows client v1.14.10 (build: 10141000). What might be wrong here? (edited)
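As a side note, a quick way to sanity-check each layer of such a layered config is to copy a small file through every remote with the rclone CLI (remote names taken from the config above; this is a generic sketch, not the exact commands the poster ran):

```shell
# Create a tiny test file and push it through each remote in turn.
echo test > test.txt
rclone copy test.txt dropbox_base:Store   # raw remote, into the Store folder
rclone copy test.txt dropbox_store:       # alias layer
rclone copy test.txt dropbox_crypt:       # crypt layer on top of the alias

# Listing via the crypt remote shows decrypted names; listing the
# base remote shows the encrypted names as stored on Dropbox.
rclone ls dropbox_crypt:
rclone ls dropbox_base:Store
```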
Avatar
Sorry for the delay, but so far we couldn't reproduce this issue; we tried on Linux and Windows 11. Bear in mind that the cipher_version parameter is only supported by S3Drive; for Rclone it requires a custom build: https://discord.com/channels/1069654792902815845/1069654792902815848/1410540245946208306 Having said that, cipher_version = 2 doesn't influence object filepath encryption, only contents encryption, so it shouldn't really matter. In our test case we used the set below:

[dropbox_base]
token = {"access_token":"<accessTokenHere>","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
type = dropbox

[dropbox_crypt]
directory_name_encryption = true
filename_encoding = base64
filename_encryption = standard
password = apCcmC0mg3RyMGsRtnunNz0xwxWEGA
remote = dropbox_store:
suffix = none
type = crypt

[dropbox_store]
remote = dropbox_base:Store
type = alias

We could upload to both dropbox_crypt and dropbox_store on Windows using S3Drive without the batcher issue, and we could easily list these remotes using e.g. the Rclone CLI on Linux. We're not entirely sure why you are experiencing this issue. We plan to release the most recent 1.15.0 version shortly; it might be worth updating once it's ready, and we can then continue troubleshooting.
Avatar
Hey y’all, New to S3Drive. I couldn’t find any information on your website about this. Have y’all done any security audits on your closed source code? Any certifications?
Avatar
Avatar
Tom
Sorry for the delay, but so far we couldn't reproduce this issue; we tried on Linux and Windows 11. Bear in mind that the cipher_version parameter is only supported by S3Drive; for Rclone it requires a custom build: https://discord.com/channels/1069654792902815845/1069654792902815848/1410540245946208306 Having said that, cipher_version = 2 doesn't influence object filepath encryption, only contents encryption, so it shouldn't really matter. In our test case we used the set below:

[dropbox_base]
token = {"access_token":"<accessTokenHere>","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
type = dropbox

[dropbox_crypt]
directory_name_encryption = true
filename_encoding = base64
filename_encryption = standard
password = apCcmC0mg3RyMGsRtnunNz0xwxWEGA
remote = dropbox_store:
suffix = none
type = crypt

[dropbox_store]
remote = dropbox_base:Store
type = alias

We could upload to both dropbox_crypt and dropbox_store on Windows using S3Drive without the batcher issue, and we could easily list these remotes using e.g. the Rclone CLI on Linux. We're not entirely sure why you are experiencing this issue. We plan to release the most recent 1.15.0 version shortly; it might be worth updating once it's ready, and we can then continue troubleshooting.
Hmm, I've tried it again and it seems to work now. Maybe it was a fluke, or maybe I just needed to restart the computer after installing. Thanks anyway!
Avatar
I have a couple more questions: 1. For iCloud Drive, S3Drive supports only the "drive" part of it, and doesn't include media like photos and videos, is that correct? 2. I just noticed that the file/directory name encryption produces the same name (at least on Dropbox), regardless of which directory the file is in. This means it provides an oracle for brute-forcing the key. For example, the .blank file is ordinarily created with a new folder, so one could simply try to guess that .blank encrypts to a specific string (and the same goes for other common names like Thumbs.db or System Volume Information, etc.). The encrypted file name should provide no information about the actual one. Things to consider: (i) use a nonce and include it in the name, so that file and folder names, even if they are the same in plaintext, would be different in ciphertext; (ii) store file names beyond a certain length as part of the file's contents, and for excessively long folder names include a metadata file inside the folder; or (iii) store all actual file/folder names within a folder in a "header" file in that folder, encrypted there, with the "encrypted" names being just random identifiers referenced in the "header" file.
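To make the oracle concern concrete: with deterministic name encryption, equal plaintext names always map to equal ciphertext names, whereas the nonce variant proposed in (i) would not. A toy model, using HMAC as a stand-in for the real name cipher (the key and token format are purely illustrative, not S3Drive's or rclone's actual scheme):

```python
import hashlib
import hmac
import os

KEY = b"example-key"  # placeholder, not a real credential

def deterministic_name(name: str) -> str:
    # Toy stand-in for deterministic name encryption: the same
    # plaintext name always maps to the same token, in any directory.
    return hmac.new(KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

def nonce_name(name: str) -> str:
    # Variant (i): mix a random nonce into the token and ship it with
    # the name, so equal plaintext names encrypt differently each time.
    nonce = os.urandom(8)
    token = hmac.new(KEY, nonce + name.encode(), hashlib.sha256).hexdigest()[:12]
    return nonce.hex() + "-" + token

print(deterministic_name(".blank") == deterministic_name(".blank"))  # True: guessable oracle
print(nonce_name(".blank") == nonce_name(".blank"))                  # False: no oracle
```

The cost of the nonce variant is visible here too: the server-side name can no longer be found by re-encrypting the plaintext name, which is exactly the lookup property deterministic schemes rely on.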
Avatar
Avatar
lhaley
Hey y’all, New to S3Drive. I couldn’t find any information on your website about this. Have y’all done any security audits on your closed source code? Any certifications?
Hi! As of now the app is still under heavy development, including cipher improvements that are yet to be merged into the Rclone repository: https://github.com/rclone/rclone/issues/7192 This would render any costly audit outdated soon after we merge all improvements to the cipher and security components. We plan to provide audits in the future, but can't give an exact ETA at this stage. Having said that, to a significant extent we rely on the security of open-source components and APIs, e.g. https://rclone.org/crypt/ and https://pub.dev/packages/flutter_secure_storage Obviously there is more to it than just these two components, but this should give you an idea.
We're running S3Drive (GUI for S3 on desktop, mobile, web) and recently aligned with Rclone's encryption scheme for better interoperability and features like drive mount and Webdav that we ...
Encryption overlay remote
Flutter Secure Storage provides API to store data in secure storage. Keychain is used in iOS, KeyStore based solution is used in Android.
Avatar
Avatar
Isaac
I have a couple more questions: 1. For iCloud Drive, S3Drive supports only the "drive" part of it, and doesn't include media like photos and videos, is that correct? 2. I just noticed that the file/directory name encryption produces the same name (at least on Dropbox), regardless of which directory the file is in. This means it provides an oracle for brute-forcing the key. For example, the .blank file is ordinarily created with a new folder, so one could simply try to guess that .blank encrypts to a specific string (and the same goes for other common names like Thumbs.db or System Volume Information, etc.). The encrypted file name should provide no information about the actual one. Things to consider: (i) use a nonce and include it in the name, so that file and folder names, even if they are the same in plaintext, would be different in ciphertext; (ii) store file names beyond a certain length as part of the file's contents, and for excessively long folder names include a metadata file inside the folder; or (iii) store all actual file/folder names within a folder in a "header" file in that folder, encrypted there, with the "encrypted" names being just random identifiers referenced in the "header" file.
For iCloud Drive, S3Drive supports only the "drive" part of it, and doesn't include media like photos and videos, is that correct?
Sorry, but we haven't really played with this integration: https://rclone.org/iclouddrive/ It was automatically included once we integrated with Rclone. We'll try to dedicate some time to playing with it. Please be aware that it seems to be a beta integration.
I just noticed that the file/directory name encryption comes up to the same name (at least on Dropbox), regardless of which directory it is in.
This is a consequence of the deliberately deterministic design of Rclone's name encryption, as mentioned in their docs: https://rclone.org/crypt/#name-encryption
"This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system."
also
This means that:
  • filenames with the same name will encrypt the same
  • filenames which start the same won't have a common prefix
In other words if: a encrypts to g1jy and b encrypts to ne2k then:
  • a/b would be g1jy/ne2k,
  • ab WOULDN'T be g1jyne2k; it would be an entirely new string, e.g. rnxu4qwe. You're right that such a design leaks some context and isn't ideal, but there is no ideal algorithm; each makes some compromises. Rclone's encryption is simple and robust. Its deterministic nature allows easy renames (e.g. if you rename a, you don't necessarily need to rename all the files/folders inside a). I sort of like your proposition:
(i) use a nonce and include it in the name, so that each file and folder name, even if they are the same in plaintext, would be different in ciphertext;
since it seems like a relatively simple change. I am not entirely sure what the consequences of losing the deterministic property would be, though. It might be worth raising it on a public forum, https://github.com/rclone/rclone/issues, to gain some acceptance.
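The per-segment behaviour described above can be mimicked with a toy model (HMAC stands in for the real name cipher; the key and token lengths are purely illustrative): each path segment is encrypted independently, so a/b maps to enc(a)/enc(b), while ab maps to an unrelated token with no common prefix.

```python
import hashlib
import hmac

KEY = b"example-key"  # placeholder key

def enc_segment(seg: str) -> str:
    # Toy deterministic per-segment name "encryption".
    return hmac.new(KEY, seg.encode(), hashlib.sha256).hexdigest()[:4]

def enc_path(path: str) -> str:
    # Each path segment is encrypted independently, mirroring the
    # rclone crypt behaviour described above: "a/b" -> enc(a)/enc(b).
    return "/".join(enc_segment(s) for s in path.split("/"))

print(enc_path("a/b") == enc_path("a") + "/" + enc_path("b"))  # True: segments compose
print(enc_path("ab") == enc_path("a") + enc_path("b"))         # False: no common prefix
```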
Rclone docs for iCloud Drive
"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files - rclone/rclone
Encryption overlay remote
Avatar
(ii) file names beyond a certain length are stored as part of the file's contents and excessively long folder names could include a metadata file inside; or (iii) all actual file/folder names within a folder are stored in a "header" file in the folder and encrypted there, and the "encrypted" names are just random identifiers referenced in the "header" file.
I mean, we're open to changes and improvements, but we also plan to keep compatibility with Rclone, and I'm not sure they would be willing to accept a "small" privacy enhancement at a relatively huge cost (complexity, performance, robustness). Regarding (ii): if the file name is stored inside the file's contents, then for many clouds and providers (S3 included) a rename means a full object read/write, including multi-GB media files. That's just not feasible. Regarding (iii): this idea is likely more feasible, but again, most cloud providers don't support atomic writes where both the file contents and a header file can be written with some guarantees; this quickly leads to complex code that tries to resolve race conditions reliably enough. As far as I know, a similar approach is used by Cryptomator, and I often hear users complaining about data corruption: https://github.com/cryptomator/cryptomator/issues/3296 Thank you for your feedback, and as always I'd be happy to talk through any idea/suggestion!
Avatar
Avatar
Pietron
Hey, just wanted to say I'm super glad you developed the app. It's awesome! Now that I've tested it with a 1-month subscription, I'll switch to either a yearly subscription or a lifetime license 🙂 Thanks a lot! The only annoyance I feel is a UX one: if I have 1000 photos in one directory and I'm browsing through them, and I open a photo in the middle (e.g., the 500th), then go back to the listing, it puts me back at the very beginning, so I have to scroll down again to where I was. (This is something I really don't like (CZ: it's just a terrible hassle).) (edited)
Hi @Pietron, this is to let you know that we've partially addressed this issue (scroll position persistence) in the most recent 1.15.0 release!
Exported 15 message(s)
Timezone: UTC+0