S3Drive
Community / general
For all on-topic discussion about S3Drive or related storage providers.
Deleted User
Hello @Tom, first of all thank you for creating such an amazing app. This is what I've been looking for, for ages, since I can't run Termux and similar apps on my device. I would like to report a crash issue I've noticed while using S3Drive on my P7P running GrapheneOS (Android 14). I've been able to capture the attached crash logs so you can inspect the issue further. Would that be ok? Where can I share those details? I would also like to know whether background sync should in fact work or not. On my device I'm able to confirm the app was still running, since I get those crashes, but the sync never happens unless I open the app manually. Thanks for your hard work! (edited)
Hi there, thanks for your message. Please send it over to support at the s3drive.app domain. It's indeed good to know that it somewhat works on GrapheneOS, as we haven't had a chance to try it out just yet. Background sync should work fine on Android, but as always the devil is in the details, that is battery settings and how aggressively the phone manufacturer interferes with the background framework... there is always a possibility that we've introduced some bug or there is some edge case we haven't handled yet. We would be happy to try reproducing the "background" issue that you're experiencing and work on a solution. What's your back-end type, is that S3 or Rclone? Do you have E2E enabled? (edited)
Deleted User 7/1/2024 4:50 PM
I'm currently using Rclone to sync over to Proton! I will open up an issue as soon as I have some free time. Right now I'm trying different exploit-protection compatibility modes, which are implemented by GrapheneOS itself, to spot any differences
4:50 PM
The crash itself seemed related to a service getting started when Bluetooth is enabled (?): Unable to create service com.ryanheise.audioservice.AudioService: java.lang.ClassNotFoundException: Didn't find class "com.ryanheise.audioservice.AudioService"
Tom
Oh dear, we thought we'd removed that completely, as it was causing all sorts of background issues (at the cost of background audio player support on Android, but that's a lower priority than a robust app - https://pub.dev/packages/just_audio), yet it still haunts us. We've had one more report regarding Bluetooth causing issues, so it seems there was a regression in one of the past versions. We're going to redirect our efforts and hopefully develop a fix within the next couple of days. (edited)
mix9311
Backblaze upload of files larger than 100 MB still errors anyway
Tom
In one of the next releases we will deploy improved multipart upload with per-part auto-retry. Backblaze is facing consistent API failures, but eventually the upload works fine. Their official stance (https://github.com/mastodon/mastodon/issues/30030#issuecomment-2123580531) is that software should issue infinite retries until it works, and that's more or less what we plan on doing, which will improve the situation. Backblaze:
We are currently engaged in improving performance across all data centers however this is a broad project and we do not specifically have a time frame available for when these service availability improvements may complete.
Also related: https://www.reddit.com/r/backblaze/comments/13eoicm/consistent_upload_failures/
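Per-part auto-retry of this kind is usually a capped exponential backoff loop. A minimal sketch follows; the `upload` callable, attempt count and delays are hypothetical illustrations, not S3Drive's actual implementation:

```python
import random
import time

def upload_part_with_retry(upload, part_no, data, max_attempts=8, base_delay=0.5):
    """Retry a single part upload with capped exponential backoff and jitter.

    `upload` is a hypothetical callable that raises on a failed attempt;
    Backblaze's guidance is effectively "keep retrying until it works".
    """
    for attempt in range(max_attempts):
        try:
            return upload(part_no, data)
        except OSError:
            if attempt == max_attempts - 1:
                raise  # give up only after exhausting all attempts
            # base, 2x base, 4x base, ... capped at 30 s, with jitter
            # so many clients don't retry in lockstep
            time.sleep(min(30, base_delay * 2 ** attempt) * random.uniform(0.5, 1.5))
```

Because each part retries independently, one flaky part no longer fails the whole multipart upload.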
I2rys (安全) joined the server. 7/2/2024 12:21 PM
I2rys (安全) 7/2/2024 12:51 PM
@Tom Hello, first off thank you very much for this very awesome program. Second, I'm planning on buying S3Drive, but may I ask how reliable S3Drive's encryption is when it comes to corruption, compared to Cryptomator (integrity)? Third, how many files can S3Drive handle? I'm pretty sure it depends on how much storage your cloud has, but let's say a single folder contains about 300+ files, will it be able to handle such a thing? (edited)
Tom
Hi, both Cryptomator and S3Drive (which is directly compatible with Rclone - https://rclone.org/crypt/#chunk) use authenticated encryption, so if there is any data corruption both encryption schemes can detect it. Each 64 KiB of data (that's quite small) is a separate block which is "authenticated" using 16 bytes, so if there is some corruption within a block you would still be able to decrypt it, however you wouldn't be able to automatically confirm that the other data within that block is intact (you would need to verify it yourself, e.g. play the video). In the case of corruption within one block, the surrounding 64 KiB data blocks would remain intact, decryptable and verifiable.

Speaking of large file counts and huge folders, the answer is: it depends. If you want to upload/download/browse, it's usually fine. The challenging part on our end is copy/rename, as the S3 protocol doesn't natively support bulk operations or rename... so if you have a folder with 300 files and you want to rename that folder, then S3Drive needs to issue 301 COPY operations and then 301 DELETE operations. That's 602 requests, which depending on the network latency and the S3 API speed will take a while. We will be improving concurrency modes within the app to improve the overall speed. We also plan to release managed S3-compatible storage plans, which will support different protocols that are more performant for bulk operations and rename.

Larger file counts won't slow down the Files listings, as they're not recursive (they only fetch the amount of data needed to fill the screen), however functions like Storage stats, Recent, Trash and the Cached mode index rebuild will gradually get slower with more files. It's hard to tell where the tipping point is, as we constantly release new improvements and have lots of features planned to improve overall performance: https://s3drive.canny.io/feature-requests/p/implement-searchorder-index-diff-rebuild I hope that helps a little! (edited)
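The request arithmetic above can be sketched as a quick back-of-the-envelope function. The helper name is made up; the `+ 1` matches the 301 figure in the message and presumably accounts for a folder placeholder object alongside the 300 files:

```python
def rename_folder_requests(num_files: int) -> int:
    # S3 has no native rename: every object under the old prefix must be
    # COPY'd to its new key and then DELETE'd from the old one.
    objects = num_files + 1   # +1: the folder placeholder object
    copies = objects          # one COPY request per object
    deletes = objects         # one DELETE request per object
    return copies + deletes

print(rename_folder_requests(300))  # 602
```

Each of those requests pays a full network round trip, which is why concurrency (issuing many COPY/DELETE calls in parallel) is the main lever for speeding this up.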
I2rys (安全) 7/2/2024 2:13 PM
Thanks Tom for your reply! I'm wondering, what do you mean by detect it? The reason I'm asking is that I've been a long-time user of Cryptomator, and for some reason renaming or moving large files sometimes corrupts them (unable to recover), and deleting them just returns an error. Also, speaking of large files, how does S3Drive handle a network that suddenly disconnects?
2:19 PM
P.S. After buying a subscription you need to log out and log in again for it to apply (Windows). Would be awesome if it applied automatically upon buying. ;) (edited)
Tom
Cryptomator's cipher is somewhat more advanced, which is a double-edged sword. I haven't studied their model in detail, but if you experience data loss during rename/move then it's likely related to the additional protection of data movement mentioned here: https://docs.cryptomator.org/en/1.4/security/architecture/#filename-encryption I am not sure if the failure you experience is related to a Cryptomator bug or a design issue (e.g. expecting consistent storage, while cloud providers don't always guarantee consistency).

Cryptomator needs to store an additional file, masterkey.cryptomator, located in the root directory of the vault. If for some reason this file isn't synced in your vault, you won't be able to decrypt your files: https://docs.cryptomator.org/en/latest/security/architecture/#masterkey-file

Directory contents move/rename protection isn't present in the Rclone cipher, which I've mentioned here: https://github.com/rclone/rclone/issues/7192 (scroll down to: 4. No path protection), so there is no risk that data will be corrupted during move/rename. We receive bug reports frequently from our users, but no data corruption has been mentioned (other than lost passwords), if that helps you feel reassured. (edited)
I'm wondering what do you mean by detect it?
By saying detect, I mean that the encryption scheme in both cases has a built-in mechanism to verify data integrity, so it's not possible to flip a bit of data in the middle of a file (HDD corruption or a deliberate attack by an adversary) and expect that this will go undetected. In some use cases the user must be assured that the content they've encrypted actually belongs to them and wasn't altered, e.g. some legal text or some evidence etc.
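This property can be illustrated with Python's standard library. The Rclone cipher actually uses XSalsa20-Poly1305 (NaCl secretbox); the truncated HMAC below is only a runnable stand-in for the 16-byte tag, showing why a flipped bit cannot go unnoticed:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)
block = b"pretend this is one 64 KiB ciphertext block" * 100

def tag(data: bytes) -> bytes:
    # Stand-in for the 16-byte Poly1305 authentication tag.
    return hmac.new(key, data, hashlib.sha256).digest()[:16]

original_tag = tag(block)

# Flip a single bit in the middle of the block (HDD corruption,
# or a deliberate alteration by an adversary)...
tampered = bytearray(block)
tampered[len(tampered) // 2] ^= 0x01

print(hmac.compare_digest(original_tag, tag(block)))            # True: intact
print(hmac.compare_digest(original_tag, tag(bytes(tampered))))  # False: detected
```

Note there is no repair here: the tag only tells you the block was altered, it cannot tell you which bytes to fix.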
2:40 PM
Also speaking of large files how does S3Drive handle when a network suddenly disconnects?
It depends whether you refer to upload or download, and whether you mean encrypted or unencrypted data. There are multiple safety mechanisms in place to prevent data corruption. For instance, if a file is uploaded encrypted, S3Drive calculates the expected encrypted size (which differs from the unencrypted size) and verifies with the S3 endpoint that it matches.
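For the Rclone cipher the expected encrypted size is a pure function of the plaintext size: a 32-byte file header (8-byte magic plus 24-byte initial nonce) and a 16-byte Poly1305 tag per 64 KiB block, per the rclone crypt documentation. A check like the one described can therefore be sketched as:

```python
import math

HEADER = 32        # 8-byte magic + 24-byte initial nonce (rclone crypt header)
BLOCK = 64 * 1024  # plaintext bytes carried by each encrypted block
TAG = 16           # Poly1305 authentication tag appended to each block

def expected_encrypted_size(plain_size: int) -> int:
    blocks = math.ceil(plain_size / BLOCK)
    return HEADER + plain_size + TAG * blocks

# These match the example sizes in the rclone crypt docs:
print(expected_encrypted_size(0))      # 32
print(expected_encrypted_size(1))      # 49
print(expected_encrypted_size(65536))  # 65584
```

Comparing this number against the Content-Length reported by the S3 endpoint is a cheap end-to-end sanity check after an upload.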
2:44 PM
P.S After buying a subscription you will need to logout and login again for it to apply (Windows). Would be awesome if it automatically applies upon buying. 😉
Thanks for the heads up and thank you for your purchase! ❤️ We haven't yet implemented a reliable mechanism to auto-detect a subscription purchase. As much as possible we try to keep the app disconnected from any APIs which aren't required for day-to-day use, as more external requests mean more questions and risks around data privacy.
(edited)
I2rys (安全) 7/2/2024 2:56 PM
Yeah, and you're probably right about the failure, but I'll just report it to them if I can find the exact issue. As for the rename, that's a big relief for me because this data is precious and I'm also a bit paranoid. As for the detect, I see, thank you for the clarification. I'm not much into cryptography but I do have some knowledge about it. I was thinking it would automatically repair broken bytes whilst reading the file lol. Last questions: Is there a way to donate to S3Drive with any amount? You see, I'm a big fan of projects like these, so I often donate to help the developers if I can. Also, where can I suggest features or perhaps report bugs? Another P.S.: uploaded files with a dot can't be seen in Transfers even though Hide Dotfiles is disabled. (edited)
Tom
Yeah, and you're probably right about the failure, but I'll just report it to them if I can find the exact issue. As for the rename, that's a big relief for me because this data is precious and I'm also a bit paranoid.
Honestly speaking, to be on the safe side, regardless of whether it's Cryptomator, S3Drive or anything else, the user should implement some backups. There are so many places where things can go wrong, and even if it's not an S3Drive issue, the cloud provider might go bust, or quite simply a data centre might catch fire (as happened with OVH recently).
As for the detect, I see, thank you for the clarification. I'm not much into cryptography but I do have some knowledge about it. I was thinking it would automatically repair broken bytes whilst reading the file lol
There isn't much redundancy built into the encryption scheme itself, however the underlying storage often has redundancy (e.g. Reed-Solomon) to recover lost bits/bytes, which is usually enough for most cases.
Last questions: Is there a way to donate to S3Drive with any amount? You see, I'm a big fan of projects like these, so I often donate to help the developers if I can. Also, where can I suggest features or perhaps report bugs?
It's really nice of you. We don't have any open donation channel, however we fund development by selling subscriptions and licenses. If you would like to help out, feel free to spread the love, recommend our app or leave us a comment. This in turn brings us more users, who may or may not decide to buy the paid version.
Also where can I suggest features or perhaps report bugs?
Ideally please add them here: #support or, if there is anything requiring more confidentiality, please send it to support at s3drive.app or DM me.
Another P.S.: uploaded files with a dot can't be seen in Transfers even though Hide Dotfiles is disabled.
What's your OS and which functions have you used? Was it Upload files or Upload folder, etc.?
(edited)
I2rys (安全) 7/2/2024 9:48 PM
I understand, and yeah, you're right: no matter the software you use to upload encrypted files, you must always have a backup. As for my case, good thing I keep about 3 backups of the original data regularly, hahah. Also, thanks. Btw it's Windows 10 and I used Upload folder. May I ask some questions again? XD
  • Why does a folder disappear after deleting the .empty file? Is it automatically deleted by S3Drive?
  • Is it just me or does Cyberduck seem faster than S3Drive? It seems to have something to do with how they upload. I get about 9-10 Mbps in Cyberduck while I only get around 3-5 Mbps in S3Drive
Tom
Why does a folder disappear after deleting the .empty file? Is it automatically deleted by S3Drive?
There isn't really a concept of folders within the S3 protocol. Folders are emulated and are a direct consequence of a Common Prefix shared by one or more files within a "/"-delimited path. We use the .empty file as a single file from which its parent folder prefix is born. Alternatively we could try to insert a folder/ key as a folder placeholder, but that's not universally supported across providers and isn't a cross-compatible approach, e.g. MinIO (the most common self-hosted file-based S3 server) doesn't support it.
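The prefix mechanics can be sketched in a few lines. `list_folder` below is a toy model of an S3 delimiter listing, not any actual client API:

```python
def list_folder(keys, prefix=""):
    """Toy model of an S3 LIST with delimiter="/": object keys form a flat
    namespace, and "folders" are just common prefixes of existing keys."""
    files, folders = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            # Everything up to the first "/" becomes a common prefix ("folder")
            folders.add(prefix + rest.split("/", 1)[0] + "/")
        else:
            files.append(key)
    return sorted(files), sorted(folders)

keys = {"photos/.empty", "photos/cat.jpg", "notes.txt"}
print(list_folder(keys))   # 'photos/' appears because some key carries its prefix

# Delete every key under the prefix and the "folder" ceases to exist:
keys -= {"photos/.empty", "photos/cat.jpg"}
print(list_folder(keys))
```

This is why deleting the .empty placeholder of an otherwise empty folder makes the folder itself vanish: with no keys left under the prefix, there is nothing for the listing to derive the folder from.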
Is it just me or does Cyberduck seem faster than S3Drive? It seems to have something to do with how they upload. I get about 9-10 Mbps in Cyberduck while I only get around 3-5 Mbps in S3Drive
What's your platform, is that macOS? I assume that E2E encryption is enabled? Do you use multipart upload? Do you refer to upload speed through the app or mount write speed? Is it the combined speed of a multiple-file upload or a single big file upload?

In 1.9.3, which we're just releasing (on macOS it shall be available within ~2 days), there is improved concurrency for multipart uploads: https://s3drive.app/changelog which might make multipart uploads faster than with multipart disabled (it was the other way round previously). The speed bottleneck for a single big file might still be the single-threaded XSalsa20-Poly1305 encryption (which is lighter than the AES-GCM used by Cryptomator, but doesn't benefit from the hardware acceleration that AES has). For multiple-file uploads encryption speed shouldn't be an issue, as the encryption load is spread evenly across multiple CPU threads/cores. In the next releases we will be improving single big file encryption speeds by providing multi-threaded chunk encryption. We're actively improving in all these areas.
(edited)
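The multi-threaded chunk encryption idea can be sketched with a thread pool. This is only an illustration of the concept, not S3Drive's implementation; the keyed BLAKE2 call stands in for sealing one chunk with XSalsa20-Poly1305 so the sketch stays stdlib-only:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

KEY = b"\x01" * 32
CHUNK = 64 * 1024  # 64 KiB plaintext chunks, mirroring the cipher's block size

def seal_chunk(args):
    index, chunk = args
    # Stand-in for per-chunk XSalsa20-Poly1305 sealing. hashlib releases the
    # GIL for large inputs, so the worker threads really do run concurrently.
    return index, hashlib.blake2b(chunk, key=KEY).digest()

data = bytes(range(256)) * 1024  # 256 KiB sample "file"
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

with ThreadPoolExecutor() as pool:
    sealed = dict(pool.map(seal_chunk, enumerate(chunks)))

# Reassemble in index order so the output is independent of thread scheduling.
ordered = [sealed[i] for i in range(len(chunks))]
print(len(ordered))  # 4
```

Because each chunk is sealed independently, the work parallelises cleanly; the only ordering constraint is reassembling (or uploading) the chunks by index afterwards.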
I2rys (安全) 7/2/2024 11:27 PM
I'm kind of new to S3. I did some experiments and yeah, it does disappear if a folder doesn't have any files in it, and .empty is used to make sure the folder can be identified, if I'm not mistaken. And no, I'm using Windows, and yes, E2E encryption is enabled. As for multipart upload, I decided to disable it because it's much slower, and I use the app because WebDAV is slower. I see, that makes sense. Well, I'm excited for the new update!
Tom
This is to let you know that 1.9.3 is released on Windows, feel free to try out multipart mode. In the next releases we will be tweaking concurrency settings, further improving multipart mode and exposing settings to the user, so they can tweak it according to their desired use. If you face any issues with the app or would like to submit a new request, please visit: #support Thanks! (edited)
johan joined the server. 7/4/2024 1:55 AM
I2rys (安全) 7/5/2024 3:17 AM
Welcome.
Slotherman joined the server. 7/6/2024 7:42 AM
Slotherman
Question: under the Sync configuration menu I have the From and To "Local" button greyed out, meaning I cannot sync from/to local and only Remote/Remote is available. How/where do I configure a "Local" entry so that I can sync to a remote bucket? I'm on iOS, 🐸
Tom
Local sync is available on all platforms except iOS, because iOS doesn't expose the local file system, which is required by the current sync implementation. We're exploring workarounds, but most likely in the near future the Local option will point to some in-app storage which could be shared with other apps; giving the current Sync direct access to e.g. the Download or Documents folder won't be possible any time soon.
Slotherman
Is the problem with folders specifically? I know Strongbox (KeePass) can refer to files in the local filesystem in other app directories (in my case the Syncthing folder)
6:15 PM
(link preview: strongbox-password-safe/Strongbox on GitHub, "A KeePass/Password Safe Client for iOS and OS X")
Maybe it would be nice to have a note saying something like this in the sync interface or in the documentation; I tried looking it up but couldn't find it
Tom
Cool, thanks for this resource. We'll definitely have a look and see what we can feasibly implement. We'll certainly allow a way of accessing data from/to other apps in the future. Whether this cross-app access can be used to integrate with our current Sync (which is more of a desktop-class sync, reworked to run on mobile) is hard to tell; we may have to build some specific, simplified Sync just for iOS.
Maybe it’s nice to have a text saying something like this in the sync interface or in the documentation, I tried looking it up but couldn’t find it
Sure, yes, it's a bit confusing at the moment. We'll improve on that. The reason we haven't done it already is likely that we'd hoped we could get this to work sooner. (edited)
neoOpus joined the server. 7/9/2024 6:55 AM
Unlawful Cactus 7/10/2024 9:41 AM
Are you guys aware that the storage information has started showing null values on Android recently? I noticed it yesterday in version 1.9.4, and it's still there in 1.9.5. On the S3 backend it's corrected after refreshing; on the rclone WebDAV backend it isn't. Also, what it says there makes no sense: 690.70 MB used, files: nullof 10 GB (edited)
9:43 AM
The null values on the S3 backend happen/return when switching from the rclone backend to S3.
Tom
Oh dear, thanks for finding that out. We're going to have it addressed in the next release. Sorry for this gibberish, especially on Rclone back-ends.
Since we've added a new files counter, it will show up as null for any preexisting stats until refreshed with the new S3Drive version... however, it seems there is a persistence issue (with the files counter) when e.g. the user switches to another S3/Rclone account and returns. This will be fixed in the next release. Thanks for finding that out. (edited)
YFL joined the server. 7/11/2024 8:05 AM
Womanizer joined the server. 7/11/2024 4:12 PM
un1c0rn joined the server. 7/14/2024 7:21 AM
Dongchen Yue | 岳东辰 joined the server. 7/14/2024 9:57 AM
Dongchen Yue | 岳东辰 7/14/2024 9:59 AM
Hi. Where is the decryption key persistently stored in device storage? Edit: I just heard about "rclone obscure". So it's not safe to assume the security of keys on the machine running S3Drive, then (edited)
I2rys (安全) 7/14/2024 9:59 AM
Welcome
I2rys (安全) 7/14/2024 10:01 AM
@Tom superYay
Tom
rclone obscure is meant to prevent casual snooping only. Imagine someone watching over your shoulder.
Your back-end credentials and encryption key are stored in the platform's trusted store, depending on the OS: e.g. Keychain on iOS, KeyStore on Android or libsecret on Linux. What's your OS? (edited)
Dongchen Yue | 岳东辰 7/14/2024 11:44 PM
Thanks. So S3Drive does not use rclone config files for storing the key? Ubuntu 24.04
smantzavinos joined the server. 7/15/2024 12:19 AM
Satoshi joined the server. 7/15/2024 4:45 AM
Avatar
Avatar
Dongchen Yue | 岳东辰
Thanks. So S3Drive does not use rclone config files for storing the key? Ubuntu 24.04
If you set up an S3 back-end from the S3 tab, then S3Drive doesn't use Rclone unless you use mount or sync. If you set up a back-end from the Rclone tab, or use mount/sync for any back-end, then S3Drive uses the Rclone config stored at the Rclone config file path, which on Ubuntu likely resolves to: /home/<user>/.config/rclone/rclone.conf. We will be implementing config encryption in order to secure the Rclone config: https://rclone.org/docs/#configuration-encryption EDIT: Added feature request: https://s3drive.canny.io/feature-requests/p/rclone-encrypted-config (edited)
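For reference, an unencrypted rclone.conf keeps credentials in plaintext; the remote name and values below are hypothetical:

```ini
# /home/<user>/.config/rclone/rclone.conf (locate yours with: rclone config file)
[myremote]
type = s3
provider = AWS
access_key_id = AKIAEXAMPLE
secret_access_key = examplesecretkey
```

Until config encryption lands in S3Drive itself, the interactive `rclone config` menu already offers a "Set configuration password" option to encrypt this file.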
Lord Wellen Dowd joined the server. 7/15/2024 10:00 AM
Brunvik joined the server. 7/15/2024 3:10 PM
Mitch Crimi (PrefersAwkward) joined the server. 7/15/2024 5:50 PM
SoulKeeper joined the server. 7/16/2024 12:16 AM
Avatar
@Tom I am sorry for bothering you. I just bought the monthly ultimate plan probably an hour ago to pair it with IDrive e2 on my Android phone. But after trying it out and comparing it with Round Sync, I do not think I need S3Drive app right now. I will probably come back later, but for now I would like to stick to Round Sync app as it is free and meets my need. I would really appreciate if you could kindly cancel my subscription and process a refund? But if you can not process a refund, I will respect your decision. Thank you!
12:24 AM
Sorry for messing here. I could not find your support email on the website
Avatar
Avatar
SoulKeeper
@Tom I am sorry for bothering you. I just bought the monthly ultimate plan probably an hour ago to pair it with IDrive e2 on my Android phone. But after trying it out and comparing it with Round Sync, I do not think I need S3Drive app right now. I will probably come back later, but for now I would like to stick to Round Sync app as it is free and meets my need. I would really appreciate if you could kindly cancel my subscription and process a refund? But if you can not process a refund, I will respect your decision. Thank you!
Hi, no worries, if you've ordered through Google Play you should be able to easily cancel your fresh purchase: https://support.google.com/googleplay/workflow/9813244?hl=en If you've ordered through our website, please send us a quick e-mail to support within the s3drive.app domain. (edited)
Request a Google Play purchase refund with our easy, self-help flow that lets you skip the line and submit refund requests on Play purchases.
Avatar
Avatar
Tom
Hi, no worries, if you've ordered through Google Play you should be able to easily cancel your fresh purchase: https://support.google.com/googleplay/workflow/9813244?hl=en If you've ordered through our website, please send us a quick e-mail to support within the s3drive.app domain. (edited)
Thank you Sir. I have sent an email to your support email along with a screenshot of the above message so whoever in your team reads my email will understand that I spoke with you.
Avatar
Avatar
SoulKeeper
Thank you Sir. I have sent an email to your support email along with a screenshot of the above message so whoever in your team reads my email will understand that I spoke with you.
No worries, we've refunded your order; you should've received a confirmation by e-mail. It usually takes a couple of days for the funds to be processed back to your account.
Avatar
Avatar
SoulKeeper
Thank you Sir. I have sent an email to your support email along with a screenshot of the above message so whoever in your team reads my email will understand that I spoke with you.
Speaking of the Windows issues that you've experienced on 1.9.6, you can always find previous versions on our GitHub page: https://github.com/s3drive/windows-app/releases By any chance, does the issue exist on 1.9.4 as well?
Avatar
Avatar
Tom
No worries, we've refunded your order; you should've received a confirmation by e-mail. It usually takes a couple of days for the funds to be processed back to your account.
Thank you! I will keep a close eye on your development and will subscribe once you start offering storage (apart from the 10GB free).
Avatar
Avatar
Tom
Speaking of the Windows issues that you've experienced on 1.9.6, you can always find previous versions on our GitHub page: https://github.com/s3drive/windows-app/releases By any chance, does the issue exist on 1.9.4 as well?
The issue I have been experiencing with your app is kinda strange (I have never come across this issue with any other app). So, this is my office laptop with password-protected admin access (I know the admin credential, by the way). Usually, some apps ask me for an admin ID and password while installing and some apps do not. But regardless of the admin permission at the beginning, all the apps work flawlessly afterwards without asking for admin access again. However, your app does not ask me for any admin access while installing. When I open the app, it gives me a login page. I enter the login credential and it gets stuck at "Logging in", and after 10-15 seconds the app force closes. Afterwards, whenever I try to open the app, it force closes with a black screen. Here comes the interesting part. If I right click on the app, "run as administrator" and enter the admin credential, it starts working normally. I have no idea why the app is acting this way.
8:16 AM
I don't know if it makes sense to you at all. I tried uninstalling and reinstalling the app multiple times but it's behaving the same way. Without running the app as administrator, it force closes.
Avatar
Avatar
SoulKeeper
The issue I have been experiencing with your app is kinda strange (I have never come across this issue with any other app). So, this is my office laptop with password-protected admin access (I know the admin credential, by the way). Usually, some apps ask me for an admin ID and password while installing and some apps do not. But regardless of the admin permission at the beginning, all the apps work flawlessly afterwards without asking for admin access again. However, your app does not ask me for any admin access while installing. When I open the app, it gives me a login page. I enter the login credential and it gets stuck at "Logging in", and after 10-15 seconds the app force closes. Afterwards, whenever I try to open the app, it force closes with a black screen. Here comes the interesting part. If I right click on the app, "run as administrator" and enter the admin credential, it starts working normally. I have no idea why the app is acting this way.
Thanks for that, that's really helpful. I will pass it to our team so they can investigate. Is that Windows 10 or 11?
Avatar
Avatar
Tom
Thanks for that, that's really helpful. I will pass it to our team so they can investigate. Is that Windows 10 or 11?
Windows 11 Pro
👌 1
Avatar
Avatar
Tom
If you set up an S3 back-end from the S3 tab, then S3Drive doesn't use Rclone unless you use mount or sync. If you set up a back-end from the Rclone tab, or use mount/sync for any back-end, then S3Drive uses the Rclone config stored at the Rclone config file path, which on Ubuntu likely resolves to: /home/<user>/.config/rclone/rclone.conf. We will be implementing config encryption in order to secure the Rclone config: https://rclone.org/docs/#configuration-encryption EDIT: Added feature request: https://s3drive.canny.io/feature-requests/p/rclone-encrypted-config (edited)
Unlawful Cactus 7/16/2024 12:16 PM
I assume on Android it's stored in S3Drive's private app data directory?
Avatar
Avatar
SoulKeeper
@Tom I am sorry for bothering you. I just bought the monthly ultimate plan probably an hour ago to pair it with IDrive e2 on my Android phone. But after trying it out and comparing it with Round Sync, I do not think I need S3Drive app right now. I will probably come back later, but for now I would like to stick to Round Sync app as it is free and meets my need. I would really appreciate if you could kindly cancel my subscription and process a refund? But if you can not process a refund, I will respect your decision. Thank you!
Unlawful Cactus 7/16/2024 12:20 PM
Keep in mind that Round Sync has lots of bugs and isn't actively developed. When I tested it I ran into so many bugs I lost all trust in it. Of course your mileage may vary, it may depend on use-case.
Avatar
Avatar
Unlawful Cactus
I assume on Android it's stored in S3Drive's private app data directory?
Pretty much, yes: https://developer.android.com/training/data-storage/app-specific#internal-access-files
Other apps cannot access files stored within internal storage. This makes internal storage a good place for app data that other apps shouldn't access.
👍 1
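As a rough sanity check (the package name below is hypothetical; S3Drive's actual applicationId may differ), app-private storage isn't readable from a plain adb shell:

```shell
adb shell ls /data/data/app.s3drive/files
# typically fails with a permission error unless the device is rooted
```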
Avatar
Riccardo Bellanova 7/16/2024 3:58 PM
Hello @Tom, in your opinion is it possible to have a loading status (%) for download/upload operations with the mounted drive? (edited)
Avatar
Avatar
Riccardo Bellanova
Hello @Tom, in your opinion is it possible to have a loading status (%) for download/upload operations with the mounted drive? (edited)
Hi, such a loading status should already be provided by your operating system. Depending on the VFS cache settings this may or may not be the case. If you disable the VFS cache completely from the app, then the loading indicator provided by the OS will block until files are copied. Does it work for you, or perhaps I've misunderstood your question? Of course we could provide additional status indicators within the app, e.g. if there is some pending operation the tray icon could show an additional badge, and possibly the tray menu could show some text, e.g.: 45MB out of 120MB etc.
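When mounting with rclone directly, the behaviour described above corresponds to the --vfs-cache-mode flag (the remote and mount point names below are hypothetical):

```shell
# No local cache: the OS copy dialog blocks until data reaches the remote
rclone mount myremote:bucket /mnt/s3 --vfs-cache-mode off

# Full cache: writes land on the local disk first, so copies finish quickly
# and uploading continues in the background
rclone mount myremote:bucket /mnt/s3 --vfs-cache-mode full
```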
Avatar
Avatar
Tom
Hi, such a loading status should already be provided by your operating system. Depending on the VFS cache settings this may or may not be the case. If you disable the VFS cache completely from the app, then the loading indicator provided by the OS will block until files are copied. Does it work for you, or perhaps I've misunderstood your question? Of course we could provide additional status indicators within the app, e.g. if there is some pending operation the tray icon could show an additional badge, and possibly the tray menu could show some text, e.g.: 45MB out of 120MB etc.
Riccardo Bellanova 7/16/2024 4:06 PM
I've got the cache mode set to Full, but with the Nemo file explorer I can't see anything about loading
4:07 PM
btw it could be nice to also add the mounted drive operations to the Transfers section
Avatar
Avatar
Riccardo Bellanova
I've got the cache mode set to Full, but with the Nemo file explorer I can't see anything about loading
With the setting set to Full the copy will be very fast, because it stays within your HDD/SSD. If the file isn't big, the loading indicator may not even appear. Can you try disabling the VFS cache (use "off") and stopping/starting the mount? Regardless, as you say, a loading indicator would probably be helpful as well. We will add it to our internal list, however we can't promise an ETA at this stage.
Avatar
Riccardo Bellanova 7/16/2024 4:09 PM
okok thank you
Ludwig joined the server. 7/16/2024 7:09 PM
M2B joined the server. 7/17/2024 3:05 PM
MidnightSoup joined the server. 7/18/2024 6:55 PM
Midnight-Soup joined the server. 7/18/2024 6:55 PM
Avatar
Midnight-Soup 7/18/2024 7:01 PM
Hey, nice app! I have a multi-part question, part of which I think is addressed in the docs, but I'd just like to verify before purchase please. From the docs I understand I can sync foo.txt from a device to 1 or many S3 providers at once, which is great. Say I have synced a bunch of files already to a single S3 bucket only; is there the concept of syncing that bucket to 1 or many other buckets to bring them into unison? Also, this is more of a nice-to-have: can I configure sync rules for different directories to sync to different buckets? Many thanks
Avatar
Avatar
Midnight-Soup
Hey, nice app! I have a multi-part question, part of which I think is addressed in the docs, but I'd just like to verify before purchase please. From the docs I understand I can sync foo.txt from a device to 1 or many S3 providers at once, which is great. Say I have synced a bunch of files already to a single S3 bucket only; is there the concept of syncing that bucket to 1 or many other buckets to bring them into unison? Also, this is more of a nice-to-have: can I configure sync rules for different directories to sync to different buckets? Many thanks
I2rys (安全) 7/18/2024 9:46 PM
Hi! A customer here. Yes, you can sync a single bucket to multiple other buckets with S3Drive. You can also configure which directory you want the data to be synced to. The current modes are: Copy, Sync, Move, Two-way (edited)
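Since S3Drive's sync is built on Rclone, the four modes presumably correspond to the standard rclone operations (bucket and remote names below are hypothetical):

```shell
rclone copy   remoteA:bucket remoteB:bucket   # Copy: add/overwrite, never delete
rclone sync   remoteA:bucket remoteB:bucket   # Sync: make destination identical to source
rclone move   remoteA:bucket remoteB:bucket   # Move: copy, then delete from source
rclone bisync remoteA:bucket remoteB:bucket   # Two-way: propagate changes in both directions
```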
Avatar
Avatar
I2rys (安全)
Hi! A customer here. Yes, you can sync a single bucket to multiple other buckets with S3Drive. You can also configure which directory you want the data to be synced to. The current modes are: Copy, Sync, Move, Two-way (edited)
Midnight-Soup 7/18/2024 10:02 PM
Spot on thanks a lot! I'll get my order in.
Deleted User joined the server. 7/19/2024 12:15 AM
Avatar
Avatar
Midnight-Soup
Spot on thanks a lot! I'll get my order in.
I2rys (安全) 7/19/2024 1:41 AM
Anytime!
Avatar
Mount
Avatar
Avatar
MalionRay
Mount
I2rys (安全) 7/19/2024 9:00 AM
ain
Avatar
Midnight-Soup 7/19/2024 4:04 PM
This is exactly what I've been looking for, I can bin off my shonky collection of cronjobs. Hopefully a last question - I learned the hard way that if you swap profile mid-transfer, the transfer fails - I am guessing there's no mitigation for that?
Avatar
Avatar
Midnight-Soup
This is exactly what I've been looking for, I can bin off my shonky colletion of cronjobs. Hopefully last question - I learned the hard way if you swap profile mid transfer, the transfer fails - I am guessing no mitigation for that?
Hi! Founder here. Even though multiple accounts is the most commonly used app feature, initially S3Drive supported only a single set of S3 credentials. This makes some app functionality constrained to the currently selected profile, as the UI and/or internals weren't built with multiple accounts in mind. The major exception to this rule is the Sync feature, which was built early this year and can already support multiple accounts at a time. We will improve in that area; we track this under this feature request: https://s3drive.canny.io/feature-requests/p/support-simultaneous-accounts-operation (edited)
Avatar
Avatar
Tom
Hi! Founder here. Even though multiple accounts is the most commonly used app feature, initially S3Drive supported only a single set of S3 credentials. This makes some app functionality constrained to the currently selected profile, as the UI and/or internals weren't built with multiple accounts in mind. The major exception to this rule is the Sync feature, which was built early this year and can already support multiple accounts at a time. We will improve in that area; we track this under this feature request: https://s3drive.canny.io/feature-requests/p/support-simultaneous-accounts-operation (edited)
Midnight-Soup 7/20/2024 11:08 AM
Great, thank you!
Avatar
Hi, I recently updated S3Drive and now it's asking for my email address and password. Is this now required?
Avatar
Avatar
myfrogger
Hi, I recently updated S3Drive and now it's asking for my email address and password. Is this now required?
I2rys (安全) 7/23/2024 3:01 AM
Hello, as far as I know it's always required so S3Drive can check whether you have a subscription or not. (edited)
Avatar
It wasn't in the past. It's my fault for not being active in this community when that tracking stuff was implemented. I am willing to buy a license, but by providing an email address our files are now associated with some personally identifiable information. (edited)
Avatar
Avatar
myfrogger
It wasn't in the past. It's my fault for not being active in this community when that tracking stuff was implemented. I am willing to buy a license, but by providing an email address our files are now associated with some personally identifiable information. (edited)
I2rys (安全) 7/23/2024 3:17 AM
That only applies if you use S3Drive's own storage, not external storage from other providers. So if you're using another provider with S3Drive as the client, it won't associate your files with your account. (edited)
name joined the server. 7/23/2024 7:59 AM
Avatar
Avatar
myfrogger
Hi, I recently updated S3Drive and now it's asking for my email address and password. Is this now required?
Hi, welcome back! It's only required if you would like to use the free managed 10GB account, or would like to purchase the all-platforms license through our website. In-app purchases (iOS/Android) do not require an e-mail/password account, as these are managed through the platform's respective billing API. Quick info about licenses: https://s3drive.app/faq?q=website_vs_inapp Obviously a user can use S3Drive without a license or e-mail, using only their S3 credentials.
It wasn't in the past.
The only change around setup that we've introduced recently was switching the default screen to the standard login/password one; however, you can switch to the previous/old S3 screen by clicking the Connect link.
I am willing to buy a license, but by providing an email address our files are now associated with some personally identifiable information.
I take the point, but one could argue about it. If the S3Drive team has access to your files, then personally identifiable information could likely (depending on the files' sensitivity) be derived from the file contents/names, regardless of the payment method. Conversely, if the S3Drive team doesn't have access to your files, then there isn't really a direct association between the files and credit card personally identifiable information. Most importantly, the license is suitable for users who would like to use an external S3/Rclone back-end, which implies: a) file contents and metadata aren't stored on S3Drive servers (the data isn't in our hands), b) if E2E encryption is enabled (it's Rclone compatible, BTW), then neither the storage operator nor S3Drive can see the file contents or file names. The password is managed solely by the user and files are decrypted/encrypted only within the client. As always, we're open to improvements. What would be your preferred payment method? One way to solve this is to offer payments via crypto, which we plan to implement at some point. Regardless of how you decide to buy a license, we would be willing to give you a 30% Early Adopters coupon code.
(edited)
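Since the E2E scheme is Rclone-compatible, the same encrypted data can be read with a standard rclone crypt remote layered over an S3 remote. A sketch of what such a config might look like (remote names and values are hypothetical):

```ini
[mys3]
type = s3
provider = AWS
access_key_id = AKIAEXAMPLE
secret_access_key = examplesecretkey

[encrypted]
type = crypt
remote = mys3:my-bucket
password = <obscured password>
password2 = <obscured salt, optional>
filename_encryption = standard
```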
hragon joined the server. 7/23/2024 11:56 AM
dsus joined the server. 7/24/2024 5:23 AM
Arsi joined the server. 7/24/2024 9:29 PM
Exported 100 message(s)
Timezone: UTC+0