dxdiag
in the search bar.
https, whereas an IP address will default to http.
Filename encryption is on our Roadmap and we have a working prototype already. https://s3drive.canny.io/feature-requests/p/filenamefilepath-encryption (ETA ~April 2023).
We're doing further research to understand standards and well-established implementations in that area, so we can stay compatible.
The sharing functionality is based on S3 presigned URLs; their limitation is that the signature can't be valid for longer than 7 days, so a new link would have to be generated every 7 days. We're researching how to overcome this limitation. For instance we could combine this with a link shortener, so there is a single link that doesn't change, but under the hood we would regenerate the destination link as needed.
The encrypted share link has the master key at the end after the # character and looks like this:
https://s3.us-west-004.backblazeb2.com/my-s3drive/.aashare/hsnwye5bno3p/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=004060aad6064900000000044%2F20230214%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230214T095014Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=abdcd875e2106ee54c6a1d1851617c7e694e121464c5ca9023526ce2836be595#GKSGYX4HGNAd4nTcXb/GIA==
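For reference, a plain (non-encrypted) presigned link with the maximum validity can be generated with the AWS CLI as follows (bucket and endpoint taken from the example above, the object key is illustrative):
aws s3 presign s3://my-s3drive/some/file.txt --expires-in 604800 --endpoint-url https://s3.us-west-004.backblazeb2.com
# 604800 seconds = 7 days, the SigV4 maximum; larger values are rejected.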
What it does is try to load the encrypted asset as usual, but it's not aware per se whether an asset is encrypted. In the background, JavaScript tries to fetch the asset and replaces the one on the screen with the decrypted version. It looks like it has failed on your side. Can you go to the console (right-click -> inspect element) to see if there is anything abnormal (that is, an error in the Console or a status code other than 200 in any of the network requests)?
rclone
for accessing my files or backing up my photos on a day-to-day basis... and I am not afraid of CLIs. (edited)
garage
... and there is no way to provide region in S3Drive.
It seems that we may add an additional form field to specify the region.
toml
file like this: s3_region = "us-east-1"
.
We auto-detect region from the endpoint URL and have a way to detect custom region from MinIO.... and if it doesn't work we use the most common default which is us-east-1
.
.aa*
file and folder are about, but some "don't touch my bucket" parameter would be nice if the app doesn't strictly need them, otherwise that sounds like an additional bucket policy :D
EDIT: looks like the file is for some kind of init feature within the app, and one of the two folders is the trash. I've seen the versioning feature request, but the trash folder could be opt-in if possible. (edited).aainit
file is our write test, as well as ETag response validation (which is required for not yet released syncing features), as some providers (talking mostly about iDrive E2 with SSE enabled) don't generate valid ETags. BTW. Would you like S3Drive to support read-only mode?
Regardless, we will try to improve the clarity of this operation, so the user feels more confident that we're not doing some shady writes/reads.
Speaking of Trash itself, likely this week, starting on Android first, there will be a Settings option to disable the Trash feature altogether (which is a soft-delete emulation, but slow and pointless if the bucket already supports versioning). The Versioning UI with restore options will come a little bit later. (edited)
.aainit
file it's fine, but I'd prefer if the app saved the test results locally then deleted the file. I want to be able to write files so I wouldn't use a read-only mode, and we can always create read-only access keys if we want to be sure that's how the app will behave! I'm very interested by the share link expiry slider or date picker though, I never share for 7 days, it's either a smaller duration or permanent.
Cool, I don't mind not having the versioning UI yet, but had to delete my file versions + the trash versions to cleanup my bucket so… yeah, trash is cool but I assume most people who want that have versioning enabled. I assume you already have quite a few buckets on various providers to test your features, but I can provide a MinIO one if it could be of interest.
There was a 2nd folder with an HTML page in it, not sure what it was about but same thing I'd say, that's probably the least expected action from an S3 browser… While I audited the actions and indeed didn't find anything malicious, that could get me assassinated by my colleagues if I ever connected a more important bucket to the app.
s3 ls
even though headObject
couldn't retrieve it as a valid S3 entry. I am curious if you came across something similar. (edited)
.aainit
file being nuked (delete file itself + all its versions) once the init is done and raw presigned URL sharing headObject
and get the envelope AES keys... so it must be a toggle with some warning. It would then simply return the Blob that's stored on S3, regardless of what's inside. (edited)
.s3drive_bucket_read_test
) and verify the response instead of trying to write a file.
The slider now works, so it's possible to set an expiry time shorter than the maximum of 7 days. There is an option to use raw presigned URLs.
We've also introduced a basic Version UI. It is now possible to preview the revisions. In the next update we will allow opening, previewing, deleting and restoring to a particular version.
Thank you for these suggestions, they were great and helped us to validate it all !
... and as always we're open for feedback.
folder/file.txt
, but folder/
entry doesn't explicitly exist, it is still searchable)
There is an option to hide files starting with: .
As usual there are a couple of other performance improvements and bugfixes.
We would love to hear how you are finding the new changes and if version management during file operations is what you would expect. (edited)
Hide "." files
Show all files, including starting with the dot.
Hide files starting with the dot character
To:
Hide dotfiles
Show all files, including ones starting with a dot.
Hide files starting with the dot character.
feature_flags
int that computes to an array of pro features with bitwise operations, easy on your API and authentication gateway or whatever you do behind the scenes.
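A minimal sketch of that idea in shell arithmetic (flag names and values are made up for illustration, they are not S3Drive's actual features):
FLAG_SYNC=1; FLAG_E2E=2; FLAG_MOUNT=4            # one bit per pro feature
feature_flags=$((FLAG_SYNC | FLAG_MOUNT))        # the API would return a single int, here 5
if (( feature_flags & FLAG_E2E )); then echo "E2E enabled"; else echo "E2E disabled"; fi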
MinioError: ListObjectsV2 search parameter maxKeys not implemented
(edited)s3.<region>.amazonaws.com
OS Error: CERTIFICATE_VERIFY_FAILED: self signed certificate
. And I indeed have a self-signed certificate but I followed your instructions from https://github.com/s3drive/app/issues/19 (https://proxyman.io/posts/2020-09-29-Install-And-Trust-Self-Signed-Certificate-On-Android-11) and my browser on Android recognizes this certificate (if I go to the MinIO browser, my Chrome is fine with the cert). But S3Drive continues to fail with the same error.
I'm using the latest version. (edited)null
response, which is somewhat expected. I would expect to get the SSL related error instead.
support-bugs-requests
is too long but there's no reason to have multiple channels for that either
# Obscure password
echo "YourPlaintextPassword" | rclone obscure -
# Add it to Rclone config, config file location: `rclone config file`
[s3drive_remote]
type = s3
provider = Other
access_key_id = <access_key_id>
secret_access_key = <secret_access_key>
endpoint = <endpoint>
region = <region>
[s3drive_crypt]
type = crypt
filename_encoding = base64
remote = s3drive_remote:<bucket_name>
password = <obscuredPassword>
filename_encryption = standard
directory_name_encryption = true
suffix = none
Then you can use: s3drive_crypt
as your remote encrypted location.
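For example, once the config above is in place (local paths and the photos prefix are illustrative):
rclone ls s3drive_crypt:                          # list files with decrypted names
rclone copy /local/photos s3drive_crypt:photos    # files are encrypted transparently on upload
rclone mount s3drive_crypt: /mnt/encrypted        # browse the decrypted view as a mount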
Please note that whilst we support both encrypted and unencrypted files in the same location, Rclone doesn't seem to like the mix and won't display existing unencrypted files for the encrypted remote. In such a case it's better to either keep everything encrypted globally or have dedicated paths with encrypted-only or unencrypted-only files. (edited)
filename_encoding = base64
suffix = none
By default Rclone's encoding is base32: https://github.com/rclone/rclone/blob/88c72d1f4de94a5db75e6b685efdbe525adf70b8/backend/crypt/crypt.go#L140 unless overridden by the config creator.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}"
]
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
${aws:username}
by anything you want, be it a variable or a fixed bucket name, there unfortunately isn't any group name variableusers
group to which I assign the selfservice
policy, then I add whoever I want to the users
group and they'll be able to manage their very own bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}",
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
Contributor
role, it isn't much but still a nice way to recognize individuals who go out of their way to help the project out, what do you think about it?
AppImage
you can find deb
package in the releases: https://github.com/s3drive/app/releases if that's any use for you.czNkcml2ZQ==
using this command: echo "czNkcml2ZQ==" | base64 -d | rclone obscure -
you can generate a password, e.g.: AQbZ5H8mrzlnkNj9MXnjpxS5QmxbRpw
which can be used in Rclone config: rclone config file
as indicated in this post: https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
Speaking of decryption speeds in browser, let's continue in the support item that I've created: https://discord.com/channels/1069654792902815845/1140911911479808081 (edited)rclone password dump
gives obscured password. You need to use your original text password. Alternatively you'll need to use "password reveal" on your obscured password.
https://forum.rclone.org/t/how-to-retrieve-a-crypt-password-from-a-config-file/20051
We're not supporting Rclone 2nd password, but it's part of our roadmap: https://s3drive.canny.io/feature-requests/p/support-2nd-rclone-crypt-password
We're supporting default Rclone salt: https://forum.rclone.org/t/how-to-correctly-setup-the-salt-for-the-crypt-remote/4273/2
I've created two additional roadmap items to support your use case:
https://s3drive.canny.io/feature-requests/p/add-support-for-custom-rclone-salt
https://s3drive.canny.io/feature-requests/p/add-option-to-restore-rclone-password
Please vote on them, so the priority is pushed higher.
If you have any more issues with S3Drive, please create a support item: https://discord.com/channels/1069654792902815845/1102236355645419550
Thanks (edited)
1.5.3
- https://s3drive.app/changelog
Please try installing the newest DMG from our website now. It should resolve your issues.
What message did you get exactly from the app? Was it that a more recent version is available or perhaps that your version has expired? (edited)
FCKGW-RHQQ2...
license.
This would either mean that you would have to generate some activation key on our website from time to time and paste it into the app... or once you activate features in your app with some activation key you would have to deactivate it before you could use it on some other Windows client.
tocloud, // Upload to remote, delete remotely if file was deleted locally
tocloud_keepdeleted, // Won't remove file remotely if it was deleted locally
tocloud_compat, // If file is removed remotely, local won't know that, it will be re-uploaded on the next occasion
In principle:
"To remote" will upload a file to the remote and delete it remotely if it was deleted locally. If a file is deleted remotely it won't get re-uploaded again.
"To remote (don't delete remotely)" - the same as "To remote", except it will keep the file on the remote even if it was deleted locally.
The above 2 options require bucket versioning support.
The "compatibility mode" doesn't require the versioning API, however that makes it not aware of any file changes in between, so it's simply a blind one-way copy instead of a sync.
I hope that helps a little bit. We'll build documentation once we sort out a couple of challenges related to E2E encryption with syncing, as depending on how we manage to solve these problems it may influence the available options.
[
{
"bucketName": "acme-internal-files",
"keyId": "EVLJ2eXJukWUR9U17dyQqq6NPTi9mUu6scqpLCau",
"applicationKey": "X9EiaepygvDK2S0fmMmFayehHoETDOphNP1r96PI",
"endpoint": "https://s3.us-west-004.backblazeb2.com",
"region": "us-west-004",
"host": "s3.us-west-004.backblazeb2.com",
"port": 443,
"useSSL": true,
"encryptionKey": "cG90YXRv",
"rclonePlaintextKey": true,
"filepathEncryptionEnabled": true,
"rcloneDerivedKey": [
116,
85,
199,
26,
177,
124,
134,
91,
132,
...
]
}
]
This may be a good start.
We plan to implement QR code login, but the QR size limitation makes QR not a solution for all use cases.
There are other means, e.g. QR code could transfer the "place holder ID" which would be then used to fetch the required details, but then again this setup would require more moving parts.
We're very much open on this. (edited)
host
/ gateway
, or if you want to set the encryption key, both encryptionKey
and generated: rcloneDerivedKey
must be provided.
If there is a need we could certainly simplify the format, so things get smartly derived if not present. (edited)
encryptionKey
field is required to set up the encryption.
Speaking of decryption, it's an open format. Naturally you can use S3Drive (on any platform) to access encrypted data (you'll need to access the bucket with the data and set up E2E with the same password that was initially used for encryption).
You can also mount data as a network drive (that's possible from S3Drive after clicking on the tray icon).
Alternatively you can access data using rclone
command, as we're 1:1 compatible with their encryption: https://rclone.org/crypt/#file-encryption
In that case please visit our docs to understand how you can set up rclone
command: https://docs.s3drive.app/advanced/#setup-with-rclone
Then you would be able to use commands like copy: https://rclone.org/commands/rclone_copy/ or sync: https://rclone.org/commands/rclone_sync/ or a couple of others depending on your needs.
There are a couple of options out there. (edited)
[
{
"bucketName": "bucket-photos",
"keyId": "keyId",
"applicationKey": "applicationKey",
"endpoint": "https://s3.pl-waw.scw.cloud",
"encryptionKey": "cG90YXRv"
}
]
This would configure all necessary things and enable encryption with password: potato
, the encryptionKey is the base64-encoded plaintext password. (edited)
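For instance, the cG90YXRv value used in these examples can be produced like this:
printf '%s' "potato" | base64    # -> cG90YXRv (note: no trailing newline, otherwise the encoded value differs)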
zenity
, qarma
and kdialog
.
https://github.com/miguelpruivo/flutter_file_picker/issues/1282#issuecomment-1551924613
I will add this item to our internal items and try to play around in Xubuntu. In the meantime would you be happy to try out the Flathub version? https://flathub.org/en-GB/apps/io.kapsa.drive (Please note that it awaits 1.6.4 release which will be likely available later today or tomorrow). (edited)zenity
or kdialog
on your OS and see if it solves the issue?
If it does we will add it as a dependency to .AppImage
.
https://forum.juce.com/t/native-filechooser-not-used-on-linux-xfce/26347
zenity
in our releases.
{
"url": "/api/create-checkout-session",
"data": {
"price": {
"id": "price_1NyfLNEv31gUd4RDtzV41wix",
"interval": "year",
"currency": "EUR",
"unit_amount": 0
}
},
"res": {}
}
(edited)content://
, since we operate on network resources, what we get with S3 is just a network URL, which we don't store locally (except the video cache) and pass directly to the video player. Since data isn't stored on the Android device locally I don't think there is a method to expose it as a content URI.
If I understand a little bit more about your use case I might be able to come up with some other approach. (edited)
glxinfo | grep "direct rendering"
? (edited)
file.png
to test/
would rename it to testfile.png
and the file is not moved to the directory
1.7.1
sync feature to be able to interact with the local FS.
1.7.0
, we've now prioritized this and shall be able to release a hotfix at some point today.libmpv2
as an alternative, but don't really have capacity at the moment to test things out.
Ideally movies should play out as normal, as the MPV dependency is required by the media library that we use: https://pub.dev/packages/media_kit (edited)
libmpv
version. We're working to have it resolved promptly, please bear with us.
git clone --recursive git@github.com:flathub/io.kapsa.drive.git
cd io.kapsa.drive
flatpak-builder --user --install --force-clean build-dir io.kapsa.drive.json
... however it does require some prior environment setup, like:
flatpak install flathub org.freedesktop.Sdk//23.08
flatpak install flathub org.freedesktop.Platform
flatpak install org.freedesktop.Sdk.Extension.vala/x86_64/23.08
We will be providing a full guide, "how to compile Flatpak". (edited)
./S3Drive-x86_64.AppImage
(kapsa:2730352): Gdk-CRITICAL **: 09:39:57.636: gdk_window_get_state: assertion 'GDK_IS_WINDOW (window)' failed
package:media_kit_libs_linux registered.
flutter: *** sqflite warning ***
You are changing sqflite default factory.
Be aware of the potential side effects. Any library using sqflite
will have this factory as the default for all operations.
*** sqflite warning ***
method call InitAppWindow
method call InitSystemTray
SystemTray::set_system_tray_info title: (null), icon_path: /tmp/.mount_S3DrivJ2GgY2/data/flutter_assets/assets/logos/logo_42.png, toolTip: (null)
method call CreateContextMenu
value_to_menu_item type:label, label:Show
value_to_menu_item type:label, label:Hide
value_to_menu_item type:label, label:Start drive mount
value_to_menu_item type:label, label:Stop drive mount
value_to_menu_item type:label, label:Start WebDav
value_to_menu_item type:label, label:Stop WebDav
value_to_menu_item type:label, label:Support
value_to_menu_item type:label, label:Visit Website
value_to_menu_item type:label, label:About
value_to_menu_item type:label, label:Changelog
value_to_menu_item type:label, label:Logs
value_to_menu_item type:label, label:Version 1.7.11
method call SetContextMenu
Just a question, did you try running the Flatpak format? https://github.com/flathub/io.kapsa.drive/
1.7.16
released, with the next release awaiting Microsoft approval.echo "secretpassword" | rclone obscure -
Can you provide your full Rclone config for your remote / back-end and crypt (remove your password sensitive credentials / access key etc.)
rclone version
, I guess you've provided S3Drive version? 1.6.5
, I can't recall exactly, but there was some issue with S3Drive <> Rclone compatibility below that version.
Would you be keen to upgrade your Rclone version and see if that config works for you?
directory_name_encryption = true
- do you also have filename/filepath encryption enabled on the S3Drive side?
/home/user/.ssh
folder. Thanks.
rsync -av --exclude='cache' --exclude='build' source dest
to sync data to another local machine and then archive things and send them compressed and password-protected to Backblaze:
7z -mhc=on -mhe=on -pVeryHardPasswordHere a $folder.7z /home/tom/$folder/*
AWS_ACCESS_KEY_ID=<key> AWS_SECRET_ACCESS_KEY=<access> aws --endpoint https://s3.eu-central-001.backblazeb2.com s3 cp $folder.7z s3://my-backup-bucket
I use S3Drive to back up media from my phone to the cloud and for online access to other media files (mostly older photos).
I am yet to find the perfect backup strategy for photos, but I would say at this stage the bigger problem is keeping things tidy, organized and deduplicated.
Eventually I will get to that. (edited)server_side_encryption = aws:kms
in the config, which we've checked solves the issue; the challenge is that we don't know if the user actually enabled that setting on the iDrive side.
The quick fix is to turn off the: "Default encryption" setting for the iDrive bucket, then the mount shall upload objects to iDrive without issues.
We need to spend more time on this to research whether we can detect this setting or whether we need to implement a prompt/question for the user and provide a configurable setting. (edited)
Writes
or Full
(combined with not yet configurable via S3Drive: vfs-cache-max-age
setting, default 1h; In other words after 1h of not accessing files, they will be evicted from cache).
If you switch to: "Old mount experience" in the Settings and have the Rclone CLI installed, you can then look up the exact command in the Logs and play with the settings yourself (based on this doc: https://rclone.org/commands/rclone_mount/#vfs-file-caching)
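The relevant flags look roughly like this (remote name and mount point are illustrative):
rclone mount s3drive_remote:bucket /mnt/s3drive --vfs-cache-mode writes --vfs-cache-max-age 1h
# --vfs-cache-mode accepts off, minimal, writes or full; --vfs-cache-max-age controls when unused cached files are evicted.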
We could then provide more configuration options in S3Drive ... or you are free to keep using Rclone outside of the S3Drive ecosystem. (edited)zenity
package missing on the host OS, alternatively kdialog
can be installed. What's your OS? (edited)
/home/jeannesbond/S3Drive
exist on your machine?
I would also recommend using an external S3 account: https://docs.s3drive.app/setup/providers/ instead of the testing account, as it's not always stable enough just yet.
It's great you've included logs !"smb": {
"host": "smb.hostname.com",
"pass": "<obscuredPass>",
"type": "smb",
"user": "usersomething"
}
Then you can set up Sync (from/to) or use the back-end in the same way as any other Rclone back-end within S3Drive. (edited)
trashed_only = true
)
Stay tuned for the updates, in the meantime if you have any feedback don't hesitate to reach out.
... also I would like to thank you for your input. If you have registered an account I would happily assign you an Ultimate license for one year - if that's something that would interest you. (edited)
base
that our filesize
library is using, is 1024
instead of 1000
and there was a rounding issue as well. Expect this to be fixed in the next release.
Rclone initialization failed. Please contact support[...]
which indicates that after multiple tries the initialization failed, then nothing to worry about.
"kmsEncryption":true
in the json config, but may I also suggest writing server_side_encryption = aws:kms
in the rclone config
server_side_encryption = aws:kms
in the rclone config manually will be overwritten by s3drive removing it
kmsEncryption
is set to true
in the config, then we should already be setting: server_side_encryption = aws:kms
in the Rclone config.
Does S3Drive behave differently?
The issue that we're aware of is that we only display the dialog which sets the: kmsEncryption
value when you mount a drive (we ask that for AWS and iDrive only).
We need to fix that, so the dialog is also displayed for Sync and other functionalities which internally use Rclone.
Even though I don't necessarily recommend modifying the app's config, a temporary solution might be setting: kmsEncryption: true
in the config (ideally while the app is not running) and then starting the app.
What's your S3 provider by the way? (edited)server_side_encryption
being set in the Rclone config.
Isn't that what you're finding? (edited)
server_side_encryption
setting properly. I don't know what happened. I still have the json config timestamped Sunday, 21 April 2024, 12:57:39 AM with "kmsEncryption":true,
inside of it.
rclone
first (there is no streaming interface that we could use). (edited)
S3Drive
and the name that we've chosen back in 2022 probably doesn't help here.
As such S3Drive is a simple back-end for Rclone, technically in some ways it is a GUI that sits on top of Rclone, but that's our additional feature, not the core one.
The core one revolves around S3 support and storage plans will be available later this year.
We still plan to expand support for Rclone back-ends, including preview, thumbnails etc.NoSuchKey
using XML format.
hcm.s3storage.vn
on the other hand returns an invalid error code (500 instead of 4xx) and an invalid format, HTML instead of XML:
Server: HyperCoreS3
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty/1.15.8.3</center>
</body>
</html>
Once I've skipped the S3Drive read check (not possible currently from the app itself) I've actually managed to run a couple of actions, that is: list, copy/rename, delete.
... so there are two non-exclusive solutions.
1. Contact: hcm.s3storage.vn
so they can fix the issue with their S3 API and make it compliant with the standard.
2. S3Drive to allow the user to skip the read check <--- this is something we would be willing to allow, but it would take us a while, as we're busy with other work at the moment.
username
and password
from the Rclone configuration after setting up Proton, please see my comment here: https://www.reddit.com/r/ProtonMail/comments/18s211d/comment/kzfqub7/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
I haven't used that myself long enough, it may happen that at some point username
and password
will be required for the resetup if by any chance: client_refresh_token
expires.
In the future we will allow password stripping in-app, so no manual step is required.
c) In general you may monitor any undesired file access to the config file, on Windows: C:\Users\<user>\AppData\Roaming\rclone\rclone.conf
, as this is where sensitive data is stored.
In the future we will support Rclone encrypted config: https://rclone.org/docs/#configuration-encryption (edited)\\server\s3drive_proton
.
In the 1.9.2 version, which is available as a pre-release (and will be released to the general public in a few days): https://github.com/s3drive/windows-app/releases/tag/1.9.2 we've added an option to disable the network mount (and made it the default setting) in case you don't want to share it.
In order to start the mount after a reboot, you can use a combination of: "Launch app at startup" and then: "Mount drive after app starts".
In the future we will add an option to start the app in the tray: https://s3drive.canny.io/feature-requests/p/desktop-app-minimize-to-tray-dont-close so the S3Drive window doesn't pop up each time. (edited)
bash
evidence and going to ask publicly on MinIO's Github.
The issue isn't complex at all. Basically you have trash on Windows/Linux/macOS whatever. If you delete files from your computer, they land in Trash. We can say they're versioned as their latest version is available for restore.
From a UI point of view, after deletion, you wouldn't expect for these deleted entries to appear in a location from where they were originally deleted. They're now in Trash (available for further deletion or restore) and shouldn't be present anywhere else.
MinIO shows the folder hierarchy in the original location despite the fact that it was all deleted and its correct place is Trash. I don't think it's correct behavior from a purely "files & directories" UI point of view. (edited)
PUT folder/
PUT folder/file.txt
then: LIST folder
might not return you the folder/file.txt
Apparently they mention it in here: https://min.io/docs/minio/container/operations/checklists/thresholds.html#id6
I am not sure, but we may have to change the way we create folders to overcome this issue. E.g. instead of folder/
we would rather create folder/.empty
to not cause conflicting keys.
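To illustrate the difference with the AWS CLI (bucket name is illustrative, this is not the app's internal code):
aws s3api put-object --bucket my-bucket --key "folder/"          # current approach: zero-byte folder marker
aws s3api put-object --bucket my-bucket --key "folder/.empty"    # alternative: placeholder object that doesn't conflict with folder/file.txt keys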
alist has its own Web front-end, so you can self-host it yourself and then expose the back-end to other users: https://al.nn.ci/
With Rclone that's not possible yet; you can use: https://rclone.org/commands/rclone_serve/ however there is no documentation on how to "host" the serve command permanently, e.g. on your self-hosted server.
One more difference is the selection of supported back-ends. Alist seems to target Chinese providers which Rclone doesn't support at the moment.
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRequest</Code><Message>Content-MD5 HTTP header is required for Put Object requests with Object Lock parameters</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
I remember we've solved this issue in S3Drive's predecessor: https://play.google.com/store/apps/details?id=com.photosync.s3
but this didn't end up in S3Drive just yet. Fix: https://github.com/s3drive/app/issues/16#issuecomment-1257024140
In other words we need to add this header if compliance mode is enabled, but since we don't want to do it by default we'll likely add a configurable setting, which will get switched on automatically if we detect this error message. (edited)
x-amz-object-lock-mode: ObjectLockMode
The Object Lock mode that you want to apply to this object.
Valid Values: GOVERNANCE | COMPLIANCE
x-amz-object-lock-retain-until-date: ObjectLockRetainUntilDate
The date and time when you want this object's Object Lock to expire. Must be formatted as a timestamp parameter.
x-amz-object-lock-legal-hold: ObjectLockLegalHoldStatus
Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock.
Valid Values: ON | OFF
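For reference, these headers map onto e.g. the AWS CLI roughly as follows (bucket, key and date are illustrative):
aws s3api put-object --bucket my-bucket --key report.pdf --body report.pdf --object-lock-mode COMPLIANCE --object-lock-retain-until-date 2030-01-01T00:00:00Z --object-lock-legal-hold-status OFF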
It's a matter of providing a sane settings UI where these settings can be applied.
Depending on the requirements there could be multiple layers with override rules. For instance a user could specify settings at the bucket level, which would then be overridden by the settings at the folder level, then at the sub-folder level (and so on) down to the file level.
We're open to suggestions on how this should/could work.
deb
and AppImage
packages: https://github.com/s3drive/app/releases
cc @helios6509 cc @morethanevil (edited)Content-MD5
header to be provided with the request.
It was especially challenging when combined with E2E encryption, as this created the "chicken or the egg" dilemma, where we had to provide the MD5
before sending any data, however when we encrypt data we auto-send it in chunks to not cause any memory issues.
The solution was to implement Multipart Upload. It's a native S3 feature where a file is uploaded in chunks.
This allowed us to overcome any memory-hungry operations and divide the upload of big files into smaller, manageable chunks.
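For illustration, the native multipart flow looks like this with the AWS CLI (placeholders, not S3Drive's internal code):
aws s3api create-multipart-upload --bucket my-bucket --key big-file.bin                  # returns an UploadId
aws s3api upload-part --bucket my-bucket --key big-file.bin --part-number 1 --body chunk-1.bin --upload-id <UploadId>
aws s3api complete-multipart-upload --bucket my-bucket --key big-file.bin --upload-id <UploadId> --multipart-upload file://parts.json   # stitches the parts together server-side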
The positive side-effect is that if your file upload fails, when you retry the file it will start from the last failure point (currently only works without encryption enabled).
Finally, improving the encryption scheme allowed us to build a decryption proxy, so we can convert the Rclone-encrypted blob into a video stream that's understandable by video players.
That's how encrypted video playout was implemented. It was deployed experimentally to all platforms.
We didn't manage to build a decryption proxy for Web and even if we did, the performance would be terrible (https://github.com/rclone/rclone/issues/7192), so we're temporarily hosting the proxy in our infrastructure. Since it poses some privacy risks, we've implemented a BIG WARNING for the user.
We've also implemented ZIP download for multiple selected files and delivered lots of bugfixes and performance improvements as usual. (edited)Android/data
from native Files app, given you uninstall update first: https://youtu.be/I_1ng7IP38w?t=96
There isn't much we can do with S3Drive. It seems that Google isn't really keen on changing that, based on: https://issuetracker.google.com/issues/256669329?pli=1
As you rightly suggested this can be worked around with some Shizuku dependencies: https://shizuku.rikka.app/guide/setup/
but the vast majority of users won't be able to go that way and it's not really a reliable way forward.
If there is any reliable way to have access to data
in the future then we're keen to have it implemented. At the moment we have no idea if there is any way / workaround.
More links:
https://www.reddit.com/r/Android/comments/173lsrc/android_14_storage_access_framework_no_longer/
support
email in the s3drive.app
domain.
We will set up the Ultimate for you and then we could see if there are any options left to restore your account data (that would be any data stored on our test S3 - if you've used that). (edited)/public/
directory which is configured as world-readable.
Whenever I share a file from this bucket, I just want https://domain.tld/bucket/public/file.ext
*/foldername/file.ext
would be beneficial.
https://d843ae90cab33e54f4d284bc65d2fd6a.r2.cloudflarestorage.com/sharex/2adrRMxSvi?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=c8e7807dd8cf0a5f005fd526f3279679%2F20230915%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230915T112311Z&X-Amz-Expires=604679&X-Amz-SignedHeaders=host&X-Amz-Signature=47e2488e23043e971c1dc0d6b24235e225a18b5a4952efd6f263049c779b73e9
-
https://pub-0ea304dc97d1413588965fb731c2d5e3.r2.dev/2adrRMxSvi
-
https://i.cubity.dev/2adrRMxSvi
Offline
on the folder1
which is the root folder.
Then all its sub-folders were correctly offlined, this shall still be possible - offlining root folders.
The issue currently is that there is no way to offline folder2
alone, without offlining folder1
.
Did I get it right? (edited)f0 -> f1 -> file_1.txt
and then f2 -> f3 -> file_1.txt
I think the app couldn't handle that and was overwriting the previous: file_1.txt
even though that one is technically a different file from a different source.
I am not sure if it was the only issue, we just need to spend some time to rethink and deliver.
Anyway, stay tuned. I am sure we'll get to that soon.AndroidManifest.xml
and this is how they do it (plus implementation of course).
https://github.com/bitfireAT/davx5-ose/blob/273deecbe49b9f0c5ae753353ad0f8a514c4c401/app/src/main/AndroidManifest.xml#L288-L296
Thank you for your hard work, using S3 Drive and liking it a lot!s3fs
on Linux and expose it via a WebDav server (because Davx can use that as a file provider)
s3fs
and run the WebDav server yourself as you say, but you could also achieve the same with our native rclone mount
which is likely going to be more performant than a POSIX compatible s3fs
.rclone
- do you happen to know how come it is more performant than a file system mount?
s3fs
is indeed quite slow here
goofys
, whereas s3fs
offers maximum POSIX compatibility at a huge cost. E.g. listing a directory with 1000 files will take up to 1000(!) requests with s3fs
, however it will take just one with rclone
/ goofys
. (edited)s3fs
in order to be POSIX compatible in some cases needs to issue 1000x more requests. This has dramatic performance consequences.
You're right you won't be able to specify group permissions for a dir. (Actually it may work, but such data won't be preserved when you remount). (edited)-rw-rw-r--
which is a pretty default Rclone setting, but in principle we could add a settings configuration to change it.
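For reference, with raw Rclone that could be changed with flags along these lines (remote and mount point are illustrative):
rclone mount s3drive_remote:bucket /mnt/s3drive --file-perms 0664 --dir-perms 0775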
rclone
command it's probably not a high priority.
When you go to the app logs, you will see exactly what commands the app executed on the rclone
binary. You can replicate the same on your server if you wish.goofys
and even had an AWS client-side encryption compatible prototype, ultimately we've replaced it with Rclone, however experience with goofys
and its codebase was pretty good. (edited)
goofys
was easier to setup - I just added a line in my /etc/fstab
rclone
would have been "a very tiny bit" more work (like, writing/finding a systemd service or a wrapper script) (edited)goofys
is that it does not seem to work with the systemd
automount feature (that mounts on demand)a
, but on my bucket I don't see a
, but its subfolders are listed.
Is there a way to retain the parent folder?Automatic uploads
location (you can change it in the settings).
What "pay to use" option do you have in mind? Background functions are in fact limited to the Ultimate version, however honestly speaking you might not need them anyway if you start the app from time to time.
I hope this helps, please let me know if you've managed to back up your media successfully. Thanks.
NoSuchKey
message. The app as such wouldn't be usable with read permission, so we haven't really implemented support for listing-only buckets. You may be better off using raw aws s3
or aws s3api
commands.
If you aim to mount your bucket you can do so outside of S3Drive, but in an S3Drive-compatible manner; please find our guide on how to configure the bucket: https://docs.s3drive.app/advanced/#setup-with-rclone
I am not 100% sure whether Rclone requires anything other than listing permissions, but in principle it should work.
Then you can issue: https://rclone.org/commands/rclone_mount/ manually. If you want to see the exact commands that S3Drive would've used, you can mount some other bucket from S3Drive and copy out commands from application logs (available on the about me page).
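A minimal sketch, assuming a remote configured as in the linked guide (remote name, bucket and mount point are illustrative):
rclone mount s3drive_remote:my-bucket /mnt/s3 --read-only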
What's your use case by the way? This will certainly help me to come up with something that works for you ! (edited).s3drive_bucket_read_test
key. Once you get past that check your listings should work just fine.
We will add an option to get past that check in one of the next releases.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:List*"
],
"Resource": "*"
}
]
}
bucket can be set up without problems despite the read check.
Upload/download naturally wouldn't work, but that's expected. (Please note that these error responses come from the 1.6.1 version which is due to be released. In older versions errors might be rendered differently).
Drive mount does also seem to mount properly and listing works.
What's your permission set and S3 provider which gets you to: "Access denied"? I would be happy to try that out. Thanks ! (edited) Main bucket policy, shared by all users
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Effect": "Allow",
"Action": [
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketObjectLockConfiguration",
"s3:GetBucketPolicy",
"s3:GetBucketTagging",
"s3:GetBucketVersioning",
"s3:GetLifecycleConfiguration",
"s3:ListBucketMultipartUploads",
"s3:ListBucketVersions",
"s3:ListMultipartUploadParts",
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::*"
},
{
"Sid": "AllowStatement2A",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::buckentname",
"Condition": {
"StringEquals": {
"s3:delimiter": "/",
"s3:prefix": ""
}
}
}
]
}
policy for one of the sub directories
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRootAndHomeListingOfCompanyBucket",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucketname/Folder1/*"
},
{
"Sid": "AllowStatement2A",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::bucketname",
"Condition": {
"StringEquals": {
"s3:delimiter": "/",
"s3:prefix": [
"",
"Folder1"
]
}
}
},
{
"Sid": "AllowListingOfUserFolder",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::bucketname",
"Condition": {
"StringLike": {
"s3:prefix": "Folder1/*"
}
}
}
]
}
AccessDenied
when trying to log in using your attached: Main bucket policy
.
We'll support this use case and it will work with Wasabi. After setting a bucket the user will receive a message: Read check has failed. S3Drive functionality may not work properly.
, but then will be able to proceed and list files.
This will be available in the next 1.6.3 release, available in a couple of days. (edited)
.md
as if it was the installation package (e.g. .apk
) instead of opening a list of apps, so you could select some text file editor.
I suspect that this has something to do with your phone security settings, but may well be something in the file.
Is it possible by any chance for you to send this file over?
Feel free to send it to me directly: tom@s3drive.app
NFS mount
is running. The temporary solution is to either use force exit or use macFUSE/FUSE-T mount as explained in our guide: https://docs.s3drive.app/install/#macos_1
We're working on improving this, but the ultimate solution is the macOS-native integration with Finder, skipping the NFS/FUSE layers altogether.
https://s3drive.canny.io/feature-requests/p/macos-native-file-mount
EDIT: I've now realized that you're probably using Windows based on your other support item. In which case it seems like the app may not want to close if the mount
is performing any operations or e.g. a file/folder is open within the mount directory, preventing the mount
from finishing gracefully. We will aim to add a relevant prompt ! (edited)rclone
command installed on your desktop anywhere? Whilst running Rclone using Termux
on Android might be sufficient, we've never tried that.
rclone ls driveName:
shall give you some files in the listing.
Alternatively you can use about
command, e.g.: rclone about driveName:
and you should get e.g.:
Total: 7 GiB
Used: 611.346 MiB
Free: 6.403 GiB
If you're not getting these results, STOP and try setting back-end again using: rclone config
as mentioned here: https://docs.s3drive.app/setup/import_rclone/
If things are working well for you at this stage, then use: rclone config dump
command in order to extract all configs, then manually select, copy and paste the relevant Google Drive config into S3Drive (click new "+" and import). (edited)
command in order to extract all configs, then manually select, copy and paste the relevant Google Drive config into S3Drive (click new "+" and import). (edited)2024/04/29 07:39:31 ERROR : rc: "sync/sync": error: corrupted on transfer: sha1 hash differ "CB6D7913466AF524C195FF39306DDE07BF37133C" vs "cb6d7913466af524c195ff39306dde07bf37133c"
So the hash does match, it's just a matter of upper-/lowercase...remote
configured manually via Rclone as crypt
or is it configured by S3Drive automatically via E2E settings?
Did I understand correctly, that on Linux it behaves exactly the same?encryptionKey
and filePathEncryption
attributes respectively. I am encountering the same thing in the Linux flutter app (both are configured using the same json file if that may be causing it), where it seems to create the folder tree (though strangely there is no place holder file there, even though S3 is supposed to be a flat object store that emulates folders...), and then stop due to the MD5 mismatch. I can upload files and folders normally without any issue by using the manual upload button on both the Android phone and the Linux computer, it seems to be a problem specifically related to the custom path sync feature. (edited)1.65.1
to 1.65.2
in the next release at the end of this week / Monday; if this doesn't resolve the issue, then we'll escalate.
PutBucketCors
request, so it can't be set from S3Drive, but they haven't provided an alternative way to set up CORS in their e-mail response.
Most providers offer, on top of the S3 API, a way to set up CORS from their admin panel and that's what Cubbit should also offer, however I haven't found any evidence of that.
Feel free to ask them how you can set up CORS settings, so your bucket can be accessed from a web browser context.
The alternative way is to route traffic through some proxy (e.g. NginX) which can modify this response header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
but I don't think this is a viable option for most users given the required overhead. (edited)(
and )
and $
characters only for buckets where versioning was disabled.
If you still face this problem even after updating the S3Drive, please send me the full filepath (including special characters - feel free to redact standard alphanumerics for privacy), bucket name and settings (versioning, object lock etc.). (edited)[1.4.0] - 2023-07-21
(https://s3drive.app/changelog), it can be enabled in the Settings (it's called E2E on our end, but it's essentially 1:1 compatible).
Most recent release: [1.7.0] - 2023-12-29
provides full integration with Rclone allowing you to use 70+ back-ends on top of S3 (more on that here: https://docs.s3drive.app/setup/import_rclone/). One of the back-ends is crypt
(https://rclone.org/crypt/) which means you can use S3Drive to encrypt your data and store it on Dropbox or wherever you want.
In a 1.7.1
release, which we will publish in a few days, there will be an option to sync from the local file system (on Android, iOS and macOS this option won't be initially available due to different permission systems; we'll need to provide a workaround), as well as between different back-ends, so you can e.g. upload some files to Dropbox, some files to Google Cloud and then sync certain folders between them as you need. (edited)1.7.1
release is now a thing !
We love the idea of permissions limited to a specific folder; the challenge is that these operate on so-called Content URIs instead of the classic file system (you can notice in your video it starts with content://).
That makes it incompatible with classic software, Rclone included.
That's why our best solution so far is to aim for MANAGE_EXTERNAL_STORAGE
permission which fortunately and unfortunately gives access to the filesystem: https://developer.android.com/training/data-storage/manage-all-files#operations-allowed-manage-external-storage
In the long run we could reimplement some syncing logic and make it compatible with these Content URIs... but since Rclone does a damn good job already we're not really keen to reinvent the wheel, add maintenance/risks and spend at least a couple of months initially just to get it right. (edited){
"bucketName": "bucket",
"keyId": "somekey",
"applicationKey": "key",
"endpoint": "https://something.r2.cloudflarestorage.com/tomek",
"region": "us-east-1",
"host": "something.r2.cloudflarestorage.com",
"port": 443,
"useSSL": true,
"encryptionKey": "cG90YXRv",
"rclonePlaintextKey": false,
"filepathEncryptionEnabled": true,
"supportVersioning": false
}
The most important ones for you are:
"encryptionKey": "cG90YXRv",
"rclonePlaintextKey": false,
where it tells the app NOT to use Rclone, and then encryptionKey
will be your key in base64
(the app decodes it).
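For reference, that value is just base64 text; a quick sketch from a terminal (the "potato" passphrase below is simply what the sample value above decodes to, not a real key):
# Encode a passphrase for the encryptionKey field:
echo -n "potato" | base64
# prints: cG90YXRv
# Decode an existing value to double-check what it holds:
echo -n "cG90YXRv" | base64 --decode
# prints: potato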
If you reimport this to desktop (the function is available in the Profiles - it's actually a paid one, but I would be more than happy to give you a free month, so you can handle that). (edited)<AllowedOrigin>https://web.s3drive.app</AllowedOrigin>
to:
<AllowedOrigin>https://web.syncaware.com</AllowedOrigin>
created_at,level,context,message,stacktrace
2024-06-05 22:57:49.567665,LogLevel.INFO,"instance","RPC init (config/create)",""
2024-06-05 22:57:02.720062,LogLevel.INFO,"instance","RPC init (config/create)",""
2024-06-05 22:56:39.526929,LogLevel.INFO,"instance","RPC init (config/create)",""
2024-06-05 22:50:19.193307,LogLevel.INFO,"instance","RPC init (config/create)",""
2024-06-05 22:48:46.561695,LogLevel.INFO,"instance","RPC init (config/create)",""
2024-06-05 22:48:25.235644,LogLevel.INFO,"instance","RPC init (config/create)",""
~/Library/Preferences/com.s3.drive.file.explorer.storage.cloud.manager.plist
but also ~/Library/Containers/com.s3.drive.file.explorer.storage.cloud.manager/Data
(if exists)?/Applications/S3Drive.app/Contents/MacOS/S3Drive
so perhaps we can get something interesting from the output?/Applications/S3Drive.app/Contents/MacOS/S3Drive
and let me know the output? Thanks <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<ID>S3Drive</ID>
<AllowedOrigin>https://web.s3drive.app</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3600</MaxAgeSeconds>
<ExposeHeader>etag</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-key</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-iv</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-cek-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-wrap-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-key-v2</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-tag-len</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-unencrypted-content-length</ExposeHeader>
<ExposeHeader>x-amz-version-id</ExposeHeader>
<ExposeHeader>x-amz-meta-key</ExposeHeader>
<ExposeHeader>x-amz-meta-iv</ExposeHeader>
<ExposeHeader>x-amz-meta-chunk</ExposeHeader>
<ExposeHeader>x-amz-meta-cek-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-s3drive</ExposeHeader>
<ExposeHeader>x-amz-meta-mtime</ExposeHeader>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
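If your provider supports the PutBucketCors API but has no panel for it, the same rules can usually be applied from a terminal with the AWS CLI; a rough sketch (bucket name and endpoint are placeholders, and the ExposeHeaders list below is trimmed - keep the full set from the XML above):
# cors.json - JSON equivalent of the rule above (abbreviated)
cat > cors.json <<'EOF'
{"CORSRules": [{
  "ID": "S3Drive",
  "AllowedOrigins": ["https://web.s3drive.app"],
  "AllowedMethods": ["GET", "HEAD", "POST", "PUT", "DELETE"],
  "AllowedHeaders": ["*"],
  "ExposeHeaders": ["etag", "x-amz-version-id", "x-amz-meta-mtime"],
  "MaxAgeSeconds": 3600
}]}
EOF
# Apply it to your bucket:
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json --endpoint-url https://s3.example-provider.com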
x-amz-meta-mtime
header (https://docs.aws.amazon.com/fsx/latest/LustreGuide/posix-metadata-support.html) and was rejecting the request with a 403.
On the plus side this made us improve our error reporting, so if a request fails like this it will be correctly captured as an error in Transfers (1.7.8 release). (edited)[b2]
type = b2
account = 123
key = 123
hard_delete = true
#endpoint = https://s3.eu-central-003.backblazeb2.com
[b2-crypt]
type = crypt
remote = b2:mybucket
password = 123
base64
encoding for the: filename_encoding
Please find related issue:
https://discord.com/channels/1069654792902815845/1069654792902815848/1136223933201387550
Recommended full Rclone config: https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
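In practice that means one extra line in the crypt section shown above; a rough sketch (names and the placeholder credentials are taken from your snippet, and the CLI form assumes a recent Rclone):
# Either edit the [b2-crypt] section in your rclone.conf by hand so it contains:
#   [b2-crypt]
#   type = crypt
#   remote = b2:mybucket
#   password = 123
#   filename_encoding = base64
# ...or update it from the CLI and verify:
rclone config update b2-crypt filename_encoding=base64
rclone config show b2-crypt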
We will be aiming to improve our guides, so this step is better documented..deb
version, but my system is missing libmpv2
and there are too many dependencies for me to install it manually.
The AppImage is also not working:
/tmp/.mount_S3DrivCqi4BS/kapsa: symbol lookup error: /tmp/.mount_S3DrivCqi4BS/kapsa: undefined symbol: g_once_init_enter_pointer
So I have to stick with the flatpak version..deb
we've promised to spend some time and improve our Linux builds.
Can you please give it a go and try the most recent .deb
? https://github.com/s3drive/deb-app/releases/tag/1.8.7%2B1
It's built on 22.04 LTS (we can't use any older machine due to some dependency issues [libsodium at least, if I remember correctly]), in which case both libmpv1
and glibc
should no longer cause many issues.
Speaking of "opening links" on Flatpak, we're aware of that, however we haven't found any quick solution just yet. (edited)suid
binaries like mount
, but I wasn't able to find any option to allow this kind of action.
Anyway, the newest .deb
file worked like a charm. Thanks a lot!.deb
version is now working just fine. Would you be happy to try out our new AppImage release?
I would be keen to know if it starts fine on your OS.
We've applied pretty much the same improvements as for the .deb
and cleaned up dependencies to get a slightly smaller size.
https://github.com/s3drive/appimage-app/releases/tag/1.8.7%2B1 (edited)mount helper error: ERROR: ld.so: object 'libapprun_hooks.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
flatpak install --user https://dl.flathub.org/build-repo/101822/io.kapsa.drive.flatpakref
Speaking of the mount issue, we're using some host-level workarounds: https://github.com/flathub/io.kapsa.drive/blob/master/fusermount-wrapper.sh which in principle should work as long as FUSE is available on your host.
Can you try running: fusermount3 --version
on your machine?fusermount3 version
is 3.10.5flatpak run io.kapsa.drive
returns some issues related to libsecret and a couple of other ones.
I believe the issues that you experience might be connected to the Mint OS file manager.
Can you make sure that zenity
is installed on your OS?
Can you try running: flatpak run io.kapsa.drive
and let me know what errors/warnings you get, so I can understand which of them are "start blockers" and which of them are more like warning/info. zenity
or kdialog
, please find another topic where a user had this issue: https://discord.com/channels/1069654792902815845/1069654792902815848/1179417458185097247
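For example, it's usually a tiny package (a sketch; package names can differ per distro):
sudo apt install zenity      # Debian / Ubuntu / Mint
sudo zypper install zenity   # openSUSE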
Is it something you can install yourself? (edited)/run/user....
but it syncs the correct files kdialog
actually helped? Failed: Null check operator used on a null value
10080802
build. I would appreciate if you could refresh your tab and try again. libc6
?
https://github.com/s3drive/appimage-app/releases/tag/1.7.11%2B1
Can you also give me an output of: ldd --version
?
https://lindevs.com/check-glibc-version-in-linux and output from libc
e.g.: /lib/x86_64-linux-gnu/libc.so.6
-> GNU C Library (Ubuntu GLIBC 2.31-0ubuntu9.7) stable release version 2.31.
... also if you could send me an output of: strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
this isn't related to AppImage libstdc
, but to your OS; it may help me to understand this issue.
If we don't manage to solve it that way, then I will have to test it on a real Debian / Fedora; not sure if this is XFCE related though.
Sorry for not getting back to you sooner, but we're pretty low on resources at the moment. (edited)strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
command is used it shows: GLIBCXX_3.4.32
?
https://stackoverflow.com/a/77075793/2263395
A couple of weeks ago, there was an upgrade to our build machine, which may or may not have affected the newest glibc
required. If possible I would still advise using the Flatpak release, which is supposed to solve these issues.
In the meantime we'll try to confirm if we can somehow bundle glibc
or fall back to an older version.couldn't find backend for type "drive"
. My config that rclone config dump
spit out is similar to this:
{
"gdrive": {
"client_id": "x.apps.googleusercontent.com",
"client_secret": "y",
"root_folder_id": "",
"scope": "drive",
"team_drive": "z",
"token": "{properjson}",
"type": "drive"
}
}
app v1.9.1 10090104 (edited)/Users/<user>/Library/Application Support/com.s3.drive.file.explorer.storage.cloud.manager/logs
location.
Do you have any additional (or non-standard) sandbox settings / restrictions on your macOS? (edited)~/Library/Application\ Support/com.s3.drive.file.explorer.storage.cloud.manager/logs/
folder exists and contains logs.isar
and logs.isar.lock
(permissions as they appear: .rw-------
) which are both empty
No specific sandbox restrictions that I know of (how to check?), just a regular computer (edit: I have also tried restarting the computer, no change) (edited)
size
and modtime
in order to determine whether a file needs syncing. (edited)macFUSE
installed (in case you don't have it installed). Please find the instructions: https://github.com/macfuse/macfuse/wiki/Getting-Started
If you have it installed yet it still doesn't work, can you please go to the application Logs and copy the rclone mount
line with all the parameters and execute it in the terminal. Is there any additional info / error?-----BEGIN CERTIFICATE-----
MIIOSTCCDTGgAwIBAgIQFHitMmJOwc0JhD4dXvgHZTANBgkqhkiG9w0BAQsFADA7
...
-----END CERTIFICATE-----
You can verify contents of a public certificate from a console: openssl x509 -in certificate.crt -text -noout
or using a website: https://www.sslshopper.com/certificate-decoder.html
The certificate
Give it a go, if you have any troubles I am more than happy to assist. (edited)s3drive.app
as it is or replace it with anything else, but in principle such a corporate proxy would replace any SSL certificate with its own.
openssl s_client -showcerts -connect s3drive.app:443 < /dev/null 2> /dev/null | sed -n '/^-----BEGIN CERT/,/^-----END CERT/p'
This command will result in one or more certificates. Use the first one only if it's the only one, otherwise use either the second or the last one. (I haven't tried it myself and I am not sure whether the intermediate CA or the root CA is needed.)
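For example, to save what the command returns and check what you've got (a sketch building on the commands above):
# Save the certificate chain presented by the server:
openssl s_client -showcerts -connect s3drive.app:443 < /dev/null 2> /dev/null | sed -n '/^-----BEGIN CERT/,/^-----END CERT/p' > chain.pem
# Inspect the first certificate in the file (subject, issuer, validity dates):
openssl x509 -in chain.pem -noout -subject -issuer -dates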
TLS1.3 isn't needed.
===
It may well be that your S3 destination/endpoint doesn't have a publicly trusted SSL certificate, in which case you either need to make sure it is using a trusted SSL certificate ... or you can trust it yourself by importing its root CA using the methods above (either through Chrome or the command line).
Did you try extracting the CA and importing it into S3Drive?
If you don't mind telling, please let me know where you are trying to connect. (edited)flatpak install --user https://dl.flathub.org/build-repo/97278/io.kapsa.drive.flatpakref
command in order to install the latest version, or give it a couple more hours to settle.
Regarding the .deb
package, which dependencies do you have trouble with; is that glibc
, libmpv2
or anything else?support
in the s3drive.app
domain..bin
extension it does look to me as if you weren't using the filename encryption for these files.
This is the config that works with S3Drive:
https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
If filename encryption is off: https://rclone.org/crypt/#crypt-filename-encryption (default is standard), then the .bin
suffix gets added.
We don't really support stripping the .bin
suffix, that's why in the config we recommend (Discord link above) we suggest disabling it: https://rclone.org/crypt/#crypt-suffix
Given that you already have some data, perhaps we could reconsider support for: .bin
stripping for users who used the default setting before setting it all up for compatibility with S3Drive. (edited)msiextract winfsp-2.0.23075.msi
command to extract the WinFSP files and uploaded them here (inside winfsp.zip
). You could try, just for the sake of testing, to copy the opt
folder into your FUSE installation path, so you end up with C:\Program Files (x86)\WinFsp\opt
. I haven't tried whether this would work, but it's worth trying.
Feel free to use the attached files or extract the .msi
if you don't feel like installing some files from the "internet".
I would be keen to know if this resolved your issue. (edited)last modified
field date that we display early on comes directly from S3 and is technically the last-modified time on the remote side, but in fact the real local modification date is stored as the: x-amz-meta-mtime
header. (edited)xxx@yyy:~/tmp> flatpak run io.kapsa.drive
Gtk-Message: 08:54:51.508: Failed to load module "canberra-gtk-module"
** (kapsa:2): CRITICAL **: 08:54:51.650: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Nie odnaleziono żądanego ustawienia
package:media_kit_libs_linux registered.
** (kapsa:2): WARNING **: 08:54:51.703: libsecret_error: \xea╗\u000dV
This libsecret error throws different artifacts on every run.
OpenSUSE Leap 15.5 (edited)seahorse
, but actually the problem was on our end with the Flatpak configuration:
https://discord.com/channels/1069654792902815845/1201712855909670912/1201950218007101460
which we fixed here: https://github.com/flathub/io.kapsa.drive/pull/31/files
We would need to try out a fresh OpenSUSE install ourselves and play with the keychain, but given we're pretty busy currently, it likely won't happen before the Easter break./tmp/.mount_S3Driv6vx0Em/kapsa: error while loading shared libraries: libmpv.so.2: cannot open shared object file: No such file or directory
libmpv2
, however the AppImage
itself is built on a machine where libmpv2
isn't available, therefore we can't easily include it.
We need to clean up this mess / pay the debt... but before we do, the user will likely need to install libmpv2
on their end. (edited)libmpv2
libraries. We've had to bump the build machine from 22.04
to 23.04
in order to load libmpv2
, so it should no longer fail on: error while loading shared libraries: libmpv.so.2
as previously.
I've then used that AppImage on Leap 15.5 and received pretty much the same error that you've received using the Flatpak release.
I am not familiar with the DBus stuff, but it's likely related to: xdg-desktop-portal
. I've specifically installed GNOME portal: zypper install xdg-desktop-portal-gnome
, but it still hasn't resolved the issue.
I am sure we will get to that eventually. If you have any hints in the meantime please let me know. Thanks ! (edited)crypt
Rclone remote to the remote which stores the encrypted data; that could be an external remote or a local remote (remote is just an Rclone name/concept, and even though the FS is local, it is also called a remote).
We provide a guide on how to set this up and decrypt/encrypt files outside of S3Drive, given they're present on some external remote, that is an S3 server: https://docs.s3drive.app/advanced/#sample
In this guide: s3drive_crypt
points to a bucket within s3drive_remote
(which is a S3 provider).
If your files are already downloaded then you would need to point your s3drive_crypt
to your local FS remote instead.
That technically means that within: s3drive_crypt
you would replace line: remote = s3drive_remote:<bucket_name>
with path to your FS, e.g: remote = C:\MyEncryptedData
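To illustrate, a rough sketch (only the remote line of your existing crypt section changes; keep the password and the other crypt options exactly as you have them, and the C:\MyDecryptedData target below is just a placeholder):
# In rclone.conf:
#   [s3drive_crypt]
#   type = crypt
#   remote = C:\MyEncryptedData
#   password = <unchanged>
#   ...other options unchanged...
# Then listing and decrypting works the same way as against the S3 remote:
rclone ls s3drive_crypt:
rclone copy s3drive_crypt: C:\MyDecryptedData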
I hope that helps, if you need any assistance on that please let me know. (edited)/dev/fuse
needs to be accessible: https://forum.rclone.org/t/rclone-mount-inside-the-docker-container/40202/7 and FUSE installed on your host system NOT on your guest container. (edited)Gtk-Message: 02:13:03.143: Failed to load module "canberra-gtk-module"
(kapsa:2): Gdk-CRITICAL **: 02:13:03.199: gdk_window_get_state: assertion 'GDK_IS_WINDOW (window)' failed
package:media_kit_libs_linux registered.
** (kapsa:2): WARNING **: 02:13:03.433: libsecret_error: \xa4Z\xe8\x94Ob
(edited)gnome-keyring
on your host OS?
Are you trying to run S3Drive for the first time or perhaps the issue is new?1:42.1
)
This is technically my 2nd time running s3drive, the first time I had an issue you already released a fix for, namely the libmpv.so.2 issueseahorse
package and the necessity of unlocking keyrings.
https://stackoverflow.com/a/77338413/2263395
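If it helps to narrow it down, you can check whether libsecret / the keyring works outside of S3Drive; a sketch (requires the libsecret CLI, often packaged as libsecret-tools):
# Store a throwaway secret (you'll be prompted for a value):
secret-tool store --label="s3drive-test" service s3drive-test
# Read it back; if this fails, the problem is in the keyring setup rather than in the app:
secret-tool lookup service s3drive-test
# Clean up:
secret-tool clear service s3drive-test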
As soon as I find a clear solution I will let you know.libsecret_error
to #8\xaf\u0006dY
Temporarily removing the password from the default keyring changes the libsecret_error
to R9\u001b\xc2\xc1[
flatpak install --user https://dl.flathub.org/build-repo/79739/io.kapsa.drive.flatpakref
cc @benoit_52236 <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<ID>S3Drive</ID>
<AllowedOrigin>https://web.s3drive.app</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3600</MaxAgeSeconds>
<ExposeHeader>etag</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-key</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-iv</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-cek-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-wrap-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-key-v2</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-tag-len</ExposeHeader>
<ExposeHeader>x-amz-meta-x-amz-unencrypted-content-length</ExposeHeader>
<ExposeHeader>x-amz-version-id</ExposeHeader>
<ExposeHeader>x-amz-meta-key</ExposeHeader>
<ExposeHeader>x-amz-meta-iv</ExposeHeader>
<ExposeHeader>x-amz-meta-chunk</ExposeHeader>
<ExposeHeader>x-amz-meta-cek-alg</ExposeHeader>
<ExposeHeader>x-amz-meta-s3drive</ExposeHeader>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Please note that in your configuration there are at least a couple of things missing which may affect some features.
For instance the HEAD
HTTP operation or: x-amz-version-id
and x-amz-meta-s3drive
(optional) headers. (edited){"proton-vN": {"username": "myusername", "password": "mypassword", "2fa": "123123"}}
This seems to generate the other bits:
{
"2fa": "1231234",
"client_access_token": "xxxxxxxxxxxxxxxxxxxxx",
"client_refresh_token": "xxxxxxxxxxxxxxxxxxxxx",
"client_salted_key_pass": "xxxxxxxxxxxxxxxxxxxxx==",
"client_uid": "xxxxxxxxxxxxxxxxxxxxxx",
"password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"type": "protondrive",
"username": "myusername"
} (edited)[1.7.0] - 2023-12-29
release and then in February [1.7.9] - 2024-02-09
released Folder sync on Android along with the first major redesign of the Sync functionality on all platforms. History: https://s3drive.app/changelogcrypt
on top of webdav
: https://docs.s3drive.app/setup/providers/#client-side-encryption
Bear in mind that encrypted video playback and thumbnails aren't yet supported for Rclone back-ends, as this requires a streaming interface which Rclone doesn't expose through its API.
There are some other ways that we've already explored and it's possible to implement streaming, however we currently lack the resources to implement that, as there are other priorities: https://s3drive.app/roadmap"icedrive": {
"type": "webdav",
"url": "https://webdav.icedrive.io",
"user": "yourusername",
"vendor": "something"
}
and the config gets added.
Alternatively you can use the full JSON format (please note the additional curly braces), which you can then validate e.g. here: https://jsonlint.com/ (don't provide your username / vendor though, as they're confidential to you).
{"icedrive": {
"type": "webdav",
"url": "https://webdav.icedrive.io",
"user": "yourusername",
"vendor": "something"
}}
(edited)"koofr": {
"endpoint": "https://app.koofr.net/",
"password": "<obscuredPassword>",
"type": "koofr",
"user": "myuser@email.de"
},
crypt
: https://docs.s3drive.app/setup/providers/#manual-setupcrypt
/ mount
combination. I am not sure exactly whether encrypted video streaming works seamlessly that way though. (edited)"koofr": {
"endpoint": "https://app.koofr.net/",
"password": "<obscuredPassword>",
"type": "koofr",
"user": "myuser@email.de"
},
For Vaults use the Koofr-generated config (as in the attached screenshot) converted from .ini format to .json.
"my-safe-box": {
"password": "Jlu2uhkjREtxwkmLAyflSM4YSuPy3KGiR_Q",
"password2": "lPdPuA0EnMp4I-aRBdczD4nR2oU5PJboEZj2THpKPPAqiKWf2I0dyA2SA1VO9YhjZFL6r0oJZJPN6Hdj9B1KE8JOt85qbnbDXk3_lyuQR-egy3U5lBpx6x-3ru28np2jVy0LH-fTb_lYiiImIquUIkIOOiAAKrwalIsK-tBGgz06bf4g_fe1fGx4AYUfH1MYo2tsUD8JVzPTK5rf4aX9BQxk0xjJ3dehfLrFjUV69XqNJB89_gh1To19lg",
"remote": "koofr:/My safe box",
"type": "crypt"
},
(edited)my-safe-box
).
I haven't modified anything in the Rclone config, which was generated by Koofr themselves. I think you need to provide the safe box name, as it refers to an encrypted folder within the Koofr ecosystem.
Bear in mind that currently if you would like to switch between encrypted and non-encrypted listing you would need to switch configs (on the Profile screen).
Soon we will support vaults natively: https://s3drive.canny.io/feature-requests/p/rclone-configuration-support-encrypted-vaults
In the meantime the workaround would be to use Rclone combine modifier: https://rclone.org/combine/ but it's for users who are aware of its behavior, since it's not a perfect replacement of native experience.1.8.5
we will correctly recognize if versioning is enabled for Storj, so for newly set up buckets this issue shouldn't appear.
Thanks. (edited)rclone v1.63.0
and configured my storage like:
[storj]
type = s3
provider = Other
access_key_id = <redacted>
secret_access_key = <redacted>
endpoint = https://gateway.storjshare.io
[storj_crypt]
type = crypt
filename_encoding = base64
remote = storj:my-photos
password = <redacted>
filename_encryption = standard
directory_name_encryption = true
suffix = none
I've then copied a test pdf like this: rclone copy test.pdf storj_crypt:
and I get a valid object with an ETag that is openable within S3Drive.
Can you post your Rclone storj
remote configuration? You've initially posted a config, but that's just the crypt one: https://discord.com/channels/1069654792902815845/1159814485515710525/1159827592732479590Amazon S3 (or S3 compatible)
has: MD5
./home/jeannesbond/S3Drive
exist on your machine?ls -alt /home/jeannesbond/S3Drive
command? (edited)LE2123
..empty
. Removing that file manually (please note that you'll need other files in the folder to prevent folder deletion) may resolve this error, however a run with the: --resync
option might be required.
We will mark this feature as experimental in the next release, add the --resilient
option and then will have to allocate more time to improve it.
Sorry for this subpar experience, I hope we can get that reworked later this year.--max-lock 5M --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case
.
--drive-skip-gdocs
is specific to Gdrive only.
The initial run must be executed with: --resync
and I'm not entirely sure what: -MvP
stands for (likely -M/--metadata, -v verbose and -P progress).
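For context, a rough sketch of how a first run with flags like these could look (paths are placeholders; I'd test it on non-critical data first):
# Initial run has to establish the listings with --resync:
rclone bisync /path/to/local remote:bucket/folder --resync --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP
# Subsequent runs drop --resync and keep the remaining flags.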
EDIT:
This one is a blocker for S3Drive to integrate the max-lock
and recover
option: https://github.com/rclone/rclone/issues/7799 (edited)MinioError: ListObjectsV2 search parameter maxKeys not implemented
content-length
header in the response (e.g. during file download/open). We rely on it to display transfer progress as well as to make certain decisions related to encryption. In theory we could implement a content-length
workaround, but it would take us a little while.
We're going to investigate first whether it is possible to enable that header. I know that Cloudflare has some logic behind the content-length
header which in some cases is provided and in some isn't... we'll have a look at it as well, however if you're going to reach out to Cloudflare it is something you can ask about as well. Thanks (edited)content-length
issue on mobile and desktop clients. The web client fix will have to wait a little bit longer (because we've no control over the content-length
header in the browser).
Basically if: accept-encoding
HTTP request header includes gzip
Cloudflare seems to skip content-length
altogether.
We're testing a couple of things right now, but if things go well we'll be able to release it promptly.
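If you want to verify this against your own bucket, a quick check from a terminal (a sketch; replace the URL with one of your objects):
# With gzip accepted, Cloudflare may drop the Content-Length header:
curl -sI -H 'Accept-Encoding: gzip' 'https://<your-endpoint>/<bucket>/<object>' | grep -i '^content-length'
# Forcing identity encoding should bring it back:
curl -sI -H 'Accept-Encoding: identity' 'https://<your-endpoint>/<bucket>/<object>' | grep -i '^content-length'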
Related: https://community.cloudflare.com/t/no-content-length-header-when-content-type-gzip/492964 (edited).exe
release, we also have an official Microsoft Store release: https://apps.microsoft.com/store/detail/s3drive-cloud-storage/9NX2DN9Q37NS which is unlikely to be flagged by your antivirus, if that's causing any concern.
Tom from S3Drive (edited)libgcrypt-20.dll
file contained some Trojan horse, but that's not a file we supply with the app.
Here is the list of DLLs we supply (part 1/2):
app_links_plugin.dll
battery_plus_plugin.dll
concrt140.dll
connectivity_plus_plugin.dll
d3dcompiler_47.dll
desktop_drop_plugin.dll
file_selector_windows_plugin.dll
flutter_secure_storage_windows_plugin.dll
flutter_windows.dll
image_compression_flutter_plugin.dll
isar.dll
isar_flutter_libs_plugin.dll
just_audio_windows_plugin.dll
libc++.dll
libEGL.dll
libGLESv2.dll
libmpv-2.dll
librclone.dll
libsodium.dll
media_kit_libs_windows_video_plugin.dll
media_kit_native_event_loop.dll
media_kit_video_plugin.dll
msvcp140.dll
msvcp140_1.dll
msvcp140_2.dll
msvcp140_atomic_wait.dll
msvcp140_codecvt_ids.dll
pdfium.dll
pdfx_plugin.dll
permission_handler_windows_plugin.dll
screen_brightness_windows_plugin.dll
sentry_flutter_plugin.dll
share_plus_plugin.dll
sodium_libs_plugin.dll
sqlite3.dll
sqlite3_flutter_libs_plugin.dll
system_tray_plugin.dll
ucrtbase.dll
ucrtbased.dll
uri_content_plugin.dll
url_launcher_windows_plugin.dll
vccorlib140.dll
vccorlib140d.dll
vcruntime140.dll
vcruntime140d.dll
vcruntime140_1.dll
vcruntime140_1d.dll
vk_swiftshader.dll
vulkan-1.dll
webcrypto.dll
webcrypto_plugin.dll
zlib.dll
api-ms-win-core-console-l1-1-0.dll
api-ms-win-core-console-l1-2-0.dll
api-ms-win-core-datetime-l1-1-0.dll
api-ms-win-core-debug-l1-1-0.dll
api-ms-win-core-errorhandling-l1-1-0.dll
api-ms-win-core-fibers-l1-1-0.dll
api-ms-win-core-file-l1-1-0.dll
api-ms-win-core-file-l1-2-0.dll
api-ms-win-core-file-l2-1-0.dll
api-ms-win-core-handle-l1-1-0.dll
api-ms-win-core-heap-l1-1-0.dll
api-ms-win-core-interlocked-l1-1-0.dll
api-ms-win-core-libraryloader-l1-1-0.dll
api-ms-win-core-localization-l1-2-0.dll
api-ms-win-core-memory-l1-1-0.dll
api-ms-win-core-namedpipe-l1-1-0.dll
api-ms-win-core-processenvironment-l1-1-0.dll
api-ms-win-core-processthreads-l1-1-0.dll
api-ms-win-core-processthreads-l1-1-1.dll
api-ms-win-core-profile-l1-1-0.dll
api-ms-win-core-rtlsupport-l1-1-0.dll
api-ms-win-core-string-l1-1-0.dll
api-ms-win-core-synch-l1-1-0.dll
api-ms-win-core-synch-l1-2-0.dll
api-ms-win-core-sysinfo-l1-1-0.dll
api-ms-win-core-timezone-l1-1-0.dll
api-ms-win-core-util-l1-1-0.dll
api-ms-win-crt-conio-l1-1-0.dll
api-ms-win-crt-convert-l1-1-0.dll
api-ms-win-crt-environment-l1-1-0.dll
api-ms-win-crt-filesystem-l1-1-0.dll
api-ms-win-crt-heap-l1-1-0.dll
api-ms-win-crt-locale-l1-1-0.dll
api-ms-win-crt-math-l1-1-0.dll
api-ms-win-crt-multibyte-l1-1-0.dll
api-ms-win-crt-private-l1-1-0.dll
api-ms-win-crt-process-l1-1-0.dll
api-ms-win-crt-runtime-l1-1-0.dll
api-ms-win-crt-stdio-l1-1-0.dll
api-ms-win-crt-string-l1-1-0.dll
api-ms-win-crt-time-l1-1-0.dll
api-ms-win-crt-utility-l1-1-0.dll
api-ms-win-downlevel-kernel32-l2-1-0.dll
api-ms-win-eventing-provider-l1-1-0.dll
Encryption related
webcrypto.dll
webcrypto_plugin.dll
libsodium.dll
Sync
or Sync (deprecated)
feature that you've been using?
Did you try to stop the app, delete everything inside: AppData\Roaming\com.s3.drive.file.explorer.storage.cloud.manager\S3Drive
and then start again?
Once the app starts, even if the screen is white, can you try resizing the window a little to see if it resolves your issue? We have an intermittent issue where usually a black screen appears and resizing actually helps.-a---- 17.03.2024 18:33 1886 s3drive_VGhpcyBpcyB0aGUgcHJlZml4IGZv_auth_session_key.secure
-a---- 17.03.2024 18:33 596 s3drive_VGhpcyBpcyB0aGUgcHJlZml4IGZv_credentials.secure
-a---- 17.03.2024 18:32 53 s3drive_VGhpcyBpcyB0aGUgcHJlZml4IGZv_installationId.secure
-a---- 17.03.2024 18:33 611 s3drive_VGhpcyBpcyB0aGUgcHJlZml4IGZv_profiles.secure
however I am not sure if this alone would suffice, as there may be some other files required.
Give it a go, and if it doesn't work, then sorry, you will likely have to provide these credentials once again.
The other approach would be to copy back all that data as it was and then start removing individual files (especially .sql) until the app starts, but that would be quite a tedious task.writes
setting which is pretty much required for the write mode to function properly. We could possibly disable it on Windows and Linux with some limitations: https://rclone.org/commands/rclone_mount/#limitations and still keep the writes mode, but on macOS it wouldn't be possible.
For Linux see: ~/.cache/rclone/vfs
I would suspect it's going to be similar on Windows: $HOME/.config/rclone/vfs
and macOS
.
Strange that S3Drive hangs, it's probably some bug where the mount
process takes an extraordinarily long time to load the cache before returning the mount to the app, and it's blocking the main thread.
Deleting the VFS cache, although not convenient, should resolve this issue.
We will be able to provide a couple more options, e.g. disable cache, set max age... and most importantly set the max size: vfs-cache-max-size
.
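For reference, these map onto standard Rclone mount flags, roughly like this (a sketch; remote, path and limits are placeholders):
rclone mount remote:bucket /path/to/mountpoint --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-max-age 24h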
Based on your comment I've increased the priority on this and you can expect improvements in one of the next releases.
Just please let me know what your OS is, for reference.
Thanks !minimal
solves your issue? (edited)