dxdiag
in the search bar.https
, whereas an IP address will default to http
.
Filename encryption is on our Roadmap and we have a working prototype already. https://s3drive.canny.io/feature-requests/p/filenamefilepath-encryption (ETA ~April 2023).
We're doing further research to understand standards and well-established implementations in that area, so we can stay compatible.
The sharing functionality is based on S3 presigned URLs. Their limitation is that the signature can't be valid for longer than 7 days, so a new link would have to be generated every 7 days. We're researching how to overcome this limitation. For instance, we could combine this with a link shortener, so there is a single link that doesn't change, but under the hood we would regenerate the destination link as needed.
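SigV4 presigned URLs encode their lifetime in the `X-Amz-Expires` query parameter, which is capped at 604800 seconds (7 days). A minimal sketch of clamping a requested expiry to that limit (the helper name is illustrative, not actual S3Drive code):

```python
# SigV4 caps X-Amz-Expires at 7 days (604800 seconds); any longer lifetime
# has to be achieved by periodically regenerating the link.
MAX_PRESIGN_SECONDS = 7 * 24 * 3600  # 604800

def clamp_expiry(requested_seconds: int) -> int:
    """Return a valid X-Amz-Expires value in the range 1..604800."""
    if requested_seconds < 1:
        raise ValueError("expiry must be at least 1 second")
    return min(requested_seconds, MAX_PRESIGN_SECONDS)

print(clamp_expiry(30 * 24 * 3600))  # a 30-day request is clamped to 604800
```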
The encrypted share link has the master key at the end after the # character and looks like this:
https://s3.us-west-004.backblazeb2.com/my-s3drive/.aashare/hsnwye5bno3p/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=004060aad6064900000000044%2F20230214%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230214T095014Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=abdcd875e2106ee54c6a1d1851617c7e694e121464c5ca9023526ce2836be595#GKSGYX4HGNAd4nTcXb/GIA==
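To illustrate the layout described above: the presigned signature lives in the query string (which is sent to the server), while the master key sits in the URL fragment, which browsers never transmit. A sketch using a shortened, made-up link of the same shape:

```python
from urllib.parse import urlsplit, parse_qs

# Made-up share link shaped like the one above: presigned query
# parameters plus the master key in the fragment (after '#').
link = ("https://s3.example.com/my-bucket/index.html"
        "?X-Amz-Expires=604800&X-Amz-Signature=abc123"
        "#GKSGYX4HGNAd4nTcXb/GIA==")

parts = urlsplit(link)
master_key = parts.fragment  # stays client-side; never sent by the browser
expires = int(parse_qs(parts.query)["X-Amz-Expires"][0])

print(master_key)  # GKSGYX4HGNAd4nTcXb/GIA==
print(expires)     # 604800
```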
What it does is try to load the encrypted asset as usual, but it's not aware per se whether an asset is encrypted. In the background, JavaScript tries to fetch the asset and replaces the one on the screen with the decrypted version. It looks like it has failed on your side. Can you go to the console (right-click -> Inspect Element) to see if there is anything abnormal (that is, an error in the Console or a status code other than 200 in any of the network requests)?rclone
for accessing my files or backing up my photos on a day-to-day basis... and I am not afraid of CLIs. (edited)garage
... and there is no way to provide region in S3Drive.
It seems that we may add an additional form field to specify the region.toml
file like this: s3_region = "us-east-1"
.
We auto-detect the region from the endpoint URL and have a way to detect a custom region from MinIO... and if that doesn't work we use the most common default, which is us-east-1
..aa*
file and folder are about, but some "don't touch my bucket" parameter would be nice if the app doesn't strictly need them, otherwise that sounds like an additional bucket policy :D
EDIT: looks like the file is for some kind of init feature within the app, and one of the two folders is the trash. I've seen the versioning feature request, but the trash folder could be opt-in if possible. (edited).aainit
file is our write test, as well as an ETag response validation (which is required for not-yet-released syncing features), as some providers (mostly iDrive E2 with SSE enabled) don't generate valid ETags. BTW, would you like S3Drive to support a read-only mode?
Regardless, we will try to improve the clarity of this operation, so the user feels more confident that we're not doing any shady writes/reads.
Speaking of Trash itself, likely this week, starting on Android first, there will be a Settings option to disable the Trash feature altogether (it's a soft-delete emulation, but slow and pointless if the bucket already supports versioning). A versioning UI with restore options will come a little later. (edited).aainit
file it's fine, but I'd prefer if the app saved the test results locally then deleted the file. I want to be able to write files so I wouldn't use a read-only mode, and we can always create read-only access keys if we want to be sure that's how the app will behave! I'm very interested by the share link expiry slider or date picker though, I never share for 7 days, it's either a smaller duration or permanent.
Cool, I don't mind not having the versioning UI yet, but had to delete my file versions + the trash versions to cleanup my bucket so… yeah, trash is cool but I assume most people who want that have versioning enabled. I assume you already have quite a few buckets on various providers to test your features, but I can provide a MinIO one if it could be of interest.
There was a 2nd folder with an HTML page in it, not sure what it was about but same thing I'd say, that's probably the least expected action from an S3 browser… While I audited the actions and indeed didn't find anything malicious, that could get me assassinated by my colleagues if I ever connected a more important bucket to the app. .aainit
s3 ls
even though headObject
couldn't retrieve it as a valid S3 entry. I am curious if you came across something similar. (edited).aainit
file being nuked (delete file itself + all its versions) once the init is done and raw presigned URL sharing headObject
and get the envelope AES keys.... so it must be a toggle with some warning. It would then simply return the Blob that's stored on S3, regardless of what's inside. (edited).aainit
.s3drive_bucket_read_test
) and verify the response instead of trying to write a file.
The slider now works, so it's possible to set an expiry time shorter than the maximum of 7 days. There is an option to use raw presigned URLs.
We've also introduced a basic version UI. It is now possible to preview the revisions. In a coming update we will allow opening, previewing, deleting and restoring to a particular version.
Thank you for these suggestions, they were great and helped us validate it all!
... and as always we're open to feedback.folder/file.txt
, but the folder/
entry doesn't explicitly exist, it is still searchable)
There is an option to hide files starting with: .
As usual there are a couple of other performance improvements and bugfixes.
We would love to hear how you are finding the new changes and whether version management during file operations is what you would expect. (edited)Hide "." files
Show all files, including starting with the dot.
Hide files starting with the dot character
To:
Hide dotfiles
Show all files, including ones starting with a dot.
Hide files starting with the dot character.
feature_flags
int that computes to an array of pro features with bitwise operations, easy on your API and authentication gateway or whatever you do behind the scenes. MinioError: ListObjectsV2 search parameter maxKeys not implemented
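The feature_flags suggestion above could be sketched like this — a single integer encodes the set of Pro features via bitwise operations (the flag names are made up for illustration, not actual S3Drive features):

```python
from enum import IntFlag

# Hypothetical feature flags; one int on the wire encodes the whole set.
class Feature(IntFlag):
    E2E_ENCRYPTION = 1 << 0
    SYNC = 1 << 1
    VERSIONING_UI = 1 << 2
    MOUNT = 1 << 3

flags = Feature.E2E_ENCRYPTION | Feature.MOUNT
print(int(flags))                        # 9 -- what the API would return
print(bool(Feature(9) & Feature.MOUNT))  # True
print(bool(Feature(9) & Feature.SYNC))   # False
```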
(edited)s3.<region>.amazonaws.com
OS Error: CERTIFICATE_VERIFY_FAILED: self signed certificate
. And I indeed have a self-signed certificate, but I followed your instructions from https://github.com/s3drive/app/issues/19 (https://proxyman.io/posts/2020-09-29-Install-And-Trust-Self-Signed-Certificate-On-Android-11) and my browser on Android recognizes this certificate (if I go to the MinIO browser, my Chrome is fine with the cert). But S3Drive continues to fail with the same error.
I'm using the latest version. (edited)null
response, which is somewhat expected. I would expect to get the SSL related error instead.support-bugs-requests
is too long but there's no reason to have multiple channels for that either# Obscure password
echo "YourPlaintextPassword" | rclone obscure -
# Add it to Rclone config, config file location: `rclone config file`
[s3drive_remote]
type = s3
provider = Other
access_key_id = <access_key_id>
secret_access_key = <secret_access_key>
endpoint = <endpoint>
region = <region>
[s3drive_crypt]
type = crypt
filename_encoding = base64
remote = s3drive_remote:<bucket_name>
password = <obscuredPassword>
filename_encryption = standard
directory_name_encryption = true
suffix = none
Then you can use: s3drive_crypt
as your remote encrypted location.
Please note that whilst we support both encrypted and unencrypted files in the same location, Rclone doesn't seem to like the mix and won't display existing unencrypted files for the encrypted remote. In such a case it's better to either keep everything encrypted globally or have dedicated paths with encrypted-only or unencrypted-only files. (edited)filename_encoding = base64
suffix = none
By default Rclone's encoding is base32: https://github.com/rclone/rclone/blob/88c72d1f4de94a5db75e6b685efdbe525adf70b8/backend/crypt/crypt.go#L140 unless overridden by the config creator. {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}"
]
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
${aws:username}
by anything you want, be it a variable or a fixed bucket name, there unfortunately isn't any group name variableusers
group to which I assign the selfservice
policy, then I add whoever I want to the users
group and they'll be able to manage their very own bucket {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}",
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
Contributor
role, it isn't much but still a nice way to recognize individuals who go out of their way to help the project out, what do you think about it? AppImage
you can find deb
package in the releases: https://github.com/s3drive/app/releases if that's any use for you.czNkcml2ZQ==
using this command: echo "czNkcml2ZQ==" | base64 -d | rclone obscure -
you can generate a password, e.g.: AQbZ5H8mrzlnkNj9MXnjpxS5QmxbRpw
which can be used in Rclone config: rclone config file
as indicated in this post: https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
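For clarity, the base64 decode step in the command above is independent of rclone and can be checked with any tool, e.g. in Python:

```python
import base64

# Decoding the base64 string that the shell command above pipes
# through `base64 -d` before feeding it to `rclone obscure`.
decoded = base64.b64decode("czNkcml2ZQ==").decode()
print(decoded)  # s3drive
```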
Speaking of decryption speeds in browser, let's continue in the support item that I've created: https://discord.com/channels/1069654792902815845/1140911911479808081 (edited)rclone password dump
gives the obscured password. You need to use your original plaintext password. Alternatively you'll need to use "password reveal" on your obscured password.
https://forum.rclone.org/t/how-to-retrieve-a-crypt-password-from-a-config-file/20051
We're not supporting Rclone 2nd password, but it's part of our roadmap: https://s3drive.canny.io/feature-requests/p/support-2nd-rclone-crypt-password
We're supporting default Rclone salt: https://forum.rclone.org/t/how-to-correctly-setup-the-salt-for-the-crypt-remote/4273/2
I've created additional two roadmap items to support your use case:
https://s3drive.canny.io/feature-requests/p/add-support-for-custom-rclone-salt
https://s3drive.canny.io/feature-requests/p/add-option-to-restore-rclone-password
Please vote on them, so the priority is pushed higher.
If you have any more issues with S3Drive, please create a support item: https://discord.com/channels/1069654792902815845/1102236355645419550
Thanks (edited)
1.5.3
- https://s3drive.app/changelog
Please try to install the newest DMG from our website now. It should resolve your issues.
What message did you get exactly from the app? Was it that a more recent version is available, or perhaps that your version has expired? (edited)FCKGW-RHQQ2...
license.
This would either mean that you would have to generate an activation key on our website from time to time and paste it into the app... or once you activate features in your app with an activation key, you would have to deactivate it before you could use it on some other Windows client.tocloud, // Upload to remote, delete remotely if file was deleted locally
tocloud_keepdeleted, // Won't remove file remotely if it was deleted locally
tocloud_compat, // If a file is removed remotely, local won't know that; it will be re-uploaded on the next occasion
In principle:
"To remote" will upload file to remote and delete it remotely if it was deleted locally. If file is deleted remotely it won't get re-uploaded again.
"To remote (don't delete remotely)" - the same as "To remote", except it will keep file on the remote even if it was deleted locally.
The above 2 options require bucket versioning support.
The "compatibility mode" doesn't require versioning API, however that makes it not aware of any file changes in between, so it's simply blind one way copy instead of sync.
I hope that helps a little. We'll build documentation once we sort out a couple of challenges related to E2E encryption with syncing, as depending on how we manage to solve these problems it may influence the available options.[
{
  "bucketName": "acme-internal-files",
  "keyId": "EVLJ2eXJukWUR9U17dyQqq6NPTi9mUu6scqpLCau",
  "applicationKey": "X9EiaepygvDK2S0fmMmFayehHoETDOphNP1r96PI",
  "endpoint": "https://s3.us-west-004.backblazeb2.com",
  "region": "us-west-004",
  "host": "s3.us-west-004.backblazeb2.com",
  "port": 443,
  "useSSL": true,
  "encryptionKey": "cG90YXRv",
  "rclonePlaintextKey": true,
  "filepathEncryptionEnabled": true,
  "rcloneDerivedKey": [116, 85, 199, 26, 177, 124, 134, 91, 132, ...]
}
]
This may be a good start.
We plan to implement QR code login, but the QR size limitation makes QR not a solution for all use cases.
There are other means, e.g. the QR code could transfer a "placeholder ID" which would then be used to fetch the required details, but then again this setup would require more moving parts.
We're very much open on this. (edited)host
/ gateway
, or if you want to set the encryption key, both encryptionKey
and generated: rcloneDerivedKey
must be provided.
If there is a need we could certainly simplify the format, so things get smartly derived if not present. (edited)encryptionKey
field is required to set up the encryption.
Speaking of decryption, it's an open format. Naturally you can use S3Drive (on any platform) to access encrypted data (you'll need to access the bucket with the data and set up E2E with the same password that was initially used for encryption).
You can also mount data as a network drive (that's possible from S3Drive after clicking on the tray icon).
Alternatively you can access data using rclone
command, as we're 1:1 compatible with their encryption: https://rclone.org/crypt/#file-encryption
In that case please visit our docs to understand how you can set up rclone
command: https://docs.s3drive.app/advanced/#setup-with-rclone
Then you would be able to use commands like copy: https://rclone.org/commands/rclone_copy/ or sync: https://rclone.org/commands/rclone_sync/ or couple others depending on your needs.
There are a couple of options out there. (edited)[
{
"bucketName": "bucket-photos",
"keyId": "keyId",
"applicationKey": "applicationKey",
"endpoint": "https://s3.pl-waw.scw.cloud",
"encryptionKey": "cG90YXRv"
}
]
This would configure all necessary things and enable encryption with password: potato
, the encryptionKey
is base64
encoded plaintext password. (edited)zenity
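The encryptionKey value in the config above is just the base64 of the plaintext password, which is easy to verify:

```python
import base64

# "potato" base64-encodes to the encryptionKey shown in the JSON config.
key = base64.b64encode(b"potato").decode()
print(key)  # cG90YXRv
```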
, qarma
and kdialog
.
https://github.com/miguelpruivo/flutter_file_picker/issues/1282#issuecomment-1551924613
I will add this item to our internal list and try to play around in Xubuntu. In the meantime, would you be happy to try out the Flathub version? https://flathub.org/en-GB/apps/io.kapsa.drive (Please note that it awaits the 1.6.4 release, which will likely be available later today or tomorrow). (edited)zenity
or kdialog
on your OS and see if it solves the issue?
If it does we will add it as a dependency to .AppImage
.
https://forum.juce.com/t/native-filechooser-not-used-on-linux-xfce/26347 zenity
in our releases. {
"url": "/api/create-checkout-session",
"data": {
"price": {
"id": "price_1NyfLNEv31gUd4RDtzV41wix",
"interval": "year",
"currency": "EUR",
"unit_amount": 0
}
},
"res": {}
}
(edited)content://
, since we operate on network resources, what we get with S3 is just a network URL, that we don't store locally (except the video cache) and pass directly to the video player. Since data isn't stored on Android device locally I don't think there is a method to expose it as a content URI.
If I understood a little more about your use case, I might be able to come up with some other approach. (edited)glxinfo | grep "direct rendering"
? (edited)file.png
to test/
would rename it to testfile.png
and the file is not moved to the directory1.7.1
sync feature to be able to interact with the local FS. 1.7.0
, we've now prioritized this and shall be able to release a hotfix at some point today.libmpv2
as an alternative, but don't really have capacity at the moment to test things out.
Ideally movies should play out as normal, as MPV dependency is required by media library that we use: https://pub.dev/packages/media_kit (edited)libmpv
version. We're working to have it resolved promptly, please bear with us.git clone --recursive git@github.com:flathub/io.kapsa.drive.git
cd io.kapsa.drive
flatpak-builder --user --install --force-clean build-dir io.kapsa.drive.json
... however it does require some prior environment setup, like:
flatpak install flathub org.freedesktop.Sdk//23.08
flatpak install flathub org.freedesktop.Platform
flatpak install org.freedesktop.Sdk.Extension.vala/x86_64/23.08
We will be providing full guide, "how to compile Flatpak". (edited)./S3Drive-x86_64.AppImage
(kapsa:2730352): Gdk-CRITICAL **: 09:39:57.636: gdk_window_get_state: assertion 'GDK_IS_WINDOW (window)' failed
package:media_kit_libs_linux registered.
flutter: *** sqflite warning ***
You are changing sqflite default factory.
Be aware of the potential side effects. Any library using sqflite
will have this factory as the default for all operations.
*** sqflite warning ***
method call InitAppWindow
method call InitSystemTray
SystemTray::set_system_tray_info title: (null), icon_path: /tmp/.mount_S3DrivJ2GgY2/data/flutter_assets/assets/logos/logo_42.png, toolTip: (null)
method call CreateContextMenu
value_to_menu_item type:label, label:Show
value_to_menu_item type:label, label:Hide
value_to_menu_item type:label, label:Start drive mount
value_to_menu_item type:label, label:Stop drive mount
value_to_menu_item type:label, label:Start WebDav
value_to_menu_item type:label, label:Stop WebDav
value_to_menu_item type:label, label:Support
value_to_menu_item type:label, label:Visit Website
value_to_menu_item type:label, label:About
value_to_menu_item type:label, label:Changelog
value_to_menu_item type:label, label:Logs
value_to_menu_item type:label, label:Version 1.7.11
method call SetContextMenu
Just a question, did you try running Flatpak format? https://github.com/flathub/io.kapsa.drive/ bash
evidence and going to ask publicly on MinIO's Github.
The issue isn't complex at all. Basically you have trash on Windows/Linux/macOS whatever. If you delete files from your computer, they land in Trash. We can say they're versioned as their latest version is available for restore.
From a UI point of view, after deletion, you wouldn't expect for these deleted entries to appear in a location from where they were originally deleted. They're now in Trash (available for further deletion or restore) and shouldn't be present anywhere else.
MinIO shows the folder hierarchy in the original location despite the fact that it was all deleted and its correct place is Trash. I don't think that's correct behavior from a purely "files & directories" UI point of view. (edited)PUT folder/
PUT folder/file.txt
then: LIST folder
might not return you the folder/file.txt
Apparently they mention it here: https://min.io/docs/minio/container/operations/checklists/thresholds.html#id6
I am not sure, but we may have to change the way we create folders to overcome this issue. E.g. instead of folder/
we would rather create folder/.empty
to not cause conflicting keys.<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRequest</Code><Message>Content-MD5 HTTP header is required for Put Object requests with Object Lock parameters</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
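Going back to the folder workaround mentioned a few lines up: instead of creating a zero-byte object at the prefix itself (folder/), a hidden marker object inside the prefix avoids the conflicting-key problem. A sketch (the helper name is made up):

```python
def folder_marker_key(prefix: str) -> str:
    """Marker object key for an S3 'folder' prefix, e.g. folder/.empty,
    so the folder's own key never conflicts with keys nested under it."""
    return prefix.rstrip("/") + "/.empty"

print(folder_marker_key("folder/"))  # folder/.empty
```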
I remember we've solved this issue in S3Drive's predecessor: https://play.google.com/store/apps/details?id=com.photosync.s3
but this didn't end up in S3Drive just yet. Fix: https://github.com/s3drive/app/issues/16#issuecomment-1257024140
In other words we need to add this header if compliance mode is enabled, but since we don't want to do it by default we'll likely add the configurable setting, which will get switched on automatically if we detect this error message. (edited)x-amz-object-lock-mode: ObjectLockMode
The Object Lock mode that you want to apply to this object.
Valid Values: GOVERNANCE | COMPLIANCE
x-amz-object-lock-retain-until-date: ObjectLockRetainUntilDate
The date and time when you want this object's Object Lock to expire. Must be formatted as a timestamp parameter.
x-amz-object-lock-legal-hold: ObjectLockLegalHoldStatus
Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock.
Valid Values: ON | OFF
It's a matter of providing sane settings UI where these settings can be applied.
Depending on the requirements there could be multiple layers with override rules. For instance user could specify settings on the bucket level which would then be overridden by the settings on the folder level, then on the sub-folder level (and so on) down until the file level.
We're open to suggestions on how this should/could work.deb
and AppImage
packages: https://github.com/s3drive/app/releases
cc @helios6509 cc @morethanevil (edited)Content-MD5
header to be provided with the request.
It was especially challenging when combined with E2E encryption, as this posed the "chicken or the egg" dilemma, where we had to provide the MD5
before sending any data; however, when we encrypt data we auto-send it in chunks to avoid memory issues.
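For reference, the Content-MD5 value S3 expects is the base64 encoding of the raw (binary) MD5 digest of the request body, which is exactly why the whole body must be known before sending. A quick sketch:

```python
import base64
import hashlib

# Content-MD5 = base64(raw MD5 digest of the body), per RFC 1864.
body = b"example request body"
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode()
print(content_md5)  # 24 characters: 16 digest bytes, base64-encoded
```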
The solution was to implement Multipart Upload. It's a native S3 feature where file is uploaded in chunks.
This allowed us to avoid memory-hungry operations and divided the upload of big files into smaller, manageable chunks.
The positive side-effect is that if your file upload fails, the retry will resume from the last failure point (currently this only works without encryption enabled).
Finally, improving the encryption scheme allowed us to build a decryption proxy, so we can convert an Rclone-encrypted blob into a video stream that's understandable by video players.
That's how encrypted video playout was implemented. It was deployed experimentally to all platforms.
We didn't manage to build the decryption proxy for Web, and even if we did, the performance would be terrible (https://github.com/rclone/rclone/issues/7192), so we're temporarily hosting the proxy in our infrastructure. Since it poses some privacy risks, we've implemented a BIG WARNING for the user.
We've also implemented ZIP download for multiple selected files and delivered lots of bugfixes and performance improvements as usual. (edited)/public/
directory which is configured as world-readable.
Whenever I share a file from this bucket, I just want https://domain.tld/bucket/public/file.ext
*/foldername/file.ext
would be beneficial.
https://d843ae90cab33e54f4d284bc65d2fd6a.r2.cloudflarestorage.com/sharex/2adrRMxSvi?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=c8e7807dd8cf0a5f005fd526f3279679%2F20230915%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230915T112311Z&X-Amz-Expires=604679&X-Amz-SignedHeaders=host&X-Amz-Signature=47e2488e23043e971c1dc0d6b24235e225a18b5a4952efd6f263049c779b73e9
-
https://pub-0ea304dc97d1413588965fb731c2d5e3.r2.dev/2adrRMxSvi
-
https://i.cubity.dev/2adrRMxSviAndroidManifest.xml
and this is how they do it (plus implementation of course).
https://github.com/bitfireAT/davx5-ose/blob/273deecbe49b9f0c5ae753353ad0f8a514c4c401/app/src/main/AndroidManifest.xml#L288-L296
Thank you for your hard work, using S3 Drive and liking it a lot!s3fs
on linux and expose it via a WebDav server (cause Davx can use that as file provider)s3fs
and run the WebDav server yourself as you say, but you could also achieve the same with our native rclone mount
which is likely going to be more performant than a POSIX compatible s3fs
.rclone
- do you happen to know how come it is more performant than a file system mount?s3fs
is indeed quite slow heregoofys
, whereas s3fs
offers maximum POSIX compatibility at a huge cost. E.g. listing a directory with 1000 files can take up to 1000(!) requests with s3fs
, however it will take just one with rclone
/ goofys
. (edited)s3fs
in some cases needs to issue up to 1000x more requests in order to be POSIX compatible. This has dramatic performance consequences.
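The request-count difference can be modelled with a toy function (illustrative numbers only, not a benchmark: a flat S3 LIST returns up to 1000 keys per call, while a strictly POSIX-style mount additionally stats every entry, one request per file):

```python
import math

def list_requests(n_files, per_file_stat):
    """Toy request-count model: LIST calls needed for n_files keys, plus
    one per-file stat request when full POSIX metadata is required."""
    list_calls = max(1, math.ceil(n_files / 1000))
    return list_calls + (n_files if per_file_stat else 0)

# list_requests(1000, per_file_stat=True)   # s3fs-like:   1001 requests
# list_requests(1000, per_file_stat=False)  # rclone-like: 1 request
```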
You're right you won't be able to specify group permissions for a dir. (Actually it may work, but such data won't be preserved when you remount). (edited)-rw-rw-r--
which is pretty default Rclone setting, but in principle we could add settings configuration to change it.rclone
command it's probably not a high priority.
When you go to the app logs, you will see exactly what commands the app executed on the rclone
binary. You can replicate the same on your server if you wish.goofys
and even had an AWS client-side encryption compatible prototype, ultimately we've replaced it with Rclone, however experience with goofys
and its codebase was pretty good. (edited)goofys
was easier to setup - I just added a line in my /etc/fstab
rclone
would have been "a very tiny bit" more work (like, writing/finding a systemd service or a wrapper script) (edited)goofys
is that it does not seem to work with the systemd
automount feature (that mounts on demand)NoSuchKey
message. The app as such wouldn't really be usable with read permission alone, so we haven't implemented support for listing-only buckets. You may be better off using raw aws s3
or aws s3api
commands.
If you aim to mount your bucket you can do so outside of S3Drive, but in an S3Drive-compatible manner; please find our guide on how to configure the bucket: https://docs.s3drive.app/advanced/#setup-with-rclone
I am not 100% sure whether Rclone requires anything else than listing permissions though, but in principle it should work.
Then you can issue: https://rclone.org/commands/rclone_mount/ manually. If you want to see the exact commands that S3Drive would've used, you can mount some other bucket from S3Drive and copy out the commands from the application logs (available on the About page).
What's your use case by the way? This will certainly help me to come up with something that works for you ! (edited).s3drive_bucket_read_test
key. Once you get past that check your listings should work just fine.
We will add an option to get past that check in one of the next releases.{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
bucket can be set up without problems despite the read check.
Upload/download naturally wouldn't work, but that's expected. (Please note that these error responses come from the 1.6.1 version which is due to be released. In older versions errors might be rendered differently).
Drive mount does also seem to mount properly and listing works.
What's your permission set and S3 provider which gets you to: "Access denied"? I would be happy to try that out. Thanks ! (edited) Main bucket policy, shared by all users
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketAcl",
        "s3:GetBucketCORS",
        "s3:GetBucketLogging",
        "s3:GetBucketNotification",
        "s3:GetBucketObjectLockConfiguration",
        "s3:GetBucketPolicy",
        "s3:GetBucketTagging",
        "s3:GetBucketVersioning",
        "s3:GetLifecycleConfiguration",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowStatement2A",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::buckentname",
      "Condition": {
        "StringEquals": {
          "s3:delimiter": "/",
          "s3:prefix": ""
        }
      }
    }
  ]
}
policy for one of the sub directories
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucketname/Folder1/*"
    },
    {
      "Sid": "AllowStatement2A",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname",
      "Condition": {
        "StringEquals": {
          "s3:delimiter": "/",
          "s3:prefix": [
            "",
            "Folder1"
          ]
        }
      }
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname",
      "Condition": {
        "StringLike": {
          "s3:prefix": "Folder1/*"
        }
      }
    }
  ]
}
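To make the ListBucket conditions concrete, here is a toy Python evaluation of this sub-directory policy's prefix rules (illustrative only; real IAM evaluation is considerably more involved and also checks delimiter, principal, etc.):

```python
import fnmatch

def list_bucket_allowed(prefix: str) -> bool:
    """Toy check mirroring the policy above: the bucket root and the exact
    'Folder1' prefix are allowed (StringEquals), and anything under
    'Folder1/' is allowed by wildcard (StringLike)."""
    if prefix in ("", "Folder1"):                    # AllowStatement2A
        return True
    return fnmatch.fnmatch(prefix, "Folder1/*")      # AllowListingOfUserFolder

# list_bucket_allowed("")                   -> True  (root listing)
# list_bucket_allowed("Folder1/sub/a.txt")  -> True
# list_bucket_allowed("Folder2/")           -> False (denied by omission)
```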
AccessDenied
when trying to login using your attached: Main bucket policy
.
We'll support this use case and it will work with Wasabi. After setting a bucket the user will receive a message: Read check has failed. S3Drive functionality may not work properly.
, but will then be able to proceed and list files.
This will be available in the next 1.6.3 release, available in a couple of days. (edited).md
as if it was the installation package (e.g. .apk
) instead of opening list of apps, so you could select some text file editor.
I suspect that this has something to do with your phone security settings, but may well be something in the file.
Could you by any chance send this file over?
Feel free to send it to me directly: tom@s3drive.app
NFS mount
is running. The temporary solution is to either use force exit or use macFUSE/FUSE-T mount as explained in our guide: https://docs.s3drive.app/install/#macos_1
We're working on improving this, but the ultimate solution is a macOS-native integration with Finder, skipping the NFS/FUSE layers altogether.
https://s3drive.canny.io/feature-requests/p/macos-native-file-mount
EDIT: I've now realized that you're probably using Windows based on your other support item. In that case it seems the app may not want to close if the mount
is performing any operations, or e.g. a file/folder is open within the mount directory, preventing the mount
from finishing gracefully. We will aim to add a relevant prompt ! (edited)rclone
command installed on your desktop anywhere? Whilst running Rclone using Termux
on Android might be suficient we've never tried that.rclone ls driveName:
shall give you some files in the listing.
Alternatively you can use about
command, e.g.: rclone about driveName:
and you should get e.g.:
Total: 7 GiB
Used: 611.346 MiB
Free: 6.403 GiB
If you're not getting these results, STOP and try setting the back-end up again using: rclone config
as mentioned here: https://docs.s3drive.app/setup/import_rclone/
If things are working good for you at this stage, then use: rclone config dump
command in order to extract all configs, then manually select, copy and paste the relevant Google Drive config into S3Drive (click new "+" and import). (edited)rclone ls driveName:
shall give you some files in the listing.
Alternatively you can use about
command, e.g.: rclone about driveName:
and you should get e.g.:
Total: 7 GiB
Used: 611.346 MiB
Free: 6.403 GiB
If you're not getting these results, STOP and try setting back-end again using: rclone config
as mentioned here: https://docs.s3drive.app/setup/import_rclone/
If things are working good for you at this stage, then use: rclone config dump
command in order to extract all configs, then manually select, copy and paste the relevant Google Drive config into S3Drive (click new "+" and import). (edited)remote
configured manually via Rclone as crypt
or is it configured by S3Drive automatically via E2E settings?
Did I understand correctly, that on Linux it behaves exactly the same?encryptionKey
and filePathEncryption
attributes respectively. I am encountering the same thing in the Linux flutter app (both are configured using the same json file if that may be causing it), where it seems to create the folder tree (though strangely there is no place holder file there, even though S3 is supposed to be a flat object store that emulates folders...), and then stop due to the MD5 mismatch. I can upload files and folders normally without any issue by using the manual upload button on both the Android phone and the Linux computer, it seems to be a problem specifically related to the custom path sync feature. (edited)1.65.1
to 1.65.2
in a next release end of this week / Monday, if this doesn't resolve this issue, then we'll escalate.(
and )
and $
characters, only for buckets where versioning was disabled.
If you still face this problem even after updating the S3Drive, please send me the full filepath (including special characters - feel free to redact standard alphanumerics for privacy), bucket name and settings (versioning, object lock etc.).[1.4.0] - 2023-07-21
(https://s3drive.app/changelog), it can be enabled in the Settings (it's called E2E on our end, but it's essentially 1:1 compatible).
Most recent release: [1.7.0] - 2023-12-29
provides full integration with Rclone allowing you to use 70+ back-ends on top of S3 (more on that here: https://docs.s3drive.app/setup/import_rclone/). One of the back-ends is crypt
(https://rclone.org/crypt/) which means you can use S3Drive to encrypt your data and store it on Dropbox or wherever you want.
In a 1.7.1
release, which we will publish in a few days, there will be an option to sync from the local file system (on Android, iOS and macOS this option won't initially be available due to different permission systems; we'll need to provide a workaround) as well as between different back-ends, so you can e.g. upload some files to Dropbox, some files to Google Cloud and then sync certain folders between them as you need. (edited)1.7.1
release is now a thing !
We love the idea of permissions to only a specific folder; the challenge is that these operate on so-called Content URIs instead of the classic file system (you can notice in your video it starts with content://).
That makes it incompatible with classic software, Rclone included.
That's why our best solution so far is to aim for MANAGE_EXTERNAL_STORAGE
permission which fortunately and unfortunately gives access to the filesystem: https://developer.android.com/training/data-storage/manage-all-files#operations-allowed-manage-external-storage
In the long run we could reimplement some syncing logic and make it compatible with these Content URIs... but since Rclone does a damn good job already, we're not really keen to reinvent the wheel, add maintenance/risks and spend at least a couple of months initially just to get it right. (edited)x-amz-meta-mtime
header (https://docs.aws.amazon.com/fsx/latest/LustreGuide/posix-metadata-support.html) and was 403 rejecting the request.
On the plus side this made us improve our error reporting, so if a request fails like this it will be correctly captured in the Transfers errors (1.7.8 release).[b2]
type = b2
account = 123
key = 123
hard_delete = true
#endpoint = https://s3.eu-central-003.backblazeb2.com
[b2-crypt]
type = crypt
remote = b2:mybucket
password = 123
base64
encoding for the: filename_encoding
Please find related issue:
https://discord.com/channels/1069654792902815845/1069654792902815848/1136223933201387550
Recommended full Rclone config: https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
We will be aiming to improve our guides, so this step is better documented.libc6
?
https://github.com/s3drive/appimage-app/releases/tag/1.7.11%2B1
Can you also give me an output of: ldd --version
?
https://lindevs.com/check-glibc-version-in-linux and output from libc
e.g.: /lib/x86_64-linux-gnu/libc.so.6
-> GNU C Library (Ubuntu GLIBC 2.31-0ubuntu9.7) stable release version 2.31.
... also if you could send me an output of: strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
this isn't related to AppImage libstdc
, but your OS, but may help me to understand this issue.
If we don't manage to solve it that way, then I will have to test it on real Debian / Fedora; not sure if this is XFCE related though.
Sorry for not getting back to you sooner, but we're pretty low on resources at the moment. (edited)strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
command is used it shows: GLIBCXX_3.4.32
?
https://stackoverflow.com/a/77075793/2263395
A couple of weeks ago there was an upgrade to our build machine, which may or may not have affected the newest glibc
version required. If possible I would still advise using the Flatpak release, which is supposed to solve these issues.
In the meantime we'll try to confirm if we can somehow bundle glibc
or fall back to an older version.macFUSE
installed (in case you don't have it installed). Please find the instructions: https://github.com/macfuse/macfuse/wiki/Getting-Started
Once you have it installed, if it still doesn't work, can you please go to the application Logs and copy the rclone mount
line with all the parameters and execute in the terminal. Is there any additional info / error?.bin
extension it does look to me as if you weren't using the filename encryption for these files.
This is the config that works with S3Drive:
https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
If filename encryption is off: https://rclone.org/crypt/#crypt-filename-encryption (default is standard), then the .bin
suffix gets added.
We don't really support stripping the .bin
suffix; that's why in the config we recommend (Discord link above) we suggest disabling it: https://rclone.org/crypt/#crypt-suffix
Given that you already have some data, perhaps we could reconsider support for: .bin
stripping for users who used the default setting before, to bring it in line with S3Drive compatibility. (edited)last modified
field date that we display early on comes directly from S3 and is technically the last-modified time on the remote side, but in fact the real local modification date is stored in the: x-amz-meta-mtime
header. (edited)crypt
Rclone remote to the remote which stores the encrypted data; that could be an external remote or a local remote (remote is just a name/concept of Rclone; even though the FS is local, it is also called a remote).
We provide a guide on how to set this up and decrypt/encrypt files outside of S3Drive, given they're present on some external remote, that is an S3 server: https://docs.s3drive.app/advanced/#sample
In this guide: s3drive_crypt
points to a bucket within s3drive_remote
(which is a S3 provider).
If your files are already downloaded then you would need to point your s3drive_crypt
to your local FS remote instead.
That technically means that within: s3drive_crypt
you would replace the line: remote = s3drive_remote:<bucket_name>
with a path to your FS, e.g.: remote = C:\MyEncryptedData
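For illustration, the edited crypt section might then look like this (the section name follows the guide; the path is an example, and you would keep your existing password and other settings unchanged):

```
[s3drive_crypt]
type = crypt
# before: remote = s3drive_remote:<bucket_name>
remote = C:\MyEncryptedData
password = <your existing obscured password>
```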
I hope that helps, if you need any assistance on that please let me know. (edited)Gtk-Message: 02:13:03.143: Failed to load module "canberra-gtk-module"
(kapsa:2): Gdk-CRITICAL **: 02:13:03.199: gdk_window_get_state: assertion 'GDK_IS_WINDOW (window)' failed
package:media_kit_libs_linux registered.
** (kapsa:2): WARNING **: 02:13:03.433: libsecret_error: \xa4Z\xe8\x94Ob
(edited)gnome-keyring
on your host OS?
Are you trying to run S3Drive for the first time or perhaps the issue is new?1:42.1
)
This is technically my 2nd time running s3drive, the first time I had an issue you already released a fix for, namely the libmpv.so.2 issueseahorse
package and the necessity of unlocking keyrings.
https://stackoverflow.com/a/77338413/2263395
As soon as I find a clear solution I will let you know.libsecret_error
to #8\xaf\u0006dY
Temporarily removing the password from the default keyring changes the libsecret_error
to R9\u001b\xc2\xc1[
flatpak install --user https://dl.flathub.org/build-repo/79739/io.kapsa.drive.flatpakref
cc @benoit_52236 <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <ID>S3Drive</ID>
    <AllowedOrigin>https://web.s3drive.app</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
    <ExposeHeader>etag</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-key</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-iv</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-cek-alg</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-wrap-alg</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-key-v2</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-tag-len</ExposeHeader>
    <ExposeHeader>x-amz-meta-x-amz-unencrypted-content-length</ExposeHeader>
    <ExposeHeader>x-amz-version-id</ExposeHeader>
    <ExposeHeader>x-amz-meta-key</ExposeHeader>
    <ExposeHeader>x-amz-meta-iv</ExposeHeader>
    <ExposeHeader>x-amz-meta-chunk</ExposeHeader>
    <ExposeHeader>x-amz-meta-cek-alg</ExposeHeader>
    <ExposeHeader>x-amz-meta-s3drive</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Please note that in your configuration there are at least a couple of things missing which may affect some features.
For instance HEAD
HTTP operation or: x-amz-version-id
and x-amz-meta-s3drive
(optional) headers. (edited)rclone v1.63.0
and configured my storage like:
[storj]
type = s3
provider = Other
access_key_id = <redacted>
secret_access_key = <redacted>
endpoint = https://gateway.storjshare.io
[storj_crypt]
type = crypt
filename_encoding = base64
remote = storj:my-photos
password = <redacted>
filename_encryption = standard
directory_name_encryption = true
suffix = none
I've then copied test pdf like this: rclone copy test.pdf storj_crypt:
and I get valid object with ETag and openable within S3Drive.
Can you post your Rclone storj
remote configuration? You've initially posted a config, but that's just the crypt part: https://discord.com/channels/1069654792902815845/1159814485515710525/1159827592732479590Amazon S3 (or S3 compatible)
has: MD5
.LE2123
.MinioError: ListObjectsV2 search parameter maxKeys not implemented
content-length
header in the response (e.g. during file download/open). We rely on it to display transfer progress results as well as to make certain determinations related to encryption. In theory we could implement a content-length
workaround, but it would take us a little while.
We're going to investigate first whether it is possible to enable that header. I know that Cloudflare has some logic behind the content-length
header which in some cases is provided and in some isn't... we'll have a look at it as well; however, if you're going to reach out to Cloudflare, it is something you can ask about too. Thanks (edited)content-length
issue on mobile and desktop clients. The Web client fix will have to wait a little longer (because we've no control of the content-length
header in the browser).
Basically, if the: accept-encoding
HTTP request header includes gzip
, Cloudflare seems to skip content-length
altogether.
We're testing a couple of things right now, but if things go well we'll be able to release it promptly.
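For reference, a client-side fallback can be sketched like this (a Python sketch of the general idea, not S3Drive's actual code): track bytes received, and only compute a completion fraction when a content-length is available.

```python
def download_progress(chunks, content_length=None):
    """Yield (bytes_so_far, fraction) per received chunk. When the server
    omits Content-Length (e.g. some gzip-encoded Cloudflare responses),
    the fraction is None and only the byte counter can be shown."""
    seen = 0
    for chunk in chunks:
        seen += len(chunk)
        fraction = seen / content_length if content_length else None
        yield seen, fraction

# With a known length the fraction is exact; without it, a UI would fall
# back to an indeterminate progress indicator plus a byte counter.
```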
Related: https://community.cloudflare.com/t/no-content-length-header-when-content-type-gzip/492964 (edited)writes
setting which is pretty much required for the write mode to function properly. We could possibly disable it on Windows and Linux with some limitations: https://rclone.org/commands/rclone_mount/#limitations and still keep the writes mode, but on macOS it wouldn't be possible.
For Linux see: ~/.cache/rclone/vfs
I would suspect it's going to be similar on Windows: $HOME/.config/rclone/vfs
and macOS
.
Strange that S3Drive hangs; it's probably some bug where the mount
process takes an extraordinarily long time to load the cache before returning the mount to the app, blocking the main thread.
Deleting the VFS cache, although not convenient, shall resolve this issue.
We will be able to provide a couple more options, e.g. disable cache, set max age... and most importantly set the max size: vfs-cache-max-size
.
Based on your comment I've increased priority on this and you can expect improvements in one of the next releases.
Just please let me know your OS for reference.
Thanks !