Type `dxdiag` in the search bar.

A hostname will default to `https`, whereas an IP addr will default to `http`.
Filename encryption is on our Roadmap and we have a working prototype already. https://s3drive.canny.io/feature-requests/p/filenamefilepath-encryption (ETA ~April 2023).
We're doing further research to understand standards and well-established implementations in that area, so we can stay compatible.
The sharing functionality is based on S3 presigned URLs; their limitation is that the signature can't be valid for longer than 7 days, so a new link would have to be generated every 7 days. We're researching how to overcome this limitation. For instance, we could combine this with a link shortener, so there is a single link that doesn't change, but under the hood we would regenerate the destination link as needed.
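For illustration, generating such a link with boto3 looks roughly like this (a sketch; bucket, key and credentials are placeholders). The 7-day cap is SigV4's `X-Amz-Expires` limit, visible as `X-Amz-Expires=604800` in the example link below:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder endpoint
    aws_access_key_id="<access_key_id>",
    aws_secret_access_key="<secret_access_key>",
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-s3drive", "Key": "folder/file.txt"},
    ExpiresIn=7 * 24 * 3600,  # 604800 seconds, the SigV4 maximum
)
print(url)
```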
The encrypted share link has the master key at the end after the # character and looks like this:
https://s3.us-west-004.backblazeb2.com/my-s3drive/.aashare/hsnwye5bno3p/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=004060aad6064900000000044%2F20230214%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230214T095014Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=abdcd875e2106ee54c6a1d1851617c7e694e121464c5ca9023526ce2836be595#GKSGYX4HGNAd4nTcXb/GIA==
What it does is try to load the encrypted asset as usual, but it isn't per se aware whether an asset is encrypted. In the background, JavaScript tries to fetch the asset and replaces the one on the screen with the decrypted version. It looks like it has failed on your side. Can you go to the console (right-click -> Inspect Element) to see if there is anything abnormal (that is, an error in the Console or a status code other than 200 in any of the network requests)?
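As an aside, the reason the key can sit after `#` is that URL fragments are never sent to the server; only client-side JavaScript sees them. A minimal illustration (Python stand-in for what the JS does; the AES-128 guess is based purely on the 16-byte length of the sample key):

```python
import base64
from urllib.parse import urldefrag

share_link = "https://s3.example.com/.aashare/xyz/index.html?X-Amz-Signature=...#GKSGYX4HGNAd4nTcXb/GIA=="

url, fragment = urldefrag(share_link)   # the fragment never appears in the HTTP request
master_key = base64.b64decode(fragment)
print(len(master_key))                  # 16 bytes for the sample link above
```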
I use `rclone` for accessing my files or backing up my photos on a day-to-day basis... and I am not afraid of CLIs.
I'm using `garage`... and there is no way to provide a region in S3Drive.
It seems that we may add an additional form field to specify the region.
Garage sets the region in its `toml` config file, like this: `s3_region = "us-east-1"`.
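The relevant section of Garage's config looks roughly like this (values are examples; check your own `garage.toml`):

```toml
[s3_api]
s3_region = "us-east-1"      # must match the region the client signs requests with
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage"
```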
We auto-detect the region from the endpoint URL and have a way to detect a custom region from MinIO... and if that doesn't work we use the most common default, which is `us-east-1`.
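Roughly this kind of heuristic (an illustrative sketch, not the app's actual code):

```python
import re

def detect_region(endpoint: str) -> str:
    # AWS-style hosts embed the region, e.g. s3.eu-central-1.amazonaws.com
    # or s3.us-west-004.backblazeb2.com
    match = re.search(r"s3[.-]([a-z]{2}-[a-z]+-\d+)", endpoint)
    if match:
        return match.group(1)
    # MinIO and friends need a server-side lookup; failing that,
    # fall back to the most common default.
    return "us-east-1"

print(detect_region("https://s3.us-west-004.backblazeb2.com"))  # us-west-004
```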
Not sure what the `.aa*` file and folder are about, but some "don't touch my bucket" parameter would be nice if the app doesn't strictly need them; otherwise that sounds like an additional bucket policy :D
EDIT: looks like the file is for some kind of init feature within the app, and one of the two folders is the trash. I've seen the versioning feature request, but the trash folder could be opt-in if possible.
The `.aainit` file is our write test, as well as ETag response validation (which is required for not-yet-released syncing features), as some providers (talking mostly about iDrive E2 with SSE enabled) don't generate valid ETags. BTW, would you like S3Drive to support a read-only mode?
Regardless, we will try to improve the clarity of this operation, so users feel more confident that we're not doing some shady writes/reads.
Speaking of Trash itself, likely this week, starting on Android first, there will be a Settings option to disable the Trash feature altogether (it's a soft-delete emulation, but slow and pointless if the bucket already supports versioning). A Versioning UI with restore options will come a little bit later.
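The ETag check boils down to something like this (an illustrative boto3 sketch, not our actual code; endpoint and bucket are placeholders):

```python
import hashlib
import boto3

s3 = boto3.client("s3", endpoint_url="https://<endpoint>")  # placeholder endpoint

body = b"write test"
resp = s3.put_object(Bucket="<bucket>", Key=".aainit", Body=body)

# For a single-part upload without SSE, the ETag should be the hex MD5 of the
# body. Providers like iDrive E2 with SSE enabled return something else,
# which breaks sync logic that relies on ETags.
expected = hashlib.md5(body).hexdigest()
print("valid ETag:", resp["ETag"].strip('"') == expected)
```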
For the `.aainit` file it's fine, but I'd prefer if the app saved the test results locally and then deleted the file. I want to be able to write files so I wouldn't use a read-only mode, and we can always create read-only access keys if we want to be sure that's how the app will behave! I'm very interested in the share link expiry slider or date picker though; I never share for 7 days, it's either a shorter duration or permanent.
Cool, I don't mind not having the versioning UI yet, but I had to delete my file versions + the trash versions to clean up my bucket, so… yeah, trash is cool, but I assume most people who want that have versioning enabled. I assume you already have quite a few buckets on various providers to test your features, but I can provide a MinIO one if it could be of interest.
There was a 2nd folder with an HTML page in it; not sure what it was about, but same thing I'd say: that's probably the least expected action from an S3 browser… While I audited the actions and indeed didn't find anything malicious, that could get me assassinated by my colleagues if I ever connected a more important bucket to the app.
We've seen an entry still show up via `s3 ls` even though `headObject` couldn't retrieve it as a valid S3 entry. I am curious if you came across something similar.
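In AWS CLI terms, the inconsistency looks like this (bucket, key and endpoint are placeholders):

```
aws s3 ls s3://<bucket>/ --endpoint-url https://<endpoint>
# ...the phantom entry shows up in the listing, yet:
aws s3api head-object --bucket <bucket> --key <phantom_key> --endpoint-url https://<endpoint>
# An error occurred (404) when calling the HeadObject operation: Not Found
```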
We'll implement the `.aainit` file being nuked (delete the file itself + all its versions) once the init is done, as well as raw presigned URL sharing. The caveat with raw links: to decrypt an encrypted file we normally `headObject` and get the envelope AES keys... so it must be a toggle with some warning. It would then simply return the Blob that's stored on S3, regardless of what's inside.
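To make the `headObject` dependency concrete: if, hypothetically, the envelope key travelled as object metadata, the share flow would need a call like this before it could decrypt anything, and a raw presigned URL skips it entirely (boto3 sketch; names are placeholders):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://<endpoint>")  # placeholder

head = s3.head_object(Bucket="<bucket>", Key="folder/file.txt")
print(head["Metadata"])                    # user-defined x-amz-meta-* headers land here
print(head["ETag"], head["ContentLength"])
```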
You could read a fixed, well-known key (e.g. `.s3drive_bucket_read_test`) and verify the response instead of trying to write a file.
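Something like this (boto3 sketch; a 404 on a well-known missing key proves connectivity and read permission without writing anything):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://<endpoint>")  # placeholder

try:
    s3.head_object(Bucket="<bucket>", Key=".s3drive_bucket_read_test")
except ClientError as e:
    status = e.response["ResponseMetadata"]["HTTPStatusCode"]
    # 404 -> bucket reachable and readable; 403 -> credentials lack read access
    print("read test status:", status)
```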
The slider now works, so it's possible to set an expiry time shorter than the maximum of 7 days. There is an option to use raw presigned URLs.
We've also introduced a basic Version UI. It is now possible to preview the revisions. In a future update we will allow opening, previewing, deleting and restoring to a particular version.
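Under the hood this maps to the standard S3 versioning calls (an illustrative boto3 sketch; names and the version id are placeholders):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://<endpoint>")  # placeholder

# List the revisions of a single key
versions = s3.list_object_versions(Bucket="<bucket>", Prefix="folder/file.txt")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"], v["LastModified"])

# "Restore" = copy an old version on top of the current one
s3.copy_object(
    Bucket="<bucket>",
    Key="folder/file.txt",
    CopySource={"Bucket": "<bucket>", "Key": "folder/file.txt", "VersionId": "<version_id>"},
)
```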
Thank you for these suggestions, they were great and helped us validate it all!
... and as always we're open to feedback.

Search now handles implicit folder entries (e.g. there's a `folder/file.txt`, but the `folder/` entry doesn't explicitly exist; it is still searchable).
There is an option to hide files starting with: .
As usual there are a couple of other performance improvements and bugfixes.
We would love to hear how you're finding the new changes and whether version management during file operations is what you would expect.

From:
Hide "." files
Show all files, including starting with the dot.
Hide files starting with the dot character
To:
Hide dotfiles
Show all files, including ones starting with a dot.
Hide files starting with the dot character.
You could use a `feature_flags` int that computes to an array of pro features with bitwise operations, easy on your API and authentication gateway or whatever you do behind the scenes.
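Something along these lines (an illustrative sketch; the flag names and bit values are made up):

```python
from enum import IntFlag

class ProFeature(IntFlag):
    # Hypothetical flags, one bit each
    E2E_ENCRYPTION = 1 << 0
    SYNC           = 1 << 1
    VERSIONING_UI  = 1 << 2
    CUSTOM_EXPIRY  = 1 << 3

def decode(feature_flags: int) -> list[str]:
    # Expand the single int your API returns into a feature list
    return [f.name for f in ProFeature if feature_flags & f]

print(decode(0b0101))  # ['E2E_ENCRYPTION', 'VERSIONING_UI']
```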
`MinioError: ListObjectsV2 search parameter maxKeys not implemented`
For AWS the endpoint is `s3.<region>.amazonaws.com`.
I'm getting `OS Error: CERTIFICATE_VERIFY_FAILED: self signed certificate`. And I indeed have a self-signed certificate, but I followed your instructions from https://github.com/s3drive/app/issues/19 (https://proxyman.io/posts/2020-09-29-Install-And-Trust-Self-Signed-Certificate-On-Android-11) and my browser on Android recognizes this certificate (if I go to the MinIO browser, my Chrome is fine with the cert). But S3Drive continues to fail with the same error.
I'm using the latest version.
On our side we get a `null` response, which is somewhat expected. I would expect to get the SSL-related error instead.
`support-bugs-requests` is too long, but there's no reason to have multiple channels for that either.

```
# Obscure the password
echo "YourPlaintextPassword" | rclone obscure -

# Add it to the Rclone config; config file location: `rclone config file`
[s3drive_remote]
type = s3
provider = Other
access_key_id = <access_key_id>
secret_access_key = <secret_access_key>
endpoint = <endpoint>
region = <region>

[s3drive_crypt]
type = crypt
filename_encoding = base64
remote = s3drive_remote:<bucket_name>
password = <obscuredPassword>
filename_encryption = standard
directory_name_encryption = true
suffix = none
```
Then you can use `s3drive_crypt` as your encrypted remote location.
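For example (paths are placeholders):

```
rclone ls s3drive_crypt:
rclone copy ./photos s3drive_crypt:photos
```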
Please note that whilst we support both encrypted and unencrypted files in the same location, Rclone doesn't seem to like the mix and won't display existing unencrypted files for the encrypted remote. In such a case it's better to either keep everything encrypted globally or have dedicated paths with encrypted-only or unencrypted-only files.

Note the two non-default options:

```
filename_encoding = base64
suffix = none
```

By default Rclone's encoding is base32 (https://github.com/rclone/rclone/blob/88c72d1f4de94a5db75e6b685efdbe525adf70b8/backend/crypt/crypt.go#L140), unless overridden by the config creator.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws:username}"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws:username}/*"]
    }
  ]
}
```
You can replace `${aws:username}` by anything you want, be it a variable or a fixed bucket name; there unfortunately isn't any group name variable.

I created a `users` group to which I assign the `selfservice` policy, then I add whoever I want to the `users` group and they'll be able to manage their very own bucket.

The same policy, condensed:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::${aws:username}",
        "arn:aws:s3:::${aws:username}/*"
      ]
    }
  ]
}
```
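On AWS the group wiring is roughly this (account id and user name are placeholders; MinIO's `mc admin` has equivalents):

```
aws iam create-group --group-name users
aws iam attach-group-policy --group-name users \
    --policy-arn arn:aws:iam::<account_id>:policy/selfservice
aws iam add-user-to-group --group-name users --user-name <user>
```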
We're thinking of adding a `Contributor` role; it isn't much, but still a nice way to recognize individuals who go out of their way to help the project out. What do you think about it?