Hi, I'm getting an "Error: Exception: corrupted on transfer: md5 encrypted hash differ" on a one-way copy-mode sync (local -> remote) to an encrypted vault in an S3 bucket (versioning on, object lock off) on IDrive E2, on Android and Linux. Any idea how to fix this? Thanks!
Thanks for your time.
- what's the use case for: "Custom host name" for you?
We use our buckets to provide downloads of our assets for our customers. Videos and photos (Video production company)
We use iDrive with multiple regions. All I'm looking to do is mask their domain name with our own domain name. Everything else stays the same. I'm only using the S3 app to upload to our bucket and grab the original link from iDrive to give to the customer to download.
Example S3 link from iDrive:
dfgd1fg45d.la.idrive2-57.com/filename-123.zip
When we convert to our own domain it looks like this:
http://la.domainname.com/filename-123.zip
Would that be so that you have a shorter name in the shared URLs?
Yes - It also looks legitimate from our domain name.
Also, when iDrive uses the CNAME domain name, the original iDrive URL is hidden. (And no one knows the actual bucket link.)
What's the list of URLs on your screen?
All S3 Browser does is mask the original link. The screenshot shows the original link; however, if you type your own domain name into the custom domain host name, those links will change from amazonaws.com to your domain name. That's all it does.
Can you connect to your bucket using your "Custom host name"?
Yes - because we already have a CNAME for the domain (example: cdn.domainname.com) configured. iDrive guide here, but I'm assuming others have a similar solution for CNAME: https://www.idrive.com/s3-storage-e2/cname-guide
/Chris
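The host masking described in this thread is just a DNS-level rename: only the hostname changes, the object path stays the same. A minimal sketch (the hostnames are the illustrative examples from the messages above, not real endpoints):

```python
from urllib.parse import urlsplit, urlunsplit

def mask_host(url: str, custom_host: str) -> str:
    """Replace the storage provider's hostname with a custom (CNAME'd) domain.

    The path (object key) and query are kept as-is; only the host changes,
    which mirrors what a CNAME-based setup does at the DNS level.
    """
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, custom_host, parts.path,
                       parts.query, parts.fragment))

# Example hosts from the discussion above (illustrative only):
original = "https://dfgd1fg45d.la.idrive2-57.com/filename-123.zip"
print(mask_host(original, "la.domainname.com"))
# https://la.domainname.com/filename-123.zip
```

Note that the CNAME record itself must already point the custom host at the bucket endpoint (as in the iDrive guide linked above); the rewrite alone doesn't make the link resolve.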
Hi @/Chris, we've got a preview of this feature deployed to our web client. There are two options: one to not create a signature, the other to set a custom domain. Once the custom domain is set, the setting is preserved for that bucket.
Please let me know whether this works for you, and if you have any thoughts we can apply tweaks and include it in the next release on the other platforms.
cc @Xenthys® Hi there, correct me if I'm wrong, but haven't you requested a similar feature in the past?
Yes. You may want to note that unsigned URLs don't have an expiration, since expiration is tied to the signature. I don't know if it's possible to make the UI actually show that to the end user.
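The expiration point above can be sketched: a SigV4-presigned URL carries its lifetime as a query parameter (X-Amz-Expires, alongside X-Amz-Signature), while an unsigned URL has no signature and therefore no built-in expiry. A minimal check, using hypothetical URLs:

```python
from urllib.parse import urlsplit, parse_qs

def presign_expiry_seconds(url: str):
    """Return the X-Amz-Expires value for a SigV4-presigned URL, or None
    if the URL is unsigned (no signature -> no built-in expiration)."""
    query = parse_qs(urlsplit(url).query)
    if "X-Amz-Signature" not in query:
        return None  # unsigned: valid for as long as the object stays accessible
    return int(query["X-Amz-Expires"][0])

# Hypothetical URLs for illustration:
unsigned = "https://bucket.example.com/file.zip"
signed = ("https://bucket.example.com/file.zip"
          "?X-Amz-Expires=3600&X-Amz-Signature=abc123")
print(presign_expiry_seconds(unsigned))  # None
print(presign_expiry_seconds(signed))    # 3600
```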
Otherwise good thanks, raw presigned + unsigned was indeed what I'm after!
@triggy from what I can see everything is uploaded into the same folder, also hi
1:27 AM
Lemme see if just selecting recent will actually capture most folders properly or if it'll be unreliable
1:30 AM
Seems to successfully grab everything, though again it's all just placed in the root of the backup location instead of being organised the same as it would be locally
Hey Everyone.
Been researching for days for ways to make S3 storage somewhat usable and still secure, as well as looking into alternatives to S3 altogether, but nothing came close to the requirements...
What I am missing is mobile, and SSO on mobile.
I saw there has been talk about it a while back, as well as some vague mentions on the website. I would be grateful to hear what the current status is on the matter of SSO in S3Drive.
triggy
if I select to auto backup Recent, which contains everything, would it back up all of the separate folders or combine it into one giant glob?
Hi @triggy,
In the next couple of months we plan to improve the media backup tool so that it preserves album names as folders. The current version was primarily focused on reliably backing up photos/videos (incl. background mode), and there aren't any additional photo management options yet. Stay tuned!
tbh it's pretty great so far good job focusing on making things work first before adding 38261 features
Hi @Lomsor, we haven't exactly started work on SSO. Would you be happy to tell me a little more about your use case? We plan to work on SSO in the next couple of months, although we don't have clear designs just yet.
In principle the idea is that the user provides an SSO endpoint which, after successful auth, could inject S3 credentials (and custom config) into S3Drive and log the user in to the specified bucket.
hi, I am testing the available software products for S3 storage (iDrive e2) with direct mount on Linux/Mac/Win. For some strange reason Mountain Duck is not working: it asks for a region server for every file/folder created on the client OSes. So far only ExpanDrive is working. On Reddit I got a link to S3Drive, and now I've hit a problem here: the S3 connection works and it mounts a drive in Win/Linux, but when I create a folder/file in Explorer/Dolphin, the change does not make it into the actual S3 bucket. What am I doing wrong?
2:07 PM
When I create file in S3drive app - change is immediately visible also in S3 bucket
Tom will have to confirm, but mounts typically see a lot of file changes compared to individual uploads; it's probable that files are flushed from time to time, or on exit, to reduce bucket operation costs.
I tested manually invoking sync from local to remote to get the changes "applied", but that does not take changes on the OS-mounted drive into account. Maybe I'm understanding S3Drive's functionality wrong? I just wanted the S3 bucket mounted on the local PC/notebooks the same way as mapped drives in Windows.
2:17 PM
I am open to pay for such functionality if it requires Ultimate account
Not sure about anything in the current state as I only use the mobile app myself; it could also be an issue. I recommend posting in #support so Tom can handle it as soon as he's available; it'll end up there if it's a bug and will stay for future reference.
Thanks for the update Tom.
TL;DR: We would like to use the temporary credentials generated through "IAM Identity Center" or "AWS CLI login" to access and mount S3 buckets in S3Drive.
If I interpret your plan correctly, this sounds like what I am looking for.
rclone seems to support this type of authentication (https://rclone.org/s3/#authentication), though I don't see a (non-headache-inducing) way to get the credentials to rclone on mobile, and I would like minimal setup for future users.
We are following AWS best practices, which state that the bucket shouldn't be publicly accessible and that there shouldn't be any long-term credentials for it. This means no IAM users with permanent access to resources like S3; instead, IAM roles should be assumed through IAM Identity Center (SSO), which generates temporary credentials with three elements, one of them being a time-limited token.
I would like all business applications to be reachable through SSO, either SAML2 or OAuth2. There are a couple of desktop apps for S3 that can work with temporary credentials, but none for mobile.
Currently on desktop it works like this: either the application or some script triggers "aws login", with the AWS profile preconfigured once beforehand. A browser opens and the user approves or logs in (password, MFA, etc.), then the three-element credentials are generated (access key id, secret access key, session token) and saved in a credentials file and/or environment variables. The app then either looks at the environment or directly at the file and uses these credentials to access S3. Ideally the token would be refreshed before it expires.
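The last step of the desktop flow described above (the app reading the three-element credentials from the shared credentials file) can be sketched like this, assuming the standard AWS CLI INI layout for `~/.aws/credentials`; the sample values are made up:

```python
import configparser

# Made-up sample matching the AWS CLI shared credentials file format:
SAMPLE = """\
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
aws_session_token = IQoJb3JpZ2luEXAMPLETOKEN
"""

def load_temp_credentials(text: str, profile: str = "default") -> dict:
    """Parse the three-element temporary credentials (access key id, secret
    access key, session token) written by the SSO login flow. The field
    names are the ones the AWS CLI uses in its credentials file."""
    ini = configparser.ConfigParser()
    ini.read_string(text)
    section = ini[profile]
    return {key: section[key] for key in
            ("aws_access_key_id", "aws_secret_access_key", "aws_session_token")}

creds = load_temp_credentials(SAMPLE)
print(creds["aws_session_token"])  # the time-limited element of the triple
```

In a real app this would read the file from disk (and re-read it before the token expires); the parsing itself is the part sketched here.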
On what OS do you see this behavior? Can you please make sure that your "Mount cache mode" is set to "Minimal"?
This will skip the Rclone VFS cache and make operations blocking until the data ends up on S3.
Hi, can you please create a #support item?
In principle sync and mount may be used for similar purposes, but they are fundamentally different.
You would usually use mount if you want to interact with the remote file system directly. If you need "blocking" behaviour, please set the cache mode to Minimal (in the Settings). If you want to work as if it were a "local" path, which becomes eventually consistent on the remote side (once Rclone finalizes the upload), then use the "Writes" or even "Full" cache; the issue with these cache modes is that there is no clear indication of whether the process copying changes to the remote has finished.
Another issue with the mount cache is that your directory listing might be stale, especially if another user or process modified files remotely (on S3) without your knowledge.
You would usually use sync if you want to work locally and flush changes automatically (with file watchers) and periodically (with the timer set in the settings).
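A rough sketch of the trade-offs just described, assuming S3Drive's cache settings map onto rclone's `--vfs-cache-mode` values (`off`/`minimal`/`writes`/`full` are rclone's actual modes; the exact S3Drive-to-rclone mapping here is an assumption):

```python
# Assumed mapping of the cache settings discussed above to rclone flags.
CACHE_MODES = {
    "Minimal": {
        "rclone_flag": "--vfs-cache-mode minimal",
        "blocking": True,   # writes don't return until the object is on S3
    },
    "Writes": {
        "rclone_flag": "--vfs-cache-mode writes",
        "blocking": False,  # local-feeling; eventually consistent on the remote
    },
    "Full": {
        "rclone_flag": "--vfs-cache-mode full",
        "blocking": False,  # same caveat: no clear "upload finished" signal
    },
}

def is_blocking(mode: str) -> bool:
    """True if a write in this mode only completes once the data is on S3."""
    return CACHE_MODES[mode]["blocking"]

print(is_blocking("Minimal"))  # True
print(is_blocking("Writes"))   # False
```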
I will get back to you on that once I manage to try out a couple of things with our team. We would be really keen to push some SSO workflow forward.
Speaking of the desktop apps you've used that worked for your use case, do you happen to recommend any of them? I would be happy to test that workflow personally and see if it's something we would be willing to incorporate into S3Drive.
Thanks! Back when I first set this up I didn't find many, so I went with a tool called 'TntDrive'. It does work, but I wouldn't recommend it. For my use case I had to write a script that injects the credentials into the environment; the tool could be set to read from them. It also required elevation. S3 Browser, from the same devs, has an integrated authentication solution that works with SSO; I asked them why it wasn't in TntDrive and they said the focus there was on being a service that doesn't require user interaction.
In my recent research I came across a few more that seem a bit more streamlined. Cyberduck and Mountain Duck seem alright. I know they can do SSO but don't know if they also require an external script.
I was on the lookout for mobile, so I didn't try anything new that didn't at least support a mobile OS.
There are two options for backing up folders with media files: the first is using the "Media backup" mechanism, the second is simply adding the necessary folders to synchronization jobs in "Sync" mode. I have a question: what is the difference between "Media backup" mode and folder synchronization in "Sync" mode? What are the pros and cons?
Hello team,
I don't have much of a technical background, but I have managed to connect my S3 storage (free Backblaze for now) to S3Drive. Thanks for your instructions in the docs.
I would like to buy the lifetime offer, but I have one question first.
Is my S3 storage automatically E2E encrypted after the purchase? Or do I have to do something technical?
I mainly want to use S3Drive as a drive mount under Windows and also be able to access the storage via your app on my smartphone. And the entire cloud should of course be encrypted.
Thanks for your message. In order to use E2E you will need to set it up on your devices using the same passphrase. You can enable it in the settings; once it's set up, an indicator (a lock icon) will appear on the Files screen.
Media backup mode is tightly integrated with the Android/iOS ecosystem: it allows you to select specific albums, supports background backup (on Android a photo is uploaded almost instantly), displays statistics, has Wi-Fi/charger/low-battery constraints, etc.
Sync is "just" an operation on files and folders powered by the Rclone library. It's not aware of the underlying platform; it needs a raw file system to work.
Mobile OSes don't easily expose a raw file system. We've recently managed to get accepted by Google, but on iOS that's not possible, so media backup is the only option there.
Since sync is a new feature, it doesn't yet support background backup or upload constraints.
Feel free to try both and decide which works better for you.
Thank you for your report. Are you using the mount feature? If so, what are your settings: are you using the FUSE mount (recommended) or NFS (experimental)?
The app freezes during a normal quit when the NFS mount is used; we're looking to fix that.
Can you please let me know, if possible, under what conditions the app freezes: is it when you upload multiple files/folders, sync folders, etc.?
Does it freeze immediately after the app starts, or at some point during the run? Do you have any sync settings? Can you also check whether you have the mount autostart setting enabled? In that case the mount would start even if you don't use it, potentially causing the app to freeze during quit.
Bought lifetime yesterday and activated E2EE on my storage.
Were files that were uploaded before E2EE was activated subsequently encrypted? Or are they still unencrypted?
If so, how can I see whether a file is encrypted or not?
Thank you for your purchase. After enabling E2E, only newly uploaded files will be encrypted. In order to encrypt existing files, you would have to download them and then reupload.
We haven't exactly provided a clear workflow, but I would suggest downloading all your data from the bucket using Rclone and then reuploading it to a different bucket where encryption is enabled.
You can do so from the app itself using the Sync functionality (we're using Rclone internally).
You could set up two buckets in S3Drive, one source bucket (your existing one) with encryption disabled and one destination bucket with encryption enabled, and then set up a Remote -> Remote sync.
There isn't a convenient way to tell whether a file is encrypted or not, because to get that information each file would have to be queried individually to check its content, but there are two things that can be observed:
1) A file whose content is encrypted is 32 bytes bigger.
2) If a file has its filename encrypted, then after disabling filename encryption in S3Drive you can see its name turn into random-looking characters.
If you had both content and filename encryption enabled, you could technically use these two indicators to determine whether a single file was encrypted or not... but the foolproof way for multiple files is to reupload everything altogether.
We'll be providing a more detailed and tested workflow for migrating data.
In the meantime I would be keen to know whether you manage to re-encrypt your data.
It's not recommended to mix encrypted and unencrypted data within the same bucket. S3Drive can deal with it, however the internal tools we've integrated with are stricter... so for instance, if you enable E2E and use the drive mount, your unencrypted data won't be listed in your virtual drive, despite being shown by S3Drive.
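The 32-byte size indicator mentioned above can be turned into a simple, explicitly non-foolproof heuristic, assuming you still know the plaintext size of the file:

```python
# Per the note above, an encrypted object's content is 32 bytes bigger
# than the plaintext. This is a heuristic only; as noted, reuploading is
# the only reliable way to know.
ENCRYPTION_OVERHEAD_BYTES = 32

def looks_encrypted(remote_size: int, plaintext_size: int) -> bool:
    """True if the object's size matches plaintext size + encryption overhead."""
    return remote_size == plaintext_size + ENCRYPTION_OVERHEAD_BYTES

print(looks_encrypted(1024 + 32, 1024))  # True
print(looks_encrypted(1024, 1024))       # False
```

A false positive is possible (an unencrypted file could coincidentally be exactly 32 bytes larger than another copy), which is why it's only an indicator.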
Man, it's been a while since I've said it but I love S3Drive. You really handled the raw signed / unsigned URLs well.
We haven't planned anything around iOS shortcuts/automation just yet. Can you tell us a little more about how you would want to use this all together?
We'll certainly add a feature request: https://s3drive.canny.io/feature-requests and, based on a couple of factors, execute it sooner or later.
Thank you for taking the time to request a new feature. I have to say I am personally quite new to the iOS world and haven't used Shortcuts myself; I am going to research and play with this a little more.
We'll certainly have it implemented; it's just hard to tell at this stage when exactly, given the lots of other features we're trying to squeeze in.
A link to the other app is fine, it serves as an example here. Thanks!
It's understandable if it's a low-priority item anyway. I initially found your app while looking for something specifically for iOS and only later realized it's cross-platform.