S3Drive
Community / support / cgofuse: cannot find FUSE on macOS
I have installed fuse-t via pkg - I don't see any options to configure FUSE with the latest app version, and I'm getting the attached error.
11:18 PM
My suspicion is that it's probably not finding the .dylib of libfuse?
Rebooting doesn't change things.
Looking inside https://github.com/winfsp/cgofuse/blob/master/fuse/host_cgo.go I can see the expected /usr/local/lib/libfuse-t.dylib path on the filesystem (and a valid dylib with the right arches too)
8:44 PM
Will be trying sshfs now to see if this is a general issue with fuse-t on 15.2
8:48 PM
Yep, sshfs from fuse-t package works
@vlad what did you do exactly? install fuse-t from brew or from the github installer?
10:45 AM
having the same error with the latest version of s3drive @Tom
Sorry for the late reply. I know you've installed fuse-t so that you don't need to install macFUSE, but out of curiosity, have you tried macFUSE instead? Obviously do it only if it's viable on your end, as it requires a kext installation. In the most recent release we changed how we initialize the Rclone mount, so no Rclone CLI is needed (it's now bundled in the app). This is because we had to include a couple of improvements which aren't present in the official Rclone release (mostly encryption: https://github.com/rclone/rclone/issues/7192), so we had to provide our own version of Rclone. We will try on our end to fix this issue with fuse-t not being detected properly; if this proves challenging, we'll have to restore the previous CLI mode on macOS, but that would mean no V2 cipher support (and therefore not compatible with where we're headed). Tough calls to make. We'll prioritise this issue and see what we can do. In the meantime you can fall back to the 1.10.4 release, since this change was introduced in 1.10.5. (edited)
👍 1
macOS: 14.7.2
macFUSE installed: NO (that's what I used previously, same issue)
fuse-t installed: YES (homebrew)
fuse-t-sshfs installed: YES (homebrew)
s3drive: 1.11.0
rclone (was already installed): rclone v1.68.2
- os/version: darwin 14.7.2 (64 bit)
- os/kernel: 23.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.23.3
- go/linking: dynamic
- go/tags: cmount
(edited)
10:59 AM
how can I download previous versions @Tom?
Chris (L3)
macOS: 14.7.2
macFUSE installed: NO (that's what I used previously, same issue)
fuse-t installed: YES (homebrew)
fuse-t-sshfs installed: YES (homebrew)
s3drive: 1.11.0
rclone (was already installed): rclone v1.68.2
- os/version: darwin 14.7.2 (64 bit)
- os/kernel: 23.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.23.3
- go/linking: dynamic
- go/tags: cmount
(edited)
Please visit our releases page: https://github.com/s3drive/macos-app/releases It seems we've been dealing with this issue previously: https://github.com/rclone/rclone/issues/7508 We aim to fix that soon, will get back to you promptly once we know the solution.
i've been bashing my head into the keyboard for the past 2 days trying to make any app (cyberduck, vanilla rclone) work with backblaze hard deletes. even with the --b2-hard-delete flag, hidden files still appear on the backblaze web dashboard. the only way to actually delete the files is either using their dashboard, or using s3drive. i assume you are sending native B2 API requests to backblaze instead of using the standard s3 api to make this work? is this something you've seen in the past? sorry to derail the convo but i'm talking to the creator so might as well ask
11:03 AM
note that revisions and backblaze encryption are turned off for my bucket
Chris (L3)
i've been bashing my head into the keyboard for the past 2 days trying to make any app (cyberduck, vanilla rclone) work with backblaze hard deletes. even with the --b2-hard-delete flag, hidden files still appear on the backblaze web dashboard. the only way to actually delete the files is either using their dashboard, or using s3drive. i assume you are sending native B2 API requests to backblaze instead of using the standard s3 api to make this work? is this something you've seen in the past? sorry to derail the convo but i'm talking to the creator so might as well ask
We're using the native S3 API, as most things can be achieved that way. If the bucket isn't versioned, we issue a simple DELETE request. If the bucket is versioned, the user has two options: soft delete (the same DELETE request, but Backblaze will version the old file and create a delete marker) and hard delete, where we also delete the versions, so the file is gone completely.
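The soft vs. hard delete semantics described above can be sketched as a toy model (pure Python; the `VersionedBucket` class and its method names are illustrative only, not S3Drive's or Backblaze's actual API):

```python
import itertools

class VersionedBucket:
    """Toy model of S3-style versioned object storage."""

    def __init__(self):
        self.versions = {}             # key -> list of (version_id, is_delete_marker)
        self._ids = itertools.count(1)

    def put(self, key):
        # Every upload to a versioned bucket adds a new version.
        self.versions.setdefault(key, []).append((next(self._ids), False))

    def soft_delete(self, key):
        # A plain DELETE request: old versions remain, and a delete
        # marker becomes the latest version, hiding the object.
        self.versions.setdefault(key, []).append((next(self._ids), True))

    def hard_delete(self, key):
        # Delete every version explicitly, so the file is gone completely.
        self.versions.pop(key, None)

    def visible(self, key):
        # A key is listed only if its latest version is not a delete marker.
        history = self.versions.get(key, [])
        return bool(history) and not history[-1][1]

bucket = VersionedBucket()
bucket.put("photo.jpg")
bucket.soft_delete("photo.jpg")
print(bucket.visible("photo.jpg"))        # False: hidden, but data remains
print(len(bucket.versions["photo.jpg"]))  # 2: original + delete marker
bucket.hard_delete("photo.jpg")
print("photo.jpg" in bucket.versions)     # False: all versions removed
```

This mirrors why a "deleted" file can still show up in the Backblaze dashboard after a soft delete: the old versions are still there, only hidden behind the marker.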
update: version 1.10.4 works. it even does the hard deletes correctly when the bucket is mounted, not just using the material webapp
👍 1
Tom
We're using the native S3 API, as most things can be achieved that way. If the bucket isn't versioned, we issue a simple DELETE request. If the bucket is versioned, the user has two options: soft delete (the same DELETE request, but Backblaze will version the old file and create a delete marker) and hard delete, where we also delete the versions, so the file is gone completely.
it's a mystery to me then why rclone does not do the same even when explicitly told to via the flag
--b2-hard-delete shouldn't really be required if you don't have revisions enabled.
Chris (L3)
it's a mystery to me then why rclone does not do the same even when explicitly told to via the flag
What's your Rclone delete command?
i'm not deleting using rclone command, i'm just mounting the bucket with rclone mount and deleting from finder (edited)
11:07 AM
which is what i just did with s3drive too, and it worked
11:08 AM
will try without hard-delete and maybe using vanilla s3 instead of the backblaze integration in rclone
11:08 AM
something somewhere is smelly
11:09 AM
also, starting to bundle your own version of rclone and doing mods to it is a slippery slope. you'll start liking it and stray away from the open source/verified software mission. just my 2c
Chris (L3)
also, starting to bundle your own version of rclone and doing mods to it is a slippery slope. you'll start liking it and stray away from the open source/verified software mission. just my 2c
Bundling our own version is what we need to do anyway to enable Rclone on mobile, so as long as we stay transparent about what we bundle, I think this shouldn't be an issue. Diverging from Rclone is certainly not something we aim to do. It's a temporary measure, as limitations of the Rclone cipher were holding us back from implementing secure file sharing (and folder sharing). We collaborate with the Rclone maintainer (and they're mostly happy about it - https://github.com/rclone/rclone/pull/8105) and these changes will remain 100% open-source; we hope they will be merged eventually. They just need more time.
Chris (L3)
will try without hard-delete and maybe using vanilla s3 instead of the backblaze integration in rclone
Please let us know if it resolves your issue. We don't really do anything special with the mount configuration itself. Not sure why the B2 API seems to behave differently. In fact we used it in the past with our early version in 2022 (called PhotoSync - for unification purposes we later switched to S3). We didn't spot any issues with DELETE... but that was a while ago. (edited)
i think the issue is not with the delete actually, it's with the rclone upload. so what i'm doing is simply mounting a bucket and then uploading a file using finder
11:20 AM
but when i go to the dashboard, i see the (2) which means it has 2 versions, but i only uploaded it once obviously
11:20 AM
11:20 AM
now after i delete it in finder, it appears once in backblaze, but doesn't appear in the mount
11:20 AM
11:20 AM
only after unmounting and re-mounting does the first version appear, which I then delete, and it's deleted from backblaze too
Can you click on (2) and see the version? Perhaps it's a "delete marker" only?
could be a finder issue but again, s3drive works perfectly
Tom
Can you click on (2) and see the version? Perhaps it's a "delete marker" only?
it's not a delete marker, as nothing was deleted
11:21 AM
i clean the bucket on backblaze, mount it with rclone, upload the file, and it appears twice on backblaze
11:21 AM
versioning disabled on backblaze
11:21 AM
11:23 AM
i could send you my key for this bucket if you want to spend time on this, created buckets from scratch and this issue still persists no matter what
11:23 AM
files get uploaded twice, so the deletion only deletes one
I understand, but it doesn't make much sense. Are we still talking about B2 API or is it S3 API now?
i am using the backblaze integration in rclone, didn't switch to plain s3 yet
I see. If you face a similar issue with the S3 API (that is, S3Drive works fine, but another integration doesn't for some reason) I would be happy to look into it (in that case please do send credentials over DM or another channel). I don't have much time to investigate the B2 API, as this isn't really something we're keen on using/researching. (edited)
sure, will try with s3 next. for now i've created a new remote to the same bucket without hard-delete and files get written 4 times now..
11:28 AM
Chris (L3)
sure, will try with s3 next. for now i've created a new remote to the same bucket without hard-delete and files get written 4 times now..
Two strange issues I see here: a) You've said you have versioning disabled, so Backblaze shouldn't create more than 1 version regardless of what's being uploaded. b) Multiple versions uploaded may be an indication of Rclone's post-upload verification failing (perhaps there is some issue with a B2-uploaded file not being immediately available - just a blind guess), causing it to re-upload the file until it somehow succeeds. You could try the Rclone CLI, e.g. rclone copy, and see if it works. You can consider adding -vvv to get more info. (edited)
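That blind guess (b) can be illustrated with a toy model (pure Python; all class and function names here are hypothetical, and this is not Rclone's actual verification logic):

```python
class EventuallyConsistentStore:
    """Toy model: uploads land immediately, but listings lag behind."""

    def __init__(self):
        self.objects = []   # every upload appends a new version of the key
        self.listed = set() # what a listing currently shows

    def upload(self, key):
        self.objects.append(key)

    def refresh_listing(self):
        self.listed = set(self.objects)

    def visible(self, key):
        return key in self.listed

def upload_with_verification(store, key, max_attempts=3):
    # Re-upload until a post-upload check sees the file, as a client
    # with strict verification might do. Against a lagging listing,
    # the first check fails and the retry creates a duplicate version.
    for _ in range(max_attempts):
        store.upload(key)
        if store.visible(key):      # verification against a stale listing
            return
        store.refresh_listing()     # listing catches up only after the check

store = EventuallyConsistentStore()
upload_with_verification(store, "file.txt")
print(store.objects.count("file.txt"))  # 2: first attempt "failed" verification
```

If verification reads from a stale listing, each "failed" check triggers another upload, which on a versioned bucket would show up as extra versions of the same file.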
yup, something is definitely broken somewhere and i think it's on my end (maybe finder?) but dunno what to check
11:33 AM
11:33 AM
created a plain s3 remote with all the default options, only key and application ID, and this is what gets uploaded
11:34 AM
console logs for the mount command, as the files get uploaded
11:35 AM
those errors are for the .DS_Store file i reckon, but i don't have anything more to go off of
You can run either rclone config file or rclone config dump to get the contents of the Rclone config; you can then compare the Rclone config entry created by S3Drive against yours. You could also try mounting the S3Drive-created entry and see if this issue also appears. (edited)
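For comparison, a hypothetical sketch of what two such config entries might look like (remote names, endpoint, and placeholder credentials are all made up; get the real ones from your own config):

```ini
; Hypothetical rclone.conf entries. Compare the S3Drive-generated
; remote against the hand-made one; differences in type, provider,
; endpoint, or flags are the first place to look.
[b2-manual]
type = b2
account = <keyID>
key = <applicationKey>
hard_delete = true

[s3drive-generated]
type = s3
provider = Other
access_key_id = <keyID>
secret_access_key = <applicationKey>
endpoint = <your-b2-s3-endpoint>
```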
Avatar
working with s3 and rclone forces you to become zen i think
12:53 PM
have a bug report for you which i'm re-creating. i completely uninstalled rclone from my system since i'm not using it. only the S3Drive app, which behaves differently depending on the version. the weird part is that the only time it behaves correctly (actually deleting the files) is when the wrong option is enabled (versioning, on a bucket with versioning disabled at the provider level, backblaze)
12:54 PM
take a look at these 2 videos and i'll let you be the judge:
12:54 PM
V1.11
12:55 PM
V1.10
12:55 PM
lmk if i can provide more info for you @Tom
12:58 PM
for now i'm gonna try a different app that references a japanese monster, hopefully it doesn't use rclone otherwise i'm writing my own wrapper because this is infuriating (s3drive is the one that worked best thus far, but still shows non-deterministic behavior which I really cannot afford on these files). lmk if this is fixable i'd be a happy lifetime customer of yours. (edited)
1:00 PM
bucket settings
so in conclusion, the latest version works correctly (if versioning is enabled, despite being disabled on the bucket), but i'm not able to use it with mounts due to the fuse-t error at the start of this thread
Hi @Chris (L3), we've found a fix for this cgofuse error and included it in the newest 1.11.1 release: https://github.com/s3drive/macos-app/releases/tag/1.11.1 We will be providing a changelog update later today, but I can already say that this version also contains the much-awaited folder sharing: https://s3drive.canny.io/feature-requests/p/folder-sharing Can you please update and let me know if you can start the mount? Also thanks for your initial report @vlad! (edited)
@Tom i can confirm the fuse-t fix works, however the deletion of files via mounted drives is still borked. the only way that files actually get deleted the same way they would be via the backblaze dashboard is if versions are turned on (in the s3drive app) and i select Hard Delete in the s3drive app itself. versions are always turned off on backblaze.
app & versions ON - ✅
app & versions OFF - ❌
mounted & versions ON - ❌
mounted & versions OFF - ❌
(edited)
Thanks for letting me know. To be honest I haven't looked at the videos just yet, so I don't yet fully understand the problem. I will get back to you later.
💪 1
sent you a friend request, feel free to DM me for any additional info
Chris (L3)
@Tom i can confirm the fuse-t fix works, however the deletion of files via mounted drives is still borked. the only way that files actually get deleted the same way they would be via the backblaze dashboard is if versions are turned on (in the s3drive app) and i select Hard Delete in the s3drive app itself. versions are always turned off on backblaze.
app & versions ON - ✅
app & versions OFF - ❌
mounted & versions ON - ❌
mounted & versions OFF - ❌
(edited)
Hi @Chris (L3), It seems that Backblaze buckets always behave as if versioning were enabled; however, if you select "Keep only the last version" it sets a lifecycle policy that cleans up the deleted file/version after 1 day. This differs from how a non-versioned bucket behaves on other S3 providers, where on upload/delete no new version gets created and the existing one gets immediately overwritten/deleted. More on that here: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
Keep only the last version of the file This rule keeps only the most current version of a file. The previous version of the file is "hidden" for one day and then deleted.
[ { "daysFromHidingToDeleting": 1, "daysFromUploadingToHiding": null, "fileNamePrefix": "" } ]

Regarding your test cases:
app & versions ON - ✅
Happy that this works!
app & versions OFF - ❌
It seems there is a UI issue on the S3Drive side. Once versioning is disabled in the Profile settings, we should no longer show hard delete, just delete. In principle this setting should always match the bucket configuration, which for Backblaze means it should always be ON, as every Backblaze bucket is versioned by default.
mounted & versions ON - ❌ mounted & versions OFF - ❌
We're going to investigate whether Rclone supports version delete. It seems that, similarly to https://rclone.org/b2/#b2-hard-delete, there is the https://tip.rclone.org/s3/#s3-versions option; however, it doesn't seem it would affect how the mount deletes files. The quick workarounds would be to either: a) run https://rclone.org/commands/rclone_cleanup/ or b) wait for the lifecycle policy to kick in and delete non-latest versions (which should happen after a day). We have an interest in making this work, so if Rclone doesn't support it, we'll plan to contribute to the project; however, we can't promise anything at this stage.
(edited)
Tom
Hi @Chris (L3), we've found a fix for this cgofuse error and included it in the newest 1.11.1 release: https://github.com/s3drive/macos-app/releases/tag/1.11.1 We will be providing a changelog update later today, but I can already say that this version also contains the much-awaited folder sharing: https://s3drive.canny.io/feature-requests/p/folder-sharing Can you please update and let me know if you can start the mount? Also thanks for your initial report @vlad! (edited)
Thanks for fixing this so quickly - glad it wasn't some weird system config issue
Chris (L3)
@Tom i can confirm the fuse-t fix works, however the deletion of files via mounted drives is still borked. the only way that files actually get deleted the same way they would be via the backblaze dashboard is if versions are turned on (in the s3drive app) and i select Hard Delete in the s3drive app itself. versions are always turned off on backblaze.
app & versions ON - ✅
app & versions OFF - ❌
mounted & versions ON - ❌
mounted & versions OFF - ❌
(edited)
Hi @Chris (L3), Just to follow up on my previous comment. The new S3Drive release will display two different delete pop-ups depending on the versioning configuration in the Profile Settings (we perform auto-detection, however it's up to the user to decide on the behavior). In the Profile settings there is a respective warning that if versioning is disabled on the S3Drive end but enabled on the bucket side, some actions might not work properly (e.g. move). In the next release we'll explicitly mention that delete is also affected. Regarding mount behavior, it is yet to be investigated whether we can configure it in a way that old versions get deleted immediately from a versioned bucket.
such a quick update, ty! will be keeping an eye out for changes to mount behavior in your releases as well
Exported 69 message(s)
Timezone: UTC+0