How does modern NFS compare to modern Samba these days?
It's been ages since I've had to tinker with either, but back when I was messing around with them, it seemed like Samba had "won", both due to the network effects (heh) of Windows clients and a vibes-based perception, from personal experience, that Samba was more reliable and less likely to get wedged or leave files in an indeterminate state.
These days I try to avoid both, out of a gut feeling that trying to model filesystem semantics over a network is inherently problematic and not something most applications expect. But not having touched these technologies in about half a decade means that gut feeling might be out of date.
My perception has always been the opposite. NFS was always solid, whereas Samba has always been hit or miss. Even on Windows, even on NT4, I felt their NFS implementation was more reliable than SMB. Something about NetBIOS name resolution, I dunno. Working in call centres doing IT support for businesses, I can assure you: still, to this day, no one knows how to successfully and reliably map a network drive, after all these years.
I've never hesitated to mount an NFS share from my fstab on Linux, whereas smbfs has always been finicky. I dunno. Could just be me.
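For reference, the kind of fstab entry I mean is about as simple as it gets. This is a sketch; the server name "nas" and the export path are made up:

```
# /etc/fstab — hypothetical NFS server "nas" exporting /export/media.
# _netdev delays mounting until the network is up; nofail keeps boot
# from hanging if the server happens to be unreachable.
nas:/export/media  /mnt/media  nfs4  defaults,_netdev,nofail  0  0
```

The smbfs/cifs equivalent needs credentials on top of this, which is part of why it always feels more finicky.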
Performance is closer than you might think: https://blog.ja-ke.tech/2019/08/27/nas-performance-sshfs-nfs-smb.html
Personally, I like sshfs because it is pretty much set it and forget it. I can be anywhere in the world and securely connect to it.
The only weird quirk is that systemd requires that the .mount file be named exactly the same as the mountpoint; e.g. /net/backup becomes net-backup.mount.
The only other caveat is that eza --git --long /net/backup hangs for a long time. I'm not really sure why... eza --git /net/backup works fast. eza --long /net/backup works fast. So that's weird. But other applications I've used don't have this problem. I can stream 16 720p videos at the same time in mpv from the other side of the planet without any hiccups, but eza --git --long makes it shudder.
I'm waiting for Tailscale Taildrive to hit GA so I can have any directory shared anywhere easily and safely.
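For the curious, a sketch of such a unit: the file name must be the systemd-escaped mountpoint (here net-backup.mount for /net/backup). The user, host, and paths below are all hypothetical:

```ini
# /etc/systemd/system/net-backup.mount — sshfs over systemd, sketch only
[Unit]
Description=sshfs mount of the backup host
After=network-online.target
Wants=network-online.target

[Mount]
# What= is the sshfs source; swap in your own user@host:path
What=me@backuphost:/srv/backup
Where=/net/backup
Type=fuse.sshfs
# reconnect papers over dropped connections; allow_other needs
# user_allow_other enabled in /etc/fuse.conf
Options=_netdev,reconnect,IdentityFile=/home/me/.ssh/id_ed25519,allow_other

[Install]
WantedBy=multi-user.target
```

systemd-escape -p --suffix=mount /net/backup will print the required unit name if you don't want to work it out by hand.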
Looks neat! I wonder how performance will compare. It will be a game changer for Windows!
SSHFS is definitely compatible with Tailscale. Just change the What=backup: above to your tailnet address.
I used NFS on my local LAN from my NAS due to years and years of administering NIS and NFS on a heterogeneous setup of Solaris, AIX, HP-UX, Dynix/ptx, DRS/nx and Linux clients. My world was forever filled with joy and happiness.
A few years ago though, I switched to Samba on my LAN because it was just easier. The morass of NFSv3, v4, access permissions and whatnot was becoming intolerably complex.
I'm slowly phasing out Samba in favour of WebDAV these days, as most of my important storage is now remote, on a DAV server.
Edit: Just as I wrote this, I realise it was likely the NIS that was holding back the tide of permissions issues from my old admin days. That came with its own set of problems though, oh boy did it. NIS maps, primary and secondary servers will haunt my death.
Still though, it unified the UID and GIDs over so many systems, home directories were mounted etc. It all worked.
Remove the NIS though, and the permissions go to pot faster than a rat up a rhododendron.
I don't personally find the user situation on NFS to be too bad. It's UID/GID-based as far as I remember, so as long as you're accessing from UID 1000 on both systems, everything maps just as you'd expect. Many apps use custom UID/GID values these days, so they're more likely to stay the same across devices. The only time I personally ran into issues was over a decade ago, when I migrated to a new user account on one desktop.
I can understand that at an enterprise level it might be more complicated, but there you probably have good systems in place to keep UIDs/GIDs consistent.
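The point about raw numeric IDs is easy to see for yourself. A quick sketch (stat -c is the GNU coreutils flag; on an NFS mount the same check shows you the UID the server actually stored, regardless of what username it maps to locally):

```shell
# Files are owned by numbers, not names — NFSv3-style mounts just ship
# these numbers across the wire, so they must line up on both machines.
f=$(mktemp)
test "$(stat -c '%u' "$f")" = "$(id -u)" && echo "UIDs match"
rm -f "$f"
```

If the numbers differ between client and server, you get the classic "I own this file but can't write to it" symptoms.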
I honestly can't remember the exact reason why I got into UID/GID hell with NFS on my personal servers; I reckon it was some time around the transition to v4 and the introduction of a commercial NAS into my home setup. Various computers running various OSes: I had HP-UX and Solaris servers in the garage, various kinds of Linux, and the odd Mac.
But yes, now that I think on it, having all the enterprise users served from a central directory server using NIS solved /those/ issues, but introduced enough of their own that I didn't want to go near it for personal use.
NFS on macOS is atrocious; it just hangs weirdly and stops working intermittently. There are some mount option tricks to make it a bit less shit, but it's still shit.
I've found that copyparty serving WebDAV, with Finder connected to that, is more performant in many cases...
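The mount option tricks in question are along these lines (a sketch; server name and paths are hypothetical). The big one is resvport, which makes macOS use a reserved source port, something many Linux NFS servers insist on by default:

```
# macOS NFS mount sketch — damage control, not a fix
sudo mount -t nfs -o resvport,rw nas:/export/media /Users/me/nfs
```

Without resvport you often get mysterious "Operation not permitted" errors against an otherwise working Linux export.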
I find remote file systems on macOS are generally poor. Finder is supposed to do WebDAV, but it's so slow for me that it's unusable. It's the only platform where I use the Nextcloud syncing app rather than just live-mounting my files over WebDAV.
And doubly unhappily, it's the platform where duplicating your remote files to local storage is the most expensive, due to the Apple tax.