1 point

Can someone tell a dummy like me if this impacts TrueNAS CORE?

1 point

There is a thread going on at https://www.truenas.com/community/threads/silent-corruption-with-openzfs-ongoing-discussion-and-testing.114390/

Some users are reporting that it does affect TrueNAS (though it depends on the use case).

1 point

TrueNAS-13.0-U6 (the current CORE version): zfs version reports 2.1.13, so it should be clean.

1 point

It’s hard to tell because I get this:

root@truenas[~]# zpool get version poolname
NAME      PROPERTY  VALUE    SOURCE
poolname  version   -        default
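
That "-" is expected, by the way: the pool "version" property only applies to legacy (pre-feature-flag) pools, so it doesn't reflect which OpenZFS release you're running. For that, query the software version directly, as the comment above does; roughly like this (the exact output format varies a bit by platform, and the build suffix is elided here):

root@truenas[~]# zfs version
zfs-2.1.13-...
zfs-kmod-2.1.13-...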

1 point

Ah heck, I just updated my NAS VM to FreeBSD 14.

Anyone running FreeBSD 14: make sure vfs.zfs.bclone_enabled is set to 0.
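
A minimal sketch of checking and flipping that knob (assuming a stock FreeBSD 14 install; the tunable is runtime-settable there, and /etc/sysctl.conf is one place to persist it across reboots):

sysctl vfs.zfs.bclone_enabled            # show the current value
sysctl vfs.zfs.bclone_enabled=0          # disable block cloning on the running system
echo 'vfs.zfs.bclone_enabled=0' >> /etc/sysctl.conf    # keep it disabled after reboot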

1 point

… FreeBSD 14, …

Not only 14 …

1 point

This is why you ALWAYS need INDEPENDENT backups. You can think all day long about detecting bitrot and how well you're protected against X drive failures, but then something comes from the side and messes up your data in a different way than you've foreseen.

1 point

Wait. Are you trying to say that RAID is not a backup?

1 point

something comes from the side and messes up your data in a different way than you’ve foreseen.

This happened to me years ago. I was naïvely thinking SnapRAID protected me against the likelihood of a drive failure; I wasn't prepared for two drives failing simultaneously when a power supply catastrophically failed (smoke, sparks) and fried them as it died.

It was an expensive lesson: I had to send one drive off for data recovery, and after I got it back I used SnapRAID to restore the remaining drive. Independent backups (and multiple parity drives) are the way.

1 point

Also, an independent way to verify files: I cfv everything before a big move and then again after, to check.
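
For anyone unfamiliar, that workflow looks roughly like this (a sketch assuming cfv is installed; flags may differ between versions, and before.md5 is just a placeholder name):

cfv -C -t md5 -f before.md5    # before the move: record checksums of the files here
cfv -f before.md5              # after the move: verify files against the recorded list

The same idea works with plain coreutils (sha256sum to record, sha256sum -c to verify) if cfv isn't available.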

1 point

The problem here is that those independent backups would also be corrupted. As I understand from the GitHub discussion, the bug causes ZFS to not recognize when a dnode is dirty and needs to be flushed, and it's somehow triggered when copying files with a new-ish optimization that has been implemented in the Linux and *BSD kernels. If you trigger the bug while copying a file, the original remains kosher but the new file has swaths of bad data. Any backup made after this point would contain both the (good) original and the (corrupted) copy.
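
For context, the kind of test people ran while chasing this looks roughly like the sketch below (loosely modeled on the reproducer scripts linked from the GitHub issue; file names are placeholders, and the real reproducers run many copies in parallel to hit the race):

dd if=/dev/urandom of=testfile bs=1M count=64
orig=$(sha256sum testfile | awk '{print $1}')
for i in $(seq 1 100); do
  cp testfile copy_$i    # the problematic path is the sparse-aware copy
  [ "$(sha256sum copy_$i | awk '{print $1}')" = "$orig" ] || echo "corrupt copy: copy_$i"
done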

1 point

The point is you'll still have the originals, which you might otherwise have removed in the meantime (for example, if you reorganized a huge collection by working on the reflinked copy and removed the original at the end, a natural cleanup workflow, few would think to check the results after a nearly-instant reflinked copy, let alone foresee that if there's any bitrot it'll come from THAT).

Sure, in this case snapshots would have worked just as well, but of course there are other cases in which they wouldn't have. Independent backups cover everything, well, assuming you have enough history, which is another discussion (I considered literally keeping it forever after removing some old important file by mistake, but it becomes too daunting, and too tempting to prune files that were removed 1, 2, 3 years ago).

1 point

modinfo zfs | grep version

To quickly get the version installed.

1 point

zfs --version also does the trick.

1 point

That did not work for me on Ubuntu, but did on my Debian/Proxmox distribution:

proxmox:

zfs-0.8.3-pve1

zfs-kmod-0.8.3-pve1

ubuntu:

version: 0.6.5.6-0ubuntu26

srcversion: 0968F94158D646E259D86B5

vermagic: 4.4.0-142-generic SMP mod_unload modversions retpoline

Looks like I'm using an ancient version and am OK?

2 points

I also use a version close to that, 0.something. I see absolutely no reason to upgrade; it just works. It's the version that already has the fast scrub.

1 point

It looks like the source of the bug has been identified and fixed.

https://github.com/openzfs/zfs/pull/15579/commits/679738cc408d575289af2e31cdb1db9e311f0adf

[2.2] dnode_is_dirty: check dnode and its data for dirtiness #15579
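
If you want to check whether an installed build already carries this fix, the simplest proxy is the reported version; as far as I can tell the patch shipped in the 2.2.2 and 2.1.14 point releases:

zfs --version    # anything >= zfs-2.2.2, or >= zfs-2.1.14 on the 2.1 branch, should include the dnode_is_dirty fix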
