• Re: [gentoo-user] Re: Package compile failures with "internal compiler error"

    From corbin bird@21:1/5 to Dale on Wed Sep 4 06:20:01 2024
    On 9/3/24 19:39, Dale wrote:
    Grant Edwards wrote:
    On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:

    I was trying to re-emerge some packages.  The ones I was working on
    failed with "internal compiler error: Segmentation fault" or similar
    being the common reason for failing.
    In my experience, that usually means failing RAM. I'd try running
    memtest86 for a day or two.

    --
    Grant
    I've seen that before too.  I'm hoping not.  I may shut down my rig, remove and reinstall the memory and then test it for a bit.  It may be a bad connection.  It has worked well for the past couple of months tho.  Still, it could either be a bad connection or memory that's just going bad.

    Dang those memory sticks ain't cheap.  o_~

    Thanks.  See if anyone else has any other ideas.

    Dale

    :-)  :-)

    Please refresh my memory: what brand of CPU (Intel or AMD) is in your new rig?

    ----

    If AMD, binutils can build -broken- without failing the compile process.

    gcc then starts segfaulting constantly.

    Workaround:

    Set up a package.env for gcc and binutils.

        gcc.conf contents:

    CFLAGS="-march=generic -O2 -pipe"

    CXXFLAGS="-march=generic -O2 -pipe"


    Use the old version of gcc to rebuild both binutils and gcc (the new version).

    Leave this fix in place to prevent this error from recurring.
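    (Note: on x86, gcc accepts "generic" only for -mtune; there is no -march=generic, so -mtune=generic, or a plain -march=x86-64, is probably what is wanted above.)  A minimal, untested sketch of the package.env setup, assuming the usual Portage locations and package names:

        # /etc/portage/env/gcc.conf  (file name is arbitrary)
        CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"
        CXXFLAGS="${CFLAGS}"

        # /etc/portage/package.env  (apply the override to the toolchain)
        sys-devel/gcc       gcc.conf
        sys-devel/binutils  gcc.conf

        # then, with the old gcc still selected, rebuild both:
        emerge --oneshot sys-devel/binutils sys-devel/gcc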


    Hope this helps,

    Corbin

  • From Frank Steinmetzger@21:1/5 to All on Wed Sep 4 13:10:01 2024
    Am Wed, Sep 04, 2024 at 05:48:29AM -0500 schrieb Dale:

    I wonder how much fun getting this memory replaced is going to be.  o_O 

    I once had a bad stick of Crucial Ballistix DDR3.  I think it also started with GCC segfaults.  So I took a picture of the failing memtest, e-mailed that to Crucial and they sent me instructions on what to do.

    I keep the packaging of all my tech stuff, so I put the sticks into their blister pack (I bought it as a kit, so I had to send in both sticks), put in a paper note saying which one was faulty and sent them off to Crucial in Ireland.  After two weeks or so I got a new kit in the mail.  Thankfully by that time I had two kits for the maximum of 4 × 8 GiB, so I was able to continue using my PC.

  • From Peter Humphrey@21:1/5 to All on Wed Sep 4 18:00:01 2024
    On Wednesday 4 September 2024 12:21:19 BST Dale wrote:

    I wasn't planning to go to 128GBs yet but guess I am now.

    I considered doubling up to 128GB a few months ago, but the technical help people at Armari (the workstation builder) told me that I'd need to jump through a few hoops. Not only would the chips need to have been selected for that purpose, but I'd have to do a fair amount of BIOS tuning while running performance tests.

    I decided not to push my luck.

    --
    Regards,
    Peter.

  • From Frank Steinmetzger@21:1/5 to All on Wed Sep 4 23:10:01 2024
    Am Wed, Sep 04, 2024 at 07:09:43PM -0000 schrieb Grant Edwards:

    On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:

    I ordered another set of memory sticks. I figure I will have to send
    them both back which means no memory at all. I wasn't planning to go to 128GBs yet but guess I am now. [...]

    Good luck.

    […]
    I plugged them in alongside the recently purchased pair. Wouldn't
    work. Either pair of SIMMs worked fine by themselves, but the only way
    I could get both pairs to work together was to drop the clock speed
    down to about a third the speed they were supposed to support.

    Indeed, that was my first thought when Dale mentioned getting another pair. I don’t know if it’s true for all Ryzen chips, but if you use four sticks, they may not work at the maximum speed advertised by AMD (not counting overclocking). If you keep the settings on Auto you shouldn’t get problems, but the RAM may run slower then. OTOH, since you don’t do hard-core gaming or scientific number-crunching, it is unlikely you will notice a difference in your every-day computing.

  • From Michael@21:1/5 to All on Wed Sep 4 23:38:01 2024
    On Wednesday 4 September 2024 23:07:17 BST Grant Edwards wrote:
    On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
    At one point, I looked for a set of four sticks of the memory. I
    couldn't find any. They only come in sets of two. I read somewhere
    that the mobo expects each pair to be matched.

    Yep, that's definitely how it was supposed to work. I fully expected my two (identically spec'ed) sets of two to work. All the documentation I could find said it should. It just didn't. :/

    --
    Grant

    Often you have to relax the latency and/or increase voltage when you add more RAM modules. It is a disappointment when faster memory has to be slowed down because those extra two sticks you bought on eBay at a good price are of a slightly lower spec.

    Some MoBos are more tolerant than others. I have had systems which failed to work when the additional RAM modules were not part of a matching kit. I've had others which would work no matter what you threw at them. High-performance MoBos with highly strung specs tend to require lowering frequency/increasing latency when you add more RAM.

    Regarding Dale's question, which has already been answered - yes, anything the bad memory has touched is suspect of corruption. Without ECC RAM a dodgy module can cause a lot of damage before it is discovered. This is why I *always* run memtest86+ overnight whenever I get a new system or add new RAM. I've only had one failure over the years, but better safe than sorry. ;-)

  • From Michael@21:1/5 to All on Thu Sep 5 09:05:13 2024
    On Thursday 5 September 2024 01:11:13 BST Dale wrote:

    When I built this rig, I first booted the Gentoo Live boot image and just played around a bit.  Mostly to let the CPU grease settle in a bit.  Then I ran memtest through a whole test until it said it passed.  Only then did I start working on the install.  The rig had run without issue until I noticed gkrellm temps were stuck.  They weren't updating as temps changed.  So, I closed gkrellm but then it wouldn't open again.  Ran it in a console and saw the error about a missing module or something.  Then I tried to figure out that problem, which led to seg fault errors.  Well, that led to the thread and the discovery of a bad memory stick.  I check gkrellm often so it was most likely less than a day.  Could have been only hours.  Knowing I check gkrellm often, it was likely only a matter of a couple hours or so.  The only reason it might have gone longer is that the CPU was mostly idle.  I watch more often when the CPU is busy, updates etc.

    Ah! It seems it died while in active service. :-)

    There's no way to protect against this kind of failure in real time, short of running a server-spec board with ECC RAM. An expensive proposition for a home PC.

  • From Michael@21:1/5 to All on Thu Sep 5 09:42:55 2024
    On Thursday 5 September 2024 09:36:36 BST Dale wrote:

    I've run fsck before mounting on every file system so far. I ran it on the OS file systems while booted from the Live image. The others I just did before mounting. I realize this doesn't mean the files themselves are OK, but at least the file system under them is OK.

    This could put your mind mostly at rest; at least the OS structure is OK and the error was not present for too long.


    I'm not sure how
    to know if any damage was done between when the memory stick failed and
    when I started the repair process. I could find the ones I copied from
    place to place and check them but other than watching every single
    video, I'm not sure how to know if one is bad or not. So far,
    thumbnails work. o_O

    If you have a copy of these files on another machine, you can run rsync with --checksum. This will only (re)copy the file over if the checksum is different.
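    For example, something along these lines (paths made up; with --dry-run nothing is actually copied, and -i just itemizes what differs):

        rsync --recursive --checksum --itemize-changes --dry-run /mnt/backup/Videos/ /home/dale/Videos/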


  • From Frank Steinmetzger@21:1/5 to All on Thu Sep 5 11:10:01 2024
    Am Wed, Sep 04, 2024 at 11:38:01PM +0100 schrieb Michael:

    Some MoBos are more tolerant than others.

    Regarding Dale's question, which has already been answered - yes, anything the
    bad memory has touched is suspect of corruption. Without ECC RAM a dodgy module can cause a lot of damage before it is discovered.

    Actually I was wondering: DDR5 has built-in ECC. But that’s not the same as the
    server-grade stuff, because it all happens inside the module with no communication to the CPU or the OS. So what is the point of it if it still causes errors like in Dale’s case?

    Maybe that it only catches 1-bit errors, but Dale has more broken bits?

  • From Frank Steinmetzger@21:1/5 to All on Thu Sep 5 12:10:01 2024
    Am Thu, Sep 05, 2024 at 10:36:19AM +0100 schrieb Michael:

    Maybe that it only catches 1-bit errors, but Dale has more broken bits?

    Or it could be Dale's kit is DDR4?

    You may be right. We talked about AM5 at great length during the concept phase, and then I think I actually asked him about it because in one mail he mentioned having bought an AM4 CPU (5000 series). :D

  • From Michael@21:1/5 to All on Thu Sep 5 10:36:19 2024
    On Thursday 5 September 2024 10:08:08 BST Frank Steinmetzger wrote:
    Am Wed, Sep 04, 2024 at 11:38:01PM +0100 schrieb Michael:
    Some MoBos are more tolerant than others.

    Regarding Dale's question, which has already been answered - yes, anything the bad memory has touched is suspect of corruption. Without ECC RAM a dodgy module can cause a lot of damage before it is discovered.

    Actually I was wondering: DDR5 has built-in ECC. But that’s not the same as the server-grade stuff, because it all happens inside the module with no communication to the CPU or the OS. So what is the point of it if it still causes errors like in Dale’s case?

    Maybe that it only catches 1-bit errors, but Dale has more broken bits?

    Or it could be Dale's kit is DDR4?

    Either way, as you say DDR5 is manufactured with On-Die ECC capable of correcting a single-bit error, necessary because DDR5 chip density has increased to the point where single-bit flip errors become unavoidable. It also allows manufacturers to ship chips which would otherwise fail the JEDEC specification. On-Die ECC will only correct bit flips *within* the memory chip.

    Conventional side-band ECC, with one additional chip dedicated to ECC correction, is capable of correcting errors while data is being moved by the memory controller between the memory module and the CPU/GPU. It performs much heavier lifting, and this is why ECC memory is slower.

  • From Michael@21:1/5 to All on Thu Sep 5 12:08:39 2024
    On Thursday 5 September 2024 11:53:16 BST Dale wrote:
    Michael wrote:
    On Thursday 5 September 2024 09:36:36 BST Dale wrote:
    I've run fsck before mounting on every file system so far. I ran it on the OS file systems while booted from the Live image. The others I just did before mounting. I realize this doesn't mean the files themselves are OK, but at least the file system under them is OK.

    This could put your mind mostly at rest; at least the OS structure is OK and the error was not present for too long.

    That does help.

    I'm not sure how
    to know if any damage was done between when the memory stick failed and
    when I started the repair process. I could find the ones I copied from
    place to place and check them but other than watching every single
    video, I'm not sure how to know if one is bad or not. So far,
    thumbnails work. o_O

    If you have a copy of these files on another machine, you can run rsync with --checksum. This will only (re)copy the file over if the checksum
    is different.

    I made my backups last weekend. I'm sure it was working fine then.
    After all, it would have failed to compile packages if it was bad. I'm thinking about checking against that copy like you mentioned but I have
    other files I've added since then. I figure if I remove the delete
    option, that will solve that. It can't compare but it can leave them be.

    Use rsync with:

    --checksum

    and

    --dry-run

    Then it will compare files in situ without doing anything else.

    If you have a directory or only a few files it is easy and quick to run.

    You can also run find to identify which files were changed during the period you were running with the dodgy RAM. Thankfully you didn't run for too long before you spotted it.
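    For example, something like this (the date is only a placeholder for when the RAM started playing up, and the paths are made up):

        find /home/dale -type f -newermt "2024-09-02" -printf '%T+ %p\n' | sort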

  • From Frank Steinmetzger@21:1/5 to All on Thu Sep 5 21:00:01 2024
    Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:

    Use rsync with:

    --checksum

    and

    --dry-run

    I suggest calculating a checksum file from your active files. Then you don’t have to read the files over and over for each backup iteration you compare
    it against.

    You can also run find to identify which files were changed during the period
    you were running with the dodgy RAM. Thankfully you didn't run for too long
    before you spotted it.

    This. No need to check everything you ever stored. Just the most recent
    stuff, or at maximum, since you got the new PC.

    I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
    in another.  Some files are small, some are several GBs or so.  Thing
    is, backups go from a single parent directory if you will.  Plus, I'd
    want to compare them all anyway.  Just to be sure.

    I acquired the habit of writing checksum files in all my media directories such as music albums, TV series and such, whenever I create one such directory. That way even years later I can still check whether the files are intact. I actually experienced broken music files from time to time (mostly on the MicroSD card in my tablet). So with checksum files, I can verify which file is bad and which (on another machine) is still good.
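    Creating such a checksum file when I add a new directory boils down to something like this (run inside the new directory; the Checksums.md5 name is just my convention):

        find . -maxdepth 1 -type f ! -name Checksums.md5 -exec md5sum {} + > Checksums.md5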

  • From Michael@21:1/5 to All on Thu Sep 5 23:06:26 2024
    On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
    Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
    Use rsync with:
    --checksum

    and

    --dry-run

    I suggest calculating a checksum file from your active files. Then you don’t
    have to read the files over and over for each backup iteration you compare
    it against.

    You can also run find to identify which files were changed during the period you were running with the dodgy RAM. Thankfully you didn't run for too long before you spotted it.

    This. No need to check everything you ever stored. Just the most recent stuff, or at maximum, since you got the new PC.

    I have just shy of 45,000 files in 780 directories or so. Almost 6,000
    in another. Some files are small, some are several GBs or so. Thing
    is, backups go from a single parent directory if you will. Plus, I'd
    want to compare them all anyway. Just to be sure.

    I acquired the habit of writing checksum files in all my media directories such as music albums, TV series and such, whenever I create one such directory. That way even years later I can still check whether the files are intact. I actually experienced broken music files from time to time (mostly on the MicroSD card in my tablet). So with checksum files, I can verify which file is bad and which (on another machine) is still good.

    There is also dm-verity for a more involved solution. I think for Dale something like this should work:

    find path-to-directory/ -type f | xargs md5sum > digest.log

    then to compare with a backup of the same directory you could run:

    md5sum -c digest.log | grep FAILED

    Someone more knowledgeable should be able to knock out some clever python script to do the same at speed.
  • From Michael@21:1/5 to All on Fri Sep 6 13:21:20 2024
    On Friday 6 September 2024 01:43:18 BST Dale wrote:
    Michael wrote:
    On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
    Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
    Use rsync with:
    --checksum

    and

    --dry-run

    I suggest calculating a checksum file from your active files. Then you don’t have to read the files over and over for each backup iteration you compare it against.

    You can also run find to identify which files were changed during the period you were running with the dodgy RAM. Thankfully you didn't run for too long before you spotted it.

    This. No need to check everything you ever stored. Just the most recent
    stuff, or at maximum, since you got the new PC.

    I have just shy of 45,000 files in 780 directories or so. Almost 6,000 in another. Some files are small, some are several GBs or so. Thing is, backups go from a single parent directory if you will. Plus, I'd want to compare them all anyway. Just to be sure.

    I acquired the habit of writing checksum files in all my media directories such as music albums, TV series and such, whenever I create one such directory. That way even years later I can still check whether the files are intact. I actually experienced broken music files from time to time (mostly on the MicroSD card in my tablet). So with checksum files, I can verify which file is bad and which (on another machine) is still good.

    There is also dm-verity for a more involved solution. I think for Dale something like this should work:

    find path-to-directory/ -type f | xargs md5sum > digest.log

    then to compare with a backup of the same directory you could run:

    md5sum -c digest.log | grep FAILED

    Someone more knowledgeable should be able to knock out some clever python script to do the same at speed.

    I'll be honest here, on two points. I'd really like to be able to do this but I have no idea where or how to even start. My setup for series-type videos: in a parent directory, where I'd like a tool to start, there are about 600 directories. On a few occasions, there is another directory inside that one. That directory under the parent is the name of the series. Sometimes I have a sub directory that has temp files; new files I have yet to rename, am considering replacing in the main series directory, etc. I wouldn't mind having a file with a checksum for each video in the top directory, and even one in the sub directory. As an example:

    TV_Series/

    ├── 77 Sunset Strip (1958)
    │ └── torrent
    ├── Adam-12 (1968)
    ├── Airwolf (1984)


    I got part of the output of tree. The directory 'torrent' under 77 Sunset is usually temporary, but sometimes a directory is there for videos about the making of a video, its history or something. What I'd like is a program that would generate checksums for each file under, say, 77 Sunset, and it could skip or include the directory under it. It might be best if I could switch that on or off. Obviously, I may not want to do this for my whole system. I'd like to be able to target directories. I have another large directory, let's say not a series but sometimes with remakes, that I'd also like to do. It is kinda set up like the above: parent directory with a directory underneath and on occasion one more under that.

    As an example, let's assume you have the following fs tree:

    VIDEO
    ├──TV_Series/
    | ├── 77 Sunset Strip (1958)
    | │ └── torrent
    | ├── Adam-12 (1968)
    | ├── Airwolf (1984)
    |
    ├──Documentaries
    ├──Films
    ├──etc.

    You could run:

    $ find VIDEO -type f | xargs md5sum > digest.log

    The file digest.log will contain md5sum hashes of each of your files within the VIDEO directory and its subdirectories.

    To check if any of these files have changed, become corrupted, etc. you can run:

    $ md5sum -c digest.log | grep FAILED

    If you want to compare the contents of the same VIDEO directory on a backup, you can copy the same digest file with its hashes over to the backup top directory and run again:

    $ md5sum -c digest.log | grep FAILED

    Any files listed with "FAILED" next to them have changed since the backup was originally created. Any files with "FAILED open or read" have been deleted, or are inaccessible.

    You don't have to use md5sum, you can use sha1sum, sha256sum, etc. but md5sum will be quicker. The probability of ending up with a hash clash across two files must be very small.

    You can save the digest file with a date, PC name, top directory name next to it, to make it easy to identify when it was created and its origin. Especially useful if you move it across systems.
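    For example, and to be safe with spaces in file names like "77 Sunset Strip (1958)", something along these lines:

        find VIDEO -type f -print0 | xargs -0 md5sum > "digest-$(hostname)-$(date +%F).log"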


    One thing I worry about is not just memory problems or drive failure but also just some random error or even bit rot. Some of these files are rarely changed or even touched. I'd like a way to detect problems, and there may even be a software tool that does this with some setup; it reminds me of Kbackup, where you can select what to back up or leave out at a directory or even individual file level.

    While this could likely be done with a script of some kind, my scripting skills are minimal at best, so I suspect there is software out there somewhere that can do this. I have no idea what or where it could be tho. Given my lack of scripting skills, I'd be afraid I'd do something bad and it would delete files or something. O_O LOL

    The above two lines are just one way, albeit a rather manual way, to achieve this. Someone with coding skills should be able to write up a script to more or less automate this, if you can't find something ready-made on the interwebs. A rough sketch of the idea is below.
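    Untested, and the directory layout and the Checksums.md5 name are only assumptions based on your description, but something like this would write or verify one digest per series directory:

        #!/bin/sh
        # Rough sketch: write or verify one Checksums.md5 per subdirectory of a
        # parent directory (e.g. TV_Series). Usage: dirsums.sh create|check [parent]
        set -eu
        mode=${1:-}
        parent=${2:-TV_Series}

        [ "$mode" = create ] || [ "$mode" = check ] || {
            echo "usage: $0 create|check [parent]" >&2; exit 1; }

        for dir in "$parent"/*/; do
            if [ "$mode" = create ]; then
                # hash every file below this directory, except the digest itself
                ( cd "$dir" && find . -type f ! -name Checksums.md5 -print0 \
                    | xargs -0 md5sum > Checksums.md5 )
            else
                # --quiet prints only the files that fail verification
                ( cd "$dir" && md5sum --quiet -c Checksums.md5 ) \
                    || echo "problem in $dir"
            fi
        done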


    I've been watching videos again, the ones I was watching during the time the memory was bad. I've replaced three so far. I think I noticed this within a few hours. Then it took a little while for me to figure out the problem and shut down to run the memtest. I doubt many files were affected unless it does something we don't know about. I do plan to try rsync with --checksum and --dry-run when I get back up and running. Also, QB is finding a lot of its files are fine as well. It's still rechecking them. It's a lot of files.

    Right now, I suspect my backup copy is likely better than my main copy. Once I get the memory in and can really run some software, I'll run rsync with those compare options and see what it says. I just have to remember to reverse things: the backup is the source, not the destination. If this works, I may run that each time, to help detect problems. Maybe??

    This should work in rsync terms:

    rsync -v --checksum --delete --recursive --dry-run SOURCE/ DESTINATION

    It will output a list of files which have been deleted from the SOURCE and will need to be deleted at the DESTINATION directory.

    It will also provide a list of changed files at SOURCE which will be copied over to the destination.

    When you use --checksum the rsync command will take longer than when you don't, because it will be calculating a hash for each source and destination file to determine if it has changed, rather than relying on size and timestamp.

  • From Frank Steinmetzger@21:1/5 to All on Fri Sep 6 23:50:01 2024
    Am Fri, Sep 06, 2024 at 01:21:20PM +0100 schrieb Michael:

    find path-to-directory/ -type f | xargs md5sum > digest.log

    then to compare with a backup of the same directory you could run:

    md5sum -c digest.log | grep FAILED

    I had a quick look at the manpage: with md5sum --quiet you can omit the grep part.
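    I.e. the check then becomes simply:

        md5sum --quiet -c digest.log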

    Someone more knowledgeable should be able to knock out some clever python script to do the same at speed.

    And that is exactly what I have written for myself over the last 11 years. I call it dh (short for dirhash). As I described in the previous mail, I use it to create one hash file per directory. But it also supports one hash file per data file and – a rather new feature – one hash file at the root of a tree. Have a look here: https://github.com/felf/dh
    Clone the repo or simply download the one file and put it into your path.

    I'll be honest here, on two points. I'd really like to be able to do
    this but I have no idea where to or how to even start. My setup for
    series type videos. In a parent directory, where I'd like a tool to
    start, is about 600 directories. On a few occasions, there is another directory inside that one. That directory under the parent is the name
    of the series.

    By default, my tool ignores directories which have subdirectories. It only hashes files in dirs that have no subdirs (leaves in the tree). But this can be overridden with the -f option.

    My tool also has an option to skip a number of directories and to process
    only a certain number of directories.

    Sometimes I have a sub directory that has temp files;
    new files I have yet to rename, considering replacing in the main series directory etc. I wouldn't mind having a file with a checksum for each video in the top directory, and even one in the sub directory. As a example.

    TV_Series/

    ├── 77 Sunset Strip (1958)
    │ └── torrent
    ├── Adam-12 (1968)
    ├── Airwolf (1984)

    So with my tool you would do
    $ dh -f -F all TV_Series
    `-F all` causes a checksum file to be created for each data file.

    What
    I'd like, a program that would generate checksums for each file under
    say 77 Sunset and it could skip or include the directory under it.

    Unfortunately I don’t have a skip feature yet that skips specific directories. I could add a feature that looks for a marker file and then
    skips that directory (and its subdirs).

    Might be best if I could switch it on or off. Obviously, I may not want
    to do this for my whole system. I'd like to be able to target
    directories. I have another large directory, lets say not a series but sometimes has remakes, that I'd also like to do. It is kinda set up
    like the above, parent directory with a directory underneath and on occasion one more under that.

    As an example, let's assume you have the following fs tree:

    VIDEO
    ├──TV_Series/
    | ├── 77 Sunset Strip (1958)
    | │ └── torrent
    | ├── Adam-12 (1968)
    | ├── Airwolf (1984)
    |
    ├──Documentaries
    ├──Films
    ├──etc.

    You could run:

    $ find VIDEO -type f | xargs md5sum > digest.log

    The file digest.log will contain md5sum hashes of each of your files within the VIDEO directory and its subdirectories.

    To check if any of these files have changed, become corrupted, etc. you can run:

    $ md5sum -c digest.log | grep FAILED

    If you want to compare the contents of the same VIDEO directory on a back up,
    you can copy the same digest file with its hashes over to the backup top directory and run again:

    $ md5sum -c digest.log | grep FAILED

    My tool does this as well. ;-)
    In check mode, it recurses, looks for hash files and, if it finds them, checks all hashes. There is also an option to only check paths and filenames, not hashes. This makes it possible to quickly find files that have been renamed or deleted since the hash file was created.

    One thing I worry about is not just memory problems, drive failure but
    also just some random error or even bit rot. Some of these files are rarely changed or even touched. I'd like a way to detect problems and there may even be a software tool that does this with some setup,
    reminds me of Kbackup where you can select what to backup or leave out
    on a directory or even individual file level.

    Well that could be covered with ZFS, especially with a redundant pool so it can repair itself. Otherwise it will only identify the bitrot, but not be
    able to fix it.

    Right now, I suspect my backup copy is likely better than my main copy.

    The problem is: if they differ, how do you know which one is good, apart from watching one from start to finish? You could use vbindiff to first find the part that changed. That will at least tell you where the difference is, so you could seek to that area in the video.
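    Or, for a very quick first check, plain cmp reports the offset of the first differing byte (file names made up):

        cmp "Videos/Airwolf (1984)/S01E01.mkv" "/mnt/backup/Videos/Airwolf (1984)/S01E01.mkv"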

    This should work in rsync terms:

    rsync -v --checksum --delete --recursive --dry-run SOURCE/ DESTINATION

    It will output a list of files which have been deleted from the SOURCE and will need to be deleted at the DESTINATION directory.

    If you want to look at changed *and* deleted files in one run, it's better to use -i instead of -v.

  • From Michael@21:1/5 to All on Sat Sep 7 00:17:42 2024
    On Friday 6 September 2024 21:15:32 BST Dale wrote:

    Update. The new memory sticks I bought came in today. I ran memtest from the Gentoo Live boot media and it passed. Of course, the last pair passed when new too, so let's hope this one lasts longer. Much longer.

    Run each new stick on its own overnight. Sometimes errors do not show up until a few full cycles of tests have been run.

  • From Michael@21:1/5 to All on Sat Sep 7 10:37:04 2024
    On Friday 6 September 2024 22:41:33 BST Frank Steinmetzger wrote:
    Am Fri, Sep 06, 2024 at 01:21:20PM +0100 schrieb Michael:
    find path-to-directory/ -type f | xargs md5sum > digest.log

    then to compare with a backup of the same directory you could run:

    md5sum -c digest.log | grep FAILED

    I had a quick look at the manpage: with md5sum --quiet you can omit the grep part.

    Good catch. You can tell I didn't spend much effort to come up with this. ;-)


    Someone more knowledgeable should be able to knock out some clever python
    script to do the same at speed.

    And that is exactly what I have written for myself over the last 11 years. I call it dh (short for dirhash). As I described in the previous mail, I use
    it to create one hash files per directory. But it also supports one hash
    file per data file and – a rather new feature – one hash file at the root of a tree. Have a look here: https://github.com/felf/dh
    Clone the repo or simply download the one file and put it into your path.

    Nice! I've tested it briefly here. You've put quite some effort into this. Thank you Frank!

    Probably not your use case, but I wonder how it can be used to compare SOURCE to DESTINATION, where SOURCE is the original fs and DESTINATION is some backup, without having to manually copy over all the different directory/subdirectory Checksums.md5 files.

    I suppose rsync can be used for the comparison to a backup fs anyway, so your script would be duplicating a function unnecessarily.
