• HDTune 2.55 OK for SSDs? (Was: Re: Follow up to "No Boot Device Found")

    From J. P. Gilliver@G6JPG@255soft.uk to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Sun Sep 21 09:23:54 2025
    From Newsgroup: alt.windows7.general

    On 2025/9/21 4:0:59, Paul wrote:

    []


    Purely for your amusement, you could try a bad block scan of the drive.
    This is pretty old software itself, having been released in 2008. The last column over will check whether the blocks have bad CRC. That version of HDTune, does not do writes, so it should not make the SSD health any worse.

    https://www.hdtune.com/files/hdtune_255.exe

    Does that mean _anything_ that that version of HDTune does is safe to do
    on an SSD, or only the last column (tab, "Error Scan")? (Doing that now
    on my drive - which CrystalDiskInfo says is "CT480BX500SSD1 : 480.1 GB",
    and has always said it is "Good, 97%". For interest, it took just under
    18 minutes to pass 300000 MB [all green so far]. Finished after 27:10,
457679 MB, all green.)
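A quick sanity check on those timings, using nothing but the figures quoted above (a throwaway Python sketch as a calculator):

```python
# Implied average read rate of the HDTune Error Scan, from the figures above.
mb_scanned = 457_679
elapsed_s = 27 * 60 + 10          # "Finished after 27:10"

print(f"{mb_scanned / elapsed_s:.0f} MB/s average")   # roughly 281 MB/s
```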
    There are a couple possibilities. The softwares might not know how to read the SMART table on that particular device. But just as easily, you could already

    Interesting: Boris's screenshots show: for HDTune's Info tab, the Failed attribute as "(EE) (unknown attribute)" (when I select that tab, I get a
    blank table - how do I get it to populate?), and for CrystalDiskInfo,
    the red one as "EE Media Wearout Indicator (Cycles Remaining)", between
    C7 and F1 - my CDI doesn't include EE.

    []


    The prices on this drive (870EVO), are not always that good. There was a time when Samsung was selling them at a more competitive rate, but that is
    not today. I think I have about five of those, all in good working shape.

    (Presumably the generally favourable PCWorld review had at least some
    effect on the price.)

    They're 600TBW per a 1TB drive. A 4TB drive in that family would be 2400TBW. Each cell can be written 600 times. For comparison, I have eight NS100 drives,
    which are not in the same class, but are great for doing OS installs and
    for testing. Some of them were struggling to perform, on the date of delivery. But that is what you expect when the manufacturer can swap NAND chips in mid production and use something else. It's up to you to
    pick something that isn't too sketchy. You can "do sketchy" if you can convince the operator of the computer to "do frequent backups". If I had personal data on the NS100 drives, I would be backing them up for sure.
    But they sure are cheap :-)

    https://www.pcworld.com/article/407542/best-ssds.html

    https://www.pcworld.com/article/393933/samsung-870-evo-sata-ssd-review-the-speed-you-need-at-sane-prices.html

Presumably an SSD sitting unpowered on the shelf isn't deteriorating
fast, other than the guarantee. I just wonder if I should be buying one
in case they cease to be available in that form factor (2.5" drive
equivalent), and/or a capacity that Windows (7 and) 10 can handle (2TB
for 7-32 is it? I don't know for 10). Doesn't look like they're
disappearing yet, though, and the guarantee consideration militates
against purchase, at least while my current one stays at 97%?
On some computers equipped with SATA SSDs, the Southbridge SATA port only ran at SATA II rates.
    This also means the power consumption of a SATA III drive running on a SATA II port,
    is reduced, as the drive is never running at max speed. While the 870EVO
    has a claimed SLC cache, I can't say I've seen SLC cache behavior on it.
    Even large writes seem consistent. And the OS decides "how fast it is on small files". A 4K synthetic test can show how fast a drive is, but the
    OS can degrade the living shit out of a fine performance there.

(Crystal says mine is using "Transfer Mode: SATA/600 | SATA/600".)
    Anyway, just for fun, run the bad block scan in the last column of HDTune ("Error Scan").
    The bad blocks located, are not stored in $BADCLUS, so there is no permanent record
    of the mess it finds. You don't want anything compromising the drive contents,
    any more than they are already. If you get a lot of "red blocks" in the scan, this means the drive ran out of spare blocks, and could not repair the blocks it was finding as bad ones. The "red blocks" then, are an indicator of how far into debt the drive has gone. Finding red blocks, means that some
    of the more simple-minded copy schemes, are going to have trouble.
    It then depends on your expectations on data recovery, exactly how
    much you can shovel off there. ddrescue (gddrescue package, linux) is
    one way to do a low level copy of something in serious trouble. If the scan is all green blocks, then you are laughing. Sure, some of the files are corrupted
    (by the inability of the flash to do a good write, or by bit rot), but if the drive really has all green blocks, you can use Macrium or anything else you might like to use. When the red blocks show up, the recovery will be a
    battle of wills, you versus the drive :-) Been there, the drive has beaten
    me every time.

    Paul
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

    Sometimes I'm so sweet even I can't stand it. ~ Julie Andrews
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Sun Sep 21 08:50:20 2025
    From Newsgroup: alt.windows7.general

    On Sun, 9/21/2025 4:23 AM, J. P. Gilliver wrote:
    On 2025/9/21 4:0:59, Paul wrote:

    []


    Purely for your amusement, you could try a bad block scan of the drive.
This is pretty old software itself, having been released in 2008. The last column over will check whether the blocks have bad CRC. That version of
HDTune, does not do writes, so it should not make the SSD health any worse.
    https://www.hdtune.com/files/hdtune_255.exe

    Does that mean _anything_ that that version of HDTune does is safe to do
    on an SSD, or only the last column (tab, "Error Scan")? (Doing that now
    on my drive - which CrystalDiskInfo says is "CT480BX500SSD1 : 480.1 GB",
    and has always said it is "Good, 97%". For interest, it took just under
    18 minutes to pass 300000 MB [all green so far]. Finished after 27:10,
457679 MB, all green.)
There are a couple possibilities. The softwares might not know how to read the SMART table on that particular device. But just as easily, you could already

    Interesting: Boris's screenshots show: for HDTune's Info tab, the Failed attribute as "(EE) (unknown attribute)" (when I select that tab, I get a blank table - how do I get it to populate?), and for CrystalDiskInfo,
    the red one as "EE Media Wearout Indicator (Cycles Remaining)", between
    C7 and F1 - my CDI doesn't include EE.

    []


    The prices on this drive (870EVO), are not always that good. There was a time
    when Samsung was selling them at a more competitive rate, but that is
    not today. I think I have about five of those, all in good working shape.

    (Presumably the generally favourable PCWorld review had at least some
    effect on the price.)

They're 600TBW per a 1TB drive. A 4TB drive in that family would be 2400TBW. Each cell can be written 600 times. For comparison, I have eight NS100 drives,
    which are not in the same class, but are great for doing OS installs and
    for testing. Some of them were struggling to perform, on the date of
    delivery. But that is what you expect when the manufacturer can swap NAND
    chips in mid production and use something else. It's up to you to
    pick something that isn't too sketchy. You can "do sketchy" if you can
    convince the operator of the computer to "do frequent backups". If I had
    personal data on the NS100 drives, I would be backing them up for sure.
    But they sure are cheap :-)

    https://www.pcworld.com/article/407542/best-ssds.html

    https://www.pcworld.com/article/393933/samsung-870-evo-sata-ssd-review-the-speed-you-need-at-sane-prices.html

    Presumably an SSD sitting unpowered on the shelf isn't deteriorating
    fast, other than the guarantee. I just wonder if I should be buying one
in case they cease to be available in that form factor (2.5" drive equivalent), and/or a capacity that Windows (7 and) 10 can handle (2TB
for 7-32 is it? I don't know for 10). Doesn't look like they're
disappearing yet, though, and the guarantee consideration militates
against purchase, at least while my current one stays at 97%?
On some computers equipped with SATA SSDs, the Southbridge SATA port only ran at SATA II rates.
    This also means the power consumption of a SATA III drive running on a SATA II port,
    is reduced, as the drive is never running at max speed. While the 870EVO
    has a claimed SLC cache, I can't say I've seen SLC cache behavior on it.
    Even large writes seem consistent. And the OS decides "how fast it is on
    small files". A 4K synthetic test can show how fast a drive is, but the
    OS can degrade the living shit out of a fine performance there.

(Crystal says mine is using "Transfer Mode: SATA/600 | SATA/600".)
    Anyway, just for fun, run the bad block scan in the last column of HDTune ("Error Scan").
    The bad blocks located, are not stored in $BADCLUS, so there is no permanent record
    of the mess it finds. You don't want anything compromising the drive contents,
    any more than they are already. If you get a lot of "red blocks" in the scan,
    this means the drive ran out of spare blocks, and could not repair the blocks
it was finding as bad ones. The "red blocks" then, are an indicator of how far into debt the drive has gone. Finding red blocks, means that some
    of the more simple-minded copy schemes, are going to have trouble.
    It then depends on your expectations on data recovery, exactly how
    much you can shovel off there. ddrescue (gddrescue package, linux) is
one way to do a low level copy of something in serious trouble. If the scan is all green blocks, then you are laughing. Sure, some of the files are corrupted
    (by the inability of the flash to do a good write, or by bit rot), but if the
drive really has all green blocks, you can use Macrium or anything else you might like to use. When the red blocks show up, the recovery will be a
battle of wills, you versus the drive :-) Been there, the drive has beaten me every time.

    HDTune 2.55 does not have the SSD SMART table information needed, to do a
    good job of decoding SSD SMART. However, when the table won't render at all, your drive could be in a USB enclosure where the SMART Passthru option is
    not present. And storage devices which don't have SMART, will also not
    display a SMART table. And my 4GB HDD from the year 2000, does not have
    SMART either :-) That means it is perfectly healthy. But if SMART is accessible, the (dumb) HDTune will try to decode it anyway, with comical results. If any tables look goofy in HDTune, don't take it personally.
    Whereas CrystalDiskInfo is more carefully managed, and a lot more drives
    and their quirks, should be handled properly.

    You would use something like CrystalDiskInfo, to display a wider range of
    SMART tables. And when an SSD is branded, like an 870EVO from Samsung,
    you can go look for a "Samsung ToolBox" (each company has a peppy name
    for their software) and install that.

    [Picture]

    https://i.postimg.cc/L4fQ0qpB/toolkit-99-percent-life-left.png

    # Samsung Magician

    https://semiconductor.samsung.com/consumer-storage/support/tools/

    While the SSDs have some common entries, you can find vendor-specific
    entries on a lot of them. And we don't know what the vendor-specific
entries mean. It's so bad, in fact, that the Toolkit
does not support several of the drive models. They are that non-standard
    and unsupported. That means the drive manufacturer (!) cannot display
    the SMART, and quite possibly, the CrystalDiskInfo designer doesn't
    know the answer either. The drive manufacturer likely does know,
    but the info may be covered by an NDA preventing the sharing of the
    info with mere customers.
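If you want to see the raw table yourself, including vendor-specific IDs such as the EE (238 decimal) attribute discussed above, a generic dumper makes it easier to compare what the various GUIs are decoding. A minimal sketch, assuming smartmontools is installed and is recent enough (7.0+) to offer JSON output; the device path and formatting are just for illustration:

```python
# Dump raw SMART attributes so vendor-specific IDs (e.g. 0xEE = 238) can be
# compared against what HDTune / CrystalDiskInfo / the vendor toolbox display.
# Assumes smartmontools >= 7.0 (for "-j" JSON output) and a SATA drive at /dev/sda.
import json
import subprocess

def dump_smart(device="/dev/sda"):
    # check=False: smartctl encodes drive status bits in its exit code,
    # so a nonzero return is not necessarily a failure to run.
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        raw = attr.get("raw", {}).get("value")
        print(f"ID {attr['id']:3d} (0x{attr['id']:02X})  {attr['name']:<28}"
              f" value={attr['value']:3d}  raw={raw}")

if __name__ == "__main__":
    dump_smart()
```

Whether ID 238 (0xEE) shows up at all, and what its raw value means, is down to the drive's firmware, which is exactly the decoding problem described above.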

    HDTune 2.55 is perfectly safe as far as I know. If any info displayed
    is not to your liking, the software is quite old and a lot of things
    have happened since that free version was frozen. The supported (paid)
    version is much nicer looking and supports R/W characterization for
    things like cheesy SMR (shingled HDD) drives. The read and write speeds
    do not need to be identical on HDD, and that would happen on an SSD only
    if the error corrector ARM cores can't keep up with the TLC read errors :-)

TLC or QLC drives can get "mushy" if left unpowered for periods of time.
Drives that are powered irregularly, but at some frequency, have
maintenance routines they can run to freshen
the storage (at the cost of one cell wear for the affected areas).
    The "uncorrected" error information, helps the drive figure out how
    mushy things are. For example, sectors in the free pool, would not
    need to be "tended", as they don't contain user info. So if I had
    a 4TB drive with 1TB of partitions, only 1TB of it would be a candidate
    for "freshening". Windows 10 running TRIM once a week, helps inform the
    drive about the status. You can create a temporary partition on an
    unallocated part of the drive, TRIM the partition, then delete the
    partition, as a way of signaling an unallocated section is not being
    used. The drive does not "know" the partition structure -- the TRIM
    just gives it a list of LBAs known not to be in usage, and the calculation
    is based on partition file contents and their used clusters.
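As an illustration of that last point (the drive does not see partitions, only a list of LBAs), here is a hypothetical sketch of the sort of calculation the OS does before sending TRIM. The cluster size, the bitmap and the helper name are all invented for the example:

```python
# Hypothetical sketch: turn a filesystem's cluster-allocation bitmap into the
# (start LBA, sector count) ranges a TRIM/UNMAP command would carry.
SECTOR_BYTES = 512
CLUSTER_BYTES = 4096                       # assumed cluster size
SECTORS_PER_CLUSTER = CLUSTER_BYTES // SECTOR_BYTES

def free_lba_ranges(partition_start_lba, cluster_bitmap):
    """cluster_bitmap: list of booleans, True = cluster in use."""
    ranges, run_start = [], None
    for i, in_use in enumerate(cluster_bitmap + [True]):   # sentinel closes a trailing run
        if not in_use and run_start is None:
            run_start = i
        elif in_use and run_start is not None:
            first = partition_start_lba + run_start * SECTORS_PER_CLUSTER
            ranges.append((first, (i - run_start) * SECTORS_PER_CLUSTER))
            run_start = None
    return ranges

# Toy example: an 8-cluster partition starting at LBA 2048, clusters 2-4 and 7 free.
print(free_lba_ranges(2048, [True, True, False, False, False, True, True, False]))
# -> [(2064, 24), (2104, 8)]
```

The drive just marks those LBAs as unneeded and moves the underlying blocks toward the free pool; it neither knows nor cares that they came from a temporary partition.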

    They're trying to make SATA disappear (the 870E boards only have four
SATA ports, versus the traditional six SATA ports). I think it is a
reasonable estimate that "some day" SATA will disappear just as
surely as IDE. But there is really no need to do that. SATA is
    a pretty economical standard, and they should always be able to support it.

You should look at the prices of NAND, and predictions of where they are going,
before buying a stocking stuffer for later. At one point
I was warning people that it was a PERFECT TIME
to buy an SSD. But that time has passed and the price is a bit high
    at the moment. If you find drives at lesser prices (like the NS100 drives
    I used for scratch OS installs), those are not recommended for long
    term storage, just because of how the latest ones have presented themselves when brand new. The first batch I bought, were much better behaved
    than the current ones. And since there is a Chinese NAND flash company
    now, it is always possible some of those will become "favoured"
    for usage in super-cheap drives ($40). There are two aspects to
NAND: knowing how to make it, but also screening it and deciding
    who gets the table scraps. We can see the results of table-scrap NAND
    on the USB sticks we buy. We don't want that grade of NAND in our
    SSD drives.

    Presumably, the NAND companies characterize their NAND layers, before
    they stack them. There are two layers of layers. There is the vertical
    string in a NAND slab. And you can stack NAND slabs to make a sandwich.
That's how you squeeze 2TB into a single chip: stacking slabs.
    Without slabs, it would be harder to make 256GB drives (with a single
    256GB chip), and 4TB drives (with two 2TB chips). And you would want
    to screen the slabs before stacking, to cut down on material loss.
In the same way, MCM (multi-chip module) packaging requires
the use of screened die to keep the yield of MCM packages up.

    https://www.atpinc.com/blog/what-is-nand-die-stacking

    You can even stack DRAM, but fortunately the practice is not common yet (thermals, when doing Refresh).
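A rough feel for why the die get screened before stacking: a stack is only good if every die in it is good, so unscreened stacking multiplies the per-die yield together. The 95% figure below is an assumed number purely for illustration:

```python
# Toy yield arithmetic for die stacking (the same logic applies to MCM packaging).
per_die_yield = 0.95                      # assumed figure, illustration only

for dies_in_stack in (1, 4, 8, 16):
    stack_yield = per_die_yield ** dies_in_stack
    print(f"{dies_in_stack:2d}-die stack from unscreened die: {stack_yield:.0%} fully good")
# A 16-die stack of unscreened 95% die comes out around 44% good,
# which is why known-good (screened) die are worth the extra test step.
```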

The GPT issue with Windows 7 might have been with respect to booting
or something. I would not particularly expect Windows 7 x86 to be
limited to 2TB drives. If you wanted
to use a 4TB GPT drive for data on Windows 7 x86, that should work.
    The Wiki has the detail on booting.

    https://en.wikipedia.org/wiki/GUID_Partition_Table

    "Details of GPT support on 32-bit editions of Microsoft Windows[33]"

And when cloning a drive, I think you can see the potential for trouble
when taking the contents of a <2TB drive and placing them on a >2TB device.
    The former drive may be MBR partitioned, the new drive may need to be
    GPT, yet the cloning softwares may not be equipped for the "transition".
    And partition manager softwares, may not have good coverage for this
    case either. While Windows has an "MBR2GPT.exe" utility, that is
    mainly intended for boot drive transitions and not for general
    purpose partition handling. I've not tested that in a VM to see
    what kind of a mess it makes of data drives. That's a research topic
    for somebody.

    Paul

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From J. P. Gilliver@G6JPG@255soft.uk to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Sun Sep 21 19:52:34 2025
    From Newsgroup: alt.windows7.general

    On 2025/9/21 13:50:20, Paul wrote:
    On Sun, 9/21/2025 4:23 AM, J. P. Gilliver wrote:
    On 2025/9/21 4:0:59, Paul wrote:

    []


    Purely for your amusement, you could try a bad block scan of the drive.
This is pretty old software itself, having been released in 2008. The last column over will check whether the blocks have bad CRC. That version of
HDTune, does not do writes, so it should not make the SSD health any worse.
    https://www.hdtune.com/files/hdtune_255.exe

    Does that mean _anything_ that that version of HDTune does is safe to do
    on an SSD, or only the last column (tab, "Error Scan")? (Doing that now
    on my drive - which CrystalDiskInfo says is "CT480BX500SSD1 : 480.1 GB",

    []


    HDTune 2.55 does not have the SSD SMART table information needed, to do a good job of decoding SSD SMART. However, when the table won't render at all, your drive could be in a USB enclosure where the SMART Passthru option is

No, it's internal - where the HD was in this Lenovo ideapad. (At least I assume it
    is; I bought the ideapad with it already fitted by the refurbisher [a
    small shop]).

CrystalDiskInfo read its SMART stuff no problem.

    not present. And storage devices which don't have SMART, will also not display a SMART table. And my 4GB HDD from the year 2000, does not have
    SMART either :-) That means it is perfectly healthy. But if SMART is accessible, the (dumb) HDTune will try to decode it anyway, with comical results. If any tables look goofy in HDTune, don't take it personally.

It just showed blank - column headings, with nothing below them. No
    problem - I'll just use Crystal. I hadn't even installed HDTune until
    you said (to Boris) it was safe to use the last tab (to look for red
    blocks).


    Whereas CrystalDiskInfo is more carefully managed, and a lot more drives
    and their quirks, should be handled properly.

    You would use something like CrystalDiskInfo, to display a wider range of SMART tables. And when an SSD is branded, like an 870EVO from Samsung,
    you can go look for a "Samsung ToolBox" (each company has a peppy name
    for their software) and install that.

    [Picture]

    https://i.postimg.cc/L4fQ0qpB/toolkit-99-percent-life-left.png

    # Samsung Magician

    https://semiconductor.samsung.com/consumer-storage/support/tools/

Crystal just says it's "CT480BX500SSD1 : 480.1 GB" - no maker name.
Googling the first part of that suggests it's Crucial (and 34.99 pounds
new via Amazon). So I'm getting the "Crucial Storage Executive". (It virus-scanned OK, but during install it said "Created with an evaluation version
of InstallBuilder", which doesn't fill one with confidence!)

    I thought I'd check the warranty while there; I typed in what Crystal
    said it was (2115E596FF67), and it (the website) said "SSD Serial Number
    is invalid.". I clicked on "How to find my serial number?", an it told
    me 1. on your product, 2. on the packaging, 3. via software - which I
    chose, which led me to the "storage executive", which I have installed
    and run - and it said 2115E596FF67! (Nevertheless, I copied-and-pasted
    it from there into the website - still says it is invalid, of course.)
[So I've filled in a ticket, saying more or less what I have in this paragraph.]
    While the SSDs have some common entries, you can find vendor-specific
    entries on a lot of them. And we don't know what the vendor-specific
entries mean. It's so bad, in fact, that the Toolkit


    Well, the "executive" has a button for the SMART data, and gives values
    and titles for lots of them - though no guidance as to which are good or
bad (though overall it says the drive is healthy).


    does not support several of the drive models. They are that non-standard
    and unsupported. That means the drive manufacturer (!) cannot display
    the SMART, and quite possibly, the CrystalDiskInfo designer doesn't
    know the answer either. The drive manufacturer likely does know,
    but the info may be covered by an NDA preventing the sharing of the
    info with mere customers.

    HDTune 2.55 is perfectly safe as far as I know. If any info displayed

    So I could try the left tab without shortening the SSD's life? (I had
    assumed that did a read/write to all sectors.)

    is not to your liking, the software is quite old and a lot of things
    have happened since that free version was frozen. The supported (paid) version is much nicer looking and supports R/W characterization for
    things like cheesy SMR (shingled HDD) drives. The read and write speeds
    do not need to be identical on HDD, and that would happen on an SSD only
    if the error corrector ARM cores can't keep up with the TLC read errors :-)

TLC or QLC drives can get "mushy" if left unpowered for periods of time.
Drives that are powered irregularly, but at some frequency, have
maintenance routines they can run to freshen
the storage (at the cost of one cell wear for the affected areas).

I would only buy one and keep it unused, intending to use it from new
(i.e. when this one dies), so I would accept it doing a lengthy self-test
    to get rid of "mushiness", if they can do that. But from what you're
    saying, I don't need to do so for a while yet.

    The "uncorrected" error information, helps the drive figure out how
    mushy things are. For example, sectors in the free pool, would not
    need to be "tended", as they don't contain user info. So if I had
    a 4TB drive with 1TB of partitions, only 1TB of it would be a candidate
    for "freshening". Windows 10 running TRIM once a week, helps inform the

    Does it do that by default? I've never told it to; I would hope the shop
    that sold me the laptop would have, though.


    They're trying to make SATA disappear (the 870E boards only have four
SATA ports, versus the traditional six SATA ports). I think it is a reasonable estimate that "some day" SATA will disappear just as
surely as IDE. But there is really no need to do that. SATA is
    a pretty economical standard, and they should always be able to support it.

    Hmm. I suspect if (when) these laptops (7-32 and 10-64) finally die,
it'll be a major new system anyway, so I'm not worrying about that.
You should look at the prices of NAND, and predictions of where they are going,
before buying a stocking stuffer for later. At one point
I was warning people that it was a PERFECT TIME
to buy an SSD. But that time has passed and the price is a bit high
    at the moment. If you find drives at lesser prices (like the NS100 drives
    I used for scratch OS installs), those are not recommended for long
    term storage, just because of how the latest ones have presented themselves

    I would only be buying one against this one failing. For long-term
    storage, I still feel happier with spinning drives, shingling
    notwithstanding.

    []

The GPT issue with Windows 7 might have been with respect to booting
or something. I would not particularly expect Windows 7 x86 to be
limited to 2TB drives. If you wanted
to use a 4TB GPT drive for data on Windows 7 x86, that should work.

Ah, I had a memory of there being a 2T limit _somewhere_ - maybe it's partition size - for 7-32.

    []

And when cloning a drive, I think you can see the potential for trouble
when taking the contents of a <2TB drive and placing them on a >2TB device.

    I had something similar on a previous generation: when restoring an
    image from the drive that seized (I think it was a 1xx GB drive, and I
    restored to a bigger one - might have been 2xx), I first restored making
    a partition of the size I'd had before, then in Windows enlarged it. (I
    know Macrium could have resized while restoring, but I felt happier
    doing one thing at a time!) That may not have involved hardware/OS
    limits, though.

    The former drive may be MBR partitioned, the new drive may need to be
    GPT, yet the cloning softwares may not be equipped for the "transition".
    And partition manager softwares, may not have good coverage for this
    case either. While Windows has an "MBR2GPT.exe" utility, that is
    mainly intended for boot drive transitions and not for general
    purpose partition handling. I've not tested that in a VM to see
    what kind of a mess it makes of data drives. That's a research topic
    for somebody.

    The only PM that I've used in the last few years is the EaseUS one (and
    the one in Windows, switching to the EaseUS one when the Windows one
    couldn't do what I wanted - probably a C: shrink). I did use another one
    years ago - I think it was just called Partition Manager. I can't
    remember why I stopped using that - maybe it hit some limit, or maybe I
    just didn't have it when I needed one, and the EaseUS one was the first
one I found, and as that did what I wanted, I didn't look further.
    Paul

    John
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

    "To YOU I'm an atheist; to God, I'm the Loyal Opposition." - Woody Allen
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dillinger@dillinger@invalid.not to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Mon Sep 29 02:14:23 2025
    From Newsgroup: alt.windows7.general

On 21-9-2025 at 20:52, J. P. Gilliver wrote:
    I thought I'd check the warranty while there; I typed in what Crystal
    said it was (2115E596FF67), and it (the website) said "SSD Serial Number
    is invalid.". I clicked on "How to find my serial number?", an it told
    me 1. on your product, 2. on the packaging, 3. via software - which I
    chose, which led me to the "storage executive", which I have installed
    and run - and it said 2115E596FF67! (Nevertheless, I copied-and-pasted
    it from there into the website - still says it is invalid, of course.)
[So I've filled in a ticket, saying more or less what I have in this paragraph.]

    Just for fun, I checked your serial number and if it breaks you're out
    of luck:

    This product is valid but warranty has expired and this part is no
    longer eligible for return

    Crucial BX500 480GB 3D NAND SATA 2.5-inch SSD
    Serial number
    2115E596FF67
    Part number
    CT480BX500SSD1

    Possibly you disabled a bit too much on the website.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From J. P. Gilliver@G6JPG@255soft.uk to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Mon Sep 29 01:50:29 2025
    From Newsgroup: alt.windows7.general

    On 2025/9/29 1:14:23, dillinger wrote:
On 21-9-2025 at 20:52, J. P. Gilliver wrote:
    I thought I'd check the warranty while there; I typed in what Crystal
    said it was (2115E596FF67), and it (the website) said "SSD Serial Number
    is invalid.". I clicked on "How to find my serial number?", an it told
    me 1. on your product, 2. on the packaging, 3. via software - which I
    chose, which led me to the "storage executive", which I have installed
    and run - and it said 2115E596FF67! (Nevertheless, I copied-and-pasted
    it from there into the website - still says it is invalid, of course.)
    [So I've filled in a ticket, saying more or less what I have in this
paragraph.]

    Just for fun, I checked your serial number and if it breaks you're out
    of luck:

    This product is valid but warranty has expired and this part is no
    longer eligible for return

    Crucial BX500 480GB 3D NAND SATA 2.5-inch SSD
    Serial number
    2115E596FF67
    Part number
    CT480BX500SSD1

    Possibly you disabled a bit too much on the website.

    I filled out a ticket, and got a sort of standard (and thus irritating)
    reply, asking for all the standard things - when I bought it, proof of purchase, etc.; I was in the process of replying saying that the problem
    wasn't at my end but with the website, about to tell them _exactly_ what
    the error message was that the website produced (maybe send them a
    screenshot) - but the website then worked, as you have found. So,
    although they didn't admit anything in their message, they'd obviously
    fixed it. (I wasn't surprised to find my drive was out of warranty.)
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()ALIS-Ch++(p)Ar++T+H+Sh0!:`)DNAf

    An act like Morecambe and Wise happens once in a lifetime. Why did it
    have to happen in mine?
    - Bernie Winters quoted by Barry Cryer, RT 2013/11/30-12/6
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Jeff Barnett@jbb@notatt.com to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Mon Sep 29 01:20:07 2025
    From Newsgroup: alt.windows7.general

    On 9/21/2025 2:23 AM, J. P. Gilliver wrote:
    On 2025/9/21 4:0:59, Paul wrote:

    []


    Purely for your amusement, you could try a bad block scan of the drive.
This is pretty old software itself, having been released in 2008. The last column over will check whether the blocks have bad CRC. That version of
HDTune, does not do writes, so it should not make the SSD health any worse.
    https://www.hdtune.com/files/hdtune_255.exe

    <MAJOR SNIPING>

I thought that reading a "unit" on an SSD caused its contents to be
rewritten in another unit and the original to go in a pool (over-provisioning
partition?) that is used for wear leveling. So whether that utility
writes or not, the SSD will have its pot stirred.

    Is my memory wrong about this?
    --
    Jeff Barnett

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,alt.windows7.general,alt.comp.os.windows-11 on Mon Sep 29 08:46:10 2025
    From Newsgroup: alt.windows7.general

    On Mon, 9/29/2025 3:20 AM, Jeff Barnett wrote:
    On 9/21/2025 2:23 AM, J. P. Gilliver wrote:
    On 2025/9/21 4:0:59, Paul wrote:

    []


    Purely for your amusement, you could try a bad block scan of the drive.
This is pretty old software itself, having been released in 2008. The last column over will check whether the blocks have bad CRC. That version of
HDTune, does not do writes, so it should not make the SSD health any worse.
https://www.hdtune.com/files/hdtune_255.exe

    <MAJOR SNIPING>

I thought that reading a "unit" on an SSD caused its contents to be rewritten in another unit and the original to go in a pool (over-provisioning partition?) that is used for wear leveling. So whether that utility writes or not, the SSD will have its pot stirred.

    Is my memory wrong about this?

    When a drive is TRIMmed, the OS sends a list of LBAs known to
    not be used, and those can be put in the Free Pool and prepared
    for usage. The preparation for usage is an Erase, which costs
    one wear life. Any voltage changes in a NAND floating gate, can
    cause small changes in crystal structure. Annealing a NAND cell
    could fix these, extending life significantly, but we don't
    know how to do that on a massive scale.

When you write a file on a TLC or QLC device, the write method
can use an SLC cache (a floating gate cell made to hold only one
bit, by changing the voltage threshold scheme); it costs you
one write for the SLC cache, and one write for the final placement
into TLC storage. The SLC cache is not a dedicated static thing;
it can be moved around by applying different voltage sensing
    to any LBAs declared as being part of the cache area. It's not
    like the critical-data-storage area, which might be an SLC area
    at all times.
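In round numbers, that staging path means a host write can cost about two NAND program passes before garbage collection even enters into it. A trivial sketch of the bookkeeping, with the 100GB figure picked arbitrarily:

```python
# Bookkeeping for the SLC-cache write path described above: one program into
# the SLC cache, one program when the data is folded into TLC.
host_write_gb = 100                        # arbitrary example amount from the OS

nand_programmed_gb = host_write_gb + host_write_gb    # SLC pass + TLC fold
write_amplification = nand_programmed_gb / host_write_gb

print(f"{host_write_gb} GB from the host -> ~{nand_programmed_gb} GB programmed"
      f" (write amplification ~{write_amplification:.1f}, before any GC moves)")
```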

    But reading is "free". It isn't DRAM and does not need refresh
    in that sense. DRAM dribbles in milliseconds. The floating gate
    in that case, is not protected by quantum mechanics.

On something like QLC, just about every sector read needs correction.
A 512 byte sector has 4 kilobits, and the voltage thresholds are pretty tight.
If a read has one bit in error, this is easily corrected by Reed-Solomon without needing the sector to be rewritten. The error correction code
is quite strong, and there is no reason to panic.

Say you read a sector and 20 bits out of 4096 are in error. Maybe the code
is still strong enough to correct this. The syndrome is 50 bytes
    on a 512 byte sector, to give some idea of the syndrome size.
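Working those figures through as a sanity check (treating the 50 bytes as Reed-Solomon parity symbols over bytes, which is an assumption about the exact code in use):

```python
# Rough capacity check for the numbers quoted above, assuming a classic
# byte-oriented Reed-Solomon code: correctable bytes ~= parity_bytes / 2.
data_bytes = 512
parity_bytes = 50

print(f"overhead: {parity_bytes / data_bytes:.1%}")        # ~9.8%
print(f"correctable: up to {parity_bytes // 2} bytes per sector")
# 20 bad bits can touch at most 20 bytes, comfortably inside a
# 25-byte-per-sector correction budget.
```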

But at some point, if you were the firmware, tracking the statistics on the
whole page, you might decide to re-write the page by pulling
a new one from the Free Pool and copying the data over. These activities
    are not normally described in any detail, so we are not privy to how
    the devices are tuned.

But reading as an activity does not need refresh in the same way
that DRAM needs constant refresh (and loses bandwidth to it). In the DRAM,
there are refresh counters that go up through the address space,
and, in effect, have "timed" refresh activity. The DRAM memory
might be refreshed 64 times within the discharge period, so the
refresh is done in such a way that there is no possibility of a
    discharge failure. In years past, people would fool with the
    DRAM refresh settings, to "push their luck" :-)

Compare that to a floating gate NAND, where an electron leaving
the insulated gate is quantum mechanically disallowed. Maybe an
    electron tunnels to escape, and the probability is quite low.
    A NAND cell should be able to remain charged to a particular
    voltage for ten years. In an SLC cell, the voltage threshold
    is at 50%, and there are huge margins around the threshold.
    The SLC NAND, as a beast, is hardly likely to be "mushy" in
    everyday usage, and will be robust right up to the ten year
estimate. Whereas with QLC, where there are 15 thresholds for the
16 voltage regions, things are tight-as-a-drum and the voltage
    could shift in the cell, getting dangerously close to moving
    into the next threshold region. We can correct a lot of these
    shifts, with our powerful error corrector, but it's getting
    harder to guarantee, by trickery, that we have robust
    storage (with that correction scheme).
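To put numbers on that margin argument: the number of voltage levels doubles with each extra bit per cell, so the read thresholds get packed correspondingly tighter. The per-level window share below is just the obvious 1/levels split, for illustration:

```python
# How the voltage window gets carved up as bits-per-cell grows.
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
    levels = 2 ** bits                 # distinct charge levels per cell
    thresholds = levels - 1            # read thresholds between the levels
    print(f"{name}: {bits} bit(s), {levels:2d} levels, {thresholds:2d} thresholds,"
          f" ~{1 / levels:.1%} of the window per level")
# SLC: one threshold with huge margins. QLC: 15 thresholds for 16 regions,
# which is why a little charge drift can land "dangerously close" to the
# next region over.
```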

There was a Samsung drive where it was noticed that the read performance,
after three months, was dropping from 540MB/sec to 300MB/sec,
and this was thought to be error corrector activity on
everything being read. And it is possible at that point,
    that the re-write tuning was added or tuned, to stop users
    from complaining. But we really don't know what was going
    on there, in terms of details. This is all just guesses from
enthusiast web site news releases. Some SSD controller chips
have four cores now: one would be the command interpreter,
while three cores could be running error correction. That's
to help hide the fact that our precious purchase is mushy - do the correction
really fast...

In decades past, there were reports that reading did cause wear
on cells. However, there is no anecdotal evidence of this
being anywhere near significant in end-user usage patterns.
You can read the shit out of them, and if there is an
effect (like it making the flash cells mushy),
users do not notice a high-read application
causing poorer eventual read performance than on SSDs
that are loafing along through their workday. If there is
a mechanism there, no one has joined the dots and claimed
to have seen this in practice.

I have never seen a comment from a person knowledgeable on the subject, regarding why there were early reports of an effect, and no such
claims are made today. The disconnect remains unexplained.
I get the impression, though, that in the mushy cases the drive
can go mushy just from sitting on the shelf
for three months; that particular device might have shown mush.
Whether mush could be cured by making the gate-to-channel
insulation thicker, I don't know. Considering
the Z-axis and the 3D nature of NAND, there is some amount
of design pressure to make them thin-as-can-be. There can be
16 die inside a NAND chip, and then a couple hundred cells
in a column within each die. And yet the chips never seem
to grow in stature.

    But on the other hand, advancement of NAND is kinda slow now.
    They are working on five-bit cells (which means even tighter
    thresholds), but those damn things have not poked their
    little heads into super-cheap product. They're not shipping
yet. Sooner or later the write life is going to have to drop
below the 600 or so writes per cell.

    Micron makes an industrial flash with 6x the write life, but
    like everything in high tech, the price has to be 6x higher.
    Maybe you might find those in certain Enterprise products.
    The 600 number is not some "manifest constant", it is just
    a common number for consumer devices. There have been devices
    with considerably better numbers. And if they still made actual
    true SLC devices, those could manage 100,000 writes per cell,
and would outlast your lifetime. Making 600-write ones ensures,
like bog roll, there is a kind of rent-seeking behavior.
    But 600 still lasts a long time when all you do is web surf
    (and you set your web cache to be in RAM :-) ).
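Those numbers tie back to the 600TBW-per-1TB figure earlier in the thread; a quick worked example, with the daily write volume an assumed figure purely for illustration:

```python
# Endurance arithmetic: rated P/E cycles x capacity ~= TBW, then lifetime at
# an assumed host write rate (ignoring write amplification).
pe_cycles = 600                  # consumer-grade writes per cell, from the post
capacity_tb = 1.0                # a 1 TB drive

tbw = pe_cycles * capacity_tb    # 600 TBW, matching the figure quoted earlier
daily_writes_gb = 20             # assumed light desktop workload
years = tbw * 1000 / daily_writes_gb / 365

print(f"Rated endurance: {tbw:.0f} TBW")
print(f"At {daily_writes_gb} GB/day that is roughly {years:.0f} years of writing")
# ~82 years at that rate, which is why 600 cycles "still lasts a long time
# when all you do is web surf".
```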

    Paul
    --- Synchronet 3.21a-Linux NewsLink 1.2