Howdy,
I ran across a couple of deals. I first bought a 16TB drive, which worked
fine. Then I saw a deal on a 20TB drive. I first put it in an external
enclosure and connected it by eSATA cable to my new rig. I got this in messages.
May 5 15:41:31 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 5 15:41:40 Gentoo-1 kernel: ata4: softreset failed (1st FIS failed)
May 5 15:41:41 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 5 15:41:50 Gentoo-1 kernel: ata4: softreset failed (1st FIS failed)
May 5 15:41:51 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 5 15:41:59 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 5 15:41:59 Gentoo-1 kernel: ata4: SATA link up 1.5 Gbps (SStatus
113 SControl 300)
May 5 15:41:59 Gentoo-1 kernel: ata4: link online but 1 devices misclassified, retrying
May 5 15:41:59 Gentoo-1 kernel: ata4: reset failed (errno=-11),
retrying in 27 secs
May 5 15:42:26 Gentoo-1 kernel: ata4: SATA link up 1.5 Gbps (SStatus
113 SControl 300)
May 5 15:42:26 Gentoo-1 kernel: ata4.00: ATA-11: ST20000NM007D-3DJ103,
SN05, max UDMA/133
May 5 15:42:26 Gentoo-1 kernel: ata4.00: 39063650304 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 5 15:42:26 Gentoo-1 kernel: ata4.00: Features: NCQ-sndrcv
May 5 15:42:26 Gentoo-1 kernel: ata4.00: configured for UDMA/133
May 5 15:42:26 Gentoo-1 kernel: scsi 3:0:0:0: Direct-Access
ATA ST20000NM007D-3D SN05 PQ: 0 ANSI: 5
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: Attached scsi generic sg2
type 0
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] 39063650304 512-byte logical blocks: (20.0 TB/18.2 TiB)
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] 4096-byte physical blocks
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] Write Protect is off
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] Mode Sense: 00 3a 00 00
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] Preferred minimum I/O
size 4096 bytes
May 5 15:42:26 Gentoo-1 kernel: sd 3:0:0:0: [sdk] Attached SCSI disk
I thought it might be the enclosure, so I booted up my NAS box, removed
the drive from the enclosure, and connected it bare by SATA cable to the
NAS box's mobo SATA connector. This is what the NAS box shows.
May 5 16:00:20 nas kernel: ata4: link is slow to respond, please be
patient (ready=0)
May 5 16:00:24 nas kernel: ata4: found unknown device (class 0)
May 5 16:00:24 nas last message buffered 1 times
May 5 16:00:24 nas kernel: ata4: SATA link up 3.0 Gbps (SStatus 123
SControl 300)
May 5 16:00:24 nas kernel: ata4: link online but 1 devices
misclassified, retrying
May 5 16:00:30 nas kernel: ata4: link is slow to respond, please be
patient (ready=0)
May 5 16:00:34 nas kernel: ata4: found unknown device (class 0)
May 5 16:00:34 nas last message buffered 1 times
May 5 16:00:34 nas kernel: ata4: SATA link up 3.0 Gbps (SStatus 123
SControl 300)
May 5 16:00:34 nas kernel: ata4: link online but 1 devices
misclassified, retrying
May 5 16:00:40 nas kernel: ata4: link is slow to respond, please be
patient (ready=0)
May 5 16:00:42 nas kernel: ata4: SATA link up 3.0 Gbps (SStatus 123
SControl 300)
May 5 16:00:42 nas kernel: ata4.00: ATA-11: ST20000NM007D-3DJ103, SN05,
max UDMA/133
May 5 16:00:42 nas kernel: ata4.00: 39063650304 sectors, multi 16:
LBA48 NCQ (depth 32), AA
May 5 16:00:42 nas kernel: ata4.00: Features: NCQ-sndrcv
May 5 16:00:42 nas kernel: ata4.00: configured for UDMA/133
May 5 16:00:42 nas kernel: scsi 3:0:0:0: Direct-Access ATA ST20000NM007D-3D SN05 PQ: 0 ANSI: 5
May 5 16:00:42 nas kernel: sd 3:0:0:0: Attached scsi generic sg1 type 0
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] 39063650304 512-byte
logical blocks: (20.0 TB/18.2 TiB)
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] 4096-byte physical blocks
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] Write Protect is off
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] Preferred minimum I/O size
4096 bytes
May 5 16:00:42 nas kernel: sd 3:0:0:0: [sdb] Attached SCSI removable disk
Connected directly, with no external enclosure, it links at normal speed.
Maybe the enclosure limits the speed? What concerns me in the NAS box
output is the first part, about the link being slow to respond. Is that
normal? Also, since it works on the NAS box at full speed, is it likely
that the enclosure was causing the slowdown, or is that 'slow to respond'
a possible cause?
I ran the conveyance and short SMART tests and it passed both. I'm about
to start the long test; I figure that will take a couple of days, or close
to it. I'm looking for thoughts on whether this drive has issues. I might
add, the company I buy from packages their drives to survive about
anything. The drive is put in tough plastic bubble wrap made just for
hard drives, and that is placed in a box. They then wrap that box in
large bubble wrap, like any of us can buy, and put that in a larger
second box. I can't imagine the drive being damaged in shipping.
Oh, when I get a new drive, I first watch messages to see how it
connects. Then I run the conveyance test, the short test and then the long
test. If it passes all that, I add it to an LVM drive set or use it in
some other way. I'm thinking about buying another spare 20TB. It's a good
deal at just over $200, and the current drive has only 2 power-on hours. O_O
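Roughly, the commands for that routine look like this. This is just a
sketch; /dev/sdX stands in for whatever name the kernel gives the new
drive.
dmesg | grep -E 'ata[0-9]|sd[a-z]'   # watch how the drive connects
smartctl -t conveyance /dev/sdX      # conveyance self-test
smartctl -t short /dev/sdX           # short self-test
smartctl -t long /dev/sdX            # long self-test, takes a day or more on a 20TB drive
smartctl -l selftest /dev/sdX        # check the results once each test finishes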
Thoughts on the above info? Has anyone seen this before? Is this drive perfectly fine, or do I need to return it?
Thanks.
Dale
:-) :-)
Michael wrote:
Initially I'd suspect the SATA cable/port, but if you tried another MoBo, did you also try a different SATA cable?
Were the ports you connected to SATA revision 3, capable of 6Gb/s? Notwithstanding the warnings and errors, you'd want the highest transfer speed you can get on a new drive.
I think the speed issue might be that external enclosure I used. I've
got two of those, I think. The other external enclosures work at full
speed tho.
The concern I have mostly is the 'slow to respond' part when the drive is
hooked to the NAS mobo directly. I have a good-sized power supply for that
old thing. It likely runs at about a 20% load most of the time. The most
excitement it sees is when I do OS updates and backup updates at the same
time. LOL I included that first log just in case it is relevant to the
one from the NAS box about being slow.
When I did some searches for that error, I never found a real answer to
the question. Is that normal for some drives, or a sign of future
failure? It's a 20TB Seagate Exos enterprise drive. Maybe it has an
extra platter which takes longer to spin up or something and it is
normal. Then again, maybe it is a weak motor that is about to fail.
Some stuff I found claimed it was a kernel error. I've never seen that
on either of my systems, and I've been using those same kernels for a long
time. As most know, I have quite a few large drives here. o_O
As far as I know, all my rigs are SATA 3 ready.
So far, this is the first drive I've ever seen this 'slow to respond'
message with. Since I've never seen it before, I'm curious what it means
exactly and whether it is normal. Searching didn't help. Some claim the
kernel, others claim something else.
Dale wrote:
Michael wrote:
On Tuesday, 6 May 2025 13:59:16 British Summer Time Dale wrote:
Michael wrote:
When I put it on the NAS box, I used
the same power cable and data cable that I use to update my backups, and
it works without error. I don't see how it can be the data cable, power
or mobo in this case. All of those work fine when doing backups. That
supply powers 4 hard drives in that setup, soon to be 5 drives.
As an update, the long SMART test finished without error. I cycled the
drive off for a few minutes, to be sure the kernel had finished its
housecleaning. When I powered it back up, this was in messages.
May 6 18:07:36 nas kernel: ata4: link is slow to respond, please be
patient (ready=0)
May 6 18:07:41 nas kernel: ata4: found unknown device (class 0)
May 6 18:07:41 nas last message buffered 1 times
May 6 18:07:41 nas kernel: ata4: SATA link up 3.0 Gbps (SStatus 123
SControl 300)
I ran an hdparm test. I wanted to see as accurately as I could what the
speed was. I got this.
root@nas ~ # hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 7106 MB in 2.00 seconds = 3554.48 MB/sec
Timing buffered disk reads: 802 MB in 3.00 seconds = 267.03 MB/sec
root@nas ~ #
From what I've seen of other drives, that appears to be SATA 3, or at
least the faster speed. So it is slow to respond, but it connects and works fine.
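For what it's worth, one way to check the negotiated link speed directly,
rather than guessing from throughput numbers, is something like this
(just a sketch; the device name is only an example):
smartctl -a /dev/sdb | grep -i 'sata version'   # the "current:" part is the negotiated rate
dmesg | grep 'SATA link up'                     # shows what the kernel negotiated at detection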
My question still remains tho. Do I need to return this drive because
this is a sign of upcoming failure, or is it normal and I can just carry
on with the drive?
Dale
:-) :-)
Dale wrote:
I didn't know about that until now. I already shut down my old rig.
Might try that later. It may shed some light on this mess.
I did send an email to the seller tho. They sell a LOT of drives. I've
seen them show a stock of over 200 drives of a particular model and a
day or so later, sold out. They sell new and a few kinds of used as well.
I tend to buy used, but most of the time the number of power-on hours is
in the single digits. The recent drives show 2 hours each. I think if
it is a problem, they will know, since they test a lot of drives. Maybe
it is normal, but if not, I'm sure they will agree to swap or refund.
They sold out of the 20TB drives shortly after I ordered mine. They
started with right at 200 and sold out in like 2 or 3 days.
I figure I'll hear back shortly. They've been pretty fast to respond to
questions in the past.
Dale
:-) :-)
I got a response. This is what they said.
Thank you for bringing this to our attention. As long as we're not
seeing any I/O errors that would inhibit your ability to use the
drive, everything should be fine.
This type of link speed negotiation issue can occur with helium-filled drives, as their spin-up time tends to be slightly longer than that of traditional drives. Is your system or HBA a bit on the older side?
Most modern toolsets and software account for this extended spin-up
time by allowing a longer delay before attempting speed negotiation,
which typically avoids this issue altogether.
In summary, this isn't unprecedented behavior when working with older hardware or software, but at this stage, it doesn’t point to any major functional problem. I hope this information helps.
As I mentioned, it passed all the SMART tests.
I'm not sure about the
3Gb/s connection yet tho. I'm pretty sure that mobo is capable of
6Gb/s tho. When I put it in my main rig, I'll know for sure.
What are your thoughts on what they say? Does it make sense to anyone who
knows more about hard drives than me? Now if they can just find that
last drive I ordered that is several days late.
Thanks.
Dale
:-) :-)
On Wednesday, 7 May 2025 00:30:34 British Summer Time Dale wrote:
[…]
I ran an hdparm test. I wanted to see as accurately as I could what the speed was. I got this.
root@nas ~ # hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 7106 MB in 2.00 seconds = 3554.48 MB/sec
These are rather pedestrian ^^^^ but I do not have any drives as large as yours to compare. A 4G drive here shows this:
~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 52818 MB in 1.99 seconds = 26531.72 MB/sec
Timing buffered disk reads: 752 MB in 3.00 seconds = 250.45 MB/sec
That's an order of magnitude higher cached reads.
Timing buffered disk reads: 802 MB in 3.00 seconds = 267.03 MB/sec root@nas ~ #
From what I've seen of other drives, that appears to be SATA 3 or the faster speed. So, it is slow to respond but connects and works fine.
Michael wrote:
On Saturday, 10 May 2025 16:53:55 British Summer Time Dale wrote:
Dale wrote:
I didn't know about that until now. I already shutdown my old rig.
Might try that later. It may shed some light on this mess.
I did send a email to the seller tho. They sell a LOT of drives. I've
seen them show a stock of over 200 drives of a particular model and a
day or so later, sold out. They sell new, a few kinds of used as well.
I tend to buy used but most of the time, the number of power on hours is
in the single digits. The recent drives show 2 hours each. I think if
it is a problem, they will know since they test a lot of drives. Maybe
it is normal but if not, I'm sure they will agree to swap or refund.
They sold out of the 20TB drives shortly after I ordered mine. They
started with right at 200 and sold out in like 2 or 3 days.
I figure I'll hear back shortly. They been pretty fast to respond to
questions in the past.
Dale
:-) :-)
I got a response. This is what they said.
Thank you for bringing this to our attention. As long as we're not
seeing any I/O errors that would inhibit your ability to use the
drive, everything should be fine.
This type of link speed negotiation issue can occur with helium-filled
drives, as their spin-up time tends to be slightly longer than that of
traditional drives. Is your system or HBA a bit on the older side?
Most modern toolsets and software account for this extended spin-up
time by allowing a longer delay before attempting speed negotiation,
which typically avoids this issue altogether.
In summary, this isn't unprecedented behavior when working with older
hardware or software, but at this stage, it doesn’t point to any major
functional problem. I hope this information helps.
As I mentioned, it passed all the SMART tests.
What do you get for the smart attribute with ID 22?
https://www.backblaze.com/blog/smart-22-is-a-gas-gas-gas/
Although others report ID 16 as the "Current Helium Level", or "Internal Environmental Status" attribute. The ID number and attribute description depend on the drive firmware.
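Something like this should pull those attributes out, if the firmware
reports them at all (a sketch; the device name is only an example):
smartctl -A /dev/sdb | grep -Ei 'helium|^ *(16|22) '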
I'm not sure about the
3Gb/s connection yet tho. I'm pretty sure that mobo is capable of
6Gb/s tho. When I put it in my main rig, I'll know for sure.
Slow spin-up or not, if it is not performing at 6Gbps as advertised when connected to a SATA 3 bus, then it is not fit for purpose - assuming transfer speeds are a consideration for you and you don't want to let
this slip.
What are your thoughts on what they say? Does it make sense to anyone who
knows more about hard drives than me? Now if they can just find that
last drive I ordered that is several days late.
Thanks.
Dale
:-) :-)
My knowledge of drives is quite limited and my working knowledge of large helium-filled drives is a fat zero. Despite this, here are some random thoughts, should you wish to read further:
I have read that drives which have seen continuous service in large datacenters and crypto-mining farms for a couple of years are decommissioned, tested, reset to zero and sold cheaper as 'refurbished'. If you keep an eye on Amazon and other large retailers and notice large batches of refurbished drives suddenly showing up at cut prices, then this is in
all likelihood their origin, which explains the low prices. If you watch the fluctuations in supply, you'll notice some makes, models and sizes of drives arrive in the refurbished marketplace rather prematurely compared to their age, and this is an indication of early failure rates
higher than the big datacenters were willing to see. It doesn't necessarily make all of these drives bad, but it is something to bear in mind when you check how much warranty they are sold with once labelled 'refurbished', compared to the original OEM warranty when new.
Regarding helium-sealed drives, they are reported to have a slightly lower average failure rate than conventional drives. Helium, having a lower density than air and not smelling anywhere near as bad as methane ;-), is used
to reduce the aerodynamic drag of the moving parts within the drive. The
idea is that such drives will consume less energy to run; with less windage the platters vibrate less and can therefore be packed tighter, and the drives will run cooler and at least theoretically last longer.
The laser-welding techniques used to seal the helium in the drive casing and keep denser air out are meant to ensure the 5-year warranty these drives
are sold with when new. In practice, any lightweight small-molecule gas can leak, and in that case the drive will lose its helium content - and
soon fail SMART tests. As it loses helium, at some point it will start to draw more energy to operate in a higher-drag environment. Since the power available to the drive is not unlimited, the increased drag will
cause a slower spin-up than when the drive was new.
I'm not saying your drive is failing, but the slow spin-up argument *because* ... helium could be somewhat moot. Modelling studies have shown that, ceteris paribus, a helium-filled drive will spin *faster* and remain cooler than an air-filled drive:
https://www.researchgate.net/publication/225162945_Thermal_analysis_of_helium-filled_enterprise_disk_drive
You can check whether smartctl output shows a different Spin-Up Time value compared with your other drives - if this attribute is reported at all. The Average Latency of your 20TB helium-filled drive is reported in its data sheet as 4.16ms - the same as the 16TB, 14TB and 12TB non-helium IronWolf Pro drives.
That figure reflects the time for an I/O request to complete, not
necessarily spin-up performance alone, but why should your 20TB be
slower to spin up? I don't know. :-/
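A quick way to eyeball Spin_Up_Time across all your drives, purely as a
sketch and assuming they all report that attribute:
for d in /dev/sd?; do echo "$d: $(smartctl -A "$d" | grep -i spin_up_time)"; done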
Anyway, these are a lay person's comments. A drive engineer will know exactly what's what with this technology and its performance variations.
A chat with Seagate's support may get you closer to the truth and explain why the 16TB drive spins up nicely while the 20TB drags its feet.
HTH,
OK. My old 8TB SMR drive seems to be having . . . issues. Luckily I
bought two 16TB drives and a 20TB, the topic of this thread. So I have a
spare drive. Anyway, I finally got some time to hook this 20TB drive
back up to my main rig with an external enclosure that I know works at
full speed; other drives do. I recall you mentioning using hdparm -I.
Here is the output of that.
root@Gentoo-1 / # hdparm -I /dev/sdb
/dev/sdb:
ATA device, with non-removable media
Model Number: ST20000NM007D-3DJ103
Serial Number: :-D :-D :-D
Firmware Revision: SN05
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II
Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
Used: unknown (minor revision code 0xffff)
Supported: 11 10 9 8 7 6 5
Likely used: 11
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 39063650304
Logical Sector size: 512 bytes [ Supported:
512 4096 ]
Physical Sector size: 4096 bytes
Logical Sector-0 offset: 0 bytes
device size with M = 1024*1024: 19074048 MBytes
device size with M = 1000*1000: 20000588 MBytes (20000 GB)
cache/buffer size = unknown
Form Factor: 3.5 inch
Nominal Media Rotation Rate: 7200
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 254, current value: 0
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
Power-Up In Standby feature set
* SET_FEATURES required to spinup after power up
SET_MAX security extension
* 48-bit Address feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* Media Card Pass-Through
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
* IDLE_IMMEDIATE with UNLOAD
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* unknown 119[6]
* unknown 119[7]
unknown 119[8]
unknown 119[9]
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
* Native Command Queueing (NCQ)
* Phy event counters
* Idle-Unload when NCQ is active
* READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
* DMA Setup Auto-Activate optimization
Device-initiated interface power management
* Software settings preservation
unknown 78[7]
* SMART Command Transport (SCT) feature set
* SCT Write Same (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
unknown 206[7]
unknown 206[12] (vendor specific)
unknown 206[13] (vendor specific)
unknown 206[14] (vendor specific)
* SANITIZE_ANTIFREEZE_LOCK_EXT command
* SANITIZE feature set
* OVERWRITE_EXT command
* All write cache is non-volatile
* Extended number of user addressable sectors
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
1716min for SECURITY ERASE UNIT. 1716min for ENHANCED SECURITY
ERASE UNIT.
Logical Unit WWN Device Identifier: 5000c500e59b0554
NAA : 5
IEEE OUI : 000c50
Unique ID : 0e59b0554
Checksum: correct
root@Gentoo-1 / #
If I recall correctly, udma6 is the fastest speed.
So, in the end the
drive should be connected at 6Gb/s. Right?
Also, do you see anything
else in there that would concern you? I'm also going to include the
output of smartctl -a for it as well.
root@Gentoo-1 / # smartctl -a /dev/sdb
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.9.10-gentoo] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Exos X20
Device Model: ST20000NM007D-3DJ103
Serial Number: :-D :-D :-D
LU WWN Device Id: 5 000c50 0e59b0554
Firmware Version: SN05
User Capacity: 20,000,588,955,648 bytes [20.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5671
ATA Version is: ACS-4 (minor revision not indicated)
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon May 12 03:05:08 2025 CDT
[snip ...]
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   083   079   044    Pre-fail  Always       -       0/223644630
  3 Spin_Up_Time            0x0003   091   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       12
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   071   060   045    Pre-fail  Always       -       0/12784911
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       36
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       12
 18 Head_Health             0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
190 Airflow_Temperature_Cel 0x0022   071   049   000    Old_age   Always       -       29 (Min/Max 24/29)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       5
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       13
194 Temperature_Celsius     0x0022   029   048   000    Old_age   Always       -       29 (0 22 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       15
200 Pressure_Limit          0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   100   000    Old_age   Offline      -       35h+57m+09.784s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       46594603
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3602490543
SMART Error Log Version: 1
No Errors Logged
I'm thinking about adding this to my backup drive set. With this
addition, I can have one backup for all my videos instead of breaking it
into two pieces.
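If it does go into the LVM set, the rough sequence would be something
like the below. This is only a sketch; the volume group and LV names are
placeholders, it assumes ext4, and it skips the encryption layer I
actually use.
pvcreate /dev/sdb                                # make the new drive a physical volume
vgextend backup_vg /dev/sdb                      # add it to the existing volume group
lvextend -l +100%FREE /dev/backup_vg/backup_lv   # grow the logical volume into the new space
resize2fs /dev/backup_vg/backup_lv               # grow the filesystem to match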
Any concerns with the data you see? Would you be OK using this drive?
On Wed, May 07, 2025 at 09:18:16AM +0100, Michael wrote:
On Wednesday, 7 May 2025 00:30:34 British Summer Time Dale wrote:
[…]
I ran an hdparm test. I wanted to see as accurately as I could what the speed was. I got this.
root@nas ~ # hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 7106 MB in 2.00 seconds = 3554.48 MB/sec
These are rather pedestrian ^^^^ but I do not have any drives as large as yours to compare. A 4G drive here shows this:
~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 52818 MB in 1.99 seconds = 26531.72 MB/sec
Timing buffered disk reads: 752 MB in 3.00 seconds = 250.45 MB/sec
That's an order of magnitude higher cached reads.
So you have a faster machine, possibly DDR5.
Dale’s NAS is an old build.
That’s why it’s called cached.
From the manpage of hdparm: This measurement [of -T] is essentially an indication of the throughput of the processor, cache, and memory of the system under test. This displays the speed of reading directly from the
Linux buffer cache without disk access. -------------------^^^^^^^^^^^^^^^^^^^
Howdy,
Same thing, just a new drive. I got a new 20TB drive in, the same model as
the last one in this thread. This is what I get from it.
May 29 19:06:52 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 29 19:07:02 Gentoo-1 kernel: ata4: softreset failed (1st FIS failed)
May 29 19:07:02 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 29 19:07:12 Gentoo-1 kernel: ata4: softreset failed (1st FIS failed)
May 29 19:07:12 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 29 19:07:17 Gentoo-1 kernel: ata4: found unknown device (class 0)
May 29 19:07:17 Gentoo-1 kernel: ata4: SATA link up 1.5 Gbps (SStatus
113 SControl 330)
May 29 19:07:17 Gentoo-1 kernel: ata4: link online but 1 devices misclassified, retrying
May 29 19:07:17 Gentoo-1 kernel: ata4: reset failed (errno=-11),
retrying in 30 secs
May 29 19:07:47 Gentoo-1 kernel: ata4: SATA link up 1.5 Gbps (SStatus
113 SControl 330)
May 29 19:07:48 Gentoo-1 kernel: ata4.00: ATA-11: ST20000NM007D-3DJ103,
SN05, max UDMA/133
May 29 19:07:48 Gentoo-1 kernel: ata4.00: 39063650304 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 29 19:07:48 Gentoo-1 kernel: ata4.00: Features: NCQ-sndrcv
May 29 19:07:48 Gentoo-1 kernel: ata4.00: configured for UDMA/133
May 29 19:07:48 Gentoo-1 kernel: scsi 3:0:0:0: Direct-Access
ATA ST20000NM007D-3D SN05 PQ: 0 ANSI: 5
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: Attached scsi generic sg9
type 0
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] 39063650304 512-byte logical blocks: (20.0 TB/18.2 TiB)
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] 4096-byte physical blocks
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] Write Protect is off
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] Mode Sense: 00 3a 00 00
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] Preferred minimum I/O
size 4096 bytes
May 29 19:07:48 Gentoo-1 kernel: sd 3:0:0:0: [sdi] Attached SCSI
removable disk
And hdparm -I shows this.
root@Gentoo-1 / # hdparm -I dev/sdi
dev/sdi:
ATA device, with non-removable media
Model Number: ST20000NM007D-3DJ103
Serial Number: ZVT3ANGK
Firmware Revision: SN05
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II
Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
Used: unknown (minor revision code 0xffff)
Supported: 11 10 9 8 7 6 5
Likely used: 11
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 39063650304
Logical Sector size: 512 bytes [ Supported:
512 4096 ]
Physical Sector size: 4096 bytes
Logical Sector-0 offset: 0 bytes
device size with M = 1024*1024: 19074048 MBytes
device size with M = 1000*1000: 20000588 MBytes (20000 GB)
cache/buffer size = unknown
Form Factor: 3.5 inch
Nominal Media Rotation Rate: 7200
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 254, current value: 0
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
Power-Up In Standby feature set
* SET_FEATURES required to spinup after power up
SET_MAX security extension
* 48-bit Address feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* Media Card Pass-Through
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
* IDLE_IMMEDIATE with UNLOAD
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* unknown 119[6]
* unknown 119[7]
unknown 119[8]
unknown 119[9]
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
* Native Command Queueing (NCQ)
* Phy event counters
* Idle-Unload when NCQ is active
* READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
* DMA Setup Auto-Activate optimization
Device-initiated interface power management
* Software settings preservation
unknown 78[7]
* SMART Command Transport (SCT) feature set
* SCT Write Same (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
unknown 206[7]
unknown 206[12] (vendor specific)
unknown 206[13] (vendor specific)
unknown 206[14] (vendor specific)
* SANITIZE_ANTIFREEZE_LOCK_EXT command
* SANITIZE feature set
* OVERWRITE_EXT command
* All write cache is non-volatile
* Extended number of user addressable sectors
<<< SNIP >>>
Checksum: correct
root@Gentoo-1 / #
According to that, it is connected at udma6, which is the fastest. So
that is good, I guess. Since both drives are slow to connect, it seems
this is a trend and may just be normal for these drives. Given that I use
these for data that is only put to use a few minutes after booting,
which gives the drive plenty of time to connect properly, it isn't a
problem for me. I just wouldn't want to try to put an OS on the thing
and boot from it.
Anyone else have different thoughts? See a problem that is a trend, in
a bad way? Given that two different drives have the same slow connect time,
maybe it is normal.
By the way, I have data on the first 20TB drive and it has worked fine
so far. I've had to reboot a couple of times, and by the time the system
comes up, I log in, KDE comes up and I get to a Konsole to unlock the
encrypted drives, they are ready. I'd guess it takes at least three
minutes from power-up until I unlock the drives. This would likely be
shorter if I put one of these drives on the NAS box, which has no GUI.
Still, I give it plenty of time to come up before even ssh'ing into the
box, much less unlocking it.
Dale
:-) :-)
According to that, it is connected at udma6, which is the fastest. So
that is good, I guess. Since both drives are slow to connect, it seems
this is a trend and may just be normal for these drives. Given that I use
these for data that is only put to use a few minutes after booting,
which gives the drive plenty of time to connect properly, it isn't a
problem for me. I just wouldn't want to try to put an OS on the thing
and boot from it.
Anyone else have different thoughts? See a problem that is a trend, in
a bad way? Given that two different drives have the same slow connect time, maybe it is normal.
You can transfer some data from a tmpfs and measure the speed. If it gets anywhere near 4.8 Gbit/s (600 MB/s), it's SATA 3.
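Something along these lines would do it. A rough sketch only, assuming
the drive has a filesystem mounted at /mnt/test; adjust paths and sizes
to taste.
dd if=/dev/urandom of=/dev/shm/speedtest bs=1M count=4096                        # stage a file in RAM (tmpfs)
dd if=/dev/shm/speedtest of=/mnt/test/speedtest bs=1M oflag=direct conv=fsync    # write it to the drive, bypassing the page cache
dd if=/mnt/test/speedtest of=/dev/null bs=1M iflag=direct                        # read it back the same way
rm /dev/shm/speedtest /mnt/test/speedtest
The rates dd prints at the end are the numbers to compare against the bus limit.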
On Fri, May 30, 2025 at 11:56:24AM +0100, Michael wrote:
According to that, it is connected at udma6, which is the fastest. So that is good, I guess. Since both drives are slow to connect, it seems this is a trend and may just be normal for these drives. Given that I use these for data that is only put to use a few minutes after booting,
which gives the drive plenty of time to connect properly, it isn't a problem for me. I just wouldn't want to try to put an OS on the thing
and boot from it.
Anyone else have different thoughts? See a problem that is a trend, in a bad way? Given that two different drives have the same slow connect time, maybe it is normal.
You can transfer some data from a tmpfs and measure the speed. If it gets anywhere near 4.8 Gbit/s (600 MB/s), it's SATA 3.
But only on a fast SSD. An HDD does not reach this speed at all. I have a USB3 to M.2 SATA enclosure with an SSD inside and I get around 400 MB/s for sequential transfers of big video files. I guess the SSD doesn’t get any faster.
Michael wrote:
You can transfer some data from a tmpfs and measure the speed. If it gets anywhere near 4.8 Gbit/s (600 MB/s), it's SATA 3. The delay in the kernel picking it up at boot may be related to its size, but I have no experience of such large drives to be able to confirm this.
When I started reading your reply, I realized I had that drive in the
older external enclosure connected to my main rig. I'm not sure if it
is SATA 3 capable or not; I need to research the specs on that
enclosure. It works fine for running self-tests tho. Anyway, I
connected it to my NAS box. On it, it reads at speeds that make me
think it is SATA 3. It's around 250MB/sec with hdparm -t, which is
normal on most all my rigs.
I'm surprised by the speed of the OS drive tho. It is an SSD.
Anyway, as one can see, some drives are connected at SATA 2 and some at
SATA 3. Now I notice the 4-drive set is connected to a PCIe card. The
three-drive set and the OS drive are connected to the mobo itself. This
makes me wonder: are the mobo ports only SATA 2? I went and looked at
the manual, which shows a block diagram of what is what. Sure enough,
the mobo ports are SATA 2, or 3Gb/s. Well, that explains that.
I can't believe that mobo is SATA 2 tho. I could have sworn it was SATA 3.
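As an aside, one way to list what each SATA link actually negotiated,
straight from sysfs, is a one-liner like this (a sketch; the exact paths
can vary a little between kernels):
grep . /sys/class/ata_link/link*/sata_spd
That prints each link and its current speed, e.g. 3.0 Gbps or 6.0 Gbps.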
Dale
:-) :-)
One more update. I did some research on the older external enclosure
that hard drives just slide into, no disassembly to change drives, and
it is indeed a SATA 2 spec enclosure. Don't get me wrong, I love the
enclosure. I can swap drives in well under 30 seconds. It's a nice
enclosure and I wish I had some more of them, just maybe SATA 3.
[…] I like the ease with which I can swap drives. I mostly use
them for testing and for updating a hard drive backup of /root, /etc and my
world file. I only update those every couple of months or so anyway, so
speed just isn't an issue.