On 8th November 2024, David Brown wrote:
"Unfortunately, in some regulated markets, or for some types of "safety certification", the rule-makers don't understand how this works. The result is
that they insist on extra fault-checking hardware and/or software that actually
decreases the total reliability of the system, and introduces new parts that in
themselves cannot be checked (systematically, in production, and/or in the field)."
Professor William H. Sanders of the University of Illinois at Urbana-Champaign
dishonestly boasted, in a lecture to us on what he claims to be "validating
computer system and network trustworthiness", that he had solved a problem of
guaranteeing success against faults -- a problem NASA had complained was not
solvable. He showed us this sham of a solution, and I immediately objected that
the proposal does not succeed. He conceded as much when he said "Who checks the
checker?" Many papers exist with titles similar to that question; they are
supposedly named after the old Roman question about who guards the guards
("Quis custodiet ipsos custodes?").
On 11/12/2024 4:27 PM, Niocláiſín Cóilín de Ġloſtéir wrote:
On 8th November 2024, David Brown wrote:
"Unfortunately, in some regulated markets, or for some types of "safety
certification", the rule-makers don't understand how this works. The result is
that they insist on extra fault-checking hardware and/or software that actually
decreases the total reliability of the system, and introduces new parts that in
themselves cannot be checked (systematically, in production, and/or in the field)."
VALIDATION is simply ensuring the product DELIVERED meets the needs of the
customer to which it is delivered.
TESTING simply ensures that a product meets its specification.
There can be -- and often is -- a disconnect between the specification
and "what the customer wants/needs". Because the spec author often
has incomplete domain knowledge OR makes assumptions that aren't
guaranteed by anything.
Because you are trying to prove the device meets the customer's needs,
you have to have THE product -- not a simulation of it or a bunch of
"test logs" where you threw test cases at the code and "proved" that
the system did what it should.
[I patched a message in a binary for a product many years ago. The
customer -- an IBM division -- aborted the lengthy validation test
when my message appeared ("That is not supposed to be the message
so we KNOW that this isn't the actual system that we contracted to purchase!")]
Because you have to validate the actual product, the types of things
you can do to the hardware to facilitate fault injection are sorely limited. "Are you SURE this doesn't have any secondary impact on
the product that changes WHAT we are testing?"
I was interviewed by a prospective client to work on an
"electronic door lock" system (think: hotels). During the
demo of their prototype, I reached over and unplugged a cable
(that I strongly suspected would inject a fault... that they
would NOT detect!). Sure as shit, their system didn't notice
what I had done and blindly allowed me to produce several
"master keys" while my host watched in horror. All without
the system having a record of my actions!
"Oooops! Wanna bet that wasn't part of the specification?!"
Validation exists for a reason -- separate from and subsequent to
product testing. Because these sorts of SNAFUs happen all the
time!
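For what it's worth, the check their prototype was missing isn't exotic. A
minimal sketch of the idea -- all names here are hypothetical, not theirs --
where a detectable fault (the unplugged cable) both blocks the privileged
operation and leaves an audit record:

/* Sketch only (hypothetical names): a detectable fault -- an unplugged
 * programming cable -- must both block the privileged operation and
 * leave an audit record.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-ins for the real hardware interface; illustrative only. */
static bool programmer_link_ok(void) { return false; }  /* pretend the cable is unplugged */
static bool burn_key(unsigned room, unsigned level) { (void)room; (void)level; return true; }

static void audit(const char *event, unsigned room, unsigned level)
{
    /* A real product would log to tamper-evident storage, not stdout. */
    printf("%ld AUDIT %s room=%u level=%u\n", (long)time(NULL), event, room, level);
}

static bool make_key(unsigned room, unsigned level)
{
    if (!programmer_link_ok()) {
        audit("KEY_REFUSED_LINK_FAULT", room, level);
        return false;   /* fail closed: no "master keys" while the link is suspect */
    }
    bool ok = burn_key(room, level);
    audit(ok ? "KEY_ISSUED" : "KEY_ISSUE_FAILED", room, level);
    return ok;
}

int main(void)
{
    make_key(0, 99);    /* with the link down, this is refused AND recorded */
    return 0;
}

The fail-closed default and the unconditional audit write are the whole
point; neither depends on anyone remembering to check a status flag later.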
On 11/7/2024 11:10 PM, Don Y wrote:
In *regulated* industries (FDA, aviation, etc.), products are
validated (hardware and software) in their "as sold" configurations.
This adds constraints to what can be tested, and how. E.g.,
invariants in code need to remain in the production configuration
if relied upon during validation.
But, *testing* (as distinct from validation) is usually more
thorough and benefits from test-specific changes to the
hardware and software. These allow for fault injection
and observation.
In *unregulated* industries (common in the US but not so abroad),
how much of a stickler is the validation process for this level
of "purity"?
<snip>
OK boss says i gotta build a self driving car huh... ok lets see... java, that's a given.. alright... *starts typing* public class Car extends Vehicle {...
The effort is usually significant (in Pharma, it begins long before the product development -- with audits of your firm, its process and procedures, the qualifications of the personnel tasked with the design/development, etc.).
For a specific product, you must verify everything documented
behaves as stated: show me that you will not accept invalid input;
show me that the mechanism moves to a safe state when configured
(or accessed) improperly; show me that you can vouch for the information
that your sensors CLAIM and the actions that your actuators purport
to effect; etc. Just stating that a particular error message (or other response) will be generated isn't proof that it will -- show me HOW you
sense that error condition, how you report it and then give me a real exemplar to prove that you *can*.
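To make that concrete (a hedged sketch only; the names and the report format
are illustrative, not anything a particular regulator prescribes): the check,
the transition to a safe state, and the report all have to exist as code an
auditor can provoke on demand, e.g.:

/* Sketch: reject out-of-range input, fall back to a safe state, and emit
 * a fault report that can be demonstrated live.  Illustrative names only.
 */
#include <stdbool.h>
#include <stdio.h>

#define SETPOINT_MIN 0
#define SETPOINT_MAX 100

typedef enum { STATE_SAFE, STATE_RUN } state_t;

static state_t state = STATE_SAFE;

static void report_fault(const char *what, int value)
{
    /* The reporting path itself is part of what gets validated. */
    fprintf(stderr, "FAULT: %s (value=%d)\n", what, value);
}

static bool apply_setpoint(int value)
{
    if (value < SETPOINT_MIN || value > SETPOINT_MAX) {
        state = STATE_SAFE;                   /* refuse, and move to the safe state */
        report_fault("setpoint out of range", value);
        return false;
    }
    state = STATE_RUN;
    return true;
}

int main(void)
{
    apply_setpoint(42);     /* accepted */
    apply_setpoint(5000);   /* rejected -- the fault report can be shown on demand */
    printf("final state: %s\n", state == STATE_SAFE ? "SAFE" : "RUN");
    return 0;
}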
The *customer* ultimately knows how the product will be (ab)used -- even
if he failed to communicate that to the developer at the time the specification was written (a common problem is the impedance mismatch
between domains: what the customer takes for granted may not be evident
to the specification developer). He will hold the developer's feet to the fire
and refuse to accept the device for use in his application.
In *regulated* industries (FDA, aviation, etc.), products are
validated (hardware and software) in their "as sold" configurations.
This adds constraints to what can be tested, and how. E.g.,
invariants in code need to remain in the production configuration
if relied upon during validation.
But, *testing* (as distinct from validation) is usually more
thorough and benefits from test-specific changes to the
hardware and software. These allow for fault injection
and observation.
In *unregulated* industries (common in the US but not so abroad),
how much of a stickler is the validation process for this level
of "purity"?
E.g., I have "test" hardware that I use to exercise the algorithms
in my code to verify they operate as intended and detect the
faults against which they are designed to protect. So, I can inject
EDAC errors in my memory interface, SEUs, multiple row/column
faults, read/write disturb errors, pin/pad driver faults, etc.
These are useful (essential?) to proving the software can
detect these faults -- without having to wait for a "natural"
occurrence. But, because they are verified/validated on
non-production hardware, they wouldn't "fly" in regulated
markets.
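As a hedged illustration of what such a test-only shim looks like
(hypothetical names; real EDAC hardware obviously isn't poked this way), a
bit-flip behind a compile-time switch lets the detection logic be exercised
without waiting for a real upset:

/* Sketch: simulate a single-event upset in a protected buffer and confirm
 * the checker notices.  FAULT_INJECTION would only ever be defined in a
 * test build; the detection code itself is identical in production.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t buffer[64];
static uint8_t shadow;                        /* stand-in for real EDAC/ECC state */

static uint8_t parity(const uint8_t *p, size_t n)
{
    uint8_t x = 0;
    while (n--) x ^= *p++;
    return x;
}

static void protect(void) { shadow = parity(buffer, sizeof buffer); }
static int  intact(void)  { return parity(buffer, sizeof buffer) == shadow; }

#ifdef FAULT_INJECTION
static void inject_seu(size_t byte, unsigned bit)    /* test builds only */
{
    buffer[byte] ^= (uint8_t)(1u << bit);
}
#endif

int main(void)
{
    memset(buffer, 0xA5, sizeof buffer);
    protect();
#ifdef FAULT_INJECTION
    inject_seu(17, 3);                        /* flip one bit, as an SEU would */
#endif
    printf("integrity check: %s\n", intact() ? "OK" : "FAULT DETECTED");
    return 0;
}

Compile with -DFAULT_INJECTION for the test article; leave it off for the
production build -- which is exactly the divergence the regulated-market
objection is about.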
Do you "assume" your production hardware/software mimics
the "test" configuration, just by a thought exercise
governing the differences between the two situations?
Without specialty devices (e.g., bond-outs), how can you
address these issues, realistically?
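One partial answer, sketched under the assumption that the only tool available
is discipline in the source tree: keep the checks that validation relied on
unconditionally compiled, and confine every test-only difference to a single
switch, so the claim "the production binary differs from the test article only
by this shim" is short enough to review. Illustrative names only:

/* Sketch: the whole test/production delta lives behind one switch.
 * The invariant checks relied upon during validation are NOT conditional;
 * only the observation/injection hooks are.
 */
#include <stdio.h>
#include <stdlib.h>

/* Stays in every build -- unlike assert(), it cannot be compiled out by NDEBUG. */
#define REQUIRE(cond) \
    do { if (!(cond)) { fprintf(stderr, "invariant failed: %s\n", #cond); abort(); } } while (0)

#ifdef TEST_BUILD
#define OBSERVE(tag, value) fprintf(stderr, "OBSERVE %s=%d\n", (tag), (int)(value))
#else
#define OBSERVE(tag, value) ((void)0)
#endif

static int set_heater_duty(int percent)
{
    REQUIRE(percent >= 0 && percent <= 100);  /* identical in test and production */
    OBSERVE("duty", percent);                 /* present only in the test article */
    return percent;
}

int main(void)
{
    set_heater_duty(50);
    return 0;
}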