Don Y <blockedofcourse@foo.invalid> wrote:
Well, if the display cannot update fast, then there is no point in
higher frame rates; just lower the frame rate to what the display can
do. I would expect about 40 machine instructions per sample. One needs
to do this for each frame; assuming 30 frames per second, this gives
1200 million instructions per second. Quite doable on a Raspberry Pi
class board. This assumes doing the computation via the CPU. For the
VideoCore in a Raspberry Pi this should be a very easy job. Smaller
RPi-class boards can fit behind a 10 cm by 10 cm screen, so size
should not be a problem.
I don't understand why one wouldn't want to CAPTURE the entire
RAW signal -- playing the original medium exactly once -- and
then experiment with the captured data, applying post-processing
to *it* instead of trying to tweak things while the original
medium is being subjected to mechanical playback action.
IIUC the point is that data captured with the wrong settings is
essentially useless. I do not know exactly what Liz is doing,
but she clearly wants to monitor and adjust settings in real time.
There may be a psychological effect: the human brain is better at
finding dynamic changes than detail in a static image.
I am not considering the problem of automating what Liz is doing;
for some problems of a related nature there are remarkable
successes, but many problems currently lack a cost-effective
solution.
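The decay-and-redistribute scheme described above can be sketched in a
few lines of NumPy. This is only an illustrative sketch, not anyone's
actual implementation; the 512x512 resolution, the "fade to 10% in one
second" decay constant, and the Lissajous test signal are all my
assumptions:

```python
import numpy as np

W = H = 512            # display resolution (assumed)
FPS = 30
DECAY_S = 1.0          # decay time, matching the 1 s figure above
decay = 0.1 ** (1.0 / (DECAY_S * FPS))   # per-frame multiplier: fade to 10%/s

phosphor = np.zeros((H, W), dtype=np.float32)

def update_frame(xs, ys):
    """Fade the old trace, then deposit this frame's samples.

    xs, ys are sample coordinates already scaled to pixel units.
    """
    phosphor[:] *= decay                     # exponential persistence
    ix = np.clip(xs.astype(int), 0, W - 1)
    iy = np.clip(ys.astype(int), 0, H - 1)
    np.add.at(phosphor, (iy, ix), 1.0)       # add.at handles repeated pixels
    return phosphor

# One synthetic frame: 1 MS/s at 30 fps is 33,333 samples per frame.
t = np.arange(1_000_000 // FPS) / 1e6
xs = (np.sin(2 * np.pi * 440 * t) * 0.45 + 0.5) * W
ys = (np.sin(2 * np.pi * 660 * t) * 0.45 + 0.5) * H
img = update_frame(xs, ys)
```

The per-sample cost here is a couple of multiply-adds plus the decay
pass over the buffer, which is roughly the arithmetic budget the 40
instructions per sample estimate implies.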
Waldek Hebisch <antispam@fricas.org> wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
[...]
I have now built a Wave2 dual-input oscilloscope from a kit and
connected it up to the X and Y outputs of the vector analyser. A low
refresh rate shows the trace as a series of widely-spaced dots, looking
almost like a photograph of a galaxy. As the rate is increased, the
dots become closer until they appear as lines - but then some of the
transient detail is missed altogether.
Nowhere near as useful as a CRT oscilloscope.
Liz Tuddenham <liz@poppyrecords.invalid.invalid> wrote:
Waldek Hebisch <antispam@fricas.org> wrote:
[...]
I have now built a Wave2 dual-input oscilloscope from a kit and
connected it up to the X and Y outputs of the vector analyser. A low
refresh rate shows the trace as a series of widely-spaced dots, looking
almost like a photograph of a galaxy. As the rate is increased, the
dots become closer until they appear as lines - but then some of the
transient detail is missed altogether.
Nowhere near as useful as a CRT oscilloscope.
"A picture's worth a thousand words."
I'm trying (and failing) to imagine the image shown on the oscilloscope.
Does it look anything like this?
<https://res.utmel.com/Images/UEditor/e4ea58ec-2720-40a7-9536-c9875c7df749.png>
73,
--
Don veritas _|_ KB7RPU liberabit | https://www.qsl.net/kb7rpu vos |
On Tue, 15 Jul 2025 11:09:16 +0100, liz@poppyrecords.invalid.invalid
(Liz Tuddenham) wrote:
<snip>
An LCD implementation would be small and simple and cost $20-50. And
have nice colors.
...But will it do the job? The answer so far appears to be "No". As
the price is so low, I might be tempted to buy one and try it.
And be more reliable.
Yes - if it can be made to work in the first place.
Considering how long it took the industry to mimic persistence in
the LCD displays of top quality lab equipment, it is likely no
simple matter.
Its lack was always a major complaint of analog users. Digital
apps often didn't need anything more than representative displays.
It probably still defines whether a display is intended for toying
or professional use.
On Mon, 11 Aug 2025 11:12:55 -0400, legg <legg@nospam.magma.ca> wrote:
<snip>
Actually, very basic persistence has been offered in most digital
scopes from the beginning - it was the effectiveness of the attempt
that tended to be disappointing. The old monochrome TDS210 and the
recently popular Rigol DS1054 both offer manually variable settings
in their display menus.
Some allow this to be displayed on a PC - so you might not need a
screen on the applicable equipment, just the PC interface and SW.
Fooling with these may let you know what a satisfactory display
requires, or if it will ever be satisfactory, off-tube.
Sorry the Wave 2 was a bust. You might fiddle with its settings as
well, in a non-XY mode, before switching over. Not just offsets, gain
and frequency. Triggering, if relevant, may be a unique learning
process for any model, that can only be 'assessed' prior to XY
transference.
On 12/08/2025 13:57, Don wrote:
[...]
"A picture's worth a thousand words."
I'm trying (and failing) to imagine the image shown on the oscilloscope.
Does it look anything like this?
<https://res.utmel.com/Images/UEditor/e4ea58ec-2720-40a7-9536-c9875c7df749.png>
Maybe more like this:
https://www.rtw.com/en/blog/focus-the-audio-vectorscope.html
Scroll down for a video.
Don Y <blockedofcourse@foo.invalid> wrote:
On 7/31/2025 5:27 PM, Waldek Hebisch wrote:
Sampling at 1 MS/sec (should be enough for a 100 kHz signal) and
assuming decay in 1 s gives 1 million points for the display. For
each point one needs to apply a decay factor and redistribute the
value between corresponding points; I would expect about 40 machine
instructions per sample.
I hacked together a demo to try to get an idea as to performance.
Of course, I write in HLLs so counting machine cycles isn't easy
from the source code.
But, "40" was clearly a low number -- unless you are relying on
graphic hardware to implement (e.g.) line drawing primitives.
I threw together a DDA to play "connect the dots" -- the
dots being the individual samples (L,R) --> (X,Y). Then,
a clipping region to ensure the display is never overshot.
Then, goosed the gain so two consecutive samples could be
at opposite edges of the clipping region.
This means a sample can require a line segment (I didn't bother
trying to fit curves to the data) that crosses the entire
display. HUNDREDS of dots to be individually plotted.
Clearly not going to happen in 40 opcodes -- unless those include
graphic primitives (AND execute in the same time as other opcodes)
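The "connect the dots" step being described can be illustrated with a
plain DDA. This is a sketch in Python, not the code discussed above,
and it substitutes a simple clamp for the clipping region mentioned --
cruder, but it shows why a full-screen segment costs hundreds of
individually plotted pixels:

```python
def dda_points(x0, y0, x1, y1):
    """All integer pixels on the segment (x0,y0)-(x1,y1), DDA style:
    step along the longer axis, interpolate the other."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * i / steps),
             round(y0 + (y1 - y0) * i / steps))
            for i in range(steps + 1)]

def plot_samples(samples, w, h):
    """Connect consecutive (x, y) samples, clamping to a w-by-h display.

    Note: clamping distorts off-screen segments along the edges;
    a real clip region (e.g. Cohen-Sutherland) would trim them instead.
    """
    lit = set()
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        for x, y in dda_points(x0, y0, x1, y1):
            lit.add((min(max(x, 0), w - 1), min(max(y, 0), h - 1)))
    return lit
```

Two consecutive samples at opposite corners of a 512x512 display
generate 512 pixels from a single sample pair - hence the point that
this cannot fit in 40 opcodes without hardware line primitives.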
Yes. I was thinking about a simpler approach: just distribute the
value in a 2 by 2 square. Given a largish number of points
the effect could be quite good. The page linked to in one
of the previous messages claimed to use a similar approach
(a large number of points and simple arithmetic for each
point) and others agreed that the result was good.
One needs to do this for each frame;
assuming 30 frames per second this gives 1200 million instructions
per second. Quite doable on a Raspberry Pi class board. This assumes
doing the computation via the CPU. For the VideoCore in a Raspberry
Pi this should be a very easy job. Smaller RPi-class boards can fit
behind a 10 cm by 10 cm screen, so size should not be a problem.
Worst case, you have to imagine filling the entire screen
at the frame rate (and however many samples that corresponds to)
Yes; with a long tail of points, at some moment it is cheaper to
combine the screen content with the new points.
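The 2-by-2 distribution idea above reads like a bilinear splat. A
sketch follows; the weighting is my guess at what is meant -- the page
referred to earlier may do something different:

```python
import numpy as np

def splat_2x2(buf, x, y, value=1.0):
    """Spread `value` over the 2x2 pixels around the fractional
    position (x, y), weighted bilinearly; the four weights sum to 1."""
    h, w = buf.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    for dy, wy in ((0, 1 - fy), (1, fy)):
        for dx, wx in ((0, 1 - fx), (1, fx)):
            px, py = x0 + dx, y0 + dy
            if 0 <= px < w and 0 <= py < h:   # drop weight falling off-screen
                buf[py, px] += value * wx * wy

buf = np.zeros((8, 8), dtype=np.float32)
splat_2x2(buf, 3.5, 3.5)    # lands exactly between four pixels: 0.25 each
```

This is only a handful of multiplies and adds per sample, which is what
keeps the per-point cost near the 40-instruction estimate.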
legg <legg@nospam.magma.ca> wrote:
On Mon, 11 Aug 2025 11:12:55 -0400, legg <legg@nospam.magma.ca> wrote:
<snip>
Some allow this to be displayed on a PC - so you might not need a
screen on the applicable equipment, just the PC interface and SW.
I absolutely do not want to have a PC involved in any way. This is a
self-contained analogue piece of equipment that just needs a suitable
display.
Fooling with these may let you know what a satisfactory display
requires, or if it will ever be satisfactory, off-tube.
Sorry the Wave 2 was a bust. You might fiddle with its settings as
well, in a non-XY mode, before switching over. Not just offsets, gain
and frequency. Triggering, if relevant, may be a unique learning
process for any model, that can only be 'assessed' prior to XY
transference.
I am mystified about the triggering, as it shouldn't need triggering in
the X-Y mode. The instruction sheet doesn't explain why triggering
should be needed but just lists various options.
Liz Tuddenham <liz@poppyrecords.invalid.invalid> wrote:
[...]
Don wrote:
"A picture's worth a thousand words."
I'm trying (and failing) to imagine the image shown on the oscilloscope.
Does it look anything like this?
<https://res.utmel.com/Images/UEditor/e4ea58ec-2720-40a7-9536-c9875c7df749.png>
Partial Discharge Test?
Hi Liz, I shopped for digital oscilloscopes a few years ago (the
spreadsheets on the EEVblog forum were helpful). I found it gets very
pricey or impossible to find high-resolution LCDs; VGA or SVGA seems
typical. The remedy is to pan and zoom the waveform (pinch and zoom on
a touchscreen), so the LCD's native resolution is less of an issue.
Therefore, I suggest a Raspberry Pi or similar SBC, where you can plug
in your own LCD panel and choose as high a resolution as you desire
(or can afford) - and easily upgrade it in future as prices fall.
There are many hits online for "raspberry pi oscilloscope". I like
the ones that use an external ADC chip, like this:
https://singleboardbytes.com/707/build-own-raspberry-pi-oscilloscope.htm
(I can't vouch for that project's viability, not having done this.)
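On the host side, the external-ADC route is mostly bit-fiddling. As an
illustration only -- assuming an MCP3008-class 10-bit SPI ADC, which is
common in Raspberry Pi scope projects but not necessarily what the
linked one uses -- the three-byte exchange looks like this:

```python
def mcp3008_request(channel):
    """Three bytes to clock out for a single-ended read of `channel` (0-7):
    start bit, then single-ended flag + channel in the high nibble."""
    assert 0 <= channel <= 7
    return [0x01, (0x08 | channel) << 4, 0x00]

def mcp3008_decode(reply):
    """10-bit sample from the three bytes clocked back in."""
    return ((reply[1] & 0x03) << 8) | reply[2]

def to_volts(raw, vref=3.3):
    """Scale a 10-bit code to volts (the 3.3 V reference is an assumption)."""
    return raw * vref / 1023.0

# On a Raspberry Pi this would pair with the spidev module, roughly:
#   import spidev
#   spi = spidev.SpiDev()
#   spi.open(0, 0)
#   spi.max_speed_hz = 1_350_000
#   raw = mcp3008_decode(spi.xfer2(mcp3008_request(0)))
```

Note the sample-rate ceiling: a 10-bit chip like this tops out around
200 kS/s, so a faster parallel or high-speed-SPI ADC would be needed
for the MS/s rates discussed earlier in the thread.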