Sysop: Amessyroom
Location: Fayetteville, NC
Users: 42
Nodes: 6 (0 / 6)
Uptime: 02:08:18
Calls: 220
Calls today: 1
Files: 824
Messages: 121,544
Posted today: 6
I know it CAN handle BIG transactions.
But SHOULD it ?
This is maybe a "philosophical" matter ...
I'm still of the crap CPU/Mem era ... always look
to minimize/simplify.
On Sat, 23 Nov 2024 01:27:25 -0500, 186282@ud0s4.net wrote:
On 11/23/24 12:19 AM, Lawrence D'Oliveiro wrote:
You didn’t realize IPC on *nix is industrial-strength?
I know it CAN handle BIG transactions.
But SHOULD it ?
That’s what it’s designed for!
I dd a hard disk partition, compress it, and at the same time
calculate a checksum.
mkfifo mdpipe
dd if=/dev/$1 status=progress bs=16M | tee mdpipe | pigz -3 > $3.gz &
md5sum -b mdpipe | tee -a md5checksum_expanded
wait
rm mdpipe
echo "$3" >> md5checksum_expanded
*Should* is an interesting word ...
Well yes, but we have gigabytes of RAM these days.
I use named pipes on my backup script.
I dd a hard disk partition, compress it, and at the same time calculate
a checksum.
mkfifo mdpipe
dd if=/dev/$1 status=progress bs=16M | tee mdpipe | pigz -3 > $3.gz &
md5sum -b mdpipe | tee -a md5checksum_expanded
wait
rm mdpipe
echo "$3" >> md5checksum_expanded
This way there is only one disk read operation. I can see the thing
running at max hard disk speed.
Carlos E.R. <robin_lis...@es.invalid> [CE]:
I dd a hard disk partition, compress it, and at the same time
calculate a checksum.
mkfifo mdpipe
dd if=/dev/$1 status=progress bs=16M | tee mdpipe | pigz -3 > $3.gz &
md5sum -b mdpipe | tee -a md5checksum_expanded
wait
rm mdpipe
echo "$3" >> md5checksum_expanded
David B Rosen has written the tpipe(1) utility for exactly such cases:
The above steps can be rewritten in a much cleaner way as:
dd if=/dev/$1 status=progress bs=16M |
tpipe "md5sum -b >> md5checksum_expanded" |
pigz -3 > $3.gz
If tpipe is not available on your system, you can always use the
shell's process substitution feature instead:
dd if=/dev/$1 status=progress bs=16M |
tee >(md5sum -b >> md5checksum_expanded) \
>(pigz -3 > $3.gz) \
>/dev/null
On Sat, 23 Nov 2024 14:39:40 +0100, Carlos E.R. wrote:
I use named pipes on my backup script.
I dd a hard disk partition, compress it, and at the same time calculate
a checksum.
mkfifo mdpipe
dd if=/dev/$1 status=progress bs=16M | tee mdpipe | pigz -3 > $3.gz &
md5sum -b mdpipe | tee -a md5checksum_expanded
wait
rm mdpipe
echo "$3" >> md5checksum_expanded
This way there is only one disk read operation. I can see the thing
running at max hard disk speed.
Clever. Just one thing, I would probably use a dynamic name for the
pipe and put it in $TMPDIR (e.g. generated with tempfile) so that 1)
multiple instances could run at once, and 2) it doesn’t depend on
writing into the current directory, whatever that might be.
(It’s likely neither of those issues is relevant to your particular
use case ...)
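A minimal sketch of that suggestion, for the record. mktemp(1) is used here as a stand-in for the tempfile tool mentioned above (tempfile is Debian-specific and deprecated), and the real dd/pigz stage is stubbed out with a printf so the skeleton runs anywhere:

```shell
#!/bin/sh
# Sketch: give the FIFO a unique name under $TMPDIR so that multiple
# instances can run at once and nothing depends on the current directory.
workdir=$(mktemp -d "${TMPDIR:-/tmp}/backup.XXXXXX") || exit 1
fifo="$workdir/mdpipe"
mkfifo "$fifo"

# The real script would run:  dd ... | tee "$fifo" | pigz ... &
# A stand-in producer/consumer pair keeps the demo self-contained:
printf 'demo data\n' | tee "$fifo" > /dev/null &
cat "$fifo"

wait
rm "$fifo"
rmdir "$workdir"
```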
On Sat, 23 Nov 2024 11:12:50 +0000, The Natural Philosopher wrote:
*Should* is an interesting word ...
Here I think it is being used to backpedal from the poster’s original
claim that pipes are somehow unsuited to passing around large quantities
of data, while trying to somehow save face.
Well yes, but we have gigabytes of RAM these days.
That’s irrelevant. Pipes originated on the earliest Unix machine, which
was a PDP-11 with only a 64kiB address space. They work great for
pumping around gigabytes of data, but you don’t need gigabyte-sized
memory buffers to do that.
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
Pipes originated on the earliest Unix machine, which
was a PDP-11 with only a 64kiB address space. They work great for
pumping around gigabytes of data, but you don’t need gigabyte-sized
memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then on a mass storage
device.
There has been a long tendency in Linux/comp groups
to horribly attack, belittle, abuse, anybody who has
had a 'different experience' and sees things a little
off some kinda poorly-defined 'norm'.
Sorry, we all didn't come up on the same track.
A thousand different paths, a thousand different
styles of apps/needs/solutions. Computers let
you DO that.
Hey, if I feel the need to use files instead
of 'pipes' then I WANT TO USE FILES INSTEAD
OF PIPES. Don't like it ? Tuff titty. Any
'contributions' will be to tell me how to
maximize that approach - not to rain down piss.
(actually I think pipes are better - but there
are and will be other approaches/reasons)
The recent elections kinda upset the Linux/Unix
groups - a lot of politics promoted a lot of
threads. Well, the elections are OVER now.
Back to business.
BUT ... consider ... "back to business" does
not need to mean "back to old habits". We all
can do BETTER, move towards the future instead
of being at each others throats over NOTHING.
Just sayin'
Most everybody here seems to have been in the
groove since (or during) PUNCH CARDS. Let's
not be petty. We all did it OUR WAY.
Hey, I remember the giant handful of punch
cards - DON'T DROP 'EM ! :-)
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
That’s irrelevant. Pipes originated on the earliest Unix machine,
which was a PDP-11 with only a 64kiB address space. They work great
for pumping around gigabytes of data, but you don’t need
gigabyte-sized memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then
on a mass storage device.
Of course 'gigabytes' ALL AT ONCE -vs- "a little at a time, added
up" are entirely different things.
186282@ud0s4.net <186283@ud0s4.net> wrote:
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
That’s irrelevant. Pipes originated on the earliest Unix machine,
which was a PDP-11 with only a 64kiB address space. They work great
for pumping around gigabytes of data, but you don’t need
gigabyte-sized memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then
on a mass storage device.
Nope, at least not with pipes.
On Sun, 24 Nov 2024 14:25:10 +0000, Rich wrote:
186282@ud0s4.net <186283@ud0s4.net> wrote:
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
That’s irrelevant. Pipes originated on the earliest Unix machine,
which was a PDP-11 with only a 64kiB address space. They work
great for pumping around gigabytes of data, but you don’t need
gigabyte-sized memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then
on a mass storage device.
Nope, at least not with pipes.
Hold on a sec.... pipes are /buffered/ in RAM, so there's at least a
small bit of ram set aside for each open pipe. On Linux, pipe(7)
says "In Linux versions before 2.6.11, the capacity of a pipe was the
same as the system page size (e.g., 4096 bytes on i386). Since Linux
2.6.11, the pipe capacity is 65536 bytes. Since Linux 2.6.35, the
default pipe capacity is 65536 bytes, but the capacity can be queried
and set using the fcntl(2) F_GETPIPE_SZ and F_SETPIPE_SZ operations.
See fcntl(2) for more information." and that "capacity" referred to
consists of a kernel-managed RAM buffer.
[snip]
On Sun, 24 Nov 2024 14:25:10 +0000, Rich wrote:
186282@ud0s4.net <186283@ud0s4.net> wrote:
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
That’s irrelevant. Pipes originated on the earliest Unix machine,
which was a PDP-11 with only a 64kiB address space. They work great
for pumping around gigabytes of data, but you don’t need
gigabyte-sized memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then
on a mass storage device.
Nope, at least not with pipes.
Hold on a sec.... pipes are /buffered/ in RAM, so there's at least
a small bit of ram set aside for each open pipe.
For example,
head -c $((1024*1024*1024)) /dev/urandom | sha256sum
puts a gigabyte of data through a pipe, but at no point does anything allocate anywhere close to a gigabyte of storage of any kind.
Lew Pitcher <lew.pitcher@digitalfreehold.ca> writes:
On Sun, 24 Nov 2024 14:25:10 +0000, Rich wrote:
186282@ud0s4.net <186283@ud0s4.net> wrote:
On 11/23/24 4:25 PM, Lawrence D'Oliveiro wrote:
That’s irrelevant. Pipes originated on the earliest Unix machine,
which was a PDP-11 with only a 64kiB address space. They work great
for pumping around gigabytes of data, but you don’t need
gigabyte-sized memory buffers to do that.
It all has to be SOMEWHERE ... if not in RAM then
on a mass storage device.
Nope, at least not with pipes.
Hold on a sec.... pipes are /buffered/ in RAM, so there's at least
a small bit of ram set aside for each open pipe.
The word ‘all’ isn’t just decoration. The claim was ‘it all has to be
somewhere’, and Rich’s point (as I understand it) is that it does not
all have to be somewhere.
For example,
head -c $((1024*1024*1024)) /dev/urandom | sha256sum
puts a gigabyte of data through a pipe, but at no point does anything allocate anywhere close to a gigabyte of storage of any kind.
Hey, if I feel the need to use files instead of 'pipes' then I WANT
TO USE FILES INSTEAD OF PIPES.
...
Pipes are good.
But, really, they're just temp files the parent process can access.
186282@ud0s4.net <186283@ud0s4.net> wrote:
Hey, if I feel the need to use files instead of 'pipes' then I WANT
TO USE FILES INSTEAD OF PIPES.
Except you did not say you were "using files instead of 'pipes'".
What you said, in Message-ID: <hzSdnTUBKbG_YKv6nZ2dnZfqnPQAAAAA@earthlink.com>
was:
186282@ud0s4.net <186283@ud0s4.net> wrote:
...
Pipes are good.
But, really, they're just temp files the parent process can access.
When reality is that pipes have not been "just temp files" since the
days of MSDOS's fake "pipes", and for Unix systems, pipes have never
been "just temp files".
And when challenged, you doubled down on the "just temp files" part.
Hey, if I feel the need to use files instead of 'pipes' then I WANT TO
USE FILES INSTEAD OF PIPES.
I think the point that is being made by calling pipes "temp files" is
that they are not persistent.
On Mon, 18 Nov 2024, 186282@ud0s4.net wrote:
There has been a long tendency in Linux/comp groups
to horribly attack, belittle, abuse, anybody who has
had a 'different experience' and sees things a little
off some kinda poorly-defined 'norm'.
Sorry, we all didn't come up on the same track.
A thousand different paths, a thousand different
styles of apps/needs/solutions. Computers let
you DO that.
Hey, if I feel the need to use files instead
of 'pipes' then I WANT TO USE FILES INSTEAD
OF PIPES. Don't like it ? Tuff titty. Any
'contributions' will be to tell me how to
maximize that approach - not to rain down piss.
(actually I think pipes are better - but there
are and will be other approaches/reasons)
The recent elections kinda upset the Linux/Unix
groups - a lot of politics promoted a lot of
threads. Well, the elections are OVER now.
Back to business.
BUT ... consider ... "back to business" does
not need to mean "back to old habits". We all
can do BETTER, move towards the future instead
of being at each others throats over NOTHING.
Just sayin'
Most everybody here seems to have been in the
groove since (or during) PUNCH CARDS. Let's
not be petty. We all did it OUR WAY.
Hey, I remember the giant handful of punch
cards - DON'T DROP 'EM ! :-)
Are you saying we should disregard the emperor? Doesn't he teach us that
our hate makes us stronger?
On 11/18/24 4:13 AM, D wrote:
On Mon, 18 Nov 2024, 186282@ud0s4.net wrote:
There has been a long tendency in Linux/comp groups
to horribly attack, belittle, abuse, anybody who has
had a 'different experience' and sees things a little
off some kinda poorly-defined 'norm'.
Sorry, we all didn't come up on the same track.
A thousand different paths, a thousand different
styles of apps/needs/solutions. Computers let
you DO that.
Hey, if I feel the need to use files instead
of 'pipes' then I WANT TO USE FILES INSTEAD
OF PIPES. Don't like it ? Tuff titty. Any
'contributions' will be to tell me how to
maximize that approach - not to rain down piss.
(actually I think pipes are better - but there
are and will be other approaches/reasons)
The recent elections kinda upset the Linux/Unix
groups - a lot of politics promoted a lot of
threads. Well, the elections are OVER now.
Back to business.
BUT ... consider ... "back to business" does
not need to mean "back to old habits". We all
can do BETTER, move towards the future instead
of being at each others throats over NOTHING.
Just sayin'
Most everybody here seems to have been in the
groove since (or during) PUNCH CARDS. Let's
not be petty. We all did it OUR WAY.
Hey, I remember the giant handful of punch
cards - DON'T DROP 'EM ! :-)
Are you saying we should disregard the emperor? Doesn't he teach us that
our hate makes us stronger?
Well, the 'woke' really did try to show us the
power of higher, gigabuck-funded, hate :-)
But that's all over for now.
In any case, I prefer to see these comp groups
as being much better when they are collaborative,
rather than derogatory. Be you old boy or newbie,
everybody has a different 'vision', a slightly
different take on 'how it should be done'. Adding
1000 cuts and ad-homs - all too common on usenet -
does not represent any kind of improvement.
Just wanted to say it.
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files" is
that they are not persistent.
Named pipes can indeed be persistent.
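A quick way to see this from the shell (a minimal sketch; the path is arbitrary): the FIFO is a filesystem node of type 'p' and it outlives the processes that used it.

```shell
#!/bin/sh
# A named pipe persists after the processes that used it have exited,
# until someone removes it (or the filesystem holding it goes away).
fifo=$(mktemp -u /tmp/persist_demo.XXXXXX)   # -u: just generate a name
mkfifo "$fifo"

# Use it once, from two otherwise unrelated processes:
echo hello > "$fifo" &
read line < "$fifo"
wait

# Both processes are gone, but the node remains until we rm it:
[ -p "$fifo" ] && echo "still a pipe"
rm "$fifo"
```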
On 11/19/2024 19:23, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files"
is that they are not persistent.
Named pipes can indeed be persistent.
Sure, but then you're just creating a file with all the limitations
that come from that.
IPC only benefits you when you use unnamed or traditional pipes
(performance and resources).
On Thu, 21 Nov 2024 02:05:46 -0500, Phillip Frabott wrote:
On 11/19/2024 19:23, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files"
is that they are not persistent.
Named pipes can indeed be persistent.
Sure, but then you're just creating a file with all the limitations
that come from that.
Not at all. It still has the same synchronization behaviour.
IPC only benefits you when you use unnamed or traditional pipes
(performance and resources).
Certainly not.
On 11/21/2024 02:22, Lawrence D'Oliveiro wrote:
On Thu, 21 Nov 2024 02:05:46 -0500, Phillip Frabott wrote:
On 11/19/2024 19:23, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files"
is that they are not persistent.
Named pipes can indeed be persistent.
Sure, but then you're just creating a file with all the limitations
that come from that.
Not at all. It still has the same synchronization behaviour.
IPC only benefits you when you use unnamed or traditional pipes
(performance and resources).
Certainly not.
I guess it just depends on what you are doing. And in perspective,
most pipes are generally used for small amounts of data; the smaller
the data, the less difference you see between unnamed and named pipes.
I mean, 100 bytes shows zero performance difference between named and
unnamed, while a 10 MB transfer will always show that unnamed pipes
are faster than named pipes.
So it just depends on what you are doing and the data you have. But
as far as I know named pipes still go away when you turn the machine
off unless you are redirecting /tmp to hard storage.
On 11/21/2024 02:22, Lawrence D'Oliveiro wrote:
On Thu, 21 Nov 2024 02:05:46 -0500, Phillip Frabott wrote:
On 11/19/2024 19:23, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files"
is that they are not persistent.
Named pipes can indeed be persistent.
Sure, but then you're just creating a file with all the limitations
that come from that.
Not at all. It still has the same synchronization behaviour.
IPC only benefits you when you use unnamed or traditional pipes
(performance and resources).
Certainly not.
I guess it just depends on what you are doing.
And in perspective, most pipes are generally used for small amounts of
data ...
We had to drop named pipes solely because of the performance hit
because it is writing to a file system so it's being controlled by the
file system, even if that file system is in memory.
As the demand grows, we are actually at the limits of performance that
even unnamed pipes give us. So we are starting to migrate to UNIX
sockets, which have about double the bandwidth and performance of pipes.
This remark makes me wonder if you’ve got the wrong end of the stick
about what a named pipe is. They are really not the same as regular
files, temporary or otherwise.
On Thu, 21 Nov 2024 21:55:37 -0500, Phillip Frabott wrote:
We had to drop named pipes solely because of the performance hit
because it is writing to a file system so it's being controlled by the
file system, even if that file system is in memory.
That doesn’t make any sense, if we were talking about Linux. Is this on Windows, by any chance?
As the demand grows, we are actually at the limits of performance that
even unnamed pipes give us. So we are starting to migrate to UNIX
sockets, which have about double the bandwidth and performance of pipes.
Not sure how that works, given that Unix sockets are actually a more
complex mechanism than pipes.
Doesn't the named pipe connection work through the filesystem code? That could add overhead.
Can't use named pipes on just any filesystem -- won't work on NFS for example, unless I'm mistaken.
On Thu, 21 Nov 2024 10:12:18 -0500, Phillip Frabott wrote:
On 11/21/2024 02:22, Lawrence D'Oliveiro wrote:
On Thu, 21 Nov 2024 02:05:46 -0500, Phillip Frabott wrote:
On 11/19/2024 19:23, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 09:45:07 -0500, Phillip Frabott wrote:
I think the point that is being made by calling pipes "temp files"
is that they are not persistent.
Named pipes can indeed be persistent.
Sure, but then you're just creating a file with all the limitations
that come from that.
Not at all. It still has the same synchronization behaviour.
IPC only benefits you when you use unnamed or traditional pipes
(performance and resources).
Certainly not.
I guess it just depends on what you are doing.
No it doesn’t. Named or not, pipes are pipes.
And in perspective, most pipes are generally used for small amounts of
data ...
I have used them to transfer quite large amounts, quickly and reliably.
On 11/21/24 4:56 PM, Lawrence D'Oliveiro wrote:
I have used them to transfer quite large amounts, quickly and reliably.
Yep, WILL work. No question.
The question is HOW MUCH should you intend to send back
and forth using pipes (or any other method) between the
parent and children.
On 22 Nov 2024 06:09:05 GMT, vallor wrote:
Doesn't the named pipe connection work through the filesystem code? That
could add overhead.
No. The only thing that exists in the filesystem is the “special file” entry in the directory. Opening that triggers special-case processing in
the kernel that creates the usual pipe buffering/synchronization
structures (or links up with existing structures created by some prior opening of the same special file, perhaps by a different process), not dependent on any filesystem.
I just tried creating a C program to do speed tests on data transfers
through pipes and socket pairs between processes. I am currently setting
the counter to 10 gigabytes, and transferring that amount of data (using whichever mechanism) only takes a couple of seconds on my system.
So the idea that pipes are somehow not suited to large data transfers is patently nonsense.
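The test described above was a C program; a cruder shell probe of the same idea (the transfer size here is arbitrary) still makes the point:

```shell
#!/bin/sh
# Push 1 GiB of zeroes through an anonymous pipe and count it on the
# other side.  Wrap this in `time` to see your own pipe bandwidth; on
# a typical modern Linux box it finishes in well under two seconds.
dd if=/dev/zero bs=1M count=1024 2>/dev/null | wc -c
```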
Can't use named pipes on just any filesystem -- won't work on NFS for
example, unless I'm mistaken.
Hard to believe NFS could stuff that up, but there you go ...
On 11/21/2024 13:38, Richard Kettlewell wrote:
This remark makes me wonder if you’ve got the wrong end of the stick
about what a named pipe is. They are really not the same as regular
files, temporary or otherwise.
From a performance perspective they are, at least from the work I've
done. We had to drop named pipes solely because of the performance hit,
because it is writing to a file system.
On Fri, 22 Nov 2024 03:12:43 -0000 (UTC), Lawrence D'Oliveiro <ldo@nz.invalid> wrote in <vhosra$1171f$1@dont-email.me>:
On Thu, 21 Nov 2024 21:55:37 -0500, Phillip Frabott wrote:
We had to drop named pipes solely because of the performance hit
because it is writing to a file system so it's being controlled by the
file system, even if that file system is in memory.
That doesn’t make any sense, if we were talking about Linux. Is this on
Windows, by any chance?
Doesn't the named pipe connection work through the filesystem code?
That could add overhead.
Can't use named pipes on just any filesystem -- won't work on NFS
for example, unless I'm mistaken.
As the demand grows, we are actually at the limits of performance that
even unnamed pipes give us. So we are starting to migrate to UNIX
sockets, which have about double the bandwidth and performance of pipes.
Not sure how that works, given that Unix sockets are actually a more
complex mechanism than pipes.
With Unix sockets, once the connection is made, it's all in-memory networking.
I suspect (but don't know) that named pipes require the data to pass
through the filesystem for each write.
vallor <vallor@cultnix.org> wrote:
On Fri, 22 Nov 2024 03:12:43 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote in <vhosra$1171f$1@dont-email.me>:
On Thu, 21 Nov 2024 21:55:37 -0500, Phillip Frabott wrote:
We had to drop named pipes solely because of the performance hit
because it is writing to a file system so it's being controlled by
the file system, even if that file system is in memory.
That doesn’t make any sense, if we were talking about Linux. Is this
on Windows, by any chance?
Doesn't the named pipe connection work through the filesystem code?
That could add overhead.
Only to the extent that a filesystem lookup has to occur to lookup the
name in order to open() the name.
Once you have a file descriptor back from the open() call, there is no
difference at all kernel-wise between the two; they are one and the same
block of kernel code.
Can't use named pipes on just any filesystem -- won't work on NFS for
example, unless I'm mistaken.
Correct, you need a filesystem that supports storing a 'name' that is
a reference to a pipe, so Windows filesystems are out.
Named pipes appear as 'pipe' nodes across NFS (just tested this to be
certain). And, so long as all the "accessors" of the named pipe are
running on the same Linux machine with the NFS mount containing the pipe
node, the named pipe works as expected (just tested this as well).
But a named pipe on NFS does not give you a machine to machine (two
different machines) transmit channel.
As the demand grows, we are actually at the limits of performance
that even unnamed pipes give us. So we are starting to migrate to
UNIX sockets, which have about double the bandwidth and performance of
pipes.
Not sure how that works, given that Unix sockets are actually a more
complex mechanism than pipes.
With Unix sockets, once the connection is made, it's all in-memory
networking.
Correct.
I suspect (but don't know) that named pipes require the data to pass
through the filesystem for each write.
Incorrect. The only 'filesystem' access for named pipes is during the
open() call to look up the name from the filesystem. Once you get the
file descriptor back, it is the exact same in-memory FIFO queue as an anonymous pipe created via pipe() (at least on Linux).
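One visible consequence, sketched below (GNU stat assumed): the FIFO node's size stays zero no matter how much data passes through it, because the transfer happens in the kernel's in-memory buffer, not in the filesystem.

```shell
#!/bin/sh
# Move 16 MiB through a named pipe, then inspect the node itself:
# it is still 0 bytes.
fifo=$(mktemp -u /tmp/fifo_size.XXXXXX)
mkfifo "$fifo"
dd if=/dev/zero bs=1M count=16 2>/dev/null > "$fifo" &
bytes=$(wc -c < "$fifo")          # reader drains the pipe
wait
echo "transferred: $bytes"
echo "node size:   $(stat -c %s "$fifo")"   # GNU stat; prints 0
rm "$fifo"
```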
I tested it too (with an NFS v4.1 filesystem), and yes, mkfifo makes a
named pipe, and it works as expected. (Didn't expect it to work across machines, though that would be a neat trick.)
(Haven't figured out how to increase ulimit -p yet, doesn't seem to want
to increase, even as root...)
On Fri, 22 Nov 2024 01:44:35 -0500, 186282@ud0s4.net wrote:
On 11/21/24 4:56 PM, Lawrence D'Oliveiro wrote:
I have used them to transfer quite large amounts, quickly and reliably.
Yep, WILL work. No question.
The question is HOW MUCH should you intend to send back
and forth using pipes (or any other method) between the
parent and children.
How about 10 gigabytes, which I was able to transfer in two seconds? Is
that “too much” for you?
On 11/22/24 1:49 AM, Lawrence D'Oliveiro wrote:
On Fri, 22 Nov 2024 01:44:35 -0500, 186282@ud0s4.net wrote:
On 11/21/24 4:56 PM, Lawrence D'Oliveiro wrote:
I have used them to transfer quite large amounts, quickly and
reliably.
Yep, WILL work. No question.
The question is HOW MUCH should you intend to send back and forth
using pipes (or any other method) between the parent and children.
How about 10 gigabytes, which I was able to transfer in two seconds? Is
that “too much” for you?
Yep - though you may have had a 'radical vision' when writing your parent/child app .... just because *I* wouldn't do it .......
On Fri, 22 Nov 2024 23:44:24 -0500, 186282@ud0s4.net wrote:
On 11/22/24 1:49 AM, Lawrence D'Oliveiro wrote:
On Fri, 22 Nov 2024 01:44:35 -0500, 186282@ud0s4.net wrote:
On 11/21/24 4:56 PM, Lawrence D'Oliveiro wrote:
I have used them to transfer quite large amounts, quickly and
reliably.
Yep, WILL work. No question.
The question is HOW MUCH should you intend to send back and forth
using pipes (or any other method) between the parent and children.
How about 10 gigabytes, which I was able to transfer in two seconds? Is
that “too much” for you?
Yep - though you may have had a 'radical vision' when writing your
parent/child app .... just because *I* wouldn't do it .......
You didn’t realize IPC on *nix is industrial-strength?
On 11/23/24 12:19 AM, Lawrence D'Oliveiro wrote:
You didn’t realize IPC on *nix is industrial-strength?
I know it CAN handle BIG transactions.
But SHOULD it ?
On Mon, 18 Nov 2024 01:20:53 -0500, 186282@ud0s4.net wrote:
Hey, if I feel the need to use files instead of 'pipes' then I WANT TO
USE FILES INSTEAD OF PIPES.
Except ... you were trying to argue that there was no fundamental
difference between pipes and files anyway. That you could somehow do everything you could do with pipes by using temporary files.
Files DO have a hidden advantage - many largely dis-related programs
can ACCESS them.
On Tue, 3 Dec 2024 01:19:53 -0500, 186282@ud0s4.net wrote:
Files DO have a hidden advantage - many largely dis-related programs
can ACCESS them.
On *nix systems, pipes can be accessed just as easily.
On 12/3/24 1:49 AM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 01:19:53 -0500, 186282@ud0s4.net wrote:
Files DO have a hidden advantage - many largely dis-related programs
can ACCESS them.
On *nix systems, pipes can be accessed just as easily.
I can't really argue over pipes/files ...
On 11/19/24 7:22 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 01:20:53 -0500, 186282@ud0s4.net wrote:
Hey, if I feel the need to use files instead of 'pipes' then I WANT TO
USE FILES INSTEAD OF PIPES.
Except ... you were trying to argue that there was no fundamental
difference between pipes and files anyway. That you could somehow do
everything you could do with pipes by using temporary files.
Yep.
But I *just may not WANT to* :-)
Files DO have a hidden advantage - many largely
dis-related programs can ACCESS them. This can
give you stats, insight, 'intelligence'. Pipes
are basically restricted to the original parent
and children. Good reasons for that, sometimes,
but not *always*.
In article <lNycnZghasCXPtP6nZ2dnZfqnPSdnZ2d@earthlink.com>,
"186282@ud0s4.net" <186283@ud0s4.net> writes:
On 11/19/24 7:22 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Nov 2024 01:20:53 -0500, 186282@ud0s4.net wrote:
Hey, if I feel the need to use files instead of 'pipes' then I WANT TO
USE FILES INSTEAD OF PIPES.
Except ... you were trying to argue that there was no fundamental
difference between pipes and files anyway. That you could somehow do
everything you could do with pipes by using temporary files.
Yep.
But I *just may not WANT to* :-)
Files DO have a hidden advantage - many largely
dis-related programs can ACCESS them. This can
give you stats, insight, 'intelligence'. Pipes
are basically restricted to the original parent
and children. Good reasons for that, sometimes,
but not *always*.
Named pipes can allow communication between unrelated processes.
Using files means there has to be locking or some coordination, so
that the receiver only reads the file when the contents are in a
consistent state.
Renaming a file (on the same filesystem, where it's
not a copy and delete) is atomic, so if the file is created in one
directory and moved to a parallel directory when complete, the
receiving program can just grab it from there, perhaps after being
signalled to wake up and scan the directory. That works somewhat
efficiently even without modern locking or filesystem change
notification mechanisms.
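The rename handoff described above, sketched with hypothetical staging/ and pickup/ directory names:

```shell
#!/bin/sh
# Producer side: write the file where the consumer never looks, then
# mv it into place.  rename(2) within a single filesystem is atomic,
# so the consumer can never observe a half-written file in pickup/.
mkdir -p staging pickup                       # hypothetical names
printf 'finished payload\n' > staging/job1    # the slow write happens here
mv staging/job1 pickup/job1                   # atomic publish

# Consumer side: anything present in pickup/ is complete by construction.
cat pickup/job1
rm -r staging pickup
```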
A file has the advantage that one can seek on it, which may simplify
some things; for example if a header has a checksum over the following
data, it's easier to seek back and fill that field in with its final
value. Otherwise one may have to use a temporary file internally anyway.
If you create a tmpfs /ram directory then pipes or named
files would both work in ram, and there would be no
functional difference.
On 14/12/2024 15:54, root wrote:
If you create a tmpfs /ram directory then pipes or named
files would both work in ram, and there would be no
functional difference.
I am not sure enough of how pipes work to be certain of that.
With a pipe or FIFO, you just use simple read and write operations and
the system handles all the messy stuff for you. If the pipe reaches
capacity, write blocks until there is room to write some more; if the
pipe becomes empty, read blocks until there is more data available; when
read returns EOF that's the end of the data.
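Those semantics are easy to watch from the shell (a tiny sketch; the one-second delay only exists to make the blocking visible):

```shell
#!/bin/sh
# The reader blocks in open()/read() until the writer delivers data,
# then sees EOF as soon as the writer closes its end -- no polling,
# no locking, no coordination code of our own.
fifo=$(mktemp -u /tmp/fifo_block.XXXXXX)
mkfifo "$fifo"
( sleep 1; printf 'one\ntwo\n' ) > "$fifo" &   # delayed writer
cat "$fifo"    # blocks about a second, prints both lines, ends at EOF
wait
rm "$fifo"
```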
On Tue, 17 Dec 2024 13:34:30 +0000, Geoff Clare wrote:
With a pipe or FIFO, you just use simple read and write operations and
the system handles all the messy stuff for you. If the pipe reaches
capacity, write blocks until there is room to write some more; if the
pipe becomes empty, read blocks until there is more data available; when
read returns EOF that's the end of the data.
Yup. Furthermore:
* When the last writer closes its end, any remaining read attempts get
EOF.
* When the last reader closes its end, any remaining write attempts get “broken pipe”.
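The second case is exactly what lets an otherwise-infinite pipeline terminate:

```shell
#!/bin/sh
# yes(1) would write "y" forever, but once head exits it has closed the
# read end of the pipe; the next write by yes fails with EPIPE/SIGPIPE
# and the whole pipeline finishes immediately.
yes | head -n 1     # prints a single "y"
```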
On 12/17/24 8:23 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Dec 2024 13:34:30 +0000, Geoff Clare wrote:
With a pipe or FIFO, you just use simple read and write operations
and the system handles all the messy stuff for you. If the pipe
reaches capacity, write blocks until there is room to write some
more; if the pipe becomes empty, read blocks until there is more
data available; when read returns EOF that's the end of the data.
Yup. Furthermore:
* When the last writer closes its end, any remaining read attempts
get EOF.
* When the last reader closes its end, any remaining write attempts
get “broken pipe”.
But you're still limited to the amount of RAM the system can
access.
These days that's probably a LOT - but might NOT be,
esp for 'embedded' type boards like the older PIs,
BBBs and such. Never assume the user has essentially
infinite RAM.
186282@ud0s4.net <186283@ud0s4.net> wrote:
On 12/17/24 8:23 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Dec 2024 13:34:30 +0000, Geoff Clare wrote:
With a pipe or FIFO, you just use simple read and write operations
and the system handles all the messy stuff for you. If the pipe
reaches capacity, write blocks until there is room to write some
more; if the pipe becomes empty, read blocks until there is more
data available; when read returns EOF that's the end of the data.
Yup. Furthermore:
* When the last writer closes its end, any remaining read attempts
get EOF.
* When the last reader closes its end, any remaining write attempts
get “broken pipe”.
But you're still limited to the amount of RAM the system can
access.
Not with a pipe or FIFO, which is what is being discussed above.
The amount of data you can transfer over a pipe is not in any way
limited by system memory size or any other system imposed limits.
These days that's probably a LOT - but might NOT be,
esp for 'embedded' type boards like the older PIs,
BBBs and such. Never assume the user has essentially
infinite RAM.
The system will not have infinite RAM. You can transfer infinite data
over a pipe (although it will take a while to reach infinity).
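The point is easy to see in practice: the pipe below moves 256 MiB end to end, yet at no instant does the kernel hold more than the pipe's small buffer (64 KiB by default on Linux) in flight:

```shell
# Generate 256 MiB and count it on the far side of a pipe. Memory use
# stays tiny no matter how large `count` is made.
dd if=/dev/zero bs=1M count=256 2>/dev/null | wc -c   # 268435456 bytes
```

Scale `count` up as far as you like; only the runtime grows, never the memory footprint.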
Rich <rich@example.invalid> writes:
186282@ud0s4.net <186283@ud0s4.net> wrote:
But you're still limited to the amount of RAM the system can
access.
Not with a pipe or FIFO, which is what is being discussed above.
The amount of data you can transfer over a pipe is not in any way
limited by system memory size or any other system imposed limits.
Quite. I’m not sure why this discussion has restarted but it was clear
from last time round that some of the participants don’t know what a
pipe is, and aren’t particularly interested in finding out.
These days that's probably a LOT - but might NOT be, esp for
'embedded' type boards like the older PIs, BBBs and such. Never
assume the user has essentially infinite RAM.
The system will not have infinite RAM. You can transfer infinite data
over a pipe (although it will take a while to reach infinity).
I think I’d use a slightly weaker term than ‘infinite’; something will
put an upper bound on it, even if it’s the heat death of the universe.
On 2024-12-18, Rich <rich@example.invalid> wrote:
186282@ud0s4.net <186283@ud0s4.net> wrote:
But you're still limited to the amount of RAM the system can
access.
Not with a pipe or FIFO, which is what is being discussed above.
The amount of data you can transfer over a pipe is not in any way
limited by system memory size or any other system imposed limits.
These days that's probably a LOT - but might NOT be,
esp for 'embedded' type boards like the older PIs,
BBBs and such. Never assume the user has essentially
infinite RAM.
The system will not have infinite RAM. You can transfer infinite data
over a pipe (although it will take a while to reach infinity).
A pipe is _NOT_ limited to system RAM!
Using a named pipe on a Raspberry Pi model 1 with a _half_ GB of
total RAM, I would routinely transfer _several_ GB in a single
stream from an mplayer process to a netcat process. The only
reason that's not currently happening every night these days is
the amplified TV antenna lost too much gain due to age, attic
heat, etc.
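A hedged sketch of that pattern, with `dd` and `wc` standing in for the poster's mplayer and netcat (the process names and paths below are stand-ins, not the original setup):

```shell
mkfifo /tmp/stream.fifo

# Stand-in for mplayer dumping the capture stream into the named pipe:
dd if=/dev/zero bs=1M count=64 of=/tmp/stream.fifo 2>/dev/null &

# Stand-in for netcat draining the pipe; here we just count the bytes.
# 64 MiB flows through, far more than the pipe ever buffers at once.
bytes=$(wc -c < /tmp/stream.fifo)
wait
rm /tmp/stream.fifo
echo "$bytes bytes moved through the FIFO"
```

On a 512 MB board this works for streams of any size, because the producer simply blocks whenever it gets ahead of the consumer.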
On Wed, 18 Dec 2024 14:02:52 -0000 (UTC)
Rich <rich@example.invalid> wrote:
The amount of data you can transfer over a pipe is not in any way
limited by system memory size or any other system imposed limits.
Quite. I’m not sure why this discussion has restarted but it was
clear from last time round that some of the participants don’t know
what a pipe is, and aren’t particularly interested in finding out.
Yes, our local nymshift troll seems to clearly not know what a pipe
is, nor care to learn either.
I *think* what he's meaning to say is this: while you can transfer any
arbitrary amount of data *through* a pipe, there is an upper limit to
how much you can have *in* a pipe at any one time; eventually, you hit
either *A.* an OS-imposed limit on buffer size, at which point things
start blocking as already discussed, or *B.* the upper bounds of system
memory, at which point the system will either start swapping (in which
case you lose any speed advantage) or blocking (as with limited buffer
size).
That said, what probably shouldn't need saying here is that if you're
filling up all available space in a pipe such that you're regularly
hitting these limits, you're probably doing pipes wrong.
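On Linux the in-flight limit is concrete and measurable: a pipe's default capacity is 64 KiB, and non-blocking writes start failing exactly there (a Linux-specific sketch; the `/proc` paths and the 64 KiB default are assumptions about a stock kernel):

```shell
mkfifo /tmp/cap.fifo
exec 3<>/tmp/cap.fifo    # hold both ends open; nothing ever reads

# Write one byte at a time without blocking until the kernel buffer
# fills; dd stops at the first EAGAIN and reports how much fit.
dd if=/dev/zero of=/proc/self/fd/3 bs=1 oflag=nonblock conv=notrunc \
    2>/tmp/cap.log || true
cap=$(grep 'records out' /tmp/cap.log | cut -d+ -f1)
echo "pipe capacity: $cap bytes"   # 65536 on a default Linux kernel

exec 3>&-
rm /tmp/cap.fifo /tmp/cap.log
```

The ceiling can be raised per pipe with `fcntl(F_SETPIPE_SZ)`, up to the value in `/proc/sys/fs/pipe-max-size`, but for a well-balanced pipeline the default is rarely the bottleneck.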
Robert Riches <spamtrap42@jacob21819.net> wrote:
A pipe is _NOT_ limited to system RAM!
Using a named pipe on a Raspberry Pi model 1 with a _half_ GB of
total RAM, I would routinely transfer _several_ GB in a single
stream from an mplayer process to a netcat process. The only
reason that's not currently happening every night these days is
the amplified TV antenna lost too much gain due to age, attic
heat, etc.
While you are correct, you responded to the wrong post. I pointed out
to the nymshift troll the exact statement you made to me.
I'm not certain, but I think I might have killfiled the nymshift
troll, so your post was the only one for which I had a reference in
order to contradict said nymshift troll.
Robert Riches <spamtrap42@jacob21819.net> wrote:
I'm not certain, but I think I might have killfiled the nymshift
troll, so your post was the only one for which I had a reference in
order to contradict said nymshift troll.
Ah, ok, now I get why you replied to my post. No worries in that case.