On 2025-03-30 11:33 p.m., Dan Cross wrote:
> I think this stems from this idea you seem to have that threads
> somehow turn into "processors/cores", whatever that means, as
> these are obviously not the hardware devices, when interrupts
> are disabled on the CPUs they are running on. Near as I can
> tell this is your own unique invention, and nothing else uses
> that terminology; use by the systems you cited is not supported
> by evidence.
>
> Consequently, I find your repeated assertions about these terms
> rather strange.
>
> - Dan C.

Having some trouble following the discussion, as I do not have a lot of
experience working with software threads. I know they need some means of
synchronization.

Are you talking about two different kinds of "threads"? RISC-V at least
refers to 'harts', which I think stands for hardware threads. A CPU core
may support multiple harts, which implies multiple copies of the
processor state. In a multi-core system there could be multiple harts
even though each core is only supporting a single one. I am under the
impression hardware and software threads are not the same thing.

I hope I got the lingo correct.
Robert Finch <robfi680@gmail.com> writes:
> [snip]
>
> Are you talking about two different kinds of "threads"? RISC-V at least
> refers to 'harts', which I think stands for hardware threads. A CPU core
> may support multiple harts, which implies multiple copies of the
> processor state. In a multi-core system there could be multiple harts
> even though each core is only supporting a single one. I am under the
> impression hardware and software threads are not the same thing.
>
> I hope I got the lingo correct.

In the abstract, a thread can be considered a sequence of instructions executed with a consistent processor state (registers, flags, MMU).
Threads, using that definition, can be implemented fully in software
(using OS facilities for coordination between entities). User-mode
threads are an example of this, and were implemented in systems that
did not have operating-system support for threads as a schedulable
entity.
Operating systems later added explicit support for multiple threads
of execution to share a single address space by defining a thread
schedulable entity encapsulating the thread context (general register
state, et alia). POSIX Pthreads was the standard on the Unix side.
Subsequently, the hardware vendors realized that they could better
utilize the hardware by creating the concept of a 'hardware thread'
where the processor resources for a core could be divvied up and
shared by multiple hardware contexts (Intel called it hyperthreading).
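
(For concreteness, a minimal sketch of the kernel-scheduled variety described
above: two POSIX threads created with pthread_create share one address space
and use a mutex as their synchronization means. This is a generic
illustration, not code from any of the systems mentioned in the thread.)

/*
 * Minimal sketch: two POSIX threads sharing one address space,
 * i.e. the kernel-schedulable entities described above, with a
 * mutex as the synchronization means.
 * Build with: cc threads.c -pthread
 */
#include <pthread.h>
#include <stdio.h>

static long counter;                              /* shared state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* the "synchronization means" */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return arg;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);      /* kernel-scheduled threads */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);           /* 200000 with the lock held */
    return 0;
}
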
Scott Lurndal wrote:
> [snip]

In my programmer world view, a pure user software thread isn't that
interesting; it typically allows the programmer to control (more or
less) when it can be interrupted.
On 4/1/2025 8:31 AM, Scott Lurndal wrote:
> Terje Mathisen <terje.mathisen@tmsw.no> writes:
>> Scott Lurndal wrote:
>>> [snip]
>> In my programmer world view, a pure user software thread isn't that
>> interesting; it typically allows the programmer to control (more or
>> less) when it can be interrupted.
>
> Using the pre-pthreads POSIX facilities (signals and getcontext/setcontext)
> to support user-level threads wasn't uncommon. Unix SVR4ES/MP actually had
> both kernel threads (called lightweight processes (LWP)) and user-level
> threads in an M-N setup (M user threads multiplexed on N kernel threads).
>
> Didn't turn out to be particularly useful.

Are you referring to green threads? Or Fibers?
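
(A rough sketch of the pre-pthreads getcontext/setcontext approach mentioned
above, under the assumption of a purely cooperative, single-kernel-thread
setup; the function and variable names are invented for the example.)

/*
 * Sketch only: two cooperative user-level threads built on the (now
 * obsolescent) <ucontext.h> primitives. The kernel sees a single
 * schedulable entity; the "threads" switch only when they call
 * swapcontext() explicitly.
 */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_a, ctx_b;

static void run_a(void)
{
    for (int i = 0; i < 3; i++) {
        printf("thread A, step %d\n", i);
        swapcontext(&ctx_a, &ctx_b);          /* explicit yield to B */
    }
}

static void run_b(void)
{
    for (int i = 0; i < 3; i++) {
        printf("thread B, step %d\n", i);
        swapcontext(&ctx_b, &ctx_a);          /* explicit yield to A */
    }
}

static void make_thread(ucontext_t *ctx, void (*fn)(void))
{
    getcontext(ctx);
    ctx->uc_stack.ss_sp = malloc(64 * 1024);  /* private user-mode stack */
    ctx->uc_stack.ss_size = 64 * 1024;
    ctx->uc_link = &ctx_main;                 /* return here when fn ends */
    makecontext(ctx, fn, 0);
}

int main(void)
{
    make_thread(&ctx_a, run_a);
    make_thread(&ctx_b, run_b);
    swapcontext(&ctx_main, &ctx_a);           /* start thread A */
    printf("back in main\n");
    return 0;
}
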
On 4/1/2025 1:01 PM, Scott Lurndal wrote:
> "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
>> [snip]
>> Are you referring to green threads? Or Fibers?
>
> Neither.

I think we are talking about user threads, ala PThreads, vs kernel
"realm" threads... Is that right?
https://github.com/ZoloZiak/WinNT4/tree/master/private/ntos
scott@slp53.sl.home (Scott Lurndal) writes:
> Unix SVR4ES/MP actually had
> both kernel threads (called lightweight processes (LWP)) and user-level
> threads in an M-N setup (M user threads multiplexed on N kernel threads).
>
> Didn't turn out to be particularly useful.

Maybe the SVR4ES/MP stuff was not particularly useful. Combining
user-level threads and kernel threads in an M-N setup has turned out
to be very useful in, e.g., Erlang applications.
On Tue, 1 Apr 2025 16:44:11 +0200, Terje Mathisen
<terje.mathisen@tmsw.no> wrote:
> In my programmer world view, a pure user software thread isn't that
> interesting; it typically allows the programmer to control (more or
> less) when it can be interrupted.

Done properly, user space threads can be pre-emptively scheduled and
otherwise interrupted.
Also, consider "scheduler activations", wherein the kernel provides
only virtual cores, and all threading is done in user space.
https://homes.cs.washington.edu/~tom/pubs/sched_act.pdf
https://www.cs.ucr.edu/~heng/teaching/cs202-sp18/lec7.pdf

There is an implementation of this idea on NetBSD:
http://web.mit.edu/nathanw/www/usenix/freenix-sa/freenix-sa.html
It's not a terribly popular idea, but IMO it is an interesting one.
YMMV.
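
(To illustrate the pre-emption point above, here is a rough, Linux-flavoured
sketch of the underlying mechanism: a virtual-interval timer delivers
SIGVTALRM and the handler switches back to a user-space scheduler context.
This is the classic textbook arrangement rather than anything from the
systems cited; POSIX leaves swapcontext() from a signal handler formally
unspecified, so a production implementation needs more care, and all names
below are invented.)

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <ucontext.h>

static ucontext_t sched_ctx, task_ctx;

static void on_tick(int sig)
{
    (void)sig;
    swapcontext(&task_ctx, &sched_ctx);       /* preempt: back to the scheduler */
}

static void cpu_bound_task(void)
{
    volatile unsigned long n = 0;
    for (;;)                                  /* never yields voluntarily */
        n++;
}

int main(void)
{
    struct sigaction sa;
    struct itimerval tick = { { 0, 10000 }, { 0, 10000 } };   /* 10 ms slices */

    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGVTALRM, &sa, NULL);
    setitimer(ITIMER_VIRTUAL, &tick, NULL);

    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = malloc(64 * 1024);
    task_ctx.uc_stack.ss_size = 64 * 1024;
    task_ctx.uc_link = &sched_ctx;
    makecontext(&task_ctx, cpu_bound_task, 0);

    for (int slice = 0; slice < 3; slice++) {
        printf("scheduler: dispatching time slice %d\n", slice);
        swapcontext(&sched_ctx, &task_ctx);   /* run the task until the next tick */
    }
    printf("scheduler: done after 3 slices\n");
    return 0;
}
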
In article <2025Apr2.082556@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
> scott@slp53.sl.home (Scott Lurndal) writes:
>> Unix SVR4ES/MP actually had
>> both kernel threads (called lightweight processes (LWP)) and user-level
>> threads in an M-N setup (M user threads multiplexed on N kernel threads).
>>
>> Didn't turn out to be particularly useful.
>
> Maybe the SVR4ES/MP stuff was not particularly useful. Combining
> user-level threads and kernel threads in an M-N setup has turned out
> to be very useful in, e.g., Erlang applications.

It's useful when threads are carefully managed, so that programs
can ensure that the set of OS threads is always available so that
runnable LWPs can be scheduled onto them. This implies that the
OS-managed threads should not block, as if they do, you lose
1/M'th of your parallelism. But the N:M model really breaks
down when LWPs become unrunnable in a way that does not involve
the user-level thread scheduler, something that can happen in
surprising places.
For instance, consider Unix/POSIX `open`: from an API
perspective this simply maps a symbolic file path name to a file
descriptor that can subsequently be used to perform IO on the
named file. While it is well known that the interface is
defined so that it can block when opening some kinds of devices
(for example, some terminal devices, until the line is asserted),
that is not the usual case, and notably `open` does no IO on the
file itself. So generally, most programs would expect that it
has no reason to block.
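
(One common mitigation for exactly this problem, sketched here rather than
taken from any system in the thread: hand the potentially blocking call to a
separate kernel thread, so the thread backing the user-level scheduler stays
runnable. Erlang's dirty schedulers and Go's runtime do something along these
lines; the names and the path below are placeholders.)

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>

struct open_req {
    const char *path;
    int fd;                         /* result filled in by the helper thread */
};

static void *blocking_open(void *arg)
{
    struct open_req *req = arg;
    req->fd = open(req->path, O_RDONLY);   /* may block; only the helper stalls */
    return NULL;
}

int main(void)
{
    struct open_req req = { "/some/path/that/might/block", -1 };
    pthread_t helper;

    pthread_create(&helper, NULL, blocking_open, &req);

    /* The thread running the user-level scheduler keeps dispatching
     * runnable user threads here instead of being wedged inside open(). */
    printf("scheduler: still running while open() is in flight\n");

    pthread_join(helper, NULL);
    printf("open() returned fd %d\n", req.fd);
    return 0;
}
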
Dan Cross wrote:
> [snip]
> While it is well known that the interface is defined so that it can
> block when opening some kinds of devices (for example, some terminal
> devices, until the line is asserted), that is not the usual case, and
> notably `open` does no IO on the file itself. So generally, most
> programs would expect that it has no reason to block.

The one case where open was a problem on traditional Unix was
for line printers. The open of /dev/lp could block if the
printer (on a Centronics port) was not ready. And it was
an uninterruptible block; even SIGKILL was blocked.
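
(As an aside on the API itself: for devices that honor it, O_NONBLOCK at
open() time asks the kernel not to wait for the device to become ready, which
is the usual way to dodge a hang in open(). POSIX specifies this behavior for
terminals and FIFOs; what a given printer driver does with it is
driver-specific. The path below is a placeholder.)

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Open without waiting for the device to assert "ready". */
    int fd = open("/dev/some_device", O_WRONLY | O_NONBLOCK);
    if (fd < 0) {
        /* e.g. ENXIO or EAGAIN when the device is not ready right now */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }

    /* Optionally drop back to blocking mode for the actual writes. */
    int flags = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);

    close(fd);
    return 0;
}
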
In article <FoxHP.1477197$eNx6.766449@fx14.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
> Dan Cross wrote:
>> [snip]
>
> The one case where open was a problem on traditional Unix was
> for line printers. The open of /dev/lp could block if the
> printer (on a Centronics port) was not ready. And it was
> an uninterruptible block; even SIGKILL was blocked.

I'd worry more about, say, a pathname that requires traversing
NFS for one reason or another (symlinks, or just on a mounted
filesystem). Nothing prevents an NFS server from becoming
inaccessible during a lookup.
cross@spitfire.i.gajendra.net (Dan Cross) writes:
> In article <FoxHP.1477197$eNx6.766449@fx14.iad>,
> Scott Lurndal <slp53@pacbell.net> wrote:
>> [snip]
>> The one case where open was a problem on traditional Unix was
>> for line printers.
>
> I'd worry more about, say, a pathname that requires traversing
> NFS for one reason or another (symlinks, or just on a mounted
> filesystem). Nothing prevents an NFS server from becoming
> inaccessible during a lookup.

However, you can specify a soft mount rather than a hard mount
to resolve that.
On Thu, 03 Apr 2025 01:38:53 -0400, George Neuner wrote:
> Also, consider "scheduler activations", wherein the kernel provides only
> virtual cores, and all threading is done in user space.
>
> https://homes.cs.washington.edu/~tom/pubs/sched_act.pdf
> https://www.cs.ucr.edu/~heng/teaching/cs202-sp18/lec7.pdf
>
> There is an implementation of this idea on NetBSD:
> http://web.mit.edu/nathanw/www/usenix/freenix-sa/freenix-sa.html

The scheduler activations implementation of threads was removed from
NetBSD in 2012.