Haven't much been tapping away on this; here's a brief design for how to run a USENET, and fill it up with the existing one.
On 04/28/2024 08:24 PM, Ross Finlayson wrote:
On 04/27/2024 09:01 AM, Ross Finlayson wrote:
On 04/25/2024 10:46 AM, Ross Finlayson wrote:
On 04/22/2024 10:06 AM, Ross Finlayson wrote:
On 04/20/2024 11:24 AM, Ross Finlayson wrote:
Well I've been thinking about the re-routine as a model of cooperative multithreading, then thinking about the flow-machine of protocols

NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP

Both IMAP and NNTP are session-oriented on the connection, while HTTP, in terms of session, has various approaches in terms of HTTP 1.1 and connections, and the session ID shared client/server.

The re-routine idea is this, that each kind of method, is memoizable, and, it memoizes, by object identity as the key, for the method, all its callers, how this is like so.
interface Reroutine1 {
Result1 rr1(String a1) {
Result2 r2 = reroutine2.rr2(a1);
Result3 r3 = reroutine3.rr3(r2);
return result(r2, r3);
}
}
The idea is that the executor, when it's submitted a reroutine, when it runs the re-routine, in a thread, then it puts the re-routine in a ThreadLocal, so that when a re-routine it calls returns null as it starts an asynchronous computation for the input, then when it completes, it submits to the executor the re-routine again.

Then rr1 runs through again, retrieving r2 which is memoized, invokes rr3, which throws, after queuing to memoize and resubmit rr1; when that calls back to resubmit rr1, then rr1 completes, signaling the original invoker.

Then it seems each re-routine basically has an instance part and a memoized part, and that it's to flush the memo after it finishes, in terms of memoizing the inputs.
Result1 rr(String a1) {
    // if a1 is in the memo, return for it
    // else queue for it and carry on
}
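For a concrete sense of that check, here's a minimal sketch of the memo-or-queue step; the memo, the queueFor() helper, and the executor wiring are hypothetical names for illustration, not anything established above:

Result1 rr(String a1) {
    Result1 memoized = memo.get(a1);    // memo keyed by object identity
    if (memoized != null) {
        return memoized;                // already computed: return same/same
    }
    queueFor(a1);                       // start the asynchronous computation,
                                        // arranging a callback to resubmit this re-routine
    return null;                        // quit; null means "pending" to the caller
}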
What is a re-routine?
It's a pattern for cooperative multithreading.
It's sort of a functional approach to functions and flow.
It has a declarative syntax in the language with usual
flow-of-control.
So, it's cooperative multithreading so it yields?
No, it just quits, and expects to be called back.
So, if it quits, how does it complete?
The entry point to the re-routine provides a callback. Re-routines only return results to other re-routines; it's the default callback. Otherwise they just call back.
So, it just quits?
If a re-routine gets called with a null, it throws.
If a re-routine gets a null, it just continues.
If a re-routine completes, it callbacks.
So, can a re-routine call any regular code?
Yeah, there are some issues, though.
So, it's got callbacks everywhere?
Well, it's just got callbacks implicitly everywhere.
So, how does it work?
Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.
Then, re-routines call other re-routines with the argument, and the callback's in a ThreadLocal, and the re-routine memoizes all of its return values according to the object identity of the inputs, then when a re-routine completes, it calls again with another ThreadLocal indicating to delete the memos, following the exact same flow-of-control only deleting the memos going along, until it results all the memos in the re-routines for the interned or ref-counted input are deleted, then the state of the re-routine is de-allocated.

So, it's sort of like a monad and all in pure and idempotent functions?

Yeah, it's sort of like a monad and all in pure and idempotent functions.

So, it's a model of cooperative multithreading, though with no yield, and callbacks implicitly everywhere?

Yeah, it's sort of figured that a called re-routine always has a callback in the ThreadLocal, because the runtime has pre-emptive multithreading anyways, that the thread runs through its re-routines in their normal declarative flow-of-control with exception handling, and whatever re-routines or other pure monadic idempotent functions it calls, throw when they get null inputs.

Also it sort of doesn't have primitive types, Strings must always be interned, all objects must have a distinct identity w.r.t. ==, and null is never an argument or return value.
So, what does it look like?
interface Reroutine1 {
Result1 rr1(String a1) {
Result2 r2 = reroutine2.rr2(a1);
Result3 r3 = reroutine3.rr3(r2);
return result(r2, r3);
}
}
So, I expect that to return "result(r2, r3)".
Well, that's synchronous, and maybe blocking. The idea is that it calls rr2, gets a1, and rr2 constructs with the callback of rr1 and its own callback, and a1, and makes a memo for a1, and invokes whatever is its implementation, and returns null, then rr1 continues and invokes rr3 with r2, which is null, so that throws a NullPointerException, and rr1 quits.
So, ..., that's cooperative multithreading?
Well you see what happens is that rr2 invoked another re-routine or end routine, and at some point it will get called back, and that will happen over and over again until rr2 has an r2, then rr2 will memoize (a1, r2), and then it will callback rr1.

Then rr1, having quit, runs again; this time it gets r2 from the (a1, r2) memo in the monad it's building, then it passes a non-null r2 to rr3, which proceeds in much the same way, while rr1 quits again until rr3 calls it back.

So, ..., it's non-blocking, because it just quits all the time, then happens to run through the same paces filling in?

That's the idea, that re-routines are responsible to build the monad and call-back.
So, can I just implement rr2 and rr3 as synchronous and blocking?
Sure, they're interfaces, their implementation is separate. If they don't know re-routine semantics then they're just synchronous and blocking. They'll get called every time though when the re-routine gets called back, and actually they need to know the semantics of returning an Object or value by identity, because calling equals() to implement the Memo usually would be too much, where the idea is to actually function only monadically, and that given same Object or value input, must return same Object or value output.
So, it's sort of an approach as a monadic pure idempotency?
Well, yeah, you can call it that.
So, what's the point of all this?
Well, the idea is that there are 10,000 connections, and any time one of them demultiplexes off the connection an input command message, then it builds one of these with the response input to the demultiplexer on its protocol on its connection, on the multiplexer to all the connections, with a callback to itself. Then the re-routine is launched and when it returns, it calls-back to the originator by its callback-number, then the output command response writes those back out.

The point is that there are only as many Threads as cores so the goal is that they never block, and that the memos make for interning Objects by value, then the goal is mostly to receive command objects and handles to request bodies and result objects and handles to response bodies, then to call-back with those in whatever serial order is necessary, or not.

So, won't this run through each of these re-routines umpteen times?

Yeah, you figure that the runtime of the re-routine is on the order of n^2 the order of statements in the re-routine.
So, isn't that terrible?
Well, it doesn't block.
So, it sounds like a big mess.
Yeah, it could be. That's why to avoid blocking and callback semantics, is to make monadic idempotency semantics, so then the re-routines are just written in normal synchronous flow-of-control, and their well-defined behavior is exactly according to flow-of-control including exception-handling.

There's that and there's basically it only needs one Thread, so, less Thread x stack size, for a deep enough thread call-stack. Then the idea is about one Thread per core, figuring for the thread to always be running and never be blocking.
So, it's just normal flow-of-control.
Well yeah, you expect to write the routine in normal flow-of-control, and to test it with synchronous and in-memory editions that just run through synchronously, and that if you don't much care if it blocks, then it's the same code and has no semantics about the asynchronous or callbacks actually in it. It just returns when it's done.
So what's the requirements of one of these again?
Well, the idea is, that, for a given instance of a re-routine, it's an Object, that implements an interface, and it has arguments, and it has a return value. The expectation is that the re-routine gets called with the same arguments, and must return the same return value. This way later calls to re-routines can match the same expectation, same/same.

Also, if it gets different arguments, by Object identity or primitive value, the re-routine must return a different return value, those being same/same.

The re-routine memoizes its arguments by its argument list, Object or primitive value, and a given argument list is same if the order and types and values of those are same, and it must return the same return value by type and value.

So, how is this cooperative multithreading unobtrusively in flow-of-control again?

Here for example the idea would be, rr2 quits and rr1 continues, rr3 quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits. When rr2's or rr3's memo-callback completes, then it calls-back rr1. As those come in, at some point rr4 will be fulfilled, and thus rr4 will quit and rr1 will quit. When rr4's callback completes, then it will call-back rr1, which will finally complete, and then call-back whatever called rr1. Then rr1 runs itself through one more time to delete or decrement all its memos.
interface Reroutine1 {
Result1 rr1(String a1) {
Result2 r2 = reroutine2.rr2(a1);
Result3 r3 = reroutine3.rr3(a1);
Result4 r4 = reroutine4.rr4(a1, r2, r3);
return Result1.r4(a1, r4);
}
}
The idea is that it doesn't block when it launches rr2 and rr3, until such time as it just quits when it tries to invoke rr4 and gets a resulting NullPointerException, then eventually rr4 will complete and be memoized and call-back rr1, then rr1 will be called-back and then complete, then run itself through to delete or decrement the ref-count of all its memo-ized fragmented monad respectively.

Thusly it's cooperative multithreading by never blocking and always just launching callbacks.

There's this System.identityHashCode() method and then there's a notion of Object pools and interning Objects, then as for about this way that it's about numeric identity instead of value identity, so that when making memos it's always "==" and for a HashMap with System.identityHashCode() instead of ever calling equals(), when calling equals() is more expensive than calling == and the same/same memo-ization is about Object numeric value or the primitive scalar value, those being same/same.
https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
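A minimal sketch of such an identity-keyed memo, here using java.util.IdentityHashMap (which hashes by System.identityHashCode and compares keys with == rather than equals()); the Memo name is just illustrative:

import java.util.IdentityHashMap;
import java.util.Map;

// Memo keyed by object identity: lookups never call equals()/hashCode(),
// only == and System.identityHashCode, so inputs must be interned.
class Memo<K, V> {
    private final Map<K, V> byIdentity = new IdentityHashMap<>();

    V get(K key) { return byIdentity.get(key); }

    void put(K key, V value) { byIdentity.put(key, value); }
}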
So, you figure to return Objects to these connections by their session and connection and mux/demux in these callbacks and then write those out?

Well, the idea is to make it so that according to the protocol, the back-end sort of knows what makes a handle to a datum of the sort, given the protocol and the protocol and the protocol, and the callback is just these handles, about what goes in the outer callbacks or outside the re-routine, those can be different/same. Then the single writer thread servicing the network I/O just wants to transfer those handles, or, as necessary through the compression and encryption codecs, then write those out, well making use of the java.nio for scatter/gather and vector I/O in the non-blocking and asynchronous I/O as much as possible.
So, that seems a lot of effort just to pass the handles, ....

Well, I don't want to write any code except normal flow-of-control.

So, this same/same bit seems onerous, as long as different/same has a ref-count and thus the memo-ized monad-fragment is maintained when all sorts of requests fetch the same thing.

Yeah, maybe you're right. There's much to be gained by re-using monadic pure idempotent functions yet only invoking them once. That gets into value equality besides numeric equality, though, with regards to going into re-routines and interning all Objects by value, so that inside and through it's all "==" and System.identityHashCode, the memos, then about the ref-counting in the memos.
So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?

Yeah, it's a thing.

So, I think this needs a much cleaner and well-defined definition, to fully explore its meaning.

Yeah, I suppose. There's something to be said for reading it again.

ReRoutines: monadic functional non-blocking asynchrony in the language

Implementing a sort of Internet protocol server, it sort of has three or four kinds of machines.

flow-machine: select/epoll hardware driven I/O events
protocol-establishment: setting up and changing protocol (commands, encryption/compression)
protocol-coding: block coding in encryption/compression and wire/object commands/results
routine: inside the objects of the commands of the protocol, commands/results

Then, it often looks sort of like

flow <-> protocol <-> routine <-> protocol <-> flow
On either outer side of the flow is a connection, it's a socket or the receipt or sending of a datagram, according to the network interface and select/epoll.

The establishment of a protocol looks like connection/configuration/commencement/conclusion, or setup/teardown. Protocols get involved in renegotiation within a protocol, and for example upgrade among protocols. Then the protocol is setup and established.

The idea is that a protocol's coding is in three parts for coding/decoding, compression/decompression, and (en)cryption/decryption, or as it gets set up.

flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-

Whenever data arrives, the idea goes, is that the flow is interpreted according to the protocol, resulting commands, then the routine derives results from the commands, as by issuing others, in their protocols, to the backend flow. Then, the results get sent back out through the protocol, to the frontend, the clients that the server serves in the protocol.

The idea is that there are about 10,000 connections at a time, or more or less.
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
...
Then, the routine in the middle, has that there's one processor, and on the processor are a number of cores, each one independent. Then, the operating system establishes that each of the cores, has any number of threads-of-control or threads, and each thread has the state of where it is in the callstack of routines, and the threads are preempted so that multithreading, that a core runs multiple threads, gives each thread some running from the entry to the exit of the thread, in any given interval of time. Each thread-of-control is thusly independent, while it must synchronize with any other thread-of-control, to establish common or mutual state, and threads establish taking turns by mutual exclusion, called "mutex".

Into and out of the protocol, coding, is either a byte-sequence or block, or otherwise the flow is a byte-sequence, that being serial, however the protocol multiplexes and demultiplexes messages, the commands and their results, to and from the flow.

Then the idea is that what arrives to/from the routine, is objects in the protocol, or handles to the transport of byte sequences, in the protocol, to the flow.
A usual idea is that there's a thread that services the flow, where how it works is that a thread blocks waiting for there to be any I/O, input/output, reading input from the flow, and writing output to the flow. So, mostly the thread that blocks has that there's one thread that blocks on input, and when there's any input, then it reads or transfers the bytes from the input, into buffers. That's its only job, and only one thread can block on a given select/epoll selector, which is any given number of ports, the connections, the idea being that it just blocks until select returns for its keys of interest, it services each of the I/O's by copying from the network interface's buffers into the program's buffers, then other threads do the rest.
So, if a thread results waiting at all for any other action to complete or be ready, it's said to "block". While a thread is blocked, the CPU or core just skips it in scheduling the preemptive multithreading, yet it still takes some memory and other resources and is in the scheduler of the threads.

The idea that the I/O thread ever blocks, is that it's a feature of select/epoll that hardware results waking it up, with the idea that that's the only thread that ever blocks.

So, for the other threads, in the decryption/decompression/decoding and coding/compression/cryption, the idea is that a thread runs through those, then returns what it's doing, and joins back to a limited pool of threads, with a usual idea of there being 1 core : 1 thread, so that multithreading is sort of simplified, because as far as the system process is concerned, it has a given number of cores and the system preemptively multithreads it, and as far as the virtual machine is concerned, it has a given number of cores and the virtual machine preemptively multithreads its threads, about the thread-of-control, in the flow-of-control, of the thing.

A usual way that the routine multiplexes and demultiplexes objects in the protocol from a flow's input back to a flow's output, has that the thread-per-connection model has that a single thread carries out the entire task through the backend flow, blocking along the way, until it results joining after writing back out to its connection. Yet, that has a thread per each connection, and threads use scheduling and heap resources. So, here thread-per-connection is being avoided.
Then, a usual idea of the tasks, is that as I/O is received and flows into the decryption/decompression/decoding, then what's decoded, results the specification of a task, the command, and the connection, where to return its result. The specification is a data structure, so it's an object or Object, then. This is added to a queue of tasks, where "buffers" represent the ephemeral storage of content in transport the byte-sequences, while the queue is as usually a first-in/first-out (FIFO) queue also, of tasks.

Then, the idea is that each of the cores consumes task specifications from the task queue, performs them according to the task specification, then the results are written out, as coded/compressed/crypted, in the protocol.
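As a sketch of that FIFO task queue and its per-core consumers; the TaskQueue name and the Runnable task shape here are assumptions for illustration only:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// FIFO task queue: decoded command specifications go in, and one worker
// thread per core takes the oldest task and runs it; the worker itself only
// waits when the queue is empty.
class TaskQueue {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    void submit(Runnable task) { tasks.add(task); }

    void startWorkers() {
        int cores = Runtime.getRuntime().availableProcessors();
        for (int i = 0; i < cores; i++) {
            Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        tasks.take().run();   // oldest first (FIFO); the task never blocks
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }
}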
So, to avoid the threads blocking at all, introduces the idea of "asynchrony" or callbacks, where the idea is that the "blocking" and "synchronous" has that anywhere in the threads' thread-of-control flow-of-control, according to the program or the routine, it is current and synchronous, the value that it has, then with regards to what it returns or writes, as the result. So, "asynchrony" is the idea that there's established a callback, or a place to pause and continue, then a specification of the task in the protocol is put to an event queue and executed, or from servicing the O/I's of the backend flow, that what results from that, has the context of the callback and returns/writes to the relevant connection, its result.
I -> flow -> protocol -> routine -> protocol -> flow -> O -v
O <- flow <- protocol <- routine <- protocol <- flow <- I <-
The idea of non-blocking then, is that a routine either provides a result immediately available, and is non-blocking, or, queues a task what results a callback that provides the result eventually, and is non-blocking, and never invokes any other routine that blocks, so is non-blocking.

This way a thread, executing tasks, always runs through a task, and thus services the task queue or TQ, so that the cores' threads are always running and never blocking. (Besides the I/O and O/I threads which block when there's no traffic, and usually would be constantly woken up and not waiting blocked.) This way, the TQ threads, only block when there's nothing in the TQ, or are just deconstructed, and reconstructed, in a "pool" of threads, the TQ's executor pool.
Enter the ReRoutine
The idea of a ReRoutine, a re-routine, is that it is a usual procedural implementation as if it were synchronous, and agnostic of callbacks.

It is named after "routine" and "co-routine". It is a sort of co-routine that builds a monad and is aware of its originating caller, re-caller, and callback, or, its re-routine caller, re-caller, and callback.

The idea is that there are callbacks implicitly at each method boundary, and that nulls are reserved values to indicate the result or lack thereof of re-routines, so that the code has neither callbacks nor any nulls.

The originating caller has that the TQ, has a task specification, the session+attachment of the client in the protocol where to write the output, and the command, then the state of the monad of the task, that lives on the heap with the task specification and task object. The TQ consumers or executors or the executor, when a thread picks up the task, it picks up or builds ("originates") the monad state, which is the partial state of the re-routine and a memo of the partial state of the re-routine, and installs this in the thread local storage or ThreadLocal, for the duration of the invocation of the re-routine. Then the thread enters the re-routine, which proceeds until it would block, where instead it queues a command/task with callback to re-call it to re-launch it, and throws a NullPointerException and quits/returns.

This happens recursively and iteratively in the re-routine implemented as re-routines, each re-routine updates the partial state of the monad, then that as a re-routine completes, it re-launches the calling re-routine, until the original re-routine completes, and it calls the original callback with the result.
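A rough sketch of that pick-up step, assuming the ThreadGlobals/MonadMemo described further below and a task that bundles its memo, its re-routine, and its completion callback; all of those names are assumptions here, not a fixed API:

import java.util.concurrent.Callable;
import java.util.function.Consumer;

// Hypothetical originator step: install the task's monad memo for this
// thread, run the re-routine, and treat NullPointerException as "quit,
// pending"; the queued callback will re-submit the task to the TQ later.
void runOnce(MonadMemo memo, Callable<Object> reroutine, Consumer<Object> callback) {
    ThreadGlobals.monadMemo.set(memo);        // partial state of the re-routine
    try {
        callback.accept(reroutine.call());    // completed: call the original callback
    } catch (NullPointerException pending) {
        // unsatisfied input: quit; a re-launch comes when the input is satisfied
    } catch (Exception error) {
        // a real error in the routine: also concludes the task, in the protocol
    } finally {
        ThreadGlobals.monadMemo.remove();     // clear the ThreadLocal either way
    }
}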
This way the re-routine's method body, is written as plain declarative procedural code, the flow-of-control, is exactly as if it were synchronous code, and flow-of-control is exactly as if written in the language with no callbacks and never nulls, and exception-handling as exactly defined by the language.

As the re-routine accumulates the partial results, they live on the heap, in the monad, as a member of the originating task's object the task in the task queue. This is always added back to the queue as one of the pending results of a re-routine, so it stays referenced as an object on the heap, then that as it is completed and the original re-routine returns, then it's no longer referenced and the garbage-collector can reclaim it from the heap or the allocator can delete it.

Well, for the re-routine, I sort of figure there's a Callstack and a Callback type
class Callstack {
Stack<Callback> callstack;
}
interface Callback {
void callback() throws Exception;
}
and then a placeholder sort of type for Callflush
class Callflush {
Callstack callstack;
}
with the idea that the presence in ThreadLocals is to be sorted out, about a kind of ThreadLocal static pretty much.

With not returning null and for memoizing call-graph dependencies, there's basically for an "unvoid" type.

class unvoid {
}

Then it's sort of figured that there's an interface with some defaults, with the idea that some boilerplate gets involved in the Memoization.
interface Caller {}
interface Callee {}
class Callmemo {
memoize(Caller caller, Object[] args);
flush(Caller caller);
}
Then it seems that the Callstack should instead be of a Callgraph, and then what's maintained from call to call is a Callpath, and then what's memoized is all kept with the Callgraph, then with regards to objects on the heap and their distinctness, only being reachable from the Callgraph, leaving less work for the garbage collector, to maintain the heap.

The interning semantics would still be on the class level, or for constructor semantics, as with regards to either interning Objects for uniqueness, or that otherwise they'd be memoized, with the key being the Callpath, and the initial arguments into the Callgraph.

Then the idea seems that the ThreaderCaller, establishes the Callgraph with respect to the Callgraph of an object, installing it on the thread, otherwise attached to the Callgraph, with regards to the ReRoutine.

About the ReRoutine, it's starting to come together as an idea, what is the apparatus for invoking re-routines, that they build the monad of the IOE's (inputs, outputs, exceptions) of the re-routines in their call-graph, in terms of ThreadLocals of some ThreadLocals that callers of the re-routines, maintain, with idea of the memoized monad along the way, and each original re-routine.
class IOE<O, E extends Exception> {
    Object[] input;
    O output;
    E exception;
}
So the idea is that there are some ThreadLocal's in a static
ThreadGlobal
public class ThreadGlobals {
public static ThreadLocal<MonadMemo> monadMemo;
}
where callers or originators or ReRoutines, keep a map of the Runnables or Callables they have, to the MonadMemo's,

class Originator {
    Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
}

then when it's about to invoke a Runnable, if it's a ReRoutine, then it either retrieves the MonadMemo or makes a new one, and sets it on the ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
Then a MonadMemo, pretty simply, is a List of IOE's, that when the ReRoutine runs through the callgraph, the callstack is indicated by a tree of integers, and the stack path in the ReRoutine, so that any ReRoutine that calls ReRoutines A/B/C, points to an IOE that it finds in the thing, then its default behavior is to return its memo-ized value, that otherwise is making the callback that fills its memo and re-invokes all the way back the Original routine, or just its own entry point.

This is basically that the Originator, when the ReRoutine quits out, sort of has that any ReRoutine it originates, also gets filled up by the Originator.

So, then the Originator sort of has a map to a ReRoutine, then for any Path, the Monad, so that when it sets the ThreadLocal with the MonadMemo, it also sets the Path for the callee, launches it again when its callback returned to set its memo and relaunch it, then back up the path stack to the original re-routine.

One of the issues here is "automatic parallelization". What I mean by that is that the re-routine just goes along and when it gets nulls meaning "pending" it just continues along, then expects NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets relaunched when its input is satisfied.

This way then when routines serially don't depend on each others' outputs, then they all get launched apiece, parallelizing.
Then, I wonder about usual library code, basically about Collections and Streams, and the usual sorts of routines that are applied to the arguments, and how to basically establish that the rule of re-routine code is that anything that gets a null must throw a NullPointerException, so the re-routine will quit until the arguments are satisfied, the inputs to library code. Then with the Memo being stored in the MonadMemo, it's figured that will work out regardless the Objects' or primitives' value, with regards to Collections and Stream code and after usual flow-of-control in Iterables for the for loops, or whatever other application library code, that they will be run each time the re-routine passes their section with satisfied arguments, then as with regards to, that the Memo is just whatever serial order the re-routine passes, not needing to lookup by Object identity which is otherwise part of an interning pattern.
Map<String, String> rr1(String s1) {
    List<String> l1 = rr2.get(s1);
    Map<String, String> m1 = new LinkedHashMap<>();
    l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
    return m1;
}
See what I figure is that the order of the invocations to rr3.get() is serial, so it really only needs to memoize its OE, Output|Exception, then about that putting null values in the Map, and having to check the values in the Map for null values, and otherwise to make it so that the semantics of null and NullPointerException, result that satisfying inputs result calls, and unsatisfying inputs result quits, figuring those unsatisfying inputs are results of unsatisfied outputs, that will be satisfied when the callee gets populated its memo and makes the callback.

If the order of invocations is out-of-order, gets again into whether the Object/primitive by value needs to be the same each time, IOE, about the library code in Collections, Streams, parallelStream, and Iterables, and basically otherwise that any kind of library code, should throw NullPointerException if it gets an "unexpected" null or what doesn't fulfill it.

The idea though that rr3 will get invoked say 1000 times with the rr2's result, those each make their call, then re-launch 1000 times, has that it's figured that the Executor, or Originator, when it looks up and loads the "ReRoutineMapKey", is to have the count of those and whether the count is fulfilled, then to no-op later re-launches of the call-backs, after all the results are populated in the partial monad memo.

Then, there's perhaps instead as that each re-routine just checks its input or checks its return value for nulls, those being unsatisfied.

(The exception handling thoroughly or what happens when rr3 throws and this kind of thing is involved thoroughly in library code.)

The idea is it remains correct if the worst thing nulls do is throw NullPointerException, because that's just a usual quit and means another re-launch is coming up, and that it automatically queues for asynchronous parallel invocation each the derivations while resulting never blocking.

It's figured that re-routines check their inputs for nulls, and throw quit, and check their inputs for library container types, and checking any member of a library container collection for null, to throw quit, and then it will result that the automatic asynchronous parallelization proceeds, while the re-routines are never blocking, there's only as much memory on the heap of the monad as would be in the lifetime of the original re-routine, and whatever re-calls or re-launches of the re-routine established local state in local variables and library code, would come in and out of scope according to plain stack unwinding.
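A minimal sketch of that "check for null, throw quit" convention, with hypothetical helper names; throwing the NullPointerException is the quit, and satisfying the input later is what re-launches the re-routine:

import java.util.Collection;

// Hypothetical helpers for checking inputs and container members for null.
final class Unsatisfied {
    static <T> T require(T value) {
        if (value == null) {
            throw new NullPointerException("unsatisfied input: pending");
        }
        return value;
    }

    static <T, C extends Collection<T>> C requireAll(C values) {
        require(values);
        for (T value : values) {
            require(value);    // any pending member quits the re-routine
        }
        return values;
    }
}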
Then there's still the perceived deficiency that the re-routine's method body will be run many times, yet it's only run as many times as result throwing-quit, when it reaches where its argument to the re-routine or result value isn't yet satisfied yet is pending.

It would re-run the library code any number of times, until it results all non-nulls, then the resulting satisfied argument to the following re-routines, would be memo-ized in the monad, and the return value of the re-routine thus returning immediately its value on the partial monad.

This way each re-call of the re-routine, mostly encounters its own monad results in constant time, and throws-quit or gets thrown-quit only when it would be unsatisfying, with the expectation that whatever throws-quit, either NullPointerException or extending NullPointerException, will have a pending callback, that will queue on a TQ, the task specification to re-launch and re-enter the original or derived, re-routine.

The idea is sort of that it's sort of, Java with non-blocking I/O and ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O and thread local storage, then for the abstract or interface of the re-routines, how it works out that it's a usual sort of model of co-operative multithreading, the re-routine, the routine "in the language".

Then it's great that the routine can be stubbed or implemented agnostic of asynchrony, and declared in the language with standard libraries, basically using the semantics of exception handling and convention of re-launching callbacks to implement thread-of-control flow-of-control, that can be implemented in the synchronous and blocking for unit tests and modules of the routine, making a great abstraction of flow-of-control.

Basically anything that _does_ block then makes for having its own thread, whose only job is to block and when it unblocks, throw-toss the re-launch toward the origin of the re-routine, and consume the next blocking-task off the TQ. Yet, the re-routines and their servicing the TQ only need one thread and never block. (And scale in core count and automatically parallelize asynchronous requests according to satisfied inputs.)

Mostly the idea of the re-routine is "in the language, it's just plain, ordinary, synchronous routine".
Protocol Establishment
Each of these protocols is a combined sort of protocol, then according to different modes, there's established a protocol, then data flows in the protocol (in time).
stream-based (connections)
sockets, TCP/IP
sctp SCTP
message-based (datagrams)
datagrams, UDP
The idea is that connections can have state and session state, while messages do not.

Abstractly then there's just that connections make for reading from the connection, or writing to the connection, byte-by-byte, while messages make for receiving a complete message, or writing a complete message. SCTP is sort of both.

A bit more concretely, the non-blocking or asynchronous or vector I/O, means that when some bytes arrive the connection is readable, and while the output buffer is not full a connection is writeable.

For messages it's that when messages arrive messages are readable, and while the output buffer is not full messages are writeable.

Otherwise bytes or messages that pile up while not readable/writeable pile up and in cases of limited resources get lost.

So, the idea is that when bytes arrive, whatever's servicing the I/O's has that the connection has data to read, and, data to write.

The usual idea is that an abstract Reader thread, will give any or all of the connections something to read, in an arbitrary order, at an arbitrary rate, then the role of the protocol, is to consume the bytes to read, thus releasing the buffers, that the Reader, writes to.
Inputting/Reading
Writing/Outputting
The most usual idea of client-server is that
client writes to server then reads from server, while,
server reads from client then writes to client.
Yet, that is just a mode, reads and writes are peer-peer,
reads and writes in any order, while serial according to
that bytes in the octet stream arrive in an order.
There isn't much consideration of the out-of-band,
about sockets and the STREAMS protocol, for
that bytes can arrive out-of-band.
So, the layers of the protocol, result that some layers of the protocol don't know anything about the protocol, all they know is sequences of bytes, and, whatever session state is involved to implement the codec, of the layers of the protocol. All they need to know is that given that all previous bytes are read/written, that the connection's state is synchronized, and everything after is read/written through the layer.

Mostly once encryption or compression is setup it's never torn down.
Encryption, TLS
Compression, LZ77 (Deflate, gzip)
The layers of the protocol, result that some layers of the protocol,
only indicate state or conditions of the session.
SASL, Login, AuthN/AuthZ
So, for NNTP, a connection, usually enough starts with no layers, then in the various protocols and layers, get negotiated to get established, combinations of the protocols and layers. Other protocols expect to start with layers, or not, it varies.

Layering, then, either is in the protocol, to synchronize the session then establish the layer in the layer protocol then maintain the layer in the main protocol, has that TLS makes a handshake to establish an encryption key for all the data, then the TLS layer only needs to encrypt and decrypt the data by that key, while for Deflate, it's usually the only option, then after it's setup as a layer, then everything either way reads/writes gets compressed.
client -> REQUEST
RESPONSE <- server
In some protocols these interleave
client -> REQUEST1
client -> REQUEST2
RESPONSE1A <- server
RESPONSE2A <- server
RESPONSE1B <- server
RESPONSE2B <- server
This then is called multiplexing/demultiplexing, for protocols like IMAP and HTTP/2, and another name for multiplexer/demultiplexer is mux/demux.

So, for TLS, the idea is that usually most or all of the connections will be using the same algorithms with different keys, and each connection will have its own key, so the idea is to completely separate TLS establishment from TLS cryptec (crypt/decrypt), so, the layer need only key up the bytes by the connection's key, in their TLS frames.

Then, most of the connections will use compression, then the idea is that the data is stored at rest compressed already and in a form that it can be concatenated, and that similarly as constants are a bunch of the textual context of the text-based protocol, they have compressed and concatenable constants, with the idea that the Deflate compec (comp/decomp) just passes those along concatenating them, or actively compresses/decompresses buffers of bytes or as of sequences of bytes.

The idea is that Readers and Writers deal with bytes at a time, arbitrarily many, then that what results being passed around as the data, is as much as possible handles to the data. So, according to the protocol and layers, indicates the types, that the command routines, get and return, so that the command routines can get specialized, when the data at rest, is already layerized, and otherwise to adapt to the more concrete abstraction, of the non-blocking, asynchronous, and vector I/O, of what results the flow-machine.

When the library of the runtime of the framework of the language provides the cryptec or compec, then, there's issues, when, it doesn't make it so for something like "I will read and write you the bytes as of making a TLS handshake, then return the algorithm and the key and that will implement the cryptec", or, "compec, here's either some data or handles of various types, send them through", it's to be figured out.

The idea for the TLS handshake, is basically to sit in the middle, i.e. to read and write bytes as of what the client and server send, then figuring out what is the algorithm and key and then just using that as the cryptec. Then after TLS algorithm and key is established the rest is sort of discarded, though there's some idea about state and session, for the session key feature in TLS. The TLS 1.2 also includes comp/decomp, though, it's figured that instead it's a feature of the protocol whether it supports compression, point being that's combining layers, and to be implemented about these byte-sequences/handles.
mux/demux
crypt/decrypt
comp/decomp
cod/decod
codec
So, the idea is to implement toward the concrete abstraction of nonblocking vector I/O, while remaining agnostic of that, so that all sorts of the usual test routines yet particularly the composition of layers and establishment and upgrade of protocols, is to happen.

Then, from the byte sequences or messages as byte sequences, or handles of byte sequences, results that in the protocol, the protocol either way in/out has a given expected set of alternatives that it can read, then as of derivative of those what it will write.

So, after the layers, which are agnostic of anything but byte-sequences, and their buffers and framing and chunking and so on, then is the protocol, or protocols, of the command-set and request/response semantics, and ordering/session statefulness, and lack thereof.
Then, a particular machine in the flow-machine is as of the "Recognizer" and "Parser", then what results "Annunciators" and "Legibilizers", as it were, of what's usually enough called "Deserialization", reading off from a serial byte-sequence, and "Serialization", writing off to a serial byte-sequence, first the text of the commands or the structures in these text-based protocols, the commands and their headers/bodies/payloads, then the Objects in the object types of the languages of the runtime, where then the routines of the servicing of the protocol, are defined in types according to the domain types of the protocol (and their representations as byte-sequences and handles).

As packets and bytes arrive in the byte-sequence, the Recognizer/Parser detects when there's a fully-formed command, and its payload, after the Mux/Demux Demultiplexer, has that the Demultiplexer represents any given number of separate byte-sequences, then according to the protocol anything their statefulness/session or orderedness/unorderedness.
So, the Demultiplexer is to Recognize/Parse from the combined input byte-stream its chunks, that now the connection, has any number of ordered/unordered byte-sequences, then usually that those are ephemeral or come and go, while the connection endures, with the most usual notion that there's only one stream and it's ordered in requests and ordered in responses, then whether commands get pipelined and requests need not await their responses (they're ordered), and whether commands are numbered and their responses get associated with their command sequence numbers (they're unordered and the client has its own mux/demux to relate them).

So, the Recognizer/Parser, theoretically only gets a byte at a time, or even none, and may get an entire fully-formed message (command), or not, and may get more bytes than a fully-formed message, or not, and the bytes may be a well-formed message, or not, and valid, or not.

Then the job of the Recognizer/Parser, is from the beginning of the byte-sequence, to Recognize a fully-formed message, then to create an instance of the command object related to the handle back through the mux/demux to the multiplexer, called the attachment to the connection, or the return address according to the attachment representing any routed response and usually meaning that the attachment is the user-data and any session data attached to the connection and here of the mux/demux of the connection. The job of the Recognizer/Parser is to work any time input is received, then to recognize and parse any number of fully-formed messages from the input, create those Commands according to the protocol, that the attachment includes the return destination, and, thusly release those buffers or advance the marker on the Input byte-sequence, so that the resources are freed, and later Recognizings/Parsing starts where it left off.

The idea is that bytes arrive, the Recognizer/Parser has to determine when there's a fully-formed message, consume that and service the buffers the byte-sequence, having created the derived command.
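For a text-based protocol like NNTP, where a command is a CRLF-terminated line, that Recognizer step might look like this sketch; buffer ownership and compaction are left to the caller, and the method name is just illustrative:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Scan the accumulated input (in read mode) for a CRLF; if a fully-formed
// command line is present, consume it and return it, otherwise return null
// and leave the buffer as-is so later recognizing starts where it left off.
String recognizeLine(ByteBuffer input) {
    for (int i = input.position(); i + 1 < input.limit(); i++) {
        if (input.get(i) == '\r' && input.get(i + 1) == '\n') {
            byte[] line = new byte[i - input.position()];
            input.get(line);                        // copy out the command bytes
            input.position(input.position() + 2);   // advance past the CRLF
            return new String(line, StandardCharsets.US_ASCII);
        }
    }
    return null;    // not yet a fully-formed command; wait for more bytes
}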
Now, commands are small, or so few words, then the headers/body/payload basically get larger and later unboundedly large. Then, the idea is that the protocol, has certain modes or sub-protocols, about "switching protocols", or modes, when basically the service of the routine changes from recognizing and servicing the beginning to ending of a command, to recognizing and servicing an arbitrarily large payload, or, for example, entering a mode where streamed data arrives or whatever sort, then that according to the length or content of the sub-protocol format, the Recognizer's job includes that the sub-protocol-streaming, modes, get into that "sub-protocols" is a sort of "switching protocols", the only idea though being going into the sub-protocol then back out to the main protocol, while "switching protocols" is involved in basically any the establishment or upgrade of the protocol, with regards to the stateful connection (and not stateless messages, which always are according to their established or simply some fixed protocol).

This way unboundedly large inputs, don't actually live in the buffers of the Recognizers that service the buffers of the Inputters/Readers and Multiplexers/Demultiplexers, instead define modes where they will be streaming through arbitrarily large payloads.

Here for NNTP and so on, the payloads are not considered arbitrarily large, though, it's sort of a thing that sending or receiving the payload of each message, can be defined this way so that in very, very limited resources of buffers, that the flow-machine keeps flowing.
Then, here, the idea is that these commands and their payloads, have their outputs that are derived as a function of the inputs. It's abstractly however this so occurs is the way it is. The idea here is that the attachment+command+payload makes a re-routine task, and is pushed onto a task queue (TQ). Then it's figured that the TQ represents abstractly the execution of all the commands. Then, however many Task Workers or TW, or the TQ that runs itself, get the oldest task from the queue (FIFO) and run it. When it's complete, then there's a response ready in byte-sequences or handles; these are returned to the attachment.

(The "attachment" usually just means a user or private datum associated with the connection to identify its session with the connection according to non-blocking I/O; here it also means the mux/demux "remultiplexer" attachment, it's the destination of any response associated with a stream of commands over the connection.)
So, here then the TQ basically has the idea of the re-routine, that is non-blocking and involves the asynchronous fulfillment of the routine in the domain types of the domain of object types that the protocol adapts as an adapter, that the domain types fulfill as adapted. Then for NNTP that's like groups and messages and summaries and such, the objects. For IMAP it's mailboxes and messages to read, for SMTP it's emails to send, with various protocols in SMTP being separate protocols like DKIM or what, for all these sorts protocols. For HTTP and HTTP/2 it's usual HTTP verbs, usually HTTP 1.1 serial and pipelined requests over a connection, in HTTP/2 multiplexed requests over a connection. Then "session" means broadly that it may be across connections, what gets into the attachment and the establishment and upgrade of protocol, that sessions are stateful thusly, yet granularly, as to connections yet as to each request.

Then, the same sort of thing is the same sort of thing to back-end, whatever makes for adapters, to domain types, that have their protocols, and what results the O/I side to the I/O side, that the I/O side is the server's client-facing side, while the O/I side is the server-as-a-client-to-the-backend's side.
Then, the O/I side is just the same sort of idea that in the flow-machine, the protocols get established in their layers, so that all through the routine, then the domain types are to get specialized to when byte-sequences and handles are known well-formed in compatible protocols, that the domain and protocol come together in their definition, basically so it results that from the back-end is retrieved for messages by their message-ID that are stored compressed at rest, to result passing back handles to those, for example a memory-map range offset to an open handle of a zip file that has the concatenable entry of the message-Id from the groups' day's messages, or a list of those for a range of messages, then the re-routine results passing the handles back out to the attachment, which sends them right out.
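That hand-off of handles rather than bytes is roughly what java.nio's FileChannel.transferTo gives; in this sketch the store, the offsets, and the zip-entry bookkeeping are assumed to be resolved elsewhere:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Write a stored-at-rest region (e.g. the concatenable zip entry for a
// message-ID) straight out toward the connection, passing a handle and a
// range rather than copying the bytes up through the routine.
long sendStoredRegion(FileChannel store, long offset, long length,
                      WritableByteChannel connection) throws IOException {
    long sent = 0;
    while (sent < length) {
        long n = store.transferTo(offset + sent, length - sent, connection);
        if (n == 0) {
            break;    // target not writable right now; a real writer would re-arm interest
        }
        sent += n;
    }
    return sent;
}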
So, this way there's that besides the TQ and its TW's, that those are to never block or be long-running, that anything that's long-running is on the O/I side, and has its own resources, buffers, and so on, where of course all the resources here of this flow-machine are shared by all the flow-machines in the flow-machine, in the sense that they are not shared yet come from a common resource altogether, and are exclusive. (This gets into the definition of "share" as with regards to "free to share, or copy" and "exclusive to share, a.k.a. taking turns, not cutting in line, and not stealing nor hoarding".)

Then on the O/I side or the backend side, it's figured the backend is any kind of adapters, like DB adapters or FS adapters or WS adapters, database or filesystem or webservice, where object-stores are considered filesystem adapters. What that gets into is "pools" like client pools, connection pools, resource pools, that a pool is usually enough according to a session and the establishment of protocol, then with regards to servicing the adapter and according to the protocol and the domain objects that thusly implement the protocol, the backend side has its own dedicated routines and TW's, or threads of execution, with regards to that the backend side basically gets a callback+request and the job is to invoke the adapter with the request, and invoke the callback with the response, then whether for example the callback is actually the original attachment, or it involves "bridging the unbounded sub-protocol", what it means for the adapter to service the command.

Then the adapter is usually either provided as with intermediate or domain types, or, for example it's just another protocol flow machine and according to the connections or messaging or mux/demux or establishing and upgrading layers and protocols, it basically works the same way as above in reverse.

Here "to service" is the usual infinitive that for the noun means "this machine provides a service" yet as a verb that service means to operate according to the defined behavior of the machine in the resources of the machine to meet the resource needs of the machine's actions in the capabilities and limits of the resources of the machine, where this "I/O flow-machine: a service" is basically one "node" or "process" in a usual process model, allocated its own quota of resources according to the process and its environment model in the runtime in the system, and that's it. So, there's servicing as the main routine, then also what it means the maintenance servicing or service of the extended routine.

Then, for protocols it's "implement this protocol according to its standards according to the resources in routine".
You know, I don't know where they have one of these anywhere, ....
So, besides attachment+command+payload, also is for indicating the protocol and layers, where it can be inferred for the response, when the callback exists or as the streaming sub-protocol starts|continues|ends, what the response can be, in terms of domain objects, or handles, or byte sequences, in terms of domain objects that can result handles to transfer or byte-sequences to read or write: an attachment+command+payload+protocols "ACPP" data structure.
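As a sketch, the ACPP might be little more than a record of those four parts; the member types here are placeholders, since what a command, payload, or protocol descriptor concretely is depends on the protocol:

// Placeholder sketch of the attachment+command+payload+protocols structure.
class ACPP {
    Object attachment;    // return/remultiplexer attachment: where the response goes
    Object command;       // the recognized command object
    Object payload;       // the payload, or a handle to it (possibly off to the side)
    Object protocols;     // established protocol and layers, for coding the response
}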
Another idea that seems pretty usual, is when the payload is off to the side, about picking up the payload when the request arrives, about when the command, in the protocol, involves that the request payload, is off to the side, to side-load the payload, where usually it means the payload is large, or bigger than the limits of the request size limit in the protocol. It sort of seems a good idea, to indicate for the protocol, whether it can resolve resource references, "external", then that accessing them as off to the side happens before ingesting the command or as whether it's the intent to reference the external resource, and when, when the external resource off to the side, "is", part of the request payload, or otherwise that it's just part of the routine.
That though would get into when the side effect of the routine, is to
result the external reference or call, that it's figured that would all
be part of the routine. It depends on the protocol, and whether the
payload "is" fully-formed, with or without the external reference.
Then HTTP/2 and Websockets have plenty going on about the multiplexer, where it's figured that multiplexed attachments, or "remultiplexer attachment", RMA, out from the demultiplexer and back through the multiplexer, have then that's another sort of protocol machine, in terms of the layers, and about whether there's a thread or not that multiplexing requires any sort of state on otherwise the connections' attachment, that all the state of the multiplexer is figured lives in a data structure on the actual attachment, while the logic should be re-entrant and just a usual module for the protocol(s).
It's figured then that the attachment is a key, with respect to a key
number for the attachment, then that in the multiplexing or muxing
protocols, there's a serial number of the request or command. There's a
usual idea to have serial numbers for commands besides, for each
connection, and then even serial numbers for commands for the lifetime
of the runtime. Then it's the usual metric of success or the error rate
how many of those are successes and how many are failures, that
otherwise the machine is pretty agnostic that being in the protocol.
Timeouts and cancels are sort of figured to be attached to the monad and the re-routine. It's figured that for any command in the protocol, it has a timeout. When a command is received, is when the timeout countdown starts, abstractly wall-clock time or system time. So, the ACPP has also the timeout time, so, the task T has an ACPP attachment-command-payload-protocol and a routine or reroutine R or RR. Then also it has some metrics M or MT, here start time and expiry time, and the serial numbers. So, how timeouts work is that when T is to be picked up by a TW, first TW checks whether M.time is past expiry, then if so it cancels the monad and results returning howsoever in the protocol the timeout. If not, what's figured is that before the re-routine runs through, it just tosses T back on the TQ anyway, so that then whenever it comes up again, it's just checked again until such time as the task T actually completed, or it expires, or it was canceled, or otherwise concluded, according to the combination of the monad of the R/RR, and M.time, and system time. Now, this seems bad, because an otherwise empty queue, would constantly be thrashing, so it's bad. Then, what's to be figured is some sort of parameter, "toss when", that then though would have timeout priority queues, or buckets of sorts with regards to tossing all the tasks T back on the TQ for no other reason than to check their timeout.
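One way to get those timeout buckets without thrashing the TQ is a separate expiry-ordered queue that a sweeper consults at the timeout granularity, roughly as in this sketch; the Task interface with expiryMillis/isConcluded/cancel is hypothetical:

import java.util.PriorityQueue;

interface Task {
    long expiryMillis();     // M.time: when this task's timeout is due
    boolean isConcluded();   // completed, canceled, or otherwise concluded
    void cancel();           // cancel the monad and return the timeout in the protocol
}

// Expiry-ordered timeouts: tasks are only revisited when their expiry is due,
// instead of tossing every task back on the TQ just to check its clock.
class TimeoutSweeper {
    private final PriorityQueue<Task> byExpiry =
        new PriorityQueue<>((a, b) -> Long.compare(a.expiryMillis(), b.expiryMillis()));

    void watch(Task task) { byExpiry.add(task); }

    // Called periodically, within the granularity of the timeouts.
    void sweep(long nowMillis) {
        while (!byExpiry.isEmpty() && byExpiry.peek().expiryMillis() <= nowMillis) {
            Task task = byExpiry.poll();
            if (!task.isConcluded()) {
                task.cancel();
            }
        }
    }
}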
It's figured that the monad of the re-routine is all the heap objects and references to handles of the outstanding command. So, when the re-routine is completed/canceled/concluded, then all the resources of the monad should be freed. Then it's figured that any routine to access the monad is re-entrant, and so that it results that access to the monad is atomic, to build the graph of memos in the monad, then that access to each memo is atomic as after access to the monad itself, so that the access to the monad is thread-safe (and to be non-blocking, where the only thing that happens to the monad is adding re-routine paths, and getting and setting values of object values and handles, then releasing all of it [, after picking out otherwise the result]).
So it's figured that if there's a sort of sweeper or closer being the
usual idea of timeouts, then also in the case that for whatever reason
the asynchronous backend fails, to get a success or error result and
callback, so that the task T
T{
RMA attachment; // return/remultiplexer attachment
PCP command; // protocol command/payload
RR routine; // routine / re-routine (monad)
MT metrics; // metrics/time
}
has that timeouts, are of a sort of granularity. So, it's not so much
that timeouts need to be delivered at a given exact time, as delivered
within a given duration of time. The idea is that timeouts both call a
cancel on the routine and result an error in the protocol. (Connection
and socket timeouts or connection drops or closures and so on, should
also result cancels as some form of conclusion cleans up the monad's
resources.)
There's also that timeouts are irrelevant after conclusion, yet if
there's a task queue of timeouts, not to do any work fishing them out,
just letting them expire. Yet, given that timeouts are usually much
longer than actual execution times, there's no point keeping them
around.
Then it's figured each routine and sub-routine has its timing, then
it's figured that the RR and MT both have the time, then, as with
regards to the RR and MT both having a monad, whether it's the
same monad is what's to be figured.
TASK {
RMA attachment; // return/remultiplexer attachment
PCP command; // protocol command/payload
RRMT routine; // routine / re-routine, metrics / time (monad)
}
Then it's figured that any sub-routine checks the timeout overall, and
the timeouts up the re-routine, and the timeout of the task, resulting a
cancel on any timeout, then basically to push that on the back of the
task queue or LIFO last-in-first-out, which seems a bad idea, though
it's to expeditiously return an error and release the resources,
and cancel any outstanding requests.
So, any time a task is touched, there's checking the attachment whether
it's dropped, checking the routine whether it's canceled, with the goal
of that it's all cleaned up to free the resources, and to close any
handles opened in the course of building the monad of the routine's
results.
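A sketch of that touch-time check, with Task, Attachment, Routine, and
Metrics as hypothetical minimal shapes (only the outline of T/TASK above,
nothing more): the worker runs a step only when nothing's dropped,
canceled, or expired, otherwise it concludes and frees:

    // Sketch only: what happens any time a task is touched, per the above
    // (check drop, check cancel, check expiry, then conclude and free).
    interface Attachment { boolean isDropped(); }
    interface Routine {
        boolean isCanceled();
        boolean isConcluded();
        void cancel();
        void runNextStep();   // one non-blocking step of the re-routine
        void release();       // free the monad: value objects and open handles
    }
    record Metrics(long start, long expiry) {}
    record Task(Attachment attachment, Routine routine, Metrics metrics) {

        // Returns true if the task should be tossed back on the TQ.
        boolean touch(long now) {
            if (attachment.isDropped() || routine.isCanceled() || now > metrics.expiry()) {
                routine.cancel();
                routine.release();
                return false;
            }
            routine.runNextStep();
            return !routine.isConcluded();
        }
    }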
Otherwise while a command is outstanding there's not much to be done
about it, it's either outstanding and not started or outstanding and
started, until it concludes and there's a return, the idea being that
the attachment can drop at any time and that would be according to the
Inputter/Reader or Recognizer/Parser (an ill-formed command results
either an error or a drop), the routine can conclude at any time either
completing or being canceled, then that whether any handles are open in
the payload, is that a drop in the attachment, disconnect in the
[streaming] command, or cancel in the routine, ends each of the three,
each of those two, or that one.
(This is that the command when 'streaming sub-protocol' results a bunch
of commands in a sub-protocol that's one command in the protocol.)
The idea is that the RMA is only enough detail to relate to the current
state in the attachment of the remultiplexing, the command is enough
state to describe its command and payload and with regards to what
protocol it is and what sub-protocols it entered and what protocol it
returns to, and the routine is the monad of the entire state of the
routine, either value objects or open handles, to keep track of all the
things according to these things.
So, still it's not quite clear how to have the timeout in the case that
the backend hangs, or drops, or otherwise that there's no response from
the adapter, what's a timeout. This sort of introduces re-try logic to
go along with time-out logic.
The re-try logic involves that anything can fail, and some things can
be re-tried when they fail. The re-try logic would be part of the
routine or re-routine, figuring that any re-tries still have to live in
the time of the command. Then re-tries are kind of like time-outs: it's
usual that it's not just hammering the re-tries, yet a usual sort of
back-off and retry-count, or retry strategy, and then whether it
involves that it should be a new adapter handle from the pool, about
that adapter handles from the pool should be round-robin, and when there
are retry-able errors that usually means the adapter connection is
un-usable, that getting a new adapter connection will get a new one, and
whether retry-able errors plainly enough indicate to recycle the adapter
pool.
Then, retry-logic also involves resource-down, what's called
circuit-breaker when the resource is down that it's figured that it's
down until it's back up. [It's figured that errors by default are _not_
retry-able, and, then as about the resource-health or
backend-availability, what gets involved in a model of critical
resource-recycling and backend-health.]
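A sketch of that retry shape, under the caveats above: retries live inside
the command's own deadline, back off between attempts, errors default to
not retry-able, and a circuit-breaker flag stops attempts cold; names here
are illustrative, not from any particular library, and the sleep() stands
in for what in this design would be a re-queue with a "toss when":

    import java.util.concurrent.Callable;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch: bounded retries with back-off, inside the command's deadline,
    // gated by a circuit-breaker flag for the backend resource.
    final class Retry {
        static final AtomicBoolean circuitOpen = new AtomicBoolean(false);

        static <T> T call(Callable<T> attempt, int maxTries, long deadlineMillis)
                throws Exception {
            long backoff = 50;                       // milliseconds, doubles each try
            Exception last = null;
            for (int i = 0; i < maxTries; i++) {
                if (circuitOpen.get()) break;        // resource is down until it's back up
                if (System.currentTimeMillis() + backoff > deadlineMillis) break;
                try {
                    return attempt.call();
                } catch (Exception e) {
                    last = e;
                    if (!isRetryable(e)) break;      // errors are not retry-able by default
                    Thread.sleep(backoff);           // in the design, a re-queue, not a sleep
                    backoff *= 2;
                }
            }
            throw last != null ? last : new Exception("circuit open or deadline passed");
        }

        static boolean isRetryable(Exception e) { return false; } // default: not retry-able
    }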
About server-push, there's an idea that it involves the remultiplexer
and that the routine, according to the protocol, synthesizes tasks and
is involved with the remultiplexer, to result it makes tasks then that
run like usual tasks. [This is part of the idea also of the mux or
remux, about 1:many commands/responses, and usually enough their
serials, and then, with regards to "opportunistic server push", how to
drop the commands that follow that would otherwise request the
resources. HTTP/2 server-push looks deprecated, while then there's
WebSocket, which basically makes for a different sort of use-case
peer-peer than client-server. For IMAP is the idea that when there are
multiple responses to single commands then that's basically in the
mux/remux. For pipelined commands and also for serial commands is the
mux/remux. The pipelined commands would result state building in the
mux/remux when they're returned disordered, with regards to results and
the handles, and 'TCB' or 'TW' driving response results.]
So, how to implement timeout or the sweeper/closer, has for example that
a connection drop, should cancel all the outstanding tasks for that
connection. For example, undefined behavior of whatever sort results a
missed callback, should eventually timeout and cancel the task, or all
the tasks instances in the TQ for that task. (It's fair enough to just
mark the monads of the attachment or routine as canceled, then they'll
just get immediately discarded when they come up in the TQ.) There's no
point having timeouts in the task queue because they'd either get
invoked for nothing or get added to the task queue long after the task
usually completes. (It's figured that most timeouts are loose timeouts
and most tasks complete in much under their timeout, yet here it's
automatic that timeouts are granular to each step of the re-routine, in
terms of the re-routine erroring-out if a sub-routine times-out.)
The Recognizer/Parser (Commander) is otherwise stateless, the
Inputter/Reader and its Remultiplexer Attachment don't know what results
Tasks, the Task Queue will run (and here non-blockingly) any Task's
associated routine/re-routine, and catch timeouts in the execution of
the re-routine. The idea is that the sweeper/closer basically would only
result having anything to do when there's undefined behavior in the
re-routine, or bugs, or backend timeouts, then whether calls to the
adapter would have the timeout-task-lessors or "TTL's", in its task
queue, point being that when there's nothing going on the entire
thing is essentially _idle_, with the Inputter/Reader blocked on select
on the I/O side, the Outputter/Writer or Backend Adapter sent on the O/I
side, the Inputter/Reader blocked on the O/I side, the TQ's empty (of,
the protocol, and, the backend adapters), and it's all just pending
input from the I/O or O/I side, to cascade the callbacks back to idle,
again.
I.e. there shouldn't be timeout tasks in the TQ, because, at low load,
they would just thrash and waste cycles, and at high load, would arrive
late. Yet, it is so that there is formal un-reliability of the routines,
and formal un-reliability of the O/I side or backend, [and formal
un-reliability of connections or drops,] so some sweeper/closer checks
outstanding commands, what should result canceling the command and its
routines, then as with regards to the backend adapter, recycling or
tearing down the backend adapter, to set it up again.
Then the idea is that Tasks well enough represent the outstanding
commands, yet there's not to be maintaining a task set next to the task
queue, because it would use more space and maintenance in time than the
queue itself, while multiple instances of the same Task can be in the
Task queue as each points to the state of the monad in the re-routine.
Then it gets into whether it's so that there is a task-set next to the
task-queue, then that concluding the task removes it from the set, while
the sweeper/closer just is scheduled to run periodically through the
entire task-set and cancel those expired, or dropped.
Then, having both a task-set TS and task-queue TQ maybe seems the thing
to do, where it should be sort of rotating, because the task-queue is
FIFO, while the task-set is just a set (a concurrent set, though as with
regards to that the tasks can only be marked canceled, and resubmitted
to the task queue, with regards to that the only action that removes
tasks from the task-set is for the task-queue to result them being
concluded, then that whatever task gets tossed on the task queue is to
be inserted into the task-set).
Then the task-set TS would be on the order of outstanding tasks, while,
the task-queue TQ would be on the order of outstanding tasks'
re-routines.
Then the usual idea of sweeper/closer is to iterate through a view of
the TS, check each task whether its attachment dropped or command or
routine timed-out or canceled, then if dropped or canceled, to toss it
on the TQ, which would eventually result canceling if not already
canceled and dropping if dropped.
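A sketch of that TS/TQ arrangement and the periodic sweep: the set is
concurrent, submission inserts into both, only conclusion removes from the
set, and the sweeper just tosses the expired or dropped back on the queue
to be concluded there:

    import java.util.Queue;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.function.Predicate;

    // Sketch: task-set on the order of outstanding tasks, task-queue on the
    // order of outstanding tasks' re-routines; TZ sweeps the set periodically.
    final class Tasks<T> {
        final Set<T> taskSet = ConcurrentHashMap.newKeySet();      // TS
        final Queue<T> taskQueue = new ConcurrentLinkedQueue<>();  // TQ

        void submit(T task) {
            taskSet.add(task);       // whatever gets tossed on the TQ is in the TS
            taskQueue.add(task);
        }

        void conclude(T task) {
            taskSet.remove(task);    // only conclusion removes from the set
        }

        // TZ: iterate a view of TS; anything expired or dropped just gets tossed
        // back on the TQ, which will conclude it when it comes up.
        void sweep(Predicate<T> expiredOrDropped) {
            for (T task : taskSet) {
                if (expiredOrDropped.test(task)) {
                    taskQueue.add(task);
                }
            }
        }
    }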
(Canceling/Cancelling.)
Most of the memory would be in the monads, also the open or live handles
would be in the routine's monads, with the idea being that when the task
concludes, then the results, that go out through the remultiplexer,
should be part of the task.
TASK {
RMA attachment; // return/remultiplexer attachment
PCP command; // protocol command/payload
RRMT routine; // routine / re-routine, metrics / time (monad)
RSLT result; // result (monad)
}
It's figured that the routine _returns_ a result, which is either a
serializable value or otherwise it's according to the protocol, or it's
a live handle or specification of handle, or it has an error/exception
that is expected to be according to the protocol, or that there was an
error then whether it results a drop according to the protocol. So, when
the routine and task concludes, then the routine and metrics monads can
be released, or de-allocated or deleted, while what live handles they
have are to be passed back as expeditiously as possible to the
remultiplexer to be written to the output as on the wire the protocol,
so that the live handles can be closed or their reference counts
decremented or otherwise released to the handle pool, of a sort, which
is yet sort of undefined.
The result RSLT isn't really part of the task, once the task is
concluding, the RRMT goes right to the RMA according to the PCP, that
being the atomic operation of concluding the task, and deleting it from
the task-set. (It's figured that outstanding callbacks unaware their
cancel, of the re-routines, basically don't toss the task back onto the
TQ if they're canceled, that if they do, it would just sort of
spuriously add it back to the task-set, which would result it being
swept out eventually.)
TASK {
RMA attachment; // return/remultiplexer attachment
PCP command; // protocol command/payload
RRMT routine; // routine / re-routine, metrics / time (monad, live
handles)
}
TQ // task queue
TS // task set
TW // task-queue worker thread, latch on TQ
TZ // task-set cleanup thread, scheduled about timeouts
Then, about what threads run the callbacks, is to get figured out.
TCF // thread call forward
TCB // thread call back
It's sort of figured that calling forward is into the adapters and
backend, and calling back is out of the result to the remultiplexer, and
running the remultiplexer also. This is that the task-worker thread
invokes the re-routines, and the re-routine callbacks are pretty much
called by the backend or TCF, because all they do is toss back onto the
TQ, so that the TW runs the re-routines, the TCF is involved in the O/I
side and the backend adapter, and what reserves live handles, while the
TCB returns the results through the I/O side, and what recycles live
handles.
Then it's sort of figured that the TCF result thread groups or whatever
otherwise results whatever blocks and so on howsoever it is that the
backend adapter is implemented, while TCB is pretty much a single
thread, because it's driving I/O back out through all the open
connections, or that it describes thread groups back out the I/O side.
("TCB" not to be confused with "thread control block".)
Nonblocking I/O, and, Asynchronous I/O
One thing I'm not too sure about is the limits of the read and write of
the non-blocking I/O. What I figure is that mostly buffers throughout
are 4KiB buffers from a free-list, which is the usual idea of reserving
buffers and getting them off a free-list and returning them when done.
Then, I sort of figure that the reader, gets about a 1MiB buffer for
itself, with the idea being, that the Inputter when there is data off
the wire, reads it into 1MiB buffer, then copies that off to 4KiB
buffers.
BFL // buffer free-list, 1
BIR // buffer of the inputter/reader, 1
B4K // buffer of 4KiB size, many
What I figure is that BIR is "direct memory" as much as possible, for DMA
where native, while figuring that pretty much it's buffers on the heap,
fixed-size buffers of small enough size to usually not be mostly sparse,
while not so small that usual larger messages aren't a ton of them, then
with regards to the semantics of offsets and extents in the buffers and
buffer lists, and atomic consumption of the front of the list and atomic
concatenation to the back of the list, or queue, and about the
"monohydra" or "slique" data structure defined way above in this thread.
Then about writing is another thing, I figure that a given number of
4KiB buffers will write out, then no longer be non-blocking while
draining, about the non-blocking I/O, that read is usually non-blocking
because if nothing is available then nothing gets copied, while write
may be blocking because the UART or what it is remains to drain to write
more in.
I'm not even sure about O_NONBLOCK, aio_read/aio_write, and overlapped
I/O.
Then it looks like O_NONBLOCKING with select and asynchronous I/O the
aio or overlapped I/O, sort of have different approaches.
I figure to use non-blocking select, then, the selector for the channel
at least in Java, has both read and write interest, or all interest,
with regards to there only being one selector key per channel (socket).
The issue with this is that there's basically that the Inputter/Reader
and Outputter/Writer are all one thread. So, it's figured that reads
would read about a megabyte at a time, then round-robin all the ready
reads and writes, that for each non-blocking read, it reads as much as a
megabyte into the one buffer there, copies the read bytes appending it
into the buffer array in front of the remux Input for the attachment,
tries to write as many as possible for the buffer array for the write
output in front of the remux Output for the attachment, then proceeds
round-robin through the selector keys. (That each of those is
non-blocking on the read/write a.k.a. recv/send then copying from the
read buffer into application buffers is according to as fast as it can
fill a free-list given list of buffers, though that any might get
nothing done.)
One of the issues is that the selector keys get waked up for read, when
there is any input, and for write, when the output has any writeable
space, yet there's no reason to service the write keys when there is
nothing to write, and nothing to read from the read keys when nothing to
read.
So, it's figured the read keys are always of interest, yet if the write
keys are of interest, mostly it's only one or the other. So I'd figure
to have separate read and write selectors, yet it's suggested they must
go together, the channel, the operations of interest, then whether the
idea is "round-robin write then round-robin read", because all the
selector keys would always be waking up for writing nothing when the way
is clear, for nothing.
Then besides non-blocking I/O is asynchronous I/O, where mostly the
idea is that the completion handler results about the same, ..., where
the completion handler is usually enough "copy the data out to read,
repeat", or just "atomic append more to write, repeat", with though
whether that results that each connection needs its own read buffers, in
terms of asynchronous I/O, not saying in what order or whether
completion handlers, completion ports or completion handlers, would for
reading each need their own buffer. I.e., to scale to unbounded many
connections, the idea is to use constant size resources, because
anything linear would grow unbounded. That what's to write is still all
these buffers of data, and how to "deduplicate the backend" still has
that the heap fills up with tasks, that the other great hope is that the
resulting runtime naturally rate-limits itself, by what resources it
has, heap.
About "live handles" is the sort of hope that "well when it gets to the
writing the I/O, figuring to transfer an entire file, pass it an open
handle", is starting to seem a bad idea, mostly for not keeping handles
open while not actively reading and writing from them, and that mostly
for the usual backend though that does have a file-system or
object-store representation, how to result that results a sort of
streaming sub-protocol routine, about fetching ranges of the objects or
otherwise that the idea is that the backend file is a zip file, with
that the results are buffers of data ready to write, or handles, to
concatenate the compressed sections that happen to be just ranges in the >>> file, compressed, with concatenating them together about the internals
of zip file format, the data at rest. I.e. the idea is that handles are
sides of a pipe then to transfer the handle as readable to the output
side of the pipe as writeable.
It seems though for various runtimes, that both a sort of "classic
O_NONBLOCKING" and "async I/O in callbacks" organizations, can be about
same, figuring that whenever there's a read that it drives the Layers
then the Recognizer/Parser (the remux if any and then the
command/payload parser), and the Layers, and if there's anything to
write then the usual routine is to send it and release to recycle any
buffers, or close the handles, as their contents are sent.
It's figured to marshal whatever there is to write as buffers, while,
the idea of handles results being more on the asynchronous I/O on the
backend when it's filesystem. Otherwise it would get involved partially
written handles, though there's definitely something to be said for an
open handle to an unbounded file, and writing that out without breaking
it into a streaming-sub-protocol or not having it on the heap.
"Use nonblocking mode for this operation; that is, this call to preadv2
will fail and set errno to EAGAIN if the operation would block. "
The goal is mostly being entirely non-blocking, then with that the
atomic consume/concatenate of buffers makes for "don't touch the buffers
while their I/O is outstanding or imminent", then that what services I/O
only consumes and concatenates, while getting from the free-list or
returning to the free-list, what it concatenates or consumes. [It's
figured to have buffers of 4KiB or 512KiB size, the inputter gets a 1MiB
direct buffer, that RAM is a very scarce resource.]
So, for the non-blocking I/O, I'm trying to figure out how to service
the ready reads, while only servicing ready writes that also have
something to write. Then I don't much worry about it because ready
writes with nothing to write would result a no-op. Then, about the
asynchronous I/O, is that there would always be an outstanding or
imminent completion result for the ready read, or that, I'm not sure how
to make it so that reads are not making busy-work, while it seems clear
that writes are driven by there being something to write, then though
not wanting those to hammer when the output buffer is full. In this
sense the non-blocking vector I/O with select/epoll/kqueue or what, uses
less resources for services that have various levels of load,
day-over-day.
https://hackage.haskell.org/package/machines
https://clojure.org/reference/transducers
https://chamibuddhika.wordpress.com/2012/08/11/io-demystified/
With non-blocking I/O, or at least in Java, the attachment, is attached
to the selection key, so, they're just round-robin'ed. In asynchronous
(aio on POSIX or overlapped I/O on Windows respectively), in Java the
completion event gets the attachment, but doesn't really say how to
invoke the async send/recv again, and I don't want to maintain a map of
attachments and connections, though it would be alright if that's the
way of things.
Then it sort of seems like "non-blocking for read, or drops, async I/O
for writes". Yet, for example in Java, a SocketChannel is a
SelectableChannel, while, an AsyncSocketChannel, is not a
SelectableChannel.
Then, it seems pretty clear that while on Windows, one might want to
employ the aio model, because it's built into Windows, then as for the
sort of followup guarantees, or best when on Windows, that otherwise the
most usual approach is "O_NONBLOCKING" for the socket fd and the fd_set.
Then, what select seems to guarantee, is, that, operations of interest,
_going to ready_, get updated, it doesn't say anything about going to
un-ready. Reads start un-ready and writes start ready, then that the
idea is that select results updating readiness, but not unreadiness.
Then the usual selector implementation, for the selection keys, and the
registered keys and the selected keys, for the interest ops (here only
read and write yet also connect when drops fall out of it) and ready
ops.
Yet, it doesn't seem to really claim to guarantee, that while working
with a view of the selection keys, that if selection keys are removed
because they're read-unready (nothing to do) or nothing-to-write
(nothing to do), one worries that the next select round has to have
marked any read-ready, while, it's figured that any something-to-write,
should add the corresponding key back to the selection keys. (There's
for that if the write buffer is full, it would just return 0 I suppose,
yet not wanting to hammer/thrash/churn instead just write when ready.)
So I want to establish that there can be more than one selector,
because, otherwise I suppose that the Inputter/Reader (now also
Outputter/Writer) wants read keys that update to ready, and write keys
that update to ready, yet not write keys that have nothing-to-do, when
they're all ready when they have nothing-to-do. Yet, it seems pretty
much that they all go through one function, like WSPSelect on Windows.
I suppose there's setting the interest ops of the key, according to
whether there's something to write, figuring there's always something to
read, yet when there is something to write, it would involve finding the
key and setting its write-interest again. I don't figure that any kind
of changing the selector keys themselves is any kind of good idea at
all, but I only want to deal with the keys that get activity.
Also there's an idea that read() or write() might return -1 and set
EAGAIN in the POSIX thread-local error number, yet for example in the
Java implementation it's to be avoided altogether calling the unready,
as they only return >0 or throw an otherwise ambiguous exception.
So, I'm pretty much of a mind to just invoke select according to 60
seconds timeout, then just have the I/O thread service all the selection
keys, what way it can sort of discover drops as it goes through, then
read if readable and write if write-able, and timeout according to the
protocol if the protocol has a timeout.
Yet, it seems instead that when a read() or write() returns, it's to be
called until read() or write() returns 0, so there is a bit of
initialization to figure out, must be. What it seems is that selection
is on all the interest ops, then to unset interest on OP_WRITE until
there is something to write, then to set interest on OP_WRITE on the
selector's keys before entering select, wherein it will populate what's
writable, as where it's writable. Yet, there's not removing the key, as
it will show up for OP_READ presumably anyways.
Anyways it seems that it's alright to have multiple selectors anyways,
so having separate read and write selectors seems fine. Then though
there's two threads, so both can block in select() at the same time.
Then it's figured that the write selector is initialized by deleting the
selected-key, as it starts by default write-able, and then it's only of
interest when it's ever full on writing, so it comes up, there's writes
until done and it's deleted, then that continues until there's nothing
to do. The reads are pretty simple then, and when the selected-keys come
up they're read until nothing-to-do, then deleted from selected-keys.
[So, the writer thread is mostly only around to finish unfulfilled
writes.]
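A minimal sketch of that two-selector arrangement in Java NIO, assuming a
channel is registered with the write selector only while a connection has
unfulfilled writes, so the writer mostly idles:

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Sketch: reads always of interest on one selector; a channel is put on
    // the write selector only while it has something left to write.
    final class ReadWriteSelectors {
        final Selector readSelector;
        final Selector writeSelector;

        ReadWriteSelectors() throws IOException {
            readSelector = Selector.open();
            writeSelector = Selector.open();
        }

        void adopt(SocketChannel ch, Object attachment) throws IOException {
            ch.configureBlocking(false);
            ch.register(readSelector, SelectionKey.OP_READ, attachment);
        }

        // Called when a connection gains pending output that didn't write through.
        void wantWrite(SocketChannel ch, Object attachment) throws IOException {
            ch.register(writeSelector, SelectionKey.OP_WRITE, attachment);
        }

        // Reader thread: block up to 60s, then service whatever became readable.
        void readRound() throws IOException {
            readSelector.select(60_000);
            for (Iterator<SelectionKey> it = readSelector.selectedKeys().iterator(); it.hasNext(); ) {
                SelectionKey key = it.next();
                it.remove();
                // ... non-blocking read into the 1MiB buffer, copy off to 4KiB buffers ...
            }
        }

        // Writer thread: mostly idle, only finishes unfulfilled writes.
        void writeRound() throws IOException {
            writeSelector.select(60_000);
            for (Iterator<SelectionKey> it = writeSelector.selectedKeys().iterator(); it.hasNext(); ) {
                SelectionKey key = it.next();
                it.remove();
                // ... drain pending buffers; once nothing is left, cancel so it idles ...
                key.cancel();
            }
        }
    }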
Remux: Multiplexer/Demultiplexer, Remultiplexer, mux/demux
A command might have multiple responses, where it's figured it will
result multiple tasks, or a single task, that return to a single
attachment's connection. The multiplexer mostly accepts that requests
are multiplexed over the connection, so it results that those are
ephemeral and that the remux creates remux attachments to the original
attachment, involved in any sort of frames/chunks. The compression layer
is variously before or after that, then encryption is after that, while
some protocols also have encryption of a sort within that.
The remux then results that the Recognizer/Parser just gets input, and
recognizes frames/chunks their creation, then assembling their contents
into commands/payloads. Then it's figured that the commands are
independent and just work their way through as tasks and then get
chunked/framed as according to the remux, then also as with regards to
"streaming sub-protocols with respect to the remux".
Pipelined commands basically result a remux, establishing that the
responses are written in serial order as were received.
It's basically figured that 63 bit or 31 bit serial numbers would be
plenty to identify unique requests per connection, and connections and
so on, about the lifetime of the routine and a serial number for each
thing.
IO <-> Selectors <-> Rec/Par <-> Remux <-> Rec/Par <-> TQ/TS <-> backend
Well I figure that any kind of server module for the protocol needs the
client module.
Also it's sort of figured that a client adapter has a similar usual
approach to the non-blocking I/O to get figured out, as what with
regards to then usual usage patterns of the API, and expecting to have a
same sort of model of anything stateful the session, and other issues
involved with the User-Agent, what with regards to the things how a
client is, then as with regards to it has how it constructs the commands
and payloads, with the requests it gets of the commands and partial
payloads (headers, body, payload), how it's to be a thing.
Also it's figured that there should be a plain old stdin/stdout that
then connects to one of these things instead of sockets, then also for
testing and exercising the client/server that it just builds a pair of
unidirectional pipes either way, these then being selectable channels in
Java or otherwise the usual idea of making it so that stdin/stdout are a
connection.
With regards to that then it looks like TLS (1.2, 1.3, maybe 1.1) should
be figured out first, then a reasonably involved multiplexing, then as
with regards to something like the QUIC UDP multiplexing, then about how
that sits with HTTP/2 style otherwise semantics, then as with regards to
SCTP, and this kind of thing.
I.e., if I'm going to implement QUIC, first it should be SCTP.
The idea of the client in the same context as the server, sort of looks
simple, it's a connection pool, then as with regards to that usually
enough, clients call servers not the other way around, and clients send
commands and receive results and servers receive commands and send
results. So, it's the O/I side.
It's pretty much figured that on protocols like HTTP 1.1, and otherwise
usual adapters with one session, there's not much considered about
sessions that bridge adapter connections, with the usual idea that
multiple client-side connections might be a session, and anything
session-stateful is session-stateful anywhere in the back-end fleet,
where it's figured usually that any old host in the backend picks up any
old connection from the front-end.
Trying to figure out QUIC ("hi, we think that TCP/IP is ossified because
we can't just update Chrome and outmode it, and can't be bothered to
implement SCTP and get other things figured out about unreliable
datagrams multiplexing a stream's multiplex connection, or changing IP
addresses"), then it seems adds a sort of selectable-channel abstraction
in front of it, in terms of anything about it being a session, and all
the datagrams just arriving at the datagram port. (QUIC also has
"server-initiated" so it's a peer-to-peer setup not a client-server
setup.) Then it's figured that anything modeling QUIC (over UDP) should
be side-by-side SCTP over UDP, and Datagram TLS DTLS.
So, TLS is ubiquitous, figuring if anybody actually wants to encrypt
anything that it's in the application layer, then there's this ALPN to
get it associated with protocols, or this sort of "no-time for a TLS
handshake, high-five", TLS is ubiquitous, to first sort of figure out
TLS, then for the connections, then about the multiplexing, about kinds
of stateful stream datagram, sessions. ("Man in the middle? Here let me
NAT your PAC while you're on the phone.")
As part of a protocol, there's either TLS always and it precedes
entering otherwise the protocol, or, there's STARTTLS, which is figured
then to be for the duration, barring "switching protocols". It's assumed
that "streaming-sub-protocols" are "assume/resume" protocol, while
"switching protocols" is "finish/start".
Then, there's a simple sort of composition of attributes of protocols,
and profiles of protocols after capabilities and thus profiles of
protocols in effect.
In Java the usual idea of TLS is called SSLEngine. Yet, SSLEngine is
sort of organized around blocking calls, or "sitting on the socket". It
doesn't really have a way to feed it the handshake, get the master key,
then just encrypt everything following with that. So it's figured that
as a profile module, it's broken apart a bit the TLS protocol, then
anything to do with certificates or algorithms is in java.security or
javax.security anyways. Then AEAD is just a common way to make encrypted
frames/chunks. It's similar with Zip/Deflate, and that it should be
first-class anyways because there's a usual idea to use zip files as
file system representation of compressed, concatenable data at rest,
for mostly transferring static, constant content as the compressed data
at rest. The idea of partially-or-weakly encrypted data at rest is a good
dog but won't hunt as above, yet the block-cipher in the usual sense
should operate non-blockingly on the buffers. Not sure about "TLS Change
Cipher".
So, TLS has "TLS connection state", it's transport-layer. Then, what
this introduces is how to characterize or specify frames, or chunks, in
terms of structs, and alignment, and fixed-length and variable-length
fields, of the most usual sorts of binary organizations of records, here
frames or chunks.
https://en.wikipedia.org/wiki/X.690
The ASN.1 encoding, abstract syntax notation, is a very usual way to
make a bunch of the usual things like for other ITU-T or ITU-X
specifications, like X.509 the certificates and so on. Then if the
structure is simple enough, then all the OID's have to get figured out
as basically the OID's are reserved values and constants according to
known kinds of contents, and in the wild there are many tens of
thousands of them, while all that's of interest is a few tens or less,
all the necessary things for interpreting TLS X.509 certificates and
this kind of thing. So, this is a usual way to describe the structures
and nested structures of any organization of data on the wire.
Then, frames and chunks, basically are octets, of a very usual sort of
structure as "header, and body", or "frame" or "chunk", where a frame
usually has a trailer, header-body-trailer. The headers and trailers are
usually fixed length, and one of the fields is the size or length of the
variable-length body. They're sometimes called blocks.
https://datatracker.ietf.org/doc/html/rfc1951 (Deflate)
Deflate has Huffman codes then some run-length encoding, for most of the
textual data it's from an alphabet of almost entirely ISO646 or ASCII,
and there's not much run-length at all, while the alphabets of base32 or
base64 might be very usual, then otherwise binary data is usually
already compressed and to be left alone.
There's basically to be figured if there's word-match for commonly or
recently used words, in the about 32K window of the Deflate block,
mostly otherwise about the minimal character sets and its plain sorted
Huffman table the block. The TLS plaintext blocks are limited to 2^14 or
about 16K, the Deflate non-compressed blocks are limited to about 64K,
the compressed blocks don't have length semantics, only
last-block/end-of-block. The Deflate blocks have a first few bits that
indicate block/last-block, then there's a reserved code end-of-block.
The TLS 1.2 with Deflate says that Deflate state has to continue
TLS-block over TLS-block, while, it needn't, for putting Deflate blocks
in TLS blocks closed, though accepting Deflate blocks over consecutive
TLS blocks. For email messages it's figured that the header is a block,
the separator is a block, and the body is a block. For HTTP it's figured
the header is a Deflate block, the separator is a Deflate block, and the
body is a Deflate block. The commands and results, it's figured, are
Deflate blocks. This way then they just get concatenated, and are
self-contained. It's figured that decompression, recognize/parse, copies
into plaintext, as whatever has arrived, after encryption, block-ciphers
the block into what's either the TLS 1.2 (not TLS 1.3) or mostly only
the application protocol has as compression, Deflate. (Data is
lsb-to-msb, Huffman codes msb-to-lsb. 256 = 0x100 = 1_0000_0000b is the
end-of-block code.) For text data it would seem better to reduce the
literal range overall, and also to make a Huffman table of the
characters, which are almost always < 256 many, anyways. I.e., Deflate
doesn't make a Huffman table of the alphabet of the input, and the
minimum length of a duplicate-coded word is 3.
"The Huffman trees for each block are independent
of those for previous or subsequent blocks; the LZ77 algorithm may
use a reference to a duplicated string occurring in a previous block,
up to 32K input bytes before." -
https://datatracker.ietf.org/doc/html/rfc1951#section-2
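A sketch of making those self-contained, concatenable Deflate pieces with
java.util.zip, using raw deflate and a full flush so each piece ends on a
block boundary and the next piece doesn't reference back into it;
header/separator/body here are just example byte arrays:

    import java.util.Arrays;
    import java.util.zip.Deflater;

    // Sketch: compress a piece (header, separator, or body) as raw deflate,
    // FULL_FLUSH so the look-back window is cut and pieces concatenate.
    final class DeflateBlocks {
        static byte[] compressPiece(Deflater deflater, byte[] piece) {
            deflater.setInput(piece);
            byte[] out = new byte[piece.length + 64];   // rough headroom, sketch only
            int total = 0;
            while (true) {
                int n = deflater.deflate(out, total, out.length - total, Deflater.FULL_FLUSH);
                if (n == 0) break;
                total += n;
                if (total == out.length) out = Arrays.copyOf(out, out.length * 2);
            }
            return Arrays.copyOf(out, total);
        }

        public static void main(String[] args) {
            Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw, no zlib wrapper
            byte[] header = compressPiece(d, "Subject: hi\r\n".getBytes());
            byte[] separator = compressPiece(d, "\r\n".getBytes());
            byte[] body = compressPiece(d, "hello\r\n".getBytes());
            // header + separator + body concatenate into one deflate stream
            // (a final finished block would terminate the stream; elided).
            System.out.println(header.length + " " + separator.length + " " + body.length);
        }
    }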
Zip file format (2012):
https://www.loc.gov/preservation/digital/formats/digformatspecs/APPNOTE%2820120901%29_Version_6.3.3.txt
https://www.loc.gov/preservation/digital/formats/fdd/fdd000354.shtml
"The second mechanism is the creation of a hidden index file containing
an array that maps file offsets of the uncompressed file, at every chunk
size interval, to the corresponding offset in the Deflate compressed
stream. This index is the structure that allows SOZip-aware readers to
skip about throughout the file."
- https://github.com/sozip/sozip-spec/blob/master/sozip_specification.md
It's figured that if the zip file has a length check and perhaps a
checksum attribute for the file, then besides modifications then
So, the profiles in the protocols, or capabilities, are variously called
extensions, about the mode of the protocol, and sub-protocols, or just
the support of the commands.
Then, there's what's "session", in the connection, and
"streaming-sub-protocols", then sorts, "chained-sub-protocols" ("command
sequence"), where streaming is for very large files where chained is for
sequences of commands so related, for examples SMTP's MAIL-RCPT-DATA and
MAIL-RCPT-RSET. Then the usual connection overall is a chained protocol,
from beginning and something like HELO/EHLO to ending and something like
QUIT. In HTTP for example, there's that besides upgrades which is
switching, and perhaps streaming-sub-protocols for large files, and
something like CORS expectations about OPTIONS, there are none, though
application protocol above it like "Real REST", may have.
Then, in "session", are where the application has from the server any of
its own initiated events, these results tasks as for the attachment.
The, "serial-sub-protocol" is when otherwise unordered commands, have
that a command must be issued in order, and also, must be completed
before the next command, altogether, with regards to the application and
session.
About driving the inputter/reader, it's figured the Reader thread, TIR,
both services the input I/O, then also drives the remux remultiplexer
and also drives the rec/par
recognizer/parser, and the decryption and the decompression, so that its
logic is to offload up to a megabyte from each connection, copying that
into buffers for each connection, then go through each of those, and
drive their inputs, constructing what's rec/par'ed and releasing the
buffers. It results it gets a "set" of ready inputs, and there's an idea
that the order those get served should be randomized, with the idea that
any connection is as likely as any other to get their commands in first.
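A small sketch of that randomized service order over the ready set, so
that no connection consistently gets its commands in first:

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch: copy the ready keys, shuffle, then drive the filters and
    // rec/par for each connection in that random order.
    final class RandomizedReads {
        static void serviceReadyInputs(Selector readSelector) throws IOException {
            readSelector.select(60_000);
            List<SelectionKey> ready = new ArrayList<>(readSelector.selectedKeys());
            readSelector.selectedKeys().clear();
            Collections.shuffle(ready);   // any connection as likely as any other
            for (SelectionKey key : ready) {
                // ... offload up to ~1MiB, copy to per-connection buffers,
                //     drive decryption, decompression, remux, rec/par ...
            }
        }
    }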
Writing the Code
The idea of writing the code is: the least amount. Then, the protocol
and its related protocols, and its data structures and the elements of
its recognition and parsing, should as far as possible be minimal, then
at compile time, the implementation, derived, resulting then according
to the schema in "abstract syntax", files and classes and metadata, that
the interfaces and base classes are derived, generated, then that the
implementations are composed as of those.
(The "_" front or back is empty string,
"_" inside is " " space,
"__" inside is "-" dash,
"___" inside is "_" underscore,
and "____" inside is "." dot.)
class SMTP {
extension Base {
enum Specification { RFC821, RFC1869, RFC2821, RFC5321 }
enum Command {HELO, EHLO, MAIL, RCPT, DATA, RSET, VRFY, EXPN,
HELP, NOOP, QUIT}
}
extension SIZE {
enum Specification {RFC1870 }
enum Result { SIZE }
}
extension CHECKPOINT {
enum Specification {RFC1845 }
}
extension CHUNKING {
enum Specification {RFC3030 }
}
extension PIPELINING {
enum Specification {RFC2920 }
}
extension _8BITMIME {
enum Specification {RFC6152 }
enhanced _8BITMIME {
enum Command {EHLO, MAIL}
enum Result {_8BITMIME}
}
}
extension SMTP__AUTH {
enum Specification {RFC4954 }
enum Command {AUTH}
}
extension START__TLS {
enum Specification {RFC3207}
enum Command { STARTTLS }
}
extension DSN {
enum Specification {RFC3461 }
}
extension RFC3865 {
enum Specification {RFC3865}
enhanced RFC3865 {
enum Command {EHLO, MAIL, RCPT }
enum Result {NO__SOLICITING, SOLICIT }
}
}
extension RFC4141 {
enum Specification {RFC4141 }
enhanced RFC4141 {
enum Command {EHLO, MAIL }
enum Result {CONPERM, CONNEG }
}
}
// enum Rfc {RFC3207, RFC6409 }
}
class POP3 {
enum Rfc {RFC1939, RFC1734 }
extension Base {
enum Specification {RFC1939 }
class States {
class AUTHORIZATION {
enum Command {USER, PASS, APOP, QUIT}
}
class TRANSACTION {
enum Command {STAT, LIST, RETR, DELE, NOOP, RSET, QUIT
, TOP, UIDL}
}
class UPDATE {
enum Command {QUIT}
}
}
}
}
class IMAP {
enum Rfc { RFC3501, RFC4315, RFC4466, RFC4978, RFC5256, RFC5819,
RFC6851, RFC8474, RFC9042 }
extension Base {
enum Specification {RFC3501}
class States {
class Any {
enum Command { CAPABILITY, NOOP, LOGOUT }
}
class NotAuthenticated {
enum Command { STARTTLS, AUTHENTICATE, LOGIN }
}
class Authenticated {
enum Command {SELECT, EXAMINE, CREATE, DELETE, RENAME,
SUBSCRIBE, UNSUBSCRIBE, LIST, LSUB, STATUS, APPEND}
}
class Selected {
enum Command { CHECK, CLOSE, EXPUNGE, SEARCH, FETCH,
STORE, COPY, UID }
}
}
}
}
class NNTP {
enum Rfc {RFC3977, RFC4642, RFC4643}
extension Base {
enum Specification {RFC3977}
enum Command {CAPABILITIES, MODE_READER, QUIT, GROUP,
LISTGROUP, LAST, NEXT, ARTICLE, HEAD, BODY, STAT, POST, IHAVE, DATE,
HELP, NEWGROUPS, NEWNEWS, OVER, LIST_OVERVIEW____FMT, HDR, LIST_HEADERS }
}
extension NNTP__COMMON {
enum Specification {RFC2980 }
enum Command {MODE_STREAM, CHECK, TAKETHIS, XREPLIC,
LIST_ACTIVE, LIST_ACTIVE____TIMES, LIST_DISTRIBUTIONS,
LIST_DISTRIB____PATS, LIST_NEWSGROUPS, LIST_OVERVIEW____FMT, LISTGROUP,
LIST_SUBSCRIPTIONS, MODE_READER, XGTITLE, XHDR, XINDEX, XOVER, XPAT,
XPATH, XROVER, XTHREAD, AUTHINFO}
}
extension NNTP__TLS {
enum Specification {RFC4642}
enum Command {STARTTLS }
}
extension NNTP__AUTH {
enum Specification {RFC4643}
enum Command {AUTHINFO}
}
extension RFC4644 {
enum Specification {RFC4644}
enum Command {MODE_STREAM, CHECK, TAKETHIS }
}
extension RFC8054 {
// "like XZVER, XZHDR, XFEATURE COMPRESS, or MODE COMPRESS"
enum Specification {RFC8054}
enum Command {COMPRESS}
}
}
class HTTP {
extension Base {
enum Specification {RFC2616, RFC7321, RFC9110}
enum Command {GET, PUT, POST, OPTIONS, HEAD, DELETE, TRACE,
CONNECT, SEARCH}
}
}
class HTTP2 {
enum Rfc {RFC7540, RFC8740, RFC9113 }
}
class WebDAV { enum Rfc { RFC2518, RFC4918, RFC3253, RFC5323}}
class CardDAV { enum Rfc { RFC6352}}
class CalDAV { enum Rfc { RFC4791}}
class JMAP {}
Now, this isn't much in the way of code yet, just finding the
specifications of the standards and looking through the history of their
evolution in revision with some notion of their forward compatibility in
extension and backward compatibility in deprecation, and just
enumerating some or most of the commands, that according to the state of
connection its implicit states, and about the remux its
connection-multiplexing, what are commands as discrete, what either
layer the protocol, switch the protocol, make states in the session, or
imply constructed tasks what later result responses.
Then it's not much at all with regards to the layers of the protocol,
the streams in the protocol, their layers, the payload or parameters of
the commands, the actual logic of the routines of the commands, and the
results of the commands.
For something like HTTP, then there gets involved that besides the
commands, then the headers, trailers, or other: "attributes", of
commands, that commands have attributes and then payloads or parameters,
in their protocol, or as about "attachment protocol (state) command
attribute parameter payload", is in the semantics of the recognition and
parsing, of commands and attributes, as with regards to parameters that
are part of the application data, and parameters that are part of
attributes.
Recognizer/Parser
The Recognizer/Parser or recpar isn't so much for the entire object
representation of a command, where it's figured that the command +
attributes + payload is for the application itself, as for recognizing
the beginnings through the ends of commands, only parsing so much as
finding well-formedness in the command + attributes (the parameters,
and, variously headers, or, data, in the wire transmission of the body
or payload), and, the protocol semantics of the command, and the
specific protocol change or command task(s) to be constructed, according
to the session and state, of the connection or its stream, and the
protocol.
For the layers or filters, of the cryptec or compec, mostly the
recognition is from the frames/chunks/blocks, while in the plaintext,
involves the wire representation of the command, usually a line, its
parameters on the command line, then if according to headers, and when
content-length or other chunking, is according to headers, or a
stop-bit, for example, the dot-stuffing. When for example trailers
follow, is assumed to be defined by the protocol, then as what the
recpar would expect.
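A sketch of that shallow recognition for a line-oriented protocol: find
the end of a command line at CRLF, and, for a dot-stuffed payload, find
the end at CRLF "." CRLF, without parsing the contents; the names are
illustrative:

    import java.nio.ByteBuffer;

    // Sketch: non-blocking, shallow recognition; returns -1 when more input
    // is needed, otherwise the index just past the recognized unit.
    final class RecPar {

        // End of a command line: the index just past CRLF, or -1 if incomplete.
        static int findLineEnd(ByteBuffer in, int from) {
            for (int i = from; i + 1 < in.limit(); i++) {
                if (in.get(i) == '\r' && in.get(i + 1) == '\n') return i + 2;
            }
            return -1;
        }

        // End of a dot-stuffed body: the index just past CRLF "." CRLF, or -1.
        static int findDotStuffedEnd(ByteBuffer in, int from) {
            for (int i = from; i + 4 < in.limit(); i++) {
                if (in.get(i) == '\r' && in.get(i + 1) == '\n'
                        && in.get(i + 2) == '.'
                        && in.get(i + 3) == '\r' && in.get(i + 4) == '\n') {
                    return i + 5;
                }
            }
            return -1;
        }
    }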
Then, parsing of the content is mostly in the application, about the
notion that the commands reflect tasks of routines of a given logic its
implementation, command and parameters are a sort of data structure
specific to the command, that headers and perhaps trailers would be a
usual sort of data structure, while the body, and perhaps headers and
trailers, is a usual sort of data structure, given to the commands.
The recpars will be driven along with the filters and by the TIR thread
so must not block and mostly detect a shallow well-formedness. It's
figured that the implementation of the task on the body does
deserialization and otherwise validation of the payload.
The TIR, if it finds connection/stream errors, writes the errors as
close streams/connections directly to TCB, 'short-circuit', while
engaging Metrics/Time.
The deserialization and validation of the payload then is as by a TW
task-worker or into the TCF call-forward thread.
The complement to Recognizer/Parser is a Serializer/Passer, which is
the return path toward the Writer/Outputter, as the TCB call-back
thread, of directly serializable or passable representations of results,
according to that the command and task has a particular ordering,
format, and wire format, the output. The idea is that what results is
byte sequences or transferable handles that go out the remux and stream
to the connection according to the remultiplexer attachment to the
outbound connection.
The notion of callbacks generally, basically results the employment of
uni-directional or half-duplex pipes, and a system selector, thus that
as callbacks are constructed to be invoked, they have a plain input
stream that's ignored except for the selector, then that the idea is
there's an attachment on the pipe that's the holder for the
buffers/handles. That is, the idea is that in Java there's
java.nio.channels.spi.SelectorProvider, and openPipe, then that the Pipe
when constructed has that its reference is stored in a synchronous pipe
map of the TCB, with an associated attachment that when a callback
occurs, the pipe attachment has set the buffers/handlers or exception
the result, then simply sends a byte to the pipe, which the TCB picks up
from waiting on the pipe selector, deregisters and deletes the pipe, and
writes the results out the remux attachment, and returns to select() on
the pipe provider, that ultimately the usually no-work-to-do Writer
thread, sees any remaining output on its way out, among them releasing
the buffers and closing the handles.
If there's very much a hard limit on pipes, then the idea is to just
have the reference to the remux attachment and output to write in the
form of an address in the bytes written on the call-back pipe, in this
way only having a single or few long-lived call-back pipes, that TCB
blocks and waits on in select() when there's nothing to do, otherwise
servicing the writing/outputting in the serial order delivered on the
pipe, only having one SelectionKey in the Selector from the
SelectorProvider for the uni-directional pipe, with only read-ops
interest. Pipes are system objects and have limits on the count of
pipes, and, the depth of pipes. So, according to that, it could result
just an outputter/writer queue for a usually-nothing-to-do writer
thread, whether the TCB has a queue to consume, and a pipe-reader thread
that fills the queue, notify()'s the TCB as it wait()'s on it, and
re-enters select(), then for however it's so that at idle, all threads
are blocked in select(), or wait(), toward "0% CPU", and "floor RAM", at
idle. (... And all usual calls are non-blocking.)
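A sketch of that pipe signal in Java NIO terms: the callback sets its
result on the attachment of the pipe's source key and writes one byte to
the sink, which wakes the TCB out of select(); the Holder type is just
illustrative:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.Pipe;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.Iterator;

    // Sketch: per-callback pipe whose source end is registered with the TCB's
    // selector; the callback stores its result and writes one byte.
    final class CallbackPipes {
        static final class Holder { volatile Object resultOrException; }

        final Selector tcbSelector;

        CallbackPipes() throws IOException {
            tcbSelector = Selector.open();
        }

        Pipe.SinkChannel register(Holder holder) throws IOException {
            Pipe pipe = Pipe.open();
            pipe.source().configureBlocking(false);
            pipe.source().register(tcbSelector, SelectionKey.OP_READ, holder);
            return pipe.sink();   // the callback keeps the sink
        }

        // Callback side: set the result, then one byte signals the TCB.
        static void signal(Pipe.SinkChannel sink, Holder holder, Object result)
                throws IOException {
            holder.resultOrException = result;
            sink.write(ByteBuffer.wrap(new byte[] {1}));
        }

        // TCB side: idle in select(), then write results out the remux attachment.
        void tcbRound() throws IOException {
            tcbSelector.select();
            for (Iterator<SelectionKey> it = tcbSelector.selectedKeys().iterator(); it.hasNext(); ) {
                SelectionKey key = it.next();
                it.remove();
                Holder holder = (Holder) key.attachment();
                // ... write holder.resultOrException out the remux attachment ...
                key.cancel();          // deregister and delete the pipe
                key.channel().close();
            }
        }
    }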
For TW and TCF to "reach idle", or "rest at idle", basically has for
wait() and notify() in Java, vis-a-vis, pipe() and select() in
system-calls. (... Which idle at rest in wait() and wake up on any
notify() or notifyAll(), and aren't interrupted.) This is a design using
mostly plain and self-contained data structures, and usual system calls
and library network and synchronization routines, which Java happens to
surface, so, it's not language or runtime-specific.
The callbacks on the forward side, basically are driven by TCF which
services the backend and adapters, those callbacks on the re-routines
pretty much just results the TW tasks, those then in their results
resulting the TCB callbacks as above. It's figured some adapters won't
have non-blocking ways, then either to give them threads, or, to
implement a queue as up after the pipe-selector approach, for the TCF
thread and thread group, and the back-end adapters.
It's figured for streaming-sub-protocols that backend adapters will be
entirely non-blocking also, resulting a usual sort of approach to
low-load high-output round-trip throughput, as what TCF (thread call
forward) returns to TTW (thread task worker) returns to TCB (thread call
back) off the TS and TQ (task set and task queue). Then also is a usual
sort of direct TIR to TCF to TCB approach, or bridging adapters.
Protocol Establishment and Upgrade
It's figured each connection, is according to a bound listening socket,
the accepter to listen and accept or TLA.
Each connection, starts with a "no protocol", that buffers input and
responds nothing.
Then, the protocol goes to a sort of "CONNGATE" protocol, where rules
apply to allow or deny connections, then there's a "CONNDROP" protocol,
that any protocol goes to when a connection drops, then as whether
according to the protocol it dropped from, either pending writes are
written or pending writes are discarded.
For CONNGATE is that there's the socket or IP address of the connection,
that rules are according to that, for example local subnet, localhost,
local pod IP, the gateway, as matching reverse DNS,
or according to records in DNS or a lookup, or otherwise well-known connections to allow, or anything else or unreachable addresses or
suspected spammer addresses, to deny. This is just a usual command
CONNGATE and task then to either go to the next protocol after
a CONNOPEN, according to the address and port, or go to CONNDENY and CONNDROP, this way having usual events about connections.
Along with CONNGATE is a sort of extension protocol, "ROLL FOLD GOLD
COLD SHED HOLD"; this has for these sorts of beginnings.
CONNOPEN
ROLL: open, it's usual protocol
FOLD: open, it's new client or otherwise anything not usual
GOLD: open, it's expected client with any priority
CONNDROP
COLD: drop, silently, close
SHED: drop, the server is overloaded, or down, try to return a response "server busy", close
HOLD: "drop", passive-aggressive, put in a protocol CONNHOLD, discard
input and dribble
"Do not fold, spindle, or mutilate."
The message-oriented instead of connection-oriented or UDP datagrams
instead of TCP sockets, has that each message that arrives, gets
multiplexed then as with regards to whether it builds
streams, on one listening port. So, there's a sort of default protocol
of DGOPEN and DGDROP, then the sort of default protocol that multiplexes datagrams according to session and client,
then a usual way that datagrams are handled as either individual
messages or chunks of a stream, whether there's a remux involved or it's
just the fixed-mux attachment, whatever else results the protocol. Each datagram that arrives is associated with its packet's socket address.
This way there's a usual sort of notion of changing protocols, so that a protocol like TLS-HANDSHAKE, or TLS-RENEGOTIATE, is just a usual
protocol in usual commands, then as with regards to the establishment
of the security of TLS according to the protocol, then resulting the block-ciphers and options of TLS is according to the options of the
protocol, with regards then the usual end of TLS is a sort of
TLS-ALERT, protocol, that then usually enough to a CONNDROP, protocol.
So, there are sort of, "CONN" protocol, and, issues with "STRM" and "MESG".
The protocol establishment and upgrade, has basically that by default, commands are executed and completed serially in the order they arrive,
with regards to each connection or message, that thusly the
establishment of filters or layers in the protocol is just so
configuring the sequence of those in the attachment or as about the
remux attachment, as with regards, to notions of connections and
streams, and, layers per connection, and, layers per stream.
Then, "TLS" protocol, is a usual layer. Another is "SASL", about the state.
As a layer, TLS, after the handshake, is mostly just the frames and block-encipherment either way, and in TLS 1.2 maybe compression and decompression, though, that's been left out of TLS 1.3. In the remux,
like for HTTP/2 or HTTP/3, or "HTTP over SCTP" or what, then for
something like HTTP/2, TLS is for the connection then ignorant the
streams, while for something like HTTP/3, it's a sort of user-space
instead of kernel-space transport protocol itself, then it's figured
that the "TLS high-five" as of Datagram TLS, is per stream, and agnostic
the connection, or listener, except as of new streams.
The compression, or "transport-layer compression", is pretty universally Deflate, then what gets involved with Deflate, is a 32KiB look-back
window, that any code in Deflate, is either a literal byte, or a
look-back distance and length, copying bytes. So, that involves that in otherwise the buffers, that anything that gets compressed or
decompressed with Deflate, or the "compec", ("codec", "cryptec",
"compec"), always has to keep around a 32KiB look-back window, until
the end of a Deflate block, where it is truncatable, then as to grow a
new one.
Mostly here then the compression may be associated with TLS, or
otherwise at the "transport layer" it's associated stream-wise, not connection-wise, and mostly it is according to the "application layer",
when and where compression starts and ends in the commands and/or the payloads. Then a usual idea is as much as possible to store the data at
rest in a compressed edition so it can be consumed as concatenated.
This way this sort of "Multiple Protocol Server" is getting figured out,
in design at least, then with the idea that with a good design, it's flexible.
Remultiplexer and Connections and Streams
The Remultiplexer is about the most complicated concept, with the idea
of the multiplexer and demultiplexer inbound then the multiplexer
and demultiplexer outbound, from and to the outward-facing multiplexer
and demultiplexer, and from and to the backend-facing multiplexer
and demultiplexer, that last bit though being adapter pools.
So, it's figured, that throughout the system, that there's the
identifier of the system, by its host, and, the process ID, and there's identification of events in time, by system time, then that everything
else gets serial numbers, basically numbers that increment serially for
each connection, message, command, response, error, and here for the remultiplexer for the streams, in protocols like HTTP/2 or WebSocket
with multiple streams, or for whatever are message-oriented protocols,
those multiple streams.
In this way, the attachment, it's an object related to the connection,
then the remultiplexer attachment, is the attachment then also
related to any stream.
The idea is that the serial numbers don't include zero, and otherwise
are positive integers, then sometimes the protocols have natural
associations of the client-initiated and server-initiated streams,
one or the other being even or odd, say, while, also internally is
that each would have their own sort of namespace and serial number.
Very similarly, the re-routines, have the serial numbers their issuance
and invocations, then the tree of sub-re-routines, has that those are serially numbered also, with regards to that any one of those comes
into being according to that the runtime, as one process its space,
somehow must vend serial numbers, and in a non-blocking way.
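A sketch of that non-blocking vending, one counter per namespace
(connections, streams, commands, re-routines), never issuing zero:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: each namespace has its own counter; incrementAndGet is
    // lock-free, and 0 is never issued (the first value vended is 1).
    final class Serials {
        private final ConcurrentMap<String, AtomicLong> counters = new ConcurrentHashMap<>();

        long next(String namespace) {
            return counters.computeIfAbsent(namespace, k -> new AtomicLong(0))
                           .incrementAndGet();
        }
    }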
Then, these are involved with the addressing/routing scheme of
the callbacks or the routines, then also, with metrics/timing,
of the things and all the things. The idea is that the callbacks
are basically an object reference to an attachment, or monad
of the re-routines, then a very general sort of association to
the ring buffer of streams that come and go, or just a list of
them, and the monad of the re-routine, which of course has
that it's always re-run in the same order, so the routing scheme
to the attachment and addressing scheme to the monad,
is a common thing.
In the protocol, there are "states" of the protocol.
The "session", according to the protocol, is quite abstract.
It might be per-connection, per-stream, or across or bridging
connections or streams. A usual sort of idea is to avoid any
state in the session at all, because anything state-ful at all
means that the distributed back-end needs a distributed
session, and anything outside the process runtime doesn't
have the synchronization to its memory barriers. It's similar
with issues in the "domain session" or "protocol session"
with regards to vending unique identifiers or guaranteeing
deduplication according to unique identifiers, "client session",
with anything at all relating the client to the server, besides
the contents of the commands and results.
So, in the protocol, there are "states" of the protocol. This
is usually enough about sequences or chains of commands,
and as well with regards to entering streaming-sub-protocols.
So, it's sort of figured, that "states" of the protocol, are
sub-protocols, then with usually entering and exiting the
sub-protocols, that being "in the protocol".
Then, there are "profiles" of the protocol, where the protocol
has a sort of "base protocol", which is always in effect, and
then any number of "protocol extensions", then as whether
or not those are available, and advertised, and/or, exercised,
or excluded. A profile then is whatever of those there are,
for a command and stream and connection and the server,
helping show protocol profile coverage according to that.
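As a small sketch of a "profile", with a made-up extension enum: the base protocol is always in effect, and the profile is whichever extensions are advertised and/or exercised for a given command, stream, connection, or server:

import java.util.EnumSet;

// Hypothetical extensions, just for illustration.
enum Extension { COMPRESSION, STREAMING_SUBPROTOCOL, SASL_AUTH }

class Profile {
    final EnumSet<Extension> advertised = EnumSet.noneOf(Extension.class);
    final EnumSet<Extension> exercised = EnumSet.noneOf(Extension.class);

    // An extension is "covered" when it's both advertised and exercised.
    boolean covers(Extension e) {
        return advertised.contains(e) && exercised.contains(e);
    }
}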
Here then it's a usual idea that "CONNDROP" is always an
extension of the protocol, because the network is formally
un-reliable, so any kind of best-effort salvage attempt for an
error not modeled by the protocol goes to CONNDROP.
Then, it's also a usual idea that any other error, not modeled
by the protocol, has a sort of UNMODELED protocol,
though with regards to that, the behavior is CONNDROP.
Then, for streams and messages, it gets to that CONNDROP,
"STREAMDROP", and "MESSAGEDROP" sort of vary,
those though being usual sorts of catch-all exceptions,
where the protocol is always in a sort of protocol.
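A minimal sketch of that, with made-up exception names, where anything un-modeled degrades to the broadest applicable drop:

// CONNDROP / STREAMDROP / MESSAGEDROP as catch-all exceptions: the
// narrower drops apply to a stream or a message, and anything else,
// the un-modeled, falls back to dropping the connection.
abstract class ProtocolDrop extends Exception {
    ProtocolDrop(String reason, Throwable cause) { super(reason, cause); }
}
class ConnDrop extends ProtocolDrop {
    ConnDrop(String reason, Throwable cause) { super(reason, cause); }
}
class StreamDrop extends ProtocolDrop {
    StreamDrop(String reason, Throwable cause) { super(reason, cause); }
}
class MessageDrop extends ProtocolDrop {
    MessageDrop(String reason, Throwable cause) { super(reason, cause); }
}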
On 03/07/2024 08:09 AM, Ross Finlayson wrote:
On 02/29/2024 07:55 PM, Ross Finlayson wrote:
On 02/20/2024 07:47 PM, Ross Finlayson wrote:
About a "dedicated little OS" to run a "dedicated little service".
"Critix"
1) some boot code
power on self test, EFI/UEFI, certificates and boot, boot
2) a virt model / a machine model
maybe running in a virt
maybe running on metal
3) a process/scheduler model
it's processes, a process model
goal is, "some of POSIX"
Resources
Drivers
RAM
Bus
USB, ... serial/parallel, device connections, ....
DMA
framebuffer
audio dac/adc
Disk
hard
memory
network
Login
identity
resources
Networking
TCP/IP stack
UDP, ...
SCTP, ...
raw, ...
naming
Windowing
"video memory and what follows SVGA"
"Java, a plain windowing VM"
PCI <-> PCIe
USB 1/2 USB 3/4
MMU <-> DMA
Serial ATA
NIC / IEEE 802
"EFI system partition"
virtualization model
emulator
clock-accurate / bit-accurate
clock-inaccurate / voltage
mainboard / motherboard
circuit summary
emulator environment
CPU
main memory
host adapters
PU's
bus
I^2C
clock model / timing model
interconnect model / flow model
insertion model / removal model
instruction model
I got looking into PC architecture wondering
how it was since I studied internals and it really
seems it's stabilized a lot.
UEFI ACPI SMBIOS
DRAM
DMA
virtualized addressing
CPU
System Bus
Intel CSI QPI UPI
AMD HyperTransport
ARM CoreLink
PCI
PCIe
Host Adapters
ATA
NVMe
USB
NIC
So I'm wondering to myself, well first I wonder
about writing UEFI plugins to sort of enumerate
the setup, and for example print it out, and for
example see what keys are in the TPM, and for
example the partition table, and what goes in
in terms of the device tree, and basically for
diagnostics: boot services, then runtime services
after UEFI exits, after having loaded into memory
the tables of the "runtime services", which are
mostly sort of a table in memory with offsets
of the things, and maybe how they're ID'd, as
with regards to the System Bus and the Host Adapters.
Then it's a pretty simplified model and gets
into things like wondering what all else is
going on in the device tree and I2C the
blinking lights and perhaps the beep, or bell.
A lot of times it looks like the video is onboard
out the CPU, vis-a-vis the UEFI video output
or what appears to be going on, I'm wondering
about it.
So I'm wondering how to make a simulator,
an emulator, uh, of these things above,
and then basically the low-speed things
and the high-speed things, and, their logical
protocols vis-a-vis the voltage and the
bit-and-clock accurate and the voltage as
symbols vis-a-vis symbolically the protocols,
how to make it so to have a sort of simulator
or emulator of this sort of usual system,
with a usual idea to target code to it to
that kind of system or a virt over the virtualized
system to otherwise exactly that kind of system, ....
Critix
boot protocols
UEFI ACPI SMBIOS
CPU and instruction model
bus protocols
low-speed protocols
high-speed protocols
Looking at the instructions, it looks pretty much
that the kernel code is involved inside the system
instructions, to support the "bare-metal" and then
also the "virt-guests", then that communication
is among the nodes in AMD, then the HyperTransport
basically is indicated as I/O, then for there to be figured
out that the guest virts get a sort of view of the "hardware
abstraction layer", then with regards to the segments and
otherwise the mappings for the guest virts, vis-a-vis
the mappings to the memory and I/O, getting figured
out these kinds of things as an example of what gets
into a model of a sort of machine, as a sort of emulator,
basically figuring to be bit-accurate and ignore being
clock-accurate.
The "BIOS and kernel guide" gets into the order of
system initializaiton and the links, and DRAM.
It looks that there are nodes basically being parallel
processors, and on those cores, being CPUs or
processors.
Then each of the processors has its control and status
registers, then with regards to tables, and with regards
to memory and cache, about those the segments,
figuring to model the various interconnections this
way in a little model of a mainboard CPU. "Using L2
Cache as General Storage During Boot".
Then it gets into enumerating and building the links,
and setting up the buffers, to figure out what's going
on with the DRAM and DMA, and, PCI and PCIe, and then
about what's ATA, NVMe, and USB, these kinds of things.
Nodes' cores share registers or "software must ensure...",
with statics and scopes. Then it seems the cache lines
and then the interrupt vectors or APIC IDs get enumerated,
setting up the routes and tables.
Then various system and operating modes proceed,
where there's an idea that the basic difference
among executive, scheduler, and operating system
basically is with respect to the operating mode,
with respect to old real, protected, and, "unreal",
I suppose, modes, here that basically it's all really
simplified about protected mode and guest virts.
"After storing the save state, execution starts ...."
Then there's described "spring-boarding" into SMM,
that the BSP and BSM, a quick protocol then that
all the live nodes enter SMM, basically according
to ACPI and the APIC.
"The processor supports many power management
features in a variety of systems."
This gets into voltage proper, here though that
what results is bit-accurate events.
"P-states are operational performance states
characterized by a unique frequency and voltage."
The idea here is to support very-low-power operation
vis-a-vis modest, usual, and full (P0). Then besides
consumption, is also reducing heat, or dialing down
according to temperature. Then there are C-states
and S-states, then mostly these would be as by
the BIOS, what gets surfaced as ACPI to the kernel.
There are some more preliminaries, the topology
gets set up, then gets involved the DCT DIMM DRAM
frequency and for DRAM, lighting up RAM, that
basically to be constant rate, about the DCT and DDR.
There are about 1000 model-specific registers which
seem to be for the BIOS to inspect and figure out
the above pretty much and put the system into a
state for regular operation.
Then it seems like an emulator would be setting
that up, then as with regards to usually enough
"known states" and setting up for simulating the
exercise of execution and I/O.
instructions
system-purpose
interrupt
CLGI CLI STI STGI
HLT
IRET IRETD IRETQ
LIDT SIDT
MONITOR MWAIT
RSM
SKINIT
privileges
ARPL
LAR
RDPKRU WRPKRU
VERR VERW
alignment
CLAC STAC
jump/routine
SYSCALL SYSRET
SYSENTER SYSEXIT
task, stack, tlb, gdt, ldt, cache
CLTS
CLRSSBSY SETSSBSY
INCSSP
INVD
INVLPG INVLPGA INVLPGB INVPCID TLBSYNC
LGDT SGDT
LLDT SLDT
LMSW
LSL
LTR STR
RDSSP
RSTORSSP SAVEPREVSSP
WBINVD WBNOINVD
WRSS WRUSS
load/store
MOV CRn MOV DRn
RDMSR WRMSR
SMSW
SWAPGS
virtual
PSMASH PVALIDATE
RMPADJUST RMPUPDATE
RMPQUERY
VMLOAD VMSAVE
VMMCALL VMGEXIT
VMRUN
perf
RDPMC
RDTSC RDTSCP
debug
INT 3
general-purpose
context
CPUID
LLWPCB LWPINS LWPVAL SLWPCB
NOP
PAUSE
RDFSBASE
RDPID
RDPRU
UD0 UD1 UD2
jump/routine
CALL RET
ENTER LEAVE
INT
INTO
Jcc
JCXZ JECXZ JRCXZ
JMP
register
BOUND
BT BTC BTR BTS
CLC CLD CMC
LAHF SAHF
STC STD
WRFSBASE WRGSBASE
compare
cmp
CMP
CMPS CMPSB CMPSW CMPSD CMPSQ
CMPXCHG CMPXCHG8B CMPXCHG16B
SCAS SCASB SCASW SCASD SCASQ
SETcc
TEST
branch
LOOP LOOPE LOOPNE LOOPNZ LOOPZ
input/output
IN
INS INSB INSW INSD
OUT
OUTS OUTSB OUTSW OUTSD
memory/cache
CLFLUSH CLFLUSHOPT
CLWB
CLZERO
LFENCE MCOMMIT MFENCE SFENCE
MONITORX MWAITX
PREFETCH PREFETCHW PREFETCHlevel
memory/stack
POP
POPA POPAD
POPF POPFD POPFQ
PUSH
PUSHA PUSHAD
PUSHF PUSHFD PUSHFQ
memory/segment
XLAT XLATB
load/store
BEXTR
BLCFILL BLCI BLCIC BLCMSK BLCS BLSFILL BLSI BLSMSK BLSR
BSF BSR
BSWAP
BZHI
CBW CWDE CDQE CWD CDQ CQO
CMOVcc
LDS LES LFS LGS LSS
LEA
LODS LODSB LODSW LODSD LODSQ
MOV
MOVBE
MOVD
MOVMSKPD MOVMSKPS
MOVNTI
MOVS MOVSB MOVSW MOVSD MOVSQ
MOVSX MOVSXD MOVZX
PDEP PEXT
RDRAND RDSEED
STOS STOSB STOSW STOSD STOSQ
XADD XCHG
bitwise/math
and or nand nor
complement
roll
AND ANDN
LZCNT TZCNT
NOT
OR XOR
POPCNT
RCL RCR ROL ROR RORX
SAL SHL SAR SARX SHLD SHLX SHR SHRD SHRX
T1MSKC TZMSK
math
plus minus mul div muldiv
ADC ADCX ADD
DEC INC
DIV IDIV IMUL MUL MULX
NEG
SBB SUB
ignored / unimplemented
bcd binary coded decimal
AAA AAD AAM AAS
DAA DAS
CRC32
instruction
opprefixes opcode operands opeffects
opcode: the op-code
operands:
implicits, explicits
inputs, outputs
opeffects: register effects
operations
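A small sketch of that instruction model in Java, the record names being made-up, as something an emulator's decoder could fill in:

import java.util.List;
import java.util.Set;

// An operand is implicit or explicit, and an input and/or an output.
record Operand(String name, boolean implicit, boolean input, boolean output) {}

// Prefixes, the op-code, the operands, and the register effects.
record Instruction(byte[] prefixes, int opcode,
                   List<Operand> operands,
                   Set<String> registerEffects) {}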
Ethernet and IEEE 802
https://en.wikipedia.org/wiki/IEEE_802.3
TCP, TCP/IP
packets
Unicast and multicast
datagrams
sockets
SCTP
v4 ARP IP->MAC
NAT
v6 Neighbor IP->MAC
DNS and domain name resolvers
domain names and IP addresses
IP addresses and MAC addresses
packet construction and emission
packet receipt and deconstruction
packet routing
routes and packets
Gateway
Local Network
DHCP
PPPoE
NICs
I/O
routing
built-ins
NICs and the bus
NICs and DMA
The runtime, basically has memory and the bus,
in terms of that all transport is on the bus and
all state is in the memory.
At the peripherals, or "outside the box", the simulator model
basically has only whatever of those are effects, either in
protocols and thus synchronously, with the modeling of the
asynchronous request/response as synchronous, which results in
the "out-of-band" then with respect to the interrupts,
the service of the interrupts, and otherwise usually
the service of the bus, with regards to the service of
the memory, modes of the synchronous routine,
among independently operating units.
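A tiny sketch of that modeling, with made-up names: the asynchronous request/response is presented synchronously, and the "out-of-band" is surfaced as pending interrupts for the synchronous routine to drain when it services the bus:

import java.util.Deque;

interface PeripheralModel {
    // The asynchronous exchange, modeled as a synchronous call.
    byte[] exchange(byte[] request);

    // Out-of-band: interrupt vectors pending service, drained by the
    // synchronous routine when it services the bus.
    Deque<Integer> pendingInterrupts();
}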
Power over Ethernet / Wake-on-LAN
https://en.wikipedia.org/wiki/Energy-Efficient_Ethernet
https://en.wikipedia.org/wiki/Physical_layer#PHY
Now, this isn't really related necessarily to the
idea of implementing Usenet and other text-based
Internet Message protocols in the application layer,
yet, there's sort of an idea, that a model machine
as a simulator, results how to implement an entire
operating system whose only purpose is to implement
text-based Internet Message protocols.
https://en.wikipedia.org/wiki/Link_Layer_Discovery_Protocol
One nice thing about IETF RFC's is that they're available
largely gratis, while when getting into IEEE recommendations,
it results they're money.
It helps though that mostly all the needful is in the RFC's.
https://en.wikipedia.org/wiki/Network_interface_controller
So, the NIC or LAN adapter, basically is to get figured that
it sort of supports a stack already or that otherwise it's
to get figured how it results packets vis-a-vis the service
of the I/O's and how to implement the buffers and how
to rotate the buffers as the buffers are serviced, by the
synchronous routine.
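A minimal sketch of rotating the buffers, assuming a made-up fixed ring of receive buffers: the NIC (or its model) fills at the head, the synchronous routine drains at the tail, and the indices wrap around:

import java.nio.ByteBuffer;

class ReceiveRing {
    private final ByteBuffer[] ring;
    private int head = 0;   // next buffer for the NIC to fill
    private int tail = 0;   // next buffer for the service routine to drain

    ReceiveRing(int count, int size) {
        ring = new ByteBuffer[count];
        for (int i = 0; i < count; i++) {
            ring[i] = ByteBuffer.allocateDirect(size);
        }
    }

    ByteBuffer nextToFill() {
        ByteBuffer b = ring[head];
        head = (head + 1) % ring.length;
        return b;
    }

    ByteBuffer nextToDrain() {
        ByteBuffer b = ring[tail];
        tail = (tail + 1) % ring.length;
        return b;
    }
}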
https://en.wikipedia.org/wiki/TCP_offload_engine
Then there's sort of a goal "the application protocols
sit directly on that", vis-a-vis, "the operating system
asynchronous and vector-I/O facility sits directly on
that, and the application protocol sits directly on that".
This is where, for the protocols, basically involves any
matters of packet handling like firewalls and this kind
of thing, vis-a-vis the application or presentation layer
or session layer, about the control plane and data plane.
The idea that specialized units handle protocols,
reminds me one time, I was working at this place,
and one of the product, was a daughterboard,
the purpose of which was to sort data, a sorter unit.
Here the idea that the NIC knows protocol and results
bus traffic, gets into variously whether it matters.
Two key notions of the thing are, "affinity", and "coherency".
The "coherency" is sort of an "expanding wave" of consistency,
while, "affinity", is sort of a "directed edge", of consistency.
Basically affinity indicates caring about coherency,
and coherency indicates consistency of affinity.
This way the "locality" and "coherency" and "affinity" then
make for topology for satisfying the locality's affinities
of coherency, that being the definition of "behavior, defined".
"Communicating sequential processes" is a very usual metaphor,
with regards to priority and capacity and opportunity and compulsion.
https://en.wikipedia.org/wiki/Communicating_sequential_processes
There are _affinities_ in the various layers, of _affinities_
in the various layers, here for examples "packets and streams",
and "messages and threads", for example.
Much then gets involved in implementing the finite-state-machines,
with regards to, the modes the protocols the finite-state-machines
each a process in communicating sequential processes in
communicating coherent independent processes.
Co-unicating, ....
So, the idea of "open-finite-state-machine" is that there
is defined behavior, the expected and unexpected, with
regards to resets, and defined behavior, the known and
unknown, with regards to restarts, then the keeping and
the loss of state, what exists in the configuration space:
the establishment of state and change, and the state of change
and the changes of state, the open-finite-state-machine.
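A small sketch of such an "open" finite-state-machine, the states and events being made-up: resets and restarts are themselves transitions, so the keeping or the loss of state is part of the defined behavior rather than outside it:

enum State { CLOSED, OPEN, STREAMING }

class OpenFsm {
    private State state = State.CLOSED;

    State on(String event) {
        switch (event) {
            case "open":    state = State.OPEN; break;
            case "stream":  if (state == State.OPEN) state = State.STREAMING; break;
            case "reset":   state = State.OPEN; break;    // expected: state kept
            case "restart": state = State.CLOSED; break;  // known: state lost, start over
            default:        state = State.CLOSED; break;  // unexpected/unknown: drop
        }
        return state;
    }
}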
https://en.wikipedia.org/wiki/Unbounded_nondeterminism
https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels
When I studied the IA-32 and I studied Itanium and IA-64 a lot,
and I studied RISC, and these kinds of things, with regards to x86
and how Itanium is kind of like RISC, and ring registers
and these kinds of things, the modes, and so on, that was mostly
looking at assembler instructions with regards to image CODEC
code. So anyways these days it seems like this whole x86-64 has
really simplified a lot of things, that the co-operation on the bus
still seems a lot about the IDT, the Interrupt Descriptor Table,
which has 256 entries, then with regards to the tags that go
into those, and service vectors about those. I'm wondering
about basically whether those are fixed from the get-go or
whether they can be blocked in and out, with regards to status
on the bus, vis-a-vis otherwise sort of funneling exceptions into
as few as possible, figuring those are few and far between
or that when they come they mostly get dumped out.
I'm not very interested in peripherals and mostly interested
in figuring out hi-po I/O in minimal memory, then with regards
to the CPU and RAM for compute tasks, but mostly for scatter/gather.