From Newsgroup: rocksolid.nodes.help
On 06.09.25 23:00, Billy G. wrote:
On 06.09.25 22:48, Soul Patch wrote:
Questions about Pugleaf server:
1. Is there a stream mode or equivalent for peering? Where are the instructions?
2. Is peering by authentication only (reader mode)?
3. If peering is by reader mode can it be done with an open peer without sending authinfo commands? How?
4. Can reader mode synchronization be done with multiple remote hosts?
5. Does Pugleaf have a means of correcting article numbering if multiple remote hosts are polled for the same newsgroup?
https://github.com/go-while/go-pugleaf
Did you get some information from the readme?
Maybe you found BUGS.md.
I'll try to keep it short:
The main working parts are "webserver" and "pugleaf-fetcher"
Peering with pugleaf "nntp-server" (in and out) does not work yet.
Posting support has not been added yet.
FYI: Peering does not use authinfo; it's mostly DNS/IP based.
You mean downloading articles via reader mode with(out) authinfo to
fill up the database?
Articles are downloaded via the "pugleaf-fetcher",
in parallel, with as many connections as you set.
You can try with 1000 connections via 81-171-22-215.pugleaf.net.
Note: the global limit is 4000.
You can add many providers/servers with or without authentication.
Everything is configured via the web UI at /admin.
Working servers are already configured and enabled.
The fetcher will fetch only from the first enabled provider/server.
No automatic fallback or switching yet.
pugleaf does its own article numbering per newsgroup.
It has some scanning tech built in.
The fetcher takes args like: -download-start-date 2024-12-24 -group news.*
You can switch providers and it will try to find the last article on the
new remote server and continue from where the old one left off.
You can import a bunch of newsgroups via an active file:
./webserver -import-active active.txt
then run the fetcher (with -download-start-date) and go.
Setting expiry on new newsgroups before running the fetcher should
restrict the search to articles within the expiry range and skip older ones.
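Putting the steps above together, a minimal sketch of the workflow (the
fetcher binary name and path are assumed from the "pugleaf-fetcher"
component name; the active file and date are placeholders):

```shell
# 1. Import a batch of newsgroups from an active file (creates the groups):
./webserver -import-active active.txt

# 2. (Optional) set expiry per newsgroup in the /admin web UI before
#    fetching, so only articles within the expiry range are downloaded.

# 3. Run the fetcher, limiting history to a start date and a group
#    wildcard (quote the wildcard so the shell doesn't expand it):
./pugleaf-fetcher -download-start-date 2024-12-24 -group 'news.*'
```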
Latest patches on testing-001 dropped memory usage significantly.
Fetcher defaults should mostly stay below 100-500MB while working with
thousands of groups and downloading articles with -max-batch 100000 or
more (the default of 128 rotates fine). I'm running it with
-max-batch 1000; increasing -max-batch beyond 1000 can be risky and is
not really necessary.
-download-max-par can boost parallel processing of newsgroups,
but memory usage multiplies with that number.
-max-queue is limited to 16384 articles by default;
this can break your neck if you hit huge binary articles
and your storage is not fast enough...
Size limits for articles don't apply yet.
expire-news has not been tested.
work in progress...
https://i2pn2.pugleaf.net/groups/news.admin.peering/threads
Newsgroup Statistics: 106,390 total groups / 1,469,116,590 total articles
--
.......
Billy G. (go-while)
https://pugleaf.net
@Newsgroup: rocksolid.nodes.help
irc.pugleaf.net:6697 (SSL) #lounge
discord:
https://discord.gg/rh2tGMJWwV
--- Synchronet 3.21a-Linux NewsLink 1.2