Note:
If you know a useful package (even if it's not yours): Don't hesitate to add it! This is a community registry, not just an author registry.
If you found a great tool that helped you, chances are it will help others too.
Thanks again!
Yes, Nicolas, great.
You may also cite your talk at the conference.
Thanks for all,
Harald
I welcome the idea.
Note that efforts/products with a similar purpose already exist.
An example is here:
https://core.tcl-lang.org/jenglish/gutter/
I think it's wise to learn from those previous attempts/products, e.g.
- what is their current usage?
- if they are not used to the degree that you envisage for your package registry,
then why is that so?
- ...
Erik.
Thank you for sharing this! Honestly, I wasn't aware of this project, and I don't yet know why it didn't gain more traction.
I'm just someone who had this idea and wanted to give it a try.
If you have any insight on what went wrong with previous attempts, I'd really appreciate hearing it; it would help me avoid making the same mistakes.
Thanks--
Nicolas
Yes, Nicolas, great.
You may also cite your talk at the conference.
Thanks for all,
Harald
I think you might be confusing me with someone else!
This is a personal project I just launched, not something presented at a conference.
Thanks
Nicolas
On 2/18/26 21:58, Nicolas ROBERT wrote:
Thank you for sharing this! Honestly, I wasn't aware of this project, and I don't yet know why it didn't gain more traction.
I'm just someone who had this idea and wanted to give it a try.
If you have any insight on what went wrong with previous attempts, I'd really appreciate hearing it; it would help me avoid making the same mistakes.
I cannot say for sure. I remember having thought of setting up such an infrastructure myself, long
ago. Never did it, so I didn't work through all the issues.
But I remember one important design issue of a database of extensions is:
Who is responsible for keeping the info (version info) for a certain package
up to date, including the very first registration of a new package?
Back then, I believed that it was important to leave that responsibility to the extension author,
and I still do think so. Of course, the flip-side is that the system needs to provide a
user-friendly method for package authors to provide updated information at each new release. (Maybe
this can be automated as an internet service?)
It's easy to imagine what goes wrong if that responsibility is assumed by the person
managing/administering the package database: at some point, lack of resources will mean that
the information becomes outdated, less and less useful, and less and less used.
So, my advice would be to design a system that offers a procedure/interface where package authors
can update package information themselves.
Erik Leunissen.
--
Thanks
Nicolas
However, I think there might be a misunderstanding about my approach.
I don't manually maintain version numbers or metadata.
The registry uses automated Git scraping (via GitHub Actions) to discover package information directly from source repositories - git ls-remote picks up new tags automatically, and the metadata file is regenerated daily without human intervention.
When an author pushes a new tag to their repo, the registry picks it up within 24 hours automatically. Nobody needs to update the registry manually when they release.
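To make that concrete, here is a minimal sketch of this kind of tag discovery. The function names are illustrative; the actual workflow script is not shown in this thread.

```python
# Sketch of tag discovery via `git ls-remote --tags`, as described above.
# Function names are illustrative; this is not the registry's actual code.
import re
import subprocess

def parse_ls_remote_tags(output: str) -> list[str]:
    """Extract tag names from `git ls-remote --tags` output."""
    tags = []
    for line in output.splitlines():
        # Each line looks like "<sha>\trefs/tags/<name>";
        # skip the peeled "<name>^{}" duplicates for annotated tags.
        m = re.match(r"\S+\trefs/tags/([^\^]+)$", line)
        if m:
            tags.append(m.group(1))
    return tags

def discover_tags(repo_url: str) -> list[str]:
    """List the tags a remote Git repository currently advertises."""
    out = subprocess.run(
        ["git", "ls-remote", "--tags", repo_url],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_ls_remote_tags(out)
```

Run daily from a scheduled CI job, this is enough to notice a new release: a freshly pushed tag simply shows up in the remote's advertised refs.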
Regarding Fossil: To be honest, I haven't fully tested Fossil support yet. The Fossil packages currently listed have GitHub mirrors, so the automated scraping works for them via Git.
True Fossil repository scraping would require different tooling that I haven't implemented yet.
Importantly, there is no "version number" written in the public JSON that needs updating.
The packages.json only contains repository URLs as "pointers" - regardless of whether it's GitHub, GitLab, or elsewhere.
The actual version metadata is fetched automatically at build time by querying the source repositories directly.
So if a package moves from GitHub to GitLab, or if it's hosted on a custom domain, it doesn't matter - as long as the Git URL is valid, the workflow extracts the current state dynamically.
Additionally, if a package no longer exists or the URL becomes invalid, the workflow detects this and marks the package status accordingly in the generated metadata (indicating the source is unavailable, rather than showing stale or fake data).
The entry remains in the registry for reference, but users are informed that the package is currently inaccessible.
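As a sketch of the "pointer" model described above (the field names, status values, and repository URL here are my own illustration, not the real packages.json schema):

```python
# Illustration of the pointer model: registry entries hold only a repo URL;
# version/availability data is derived at build time. Field names and the
# repo URL are made up for this example, not the real packages.json schema.
import subprocess

# A packages.json-style entry: a pointer, with no stored version number.
entry = {"name": "tdom", "repository": "https://github.com/example/tdom"}

def probe_repo(repo_url: str) -> str:
    """Return 'available' if the remote answers, else 'unavailable'."""
    result = subprocess.run(
        ["git", "ls-remote", repo_url, "HEAD"],
        capture_output=True, text=True,
    )
    return "available" if result.returncode == 0 else "unavailable"

def build_metadata(entries, probe=probe_repo):
    """Regenerate fresh metadata from the source repos (e.g. run daily)."""
    return [{**e, "status": probe(e["repository"])} for e in entries]
```

Because the status field is recomputed on every run, a dead URL shows up as "unavailable" on the next build rather than lingering as stale version data.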
Does this address your concern about the maintenance bottleneck?
The registry doesn't store static version state that can become "outdated" - it regenerates fresh metadata daily from the sources themselves, reflecting both updates and availability issues automatically.
--
Thanks
Nicolas
On 2/18/26 23:59, Nicolas ROBERT wrote:
However, I think there might be a misunderstanding about my approach.
Oh that's very well possible. Sorry if my remarks are inapplicable.
I don't manually maintain version numbers or metadata.
The registry uses automated Git scraping (via GitHub Actions) to discover package information directly from source repositories - git ls-remote picks up new tags automatically, and the metadata file is regenerated daily without human intervention.
When an author pushes a new tag to their repo, the registry picks it up within 24 hours automatically. Nobody needs to update the registry manually when they release.
Regarding Fossil: To be honest, I haven't fully tested Fossil support yet. The Fossil packages currently listed have GitHub mirrors, so the automated scraping works for them via Git.
True Fossil repository scraping would require different tooling that I haven't implemented yet.
Importantly, there is no "version number" written in the public JSON that needs updating.
The packages.json only contains repository URLs as "pointers" - regardless of whether it's GitHub, GitLab, or elsewhere.
The actual version metadata is fetched automatically at build time by querying the source repositories directly.
So if a package moves from GitHub to GitLab, or if it's hosted on a custom domain, it doesn't matter - as long as the Git URL is valid, the workflow extracts the current state dynamically.
Additionally, if a package no longer exists or the URL becomes invalid, the workflow detects this and marks the package status accordingly in the generated metadata (indicating the source is unavailable, rather than showing stale or fake data).
The entry remains in the registry for reference, but users are informed that the package is currently inaccessible.
Does this address your concern about the maintenance bottleneck?
The most important thing regarding the maintenance issue is that you're aware of it yourself. Alas,
in the short term I'm not in a position where I can evaluate the approach that you describe in the
detail that it deserves.
Good luck!
Erik.
Kudos for the initiative!
Some thoughts on using the web interface:
- I'm missing an "all packages" button (though of course I can search
for the empty string, which obviously lists all packages, but I find
that 'urks')
- after I locate a package, how do I install it?
for example, if I sort "most recent first", the first entry currently
is 'textutil'. That page has two links, which lead me to the
tcllib-textutil submodule and its documentation.
But of course, if I wish to use that module, the canonical way would
be to install whole tcllib, not the individual module, no?
So for me, what's missing is the information that some package is part
of a larger library package, and probably cannot be used without other
modules of that library.
For all those subpackages of tcllib (or any other library-like
package), I would rather have some tree-like display than a flat list,
or something like https://core.tcl-lang.org/tcllib/doc/trunk/embedded/md/toc.md
On submission of new modules:
- is there any check on the validity of submitted entries?
I.e., if I were a malicious attacker, I'd submit some GitHub project with
a modified version of a well-known package, or something like that.
Yes, I can do that in the wiki, too. But if your page at one point
has earned some reputation of a valuable resource, this might be dangerous.
I don't know if and how other archives like Python's (pip) or
the microcontroller ones (Arduino etc.) handle that problem...
My €0.01
R'
I've noticed that finding Tcl/Tk packages is still surprisingly hard for newcomers.
Between scattered wikis, old forum posts, and repos buried in various forges, it's not easy when you're just starting out.
So I built something to fix that: a centralized, searchable package registry specifically designed to help beginners (and veterans!).
Why make it simple when you can make it complicated ... :-/
I will certainly be blamed and flamed, but I want to give my opinion
(after all, Donal Tr. is not a member here!). I know you work hard for the benefit
of us all, but why not propose it BEFORE all this labour?
I've noticed that finding Tcl/Tk packages is still surprisingly hard for newcomers.
Yes and no. For instance, your clever little package "tomato" is difficult to find
unless its name is already known, and it is not on your proposed portal. Who then will maintain
this portal? Wouldn't it be better to concentrate on what already exists and works?
I am far from being a true programmer, unable to compile Tcl by myself (yet), unable to understand why
object programming is useful, etc. I simply use the installation binaries of Ashok
or Paul (they have everything inside), and if I need to see a detail I go to core.tcl-lang.org.
If I need an idea, or to do this or that, I go to wiki.tcl-lang.org. It is not so hard.
(...)
Olivier.
Olivier <user1108@newsgrouper.org.invalid> posted:
Why make it simple when you can make it complicated ... :-/
I will certainly be blamed and flamed, but I want to give my opinion
(after all, Donal Tr. is not a member here!). I know you work hard for the benefit
of us all, but why not propose it BEFORE all this labour?
(...)
Olivier.
Thank you for your honest opinion - and no, you won't be put in flames! Different perspectives are valuable. You're absolutely right that wiki.tcl-lang.org exists.
About tomato not being there: you're correct, and that's partly my fault. When I started coding it, I was a beginner and found the wiki editing process intimidating - I was afraid of breaking something or not following the conventions.
So I didn't create a wiki page, which proves your point: the barrier to entry for contributing to the wiki can be high for newcomers.
Let me share why I built this registry, from my own experience as a beginner:
when I needed to work with XML, I searched the wiki for "XML", and honestly - did tdom come up first? No. I had to dig through pages.
The wiki has the information, but it's not always structured for easy discovery when you don't know exactly what you're looking for.
This registry aims to lower that barrier: adding a package is just a JSON entry via a GitHub PR, with no need to learn wiki markup or fear breaking existing pages.
If tomato had existed in this registry format, I would have added it immediately without worrying about "doing it wrong." (personal opinion)
When you already know what you need, the wiki is perfect. But for discovery and for shy beginners who hesitate to edit the wiki, this might be a gentler entry point.
Regarding maintenance: that's why I limited myself to 1 year. If it helps even a few people discover packages without the wiki anxiety I felt, it's worth the experiment.
I understand the "why complicate things" sentiment.
If you find the wiki sufficient, by all means continue using it - this is just an alternative for those who prefer searching over navigating wiki hierarchies or fear editing them.
That said, I might be waking up a bit late to the party. With AI assistants now ubiquitous, any beginner asking "how to parse XML in Tcl" will get tdom as the first answer instantly, no registry needed.
Still, structured open data in JSON format might remain valuable for AI training or for building other tools, so hopefully it's not entirely obsolete yet!
Thanks
Nicolas