Basics (of composition)
Build small, build many.
I commonly come across programs I'd like to use.
I load them up, run them for a day or so, or perhaps only a couple of hours, and quickly decide... nah.
These programs could be good, could stay in use longer and belong in my arsenal, but they lack some pieces I require:
- More and more, I need my programs to be local-first (LoFi) to keep up with my nomadism.
- My programs need to be open-source so I can change them.
- My programs need permissive licensing so I can keep them running.
- My programs need to be free. I am an economics essayist, hence dismally broke.
Here's how some programs compare on the rubric:
| program | purpose | LoFi | source | license | price |
|---|---|---|---|---|---|
| logseq | local notes | x | | | $0 - x |
| patchwork | local notes, dashboards | x | nope | | $0 |
| obsidian | local notes | x | | | $x |
| linear | issue management | | | | $0 - x |
| helix | editor | x | x | x | $0 |
| zellij | session manager | x | x | x | $0 |
| nushell | command-line | x | x | x | $0 |
From looking at this table, you could reasonably assume I am much less able to organize my ideas than I am to execute on them.
Each day I begin working on something or other, usually in Nushell. I quickly realize I need a cleaner experience, and build a couple of new shell functions - maybe five minutes of work, maybe half an hour.
This approach suits me because I've accumulated a big stock of functions I can rely on. I reuse them again and again, and my progress gradually speeds up as I go. Of course, I basically only use Nushell... for anything I need.
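To make that concrete, here's a sketch of the kind of tiny helper I mean; the command name, flags, and columns are made up for illustration, not something I'm claiming already exists:

```nu
# Hypothetical helper, built in a spare five minutes:
# list the largest files under a directory.
def biggest [dir: path = ".", --top (-n): int = 5] {
  ls $dir
  | where type == file
  | sort-by size --reverse
  | first $top
  | select name size
}
```

Once defined in my config, `biggest ~/Downloads -n 10` is one more reusable piece in the pile.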
This is one of the reasons I'm so amped up about Automerge. The promises made by [Patchwork] seem to echo all of my ideals, in a nice clean vaporware all of us can idolize prior to release.
Maybe I need more solid benchmarks.
All the same, my Nushell command bundles are reaching a place I'd like to expand on, and bring into my normal web-app workflows.
I'm pushing to contribute more to Nushell, and to some of the other core codebases I depend on. As Automerge progresses I'm keeping an eye on the discussions; the research moves beyond my usual relaxed pace.
Of course, I'm learning Rust as I go, and my leaning towards the language is driven by happy experiences with programs rather than by any ideological language theory. The Rust-based programs I depend on are simply quicker, easier, and more reliable. Snappier.
All in all, the idea I'm most on board with in the [Automerge] landscape is the "malleable software" goal.
I'd much rather use a collection of small programs I can easily rearrange on a whim, than spend hours or days repurposing something colossal like Blender into a usable workflow.
This is one of the main issues keeping me from progress on animation,
or video editing. Some of the programs are good, all are bulky.
I'd like to see core building blocks in a performant language,
repurposable on demand at least to the degree that GStreamer
has been.
My price point aside, there is a large market for cross-platform animation,
waiting to be filled. The incumbents ignore entire OS platforms,
while falling behind on any use case beyond their chosen primary mechanic.
No coincidence that there is such a chasm separating animation from graphic design,
when at first blush they seem to be close siblings.
Imagine how much may change if that one boundary could be bridged.
So, as I go, I'm going to look at core Rust libraries for vectors, for graphics, for sequencing, animation, and path maths. The challengers are going to come some day, and for a long duration they will be unable to compare with the prior generation. Their only hope for success is to build up core libraries, piece by piece; imagine if raylib became a core dependency across a large range of programs, and how many chances for interoperable exchange such a scene could bring.
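As a taste of the "path maths" building blocks I have in mind, here is a minimal sketch in Rust; the point type and function names are my own invention for illustration, not from any existing crate:

```rust
// A 2D point; the kind of primitive a shared vector/animation
// library would provide.
#[derive(Clone, Copy, Debug, PartialEq)]
struct P {
    x: f64,
    y: f64,
}

// Linear interpolation between two points at parameter t in [0, 1].
fn lerp(a: P, b: P, t: f64) -> P {
    P {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
    }
}

// Evaluate a cubic Bezier curve at t via De Casteljau's algorithm:
// repeated lerps between successive control points.
fn cubic_bezier(p0: P, p1: P, p2: P, p3: P, t: f64) -> P {
    let a = lerp(p0, p1, t);
    let b = lerp(p1, p2, t);
    let c = lerp(p2, p3, t);
    lerp(lerp(a, b, t), lerp(b, c, t), t)
}

fn main() {
    // A symmetric arch: endpoints on the x-axis, handles lifted up.
    let p0 = P { x: 0.0, y: 0.0 };
    let p1 = P { x: 0.0, y: 1.0 };
    let p2 = P { x: 1.0, y: 1.0 };
    let p3 = P { x: 1.0, y: 0.0 };
    let mid = cubic_bezier(p0, p1, p2, p3, 0.5);
    // By symmetry, the halfway point lands at x = 0.5.
    assert!((mid.x - 0.5).abs() < 1e-9);
    println!("{:?}", mid);
}
```

A dozen small functions like this, shared across programs, is exactly the kind of interoperable core I'm hoping for.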
Only dreams so far, but I've been ignoring the hype cycles long enough, gradually building up my lonely foundation, that I feel I can reliably lead this charge.
One of the main issues needing to be addressed here?
Finding small, re-applicable problem descriptions that can be cleanly handled by a small codebase. Proliferation of solutions is both desirable, and inescapable, once a small problem is comprehensively labeled and scoped.
And speaking of scope, there's news to announce this summer, once I reach the Linux Foundation's Open Source Summit early next week. I need to do a small measure of finagling to bind audio to an asciinema recording.
This announcement follows the recipe: I looked at a problem I had been trying to address for nearly 3 years using NixOS and Nix Flakes. I call this problem "reproducible deploys". I examine how the problem space is riddled with incumbent challenges, and I cleanly arrange a language to bring more meaning to the space, using common analogies already popularized by today's popular deployment languages.
This language is called scope, and I'm more than thrilled to have hashed through the basic language design at Baltimore Node on Thursday night.
Language design is one piece - I need to learn a bunch of Rust to apply the ideas. Only... now that I've pinned down the problem, the "bunch of Rust" has a clear purpose, seems approachable, and is mainly just a basic dependency graph to encode and decode.
I'm sure there's a small package of code I can re-use for the purpose...
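To illustrate what "a basic dependency graph" might look like in practice, here's a minimal Rust sketch; the node names and the `deploy_order` function are hypothetical illustrations, not part of scope's actual design:

```rust
use std::collections::{HashMap, HashSet};

/// Return an order in which every node appears after all of its
/// prerequisites, or None if the graph contains a cycle.
fn deploy_order(deps: &HashMap<&str, Vec<&str>>) -> Option<Vec<String>> {
    // Track the still-unsatisfied prerequisites of each node.
    let mut remaining: HashMap<&str, HashSet<&str>> = deps
        .iter()
        .map(|(node, pre)| (*node, pre.iter().copied().collect()))
        .collect();
    let mut order = Vec::new();
    while !remaining.is_empty() {
        // Everything with no unmet prerequisites can go next.
        let ready: Vec<&str> = remaining
            .iter()
            .filter(|(_, pre)| pre.is_empty())
            .map(|(node, _)| *node)
            .collect();
        if ready.is_empty() {
            return None; // cycle: nothing can be scheduled next
        }
        for node in &ready {
            remaining.remove(node);
            order.push(node.to_string());
        }
        // Deploying the ready nodes satisfies them as prerequisites.
        for pre in remaining.values_mut() {
            for node in &ready {
                pre.remove(node);
            }
        }
    }
    Some(order)
}

fn main() {
    // Hypothetical deploy targets: "app" depends on "db" and "cache".
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("app", vec!["db", "cache"]);
    deps.insert("db", vec![]);
    deps.insert("cache", vec![]);
    let order = deploy_order(&deps).expect("graph has no cycles");
    // "db" and "cache" come first (in either order), then "app".
    assert_eq!(order.last().unwrap(), "app");
    println!("deploy order: {:?}", order);
}
```

Encoding and decoding a structure like this, plus the ordering pass above, really is most of the mechanical work; the interesting part is the language on top.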