Part Two — Of Loss and Structure

Updates with code and output examples to come

You can read ‘Ashfeather’ at the end of this post

Alright, part two

So what I’ve been doing here is experimenting. I created an encoder, and alongside it, a decoder. The encoding part worked great. The decoding… not so much. But honestly, that’s not a deal-breaker. I should stress this—bidirectional encoding and decoding aren’t always necessary. In my vision, the encoding is the essential piece. Once compressed, the LLM should still be able to interpret it and generate a meaningful response.

If the model replies in English, yes, the return will be token-heavy—but that’s okay. You don’t need to decrypt it like a secret message. This isn’t encryption. It’s compression. It’s okay to take baby steps.

What I built uses symbolic character sets to condense meaning. But before I ramble: the key tools I’ve been leaning on are NLTK and spaCy, two Python libraries that handle natural language processing. They help identify the grammatical structure of a sentence. They tag parts of speech, highlight relationships, and map out the skeleton of meaning.

So, for example, the system might determine: this is a verb, and based on its placement, that must be an adverb. This matches a known pattern. From there, it can extract the linguistic architecture and then rewrite it symbolically.
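
To give a feel for that step, here’s a minimal sketch using spaCy. The SYMBOLS table is just an illustrative stand-in, not my actual character set; the point is that each word collapses into a glyph for its grammatical role.

```python
import spacy

# Load spaCy's small English model (install with: python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")

# Illustrative glyph table -- a stand-in for the real symbolic character set.
SYMBOLS = {
    "NOUN": "□", "PROPN": "■", "VERB": "→", "ADV": "~",
    "ADJ": "△", "PRON": "○", "ADP": "∟", "DET": "·",
}

def encode(sentence: str) -> str:
    """Replace each token with a glyph for its part-of-speech tag."""
    doc = nlp(sentence)
    return "".join(SYMBOLS.get(tok.pos_, "?") for tok in doc if not tok.is_punct)

print(encode("The raven quietly returned to the ashen tower."))
# -> something like ·□~→∟·△□  (structure kept, words gone)
```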

But here’s the catch: context is lost. Symbolic compression strips out names, specifics, emotional tone—anything not structurally necessary. You’re left with the framework. A skeleton.

Think of it like this: imagine compressing the Bible through this system. You’d get a reduced, unreadable glyph-stream—completely symbolic. The story wouldn’t be recognizable, but the structure might remain. You’d still see themes like belief, sacrifice, divine conflict. It becomes a shadow of itself—a logical framework more than a narrative.

Now extend that thought to religion as a whole. Despite differences in doctrine and storytelling, most spiritual systems echo the same core ideas: higher powers, morality, consequence, transcendence. The specifics shift, but the structure is surprisingly consistent.

And that’s where this idea finds footing. Even if details are lost, the structure remains useful. It becomes a way to examine shared human patterns without the baggage of form.

So what I did was grab a random public-domain book—18 chapters in total—and parsed it chapter by chapter. I encoded each one using my system and redirected all output into a single file. At that point, the original context was gone. Then I decoded it. Not to reconstruct the book perfectly, but to generate a summary—to see what remained.
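
The batching itself was nothing fancy. A rough sketch of the loop, assuming hypothetical chapter files and the encode() from the earlier sketch:

```python
from pathlib import Path

from encoder import encode  # the symbolic pass sketched above (hypothetical module name)

# Hypothetical layout: chapters/ch01.txt ... chapters/ch18.txt
chapters = sorted(Path("chapters").glob("ch*.txt"))

# Everything funnels into one file; once it's there, the original context is gone.
with open("encoded_book.txt", "w", encoding="utf-8") as out:
    for chapter in chapters:
        out.write(encode(chapter.read_text(encoding="utf-8")) + "\n")
```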

What emerged was skeletal but strangely compelling. A distilled ghost of the original, yet with clarity in its structure. I took that as the foundation for a new short story.

I gave the LLM specific constraints: reduce the length from 18 chapters to 6. I didn’t want something sprawling like my book ZILD; that’s a massive effort. I wanted it tighter. Focused. I also set symbolic anchors—the number 6 is significant. Six chapters. Six feathers. Six returns. That motif threads through the narrative like a heartbeat.

I gave it the skeleton, the rules, and let it breathe life into the story.

And I’m proud of what came out. The story it generated was built from loss—of detail, of names, of recognizable plot—but it still feels complete. The symbols you see on the cover? They’re not decorative. They’re the literal encoding of the story’s core structure. The narrative reflects the process that created it.

That’s the point.

It’s recursive.

It’s reflective.

Some might say it’s bleak, or even nihilistic. I don’t see it that way. I see process, and I see life.

