How this all started
By Lorselq (@lorselq)
Note: for this entry to make sense, you'll need some familiarity with the other entries I've written here. If you haven't read them, please do so first for some context!
The initial problem
I've been interested in Enochiana as a sort of puzzle for years and years. Like most things I get interested in, I tend to be curious and research the subject until I decide I'm done—at which point I walk away and do something else.
For whatever reason, Enochiana has been consistently fascinating to me, so I've developed quite a lot of context as to what's going on in it—not to the level of certain academics who have been studying Enochiana and adjacent things for years—but still enough that I've been able to ask some meaningful questions.
To anyone who has investigated Enochiana, one of the big questions is: what does Liber Loagaeth translate to anyway? Does it even translate to anything? Is it gibberish? Is it a cipher? Is it actually Enochian? Is it a dialect of Enochian? What's even going on with it??
The general academic consensus has long been that it doesn't translate as anything—it's just glossolalia/nonsense. I've even asked some people in the occult sphere, and their take is that it's less about what it might translate as and more about engaging in the Gebofal operation, a devotional practice in which you transcribe Liber Loagaeth onto sheets of paper in Enochian letters.
My initial speculation
While I accept that Liber Loagaeth may not translate as anything, I decided, why not try anyway? My curiosity got the better of me and I started thinking about ways, within my means, that I could take a stab at it.
... which makes this probably one of the most grandiose efforts I've ever undertaken in the name of futility.1
My working premise: the language of the Enochian Keys, which I know well enough to have taught classes on, is an analytic language with productive concatenative compounding—which is to say, it's a building-block language without a lot of little pieces tacked onto words to signal syntactic relationships. I talk about the building-block nature of Enochian a little bit here and give several examples.
That said, the human eye can only find so many of these relationships, even though the Enochian Keys are only ~1090 words long. In fact, I'm discovering more and more of these relationships each time I interact with the language—which means the rabbit hole probably goes deeper than any of us have any real time for.
It occurred to me, as I started working with agentic teams at work as part of some light R&D: "What if... I make a team of AI agents do all the grunt work?" I had also been kicking around the idea of making the agents "debate" things—let me explain the rationale.
1 At least, maybe it's futile—maybe it's not. Maybe I'll get somewhere with this? Who knows. I'm a curious person and a dreamchaser at heart. 🥲
Wait, why make AI argue with itself?
LLMs are notoriously overconfident. One of the funniest definitions I've ever heard of an LLM (like ChatGPT or Gemini), from a user's perspective, is "mansplaining as a service". In keeping with this definition, there are many times when you, as a subject matter expert, will read what an LLM has to say and think, "That's... not actually correct. At all. Not even a little. You're just making this up, aren't you?"
Which, technically, yes. It doesn't actually know what it's saying; in some ways (and this is grossly oversimplifying), LLMs can be a little bit like glorified autocomplete—the process doesn't know, in any real cognitive sense, what's going on; it just strings together whatever word-tokens "make sense" to come next. And what "makes sense" is whatever is on the Internet.2
Anyway, tangents aside, the premise is: if one AI will make up information based on Internet lies or otherwise "hallucinate" information, why not have a second AI exist solely to fact-check the first—arguing with it and trying to take down any claims it sees as false? That way, the final output we end up working with should be better.
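As a rough sketch of that propose-and-critique loop, here's what the control flow might look like. This is a toy illustration, not the actual project's code: `debate`, `toy_proposer`, and `toy_critic` are hypothetical names, and the callables are deterministic stand-ins for real LLM API calls.

```python
def debate(question, proposer, critic, max_rounds=3):
    """Have one model propose an answer and another try to knock it down.

    `proposer` and `critic` are callables standing in for LLM calls; the
    loop ends when the critic raises no objection or rounds run out.
    """
    answer = proposer(question)
    for _ in range(max_rounds):
        objection = critic(question, answer)
        if objection is None:  # critic accepts the claim as stated
            return answer
        # Feed the objection back so the proposer can revise its claim.
        answer = proposer(f"{question}\nObjection: {objection}\nRevise: {answer}")
    return answer

# Deterministic stand-ins for real LLM calls, just to show the flow.
def toy_proposer(prompt):
    return "revised claim" if "Objection" in prompt else "initial claim"

def toy_critic(question, answer):
    return "source?" if answer == "initial claim" else None

print(debate("What does this word mean?", toy_proposer, toy_critic))
# → revised claim
```

In a real setup, the critic's prompt would instruct it to demand citations or corpus evidence, so unsupported claims get challenged instead of rubber-stamped.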
2 At some point, with all of the AI slop on the Internet, more and more AI-generated content is going to become its own training data—which is kind of hilarious when you think about it.
But what does this have to do with Enochian again?
So imagine, if you will: multiple AI agents arguing with each other, like a linguistic research team, picking apart parts of the Enochian language so I don't have to do it myself. They can find all the cool stuff that I have overlooked and maybe even corroborate and expand upon things I have found already.
What the process looks like
I think it's safe to assume the following:
- Each Enochian word is one or more letters.
- Enochian words draw from the same pool of letters.
- Some substrings of letters occur across multiple Enochian words.
- E.g., in English: "apples" and "applications"
- Those substrings might sometimes do similar things to the word's meaning.
- E.g., in English: "subscript" and "submarine"
With that in mind, this is essentially what my program's strategy is:
- Take a substring (an n-gram).
- Find all words in which that n-gram occurs.
- Extract the common meaning across that set of words' definitions.
- Document the findings.
- Proceed to the next n-gram until none remain.
While this worked initially, I immediately identified many areas that could be improved.
Conclusion
So yeah, that's what this project is in a nutshell. There have been many advancements, and I'm doing my best to document what those are in a way that is digestible to other people.
As I write additional entries, if they seem like "next steps I took" kind of entries, I'll update this entry and create a "next steps" section and link to them accordingly.
Anyway, thanks for reading. I wish you and yours well. 🤗