08/19/2024

Thread

Tweet 1

Is there a way to get LLM APIs to tokenize inefficiently, in a per token way, for some sections? I have a theory this will make them understand my vim keylogger better in normal mode

---

Tweet 2

*per character way

---
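(Editor's note: as far as I know, hosted LLM APIs don't expose a knob for controlling tokenization of part of a prompt. A common workaround is to preprocess the text yourself, inserting a separator between characters so BPE merges can't span them. A minimal sketch; the function name and separator choice are illustrative, not from any API:)

```python
def per_char(text: str, sep: str = " ") -> str:
    """Insert a separator between every character so a BPE-style
    tokenizer is forced to treat each character individually,
    rather than merging runs like 'ciw' into a single token."""
    return sep.join(text)

# Vim normal-mode keystrokes from a keylog, spaced out per character.
print(per_char("dd2wciw"))  # d d 2 w c i w
```

You'd apply this only to the keylog sections of the prompt and tell the model in the surrounding instructions that keystrokes are space-separated.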