Is there a way to get LLM APIs to tokenize inefficiently, in a per-character way, for some sections of a prompt? I have a theory this will make them understand my vim keylogger output (normal-mode keystrokes) better.
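As far as I know, hosted LLM APIs don't expose tokenizer control, so you can't force per-character tokens directly. A common prompt-side workaround is to insert a separator between every character of the sensitive section, which makes most BPE tokenizers emit roughly one token per character. A minimal sketch (the function name and the sample keystrokes are just illustrative):

```python
def per_char_spell(text: str, sep: str = " ") -> str:
    """Insert a separator between every character so a BPE tokenizer
    is likely to split the string into roughly one token per character.
    This is a heuristic, not a guarantee -- the API's tokenizer may
    still merge some separator+character pairs."""
    return sep.join(text)

# Hypothetical vim normal-mode keystroke log
keystrokes = "dd2wci("
spelled = per_char_spell(keystrokes)
print(spelled)  # d d 2 w c i (
```

You'd then wrap `spelled` in your prompt (e.g. "the following keystrokes, one per token: ...") so the model sees each key as a separate unit. Note this inflates token count for that section, so it costs proportionally more.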