Reduce wasted LLM tokens by deterministically canonicalizing and compressing prompts | saasbrowser.ai
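The body of the article is not available here, but the technique named in the title can be sketched. The idea is that prompts which are semantically identical but differ in whitespace, Unicode form, or key ordering waste tokens and defeat prompt caching; canonicalizing them deterministically makes equal intents produce byte-identical strings. The helper names below (`canonicalize_prompt`, `canonicalize_json_payload`) are hypothetical, a minimal sketch of the general approach rather than the site's actual implementation:

```python
import json
import re
import unicodedata


def canonicalize_prompt(text: str) -> str:
    """Deterministically canonicalize free-text prompt content.

    Hypothetical helper: equal intents should yield byte-identical
    strings, which improves cache hits and trims wasted tokens.
    """
    # Normalize Unicode to NFC so visually identical strings compare equal.
    text = unicodedata.normalize("NFC", text)
    # Normalize line endings and strip trailing whitespace on each line.
    lines = [line.rstrip() for line in text.replace("\r\n", "\n").split("\n")]
    text = "\n".join(lines)
    # Collapse runs of three or more newlines into a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()


def canonicalize_json_payload(payload: dict) -> str:
    """Serialize structured prompt context deterministically.

    Sorted keys and compact separators guarantee that equal dicts
    always produce the same string regardless of insertion order.
    """
    return json.dumps(
        payload, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    )
```

Because both functions are pure and deterministic, their output can be hashed and used as a cache key, so repeated submissions of the "same" prompt are deduplicated before any tokens are spent.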