Cut LLM token costs by pruning irrelevant context dynamically | saasbrowser.ai
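The idea in the title can be sketched in a few lines: score each candidate context chunk for relevance to the current query, then keep only the highest-scoring chunks that fit a token budget. This is a minimal sketch under stated assumptions — the word-overlap scorer, the whitespace-based "token" count, and all function names (`relevance`, `prune_context`) are hypothetical illustrations, not any particular product's API; a production system would use embedding similarity and a tokenizer-accurate count.

```python
def relevance(query: str, chunk: str) -> float:
    """Hypothetical scorer: fraction of query words that appear in the chunk."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def prune_context(query: str, chunks: list[str], budget: int) -> list[str]:
    """Keep the most relevant chunks whose combined word count fits the budget.

    `budget` is in whitespace-split words here, standing in for tokens.
    """
    ranked = sorted(chunks, key=lambda ch: relevance(query, ch), reverse=True)
    kept, used = [], 0
    for ch in ranked:
        n = len(ch.split())
        if used + n <= budget:
            kept.append(ch)
            used += n
    return kept

# Toy usage: irrelevant chunks are dropped before the prompt is assembled,
# so fewer tokens are sent to the model on every call.
chunks = [
    "Invoices are billed monthly per seat.",
    "The office dog is named Biscuit.",
    "Token costs scale with prompt length.",
]
pruned = prune_context("how are token costs billed", chunks, budget=12)
```

Pruning per query (rather than once at indexing time) is what makes this "dynamic": the same knowledge base yields a different, smaller context for each request.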