⚖️ “With a leveraged worker, judgment is far more important than how much time they put in or how hard they work.” – Naval
Hello Reader,
AI is better at evaluating content than generating content.
This makes sense when you stop to think about it. When a large language model has to interpret your instructions and generate output in a single step, it spreads one token budget across two jobs.
But if you ask AI to evaluate its own output as a separate step, it can focus all of its attention on identifying how to improve. Then it can rewrite, separately.
Here’s a simple copy/paste process you can use to rapidly improve your LLM responses:
- Ask an LLM to create some content for you.
- In a second tab, have another LLM evaluate the output.
- Ask the second LLM for a clear brief documenting improvements.
- Copy that brief, and paste it into the first LLM tab with the prefix, “Analyse this feedback.”
- Ask for a V2.
- Repeat until satisfied.
- Compare with diffchecker.com if you want to see the changes.
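If you'd rather automate the two-tab loop than copy and paste, the steps above can be sketched in code. This is a minimal sketch, not a definitive implementation: `call_llm` is a hypothetical placeholder for whatever chat-model API you use, and the prompt wording is just one reasonable choice.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real API call to your chat-model provider."""
    raise NotImplementedError


def refine(task: str, rounds: int = 3, generate=call_llm, evaluate=call_llm) -> str:
    """Run the generate -> evaluate -> revise loop from the steps above.

    `generate` plays the first tab (the writer) and `evaluate` plays the
    second tab (the critic), so the two roles stay separate.
    """
    # Step 1: ask an LLM to create the content.
    draft = generate(task)
    for _ in range(rounds):
        # Steps 2-3: have a second LLM evaluate the output and
        # write a clear brief documenting improvements.
        brief = evaluate(
            "Evaluate this draft and write a clear brief of improvements:\n\n"
            + draft
        )
        # Steps 4-5: feed the brief back to the first LLM and ask for a V2.
        draft = generate(
            "Analyse this feedback, then produce a V2 of the draft.\n\n"
            f"Feedback:\n{brief}\n\nDraft:\n{draft}"
        )
    # Step 6: in code, "repeat until satisfied" becomes a fixed round count;
    # you could instead stop when the brief comes back with no changes.
    return draft
```

Passing the two roles in as separate functions keeps the critic from sharing state with the writer, which mirrors the two-tab setup.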
This simple process will help you stir your prompt around in a cauldron, like the wizard of the future that you are.