Great comprehensive resource! This aligns perfectly with what I've been exploring about the human side of AI communication. The paradox I'm seeing is that we're training people to prompt machines with crystal clarity while many leaders still can't brief their own teams with the same precision. I just published something that explores this gap: https://creatism.substack.com/p/we-prompt-machines-better-than-we?r=177ve
Yes, this is an interesting perspective.
I've realized I take a much more structured approach when talking to an LLM, usually following these steps:
1. Set the task
2. Provide additional information
3. Clarify how it should be solved
4. Specify the output I want
5. Validate the output
And if anything goes wrong, I keep iterating. This is also the structure I would use when assigning an engineering task to a human :)
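For what it's worth, the first four steps can be sketched as a tiny prompt-builder. This is just an illustrative helper (all names and example values are my own, not from any library); step 5, validating the output, happens after the model responds:

```python
def build_prompt(task: str, context: str, approach: str, output_spec: str) -> str:
    """Assemble a structured LLM prompt from the first four steps."""
    return "\n\n".join([
        f"Task: {task}",                  # 1. Set the task
        f"Context: {context}",            # 2. Provide additional information
        f"Approach: {approach}",          # 3. Clarify how it should be solved
        f"Output format: {output_spec}",  # 4. Specify the output I want
    ])

# Hypothetical example values, purely for illustration
prompt = build_prompt(
    task="Summarize the attached incident report.",
    context="The report covers a database outage on 2024-01-15.",
    approach="Focus on root cause and remediation, not the timeline.",
    output_spec="Three bullet points, under 50 words total.",
)
print(prompt)
```

If the validated output misses the mark, I tweak one of the four fields and re-run, which is the iteration loop described above.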
Are you seeing this approach influencing how you speak to humans?
I can't say that yet.