There’s a queue of influencers and evangelists telling you to learn “prompt engineering”. Here’s the thing: it’s nothing new. Good prompt engineering is nothing more, or less, than good communication.

If that’s already the lightbulb moment, great. If you’re still thinking “but what does that actually mean?” let’s get into it. Learn to communicate well and it doesn’t matter if you’re talking to an AI, a colleague, your manager, or a customer; the same principles apply.

Much of the narrative around AI-generated code is broken. On one side, horror stories about unreliable output and security holes, rarely accompanied by questions like “did you validate it?” or “did you write tests?”. On the other, tech company evangelists promising it can do everything, quietly skipping the skill and effort required to get there. In my experience, most of the horror stories are a planning and execution problem, not a tool problem. I’ve shipped production code built in collaboration with AI that holds up fine against industry-standard tooling, sometimes better than code I’ve written without it.

I’m working at enterprise scale too, where codebases span multiple teams and repositories. The challenge of giving any tool sufficient context in that environment is a real and fair criticism. But the answer, as we’ll come to, is still context, applied deliberately: being explicit about the boundaries of what you’re working on, where it touches systems outside your scope, and what it doesn’t need to know. That discipline is the same whether you’re working alone on a side project or in a team of hundreds.

The difference isn’t the AI; it’s whether you gave it the right context, guidance, and feedback. Treat it like a colleague you’d never properly brief and you’ll get the results you’d expect from that colleague.

Prompting is a skill

There’s no shortage of courses, videos, and frameworks treating prompt engineering as a discipline in its own right. In some ways it is: the nuance is real and the practice matters. But the foundation isn’t new, and that distinction matters. Like any communication skill, it develops with practice and intent. The frameworks below are scaffolding, a checklist to lean on while the habits form. Over time you won’t need them explicitly; you’ll just naturally give context, set expectations, and calibrate your output. But starting with a framework is a good shortcut to results while you build that instinct.


Context is key

Here’s something worth knowing before we get to frameworks: the act of writing a thorough prompt often surfaces the answer before the AI responds. If you’ve ever talked a problem through with a colleague and solved it mid-sentence, you’ll recognise this. The discipline of articulating context clearly (what you’re trying to solve, what you’ve tried, what the constraints are) is valuable in itself. The AI’s response is almost a bonus.

So when you sit down to prompt, start by asking yourself: what is the problem? What are the requirements and constraints? What have I already tried and why didn’t it work? The more precisely you can answer those, the better your prompt, and often your own thinking, will be.

Instead of:

“I need a function to do X”

Try:

“I need a Python function to do X. It takes these inputs and produces this output, and needs to be efficient as it runs in a loop. I’ve tried Y but it fails because of Z. Include logging and error handling in line with industry best practices.”
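To make the difference concrete, here’s a sketch of the kind of output the richer prompt tends to produce. The function name, inputs, and failure mode are hypothetical stand-ins for the X, Y, and Z in the prompt, not anything from a real codebase:

```python
import logging

logger = logging.getLogger(__name__)

def process_item(value: str, multiplier: int) -> int:
    """Hypothetical stand-in for 'a function to do X'.

    Parses `value` as an integer and scales it. Kept deliberately
    cheap because the prompt said it runs in a loop.
    """
    try:
        result = int(value) * multiplier
    except ValueError:
        # The 'fails because of Z' case from the prompt: bad input.
        logger.error("Could not parse %r as an integer", value)
        raise
    logger.debug("Processed %r -> %d", value, result)
    return result
```

The vague prompt would likely get you the `try` block alone; the detailed one earns the logging, the error path, and the performance consideration, because you asked for them.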

What about sensitive data?

A question that comes up regularly in enterprise settings: “am I allowed to share this code, this context, this data with the AI?” First and foremost, follow the policies in place for your project and organisation. That said, with a little thought you can provide all the important context without leaking anything sensitive. Focus on structure rather than specifics; describe the shape of the problem, the constraints, the requirements, without including names, addresses, or actual data values. In many ways it’s no different from rubber duck debugging; you’re describing the problem the same way you would to a colleague, and you’d do that without sharing sensitive information.


Prompting frameworks

There are several prompting frameworks worth knowing. They go by different names but they’re all variations on the same idea:

  • CLEAR: Context, Length, Example, Audience, Role
  • COSTAR: Context, Objective, Style, Tone, Audience, Response
  • CRAFT: Context, Role, Action, Format, Task

Here’s the thing, though: none of this is new. These are good communication frameworks with an AI label on them. You’ll find the same thinking in frameworks that predate AI entirely:

  • SCQA: Situation, Complication, Question, Answer
  • STAR: Situation, Task, Action, Result

STAR in particular should be familiar; I’ve talked about it in the context of interviewing and troubleshooting before. It keeps coming up because it’s a solid framework for clear communication, full stop. The fact that it works just as well for prompting an AI as it does for answering an interview question rather proves the point this post is making.


CLEAR

I want to focus on CLEAR specifically, as it gets less coverage than the others and maps well to practical prompting. Frankly the acronym does a better job of staying in your head when you’re mid-task too. Think of it less as a framework to follow rigidly and more as a quick mental checklist, something to run through before you hit send to make sure you’ve given the AI what it needs to help you effectively.

  • Context:

Give the AI key information such as the purpose of the task, any background it needs to know, and the constraints it should work within. Also consider any presentation requirements, such as formatting or tone. The more specific you can be, the better.

Instead of: “Write something about the product launch.”

Say: “We’re launching a new mobile app to existing customers next quarter. Write a short announcement for our newsletter covering what’s new, the release date, and how to get early access.”
  • Length:

Be specific about the length of the response you want. This can be in word count, number of points, or any other measure that makes sense for the task.

Instead of: “Give me feedback on this report.”

Say: “Give me three specific suggestions to improve the clarity of this report’s executive summary.”
  • Example:

Show the AI what you want it to produce for you. This is especially important for creative tasks, but it can be helpful for any task where the format or style matters.

Instead of: “Write a project update.”

Say: “Write a project update in plain, direct language, the kind you’d send in a quick Slack message to your team, not a formal status report.”
  • Audience:

Let the AI know who the output is for. This will help it tailor the language, tone, and level of detail to suit the needs and expectations of that audience.

Instead of: “Explain what an API is.”

Say: “Explain what an API is to a marketing manager who understands digital campaigns but has no software development background.”
  • Role:

Allow the AI to take on a role or perspective relevant to the task. This can help it generate more relevant and insightful responses by drawing on the knowledge and experience associated with that role.

Instead of: “What should we think about before expanding into a new market?”

Say: “As a market entry strategist with experience in scaling B2B SaaS businesses in Europe, what are the critical factors we should evaluate before expanding into a new region?”
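The checklist even lends itself to a mechanical sketch. Here’s a minimal, hypothetical Python helper that assembles a prompt from the five CLEAR fields; the template wording and ordering are my own illustration, not an official format, and the real value is simply that it forces you to fill in each field:

```python
def build_clear_prompt(context: str, length: str, example: str,
                       audience: str, role: str) -> str:
    """Assemble a prompt from the five CLEAR components.

    Purely illustrative: the template is arbitrary; what matters is
    that every CLEAR field has to be supplied before you hit send.
    """
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Audience: {audience}\n"
        f"Desired length: {length}\n"
        f"Example of the style I want: {example}\n"
    )

# Example usage with the newsletter scenario from above:
prompt = build_clear_prompt(
    context="We're launching a new mobile app to existing customers next quarter.",
    length="Three short paragraphs.",
    example="Friendly and direct, like our previous launch announcements.",
    audience="Newsletter subscribers who already use our products.",
    role="a product marketing writer",
)
```

You wouldn’t write code like this in practice, of course; the point is that a complete prompt has identifiable parts, and a gap in the arguments is a gap in your brief.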

Run through these five points on your next prompt and see how the output changes. Chances are you’ll recognise most of them from conversations you have every day, which is rather the point.


Wrapping up

I’ve written a partner post on My Prompt Engineering Strategies, so you can see how I apply these principles in practice.

Communicating well is hard; it doesn’t come naturally to everyone. For those who feel more at home with technology, or who find people skills harder to come by, it can be an area they actively avoid. But it is a skill, and like any skill it can be learnt; it improves with practice and intent. The frameworks and principles here are a practical starting point for more effective communication with AI tools. Give them a try with your next prompt and see if they make a difference. Build those habits and you’ll get more consistent, more useful results from your AI interactions. The bonus, and it genuinely is a bonus worth having, is that the same skills make you clearer and more effective when talking to colleagues, customers, and managers too.

Prompt engineering isn’t a new discipline. It’s just communication, with a new audience.


If this article helped or inspired you, please consider sharing it with your friends and colleagues, or let me know via LinkedIn or X / Twitter. If you have any ideas for further content you might like to see, please let me know too.

