The Basics of Prompt Engineering: Turning AI Coding Assistants into More Trustworthy Programming Partners

AI chatbots such as ChatGPT, Claude, and Gemini are rapidly improving at writing code, and 'vibe coding,' in which humans instruct an AI in natural language to write code, is becoming increasingly common. However, the AI does not do everything on its own: the quality of the generated code varies greatly depending on how well humans phrase their instructions (prompts). Google engineering leader Addy Osmani explains on his blog the basics of prompt engineering, the practice of crafting prompts so that an AI produces the desired output.
The Prompt Engineering Playbook for Programmers
https://addyo.substack.com/p/the-prompt-engineering-playbook-for
First, Osmani lists seven principles that are the foundation of effective code prompting:
・Provide rich context
・Make your goals and questions specific
・Divide complex tasks into smaller parts
・Include examples of input/output and expected behavior
・Iterate and refine through dialogue
・Maintain clarity and consistency in your code
・Use roles and personas

The situations in which you use AI in coding can vary, but it's important to keep these seven principles in mind.
◆ Bug detection and fixes
As a systematic way to ask AI for help in finding and fixing bugs, Osmani says it is important to describe what is wrong and what the code should do, and to always include the exact error message or the incorrect behavior. For harder bugs, such as when the output is wrong without any clear error message, it is useful to prompt the AI to walk through the execution of the code, for example: 'Execute this function line by line and track the values of the variables at each step. They're not accumulating correctly. Where is the logic going wrong?'
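As a rough illustration of the kind of snippet such a walkthrough prompt might be attached to (a hypothetical example, not one from Osmani's post), consider a function where the accumulation silently goes wrong because the values are strings:

```javascript
// Prompt: "Execute this function line by line and track the value of `total`
// at each step. The totals are not accumulating correctly. Where is the logic
// going wrong?"
function sumOrderTotals(orders) {
  let total = 0;
  for (const order of orders) {
    total += order.amount; // bug: amount arrives as a string, so += concatenates
  }
  return total;
}

// sumOrderTotals([{ amount: '2.50' }, { amount: '1.00' }])
// returns '02.501.00' instead of 3.5
```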
Also, even if your actual code base is large, if the bug can be demonstrated in a small snippet, extract or simplify that code and provide it to the AI. It is then effective to ask directly for what you need, such as 'What causes this problem and how can I fix it?' If the AI's initial answer is unclear or only partially useful, don't hesitate to ask follow-up questions.
In any case, detailed code context and clear instructions are essential when debugging with an AI assistant. Instead of saying, 'This code doesn't work. Please help me,' present the situation and symptoms and ask an appropriate question, such as, 'This function is supposed to do X, and I introduced it for this purpose, but in this situation an error occurs.'

◆ Code refactoring and optimization
To get AI to refactor and optimize code, Osmani says it is important to state the refactoring goal explicitly in the prompt and to provide the necessary code context. Instead of simply saying 'refactor this code,' communicate a specific goal, such as improving readability, reducing complexity, or optimizing performance. It is also important to include the function or section you want refactored along with the relevant surrounding context, and to mention the language or framework.
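As a sketch of what such a prompt might look like in practice (the function and wording below are illustrative, not taken from Osmani's post), a goal-specific request can be paired with the code to be refactored:

```javascript
// Prompt: "Refactor this JavaScript function to improve readability: replace
// the nested if/else blocks with early returns, and keep the behavior identical."
function getShippingCost(order) {
  if (order) {
    if (order.total > 100) {
      return 0;
    } else {
      if (order.express) {
        return 20;
      } else {
        return 10;
      }
    }
  } else {
    return null;
  }
}
```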
In addition, to learn from the AI's refactorings and verify their accuracy, it is also effective to ask for an explanation of the changes. Key here is 'using roles and personas,' as mentioned in the principles above: according to Osmani, prompting the AI to play the role of a code reviewer or senior engineer can make its answers more insightful.

◆ Implementing new features
When implementing a new feature, first outline in plain language what you want to build, break it down into smaller tasks if necessary, then work through each task with focused prompts. If you are adding functionality to an existing project, it is very helpful to provide relevant context or reference code showing how similar functionality has already been implemented, and to mention the project's coding style or architecture.
Of course, it is also important to provide expected input/output and usage examples, such as 'Implement a function formatPrice(amount) in JavaScript that takes a number and returns a string in USD format. Example: formatPrice(2.5) returns "$2.50".' If the result is not what you want, rewrite the prompt to make it clearer by adding more detailed information or constraints.
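As one plausible outcome of that prompt (a sketch, not the article's reference solution, assuming the built-in Intl.NumberFormat API is acceptable), the generated function might look like this:

```javascript
// One possible completion of the formatPrice prompt above, using the standard
// Intl.NumberFormat API to produce a USD-formatted string.
function formatPrice(amount) {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
  }).format(amount);
}

console.log(formatPrice(2.5)); // "$2.50"
```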
When using tools such as Copilot in an integrated development environment (IDE), Osmani says an effective workflow is to use comments and TODOs as inline prompts and have the AI complete them.
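A minimal sketch of that workflow might look like the following, where the TODO comment states the intent and the body stands in for the kind of completion the assistant could suggest (the example is illustrative, not from Osmani's post):

```javascript
// Intent written as an inline comment/TODO; the function body below represents
// the kind of completion an assistant such as Copilot might propose.

// TODO: return true if `email` looks like user@domain.tld, false otherwise;
// do not throw on null or undefined input.
function isValidEmail(email) {
  if (typeof email !== 'string') return false;
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```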
◆ Common bad prompt patterns
Osmani also outlines some common patterns of bad prompts that can lead an AI to produce poor responses:
・Giving no specific information, such as 'It doesn't work, please fix it.'
・Asking the AI to do too many things at once.
・Presenting a huge amount of information without asking a focused question.
・Giving no measurable definition of success, such as 'make this function faster.'
・Ignoring the AI's requests for clarification and its previous output.
・Giving instructions that are inconsistent in style or format.
・Using words with unclear referents, such as 'the previous output' or 'that function.'

Finally, Osmani says a 'tactical approach to rewriting the prompt' starts with identifying the specific mismatch, such as whether the AI was trying to solve a different problem, generated an error, or offered a solution that didn't fit.
It is also important to clearly state missing requirements or misunderstandings, such as 'The solution should be TypeScript, not JavaScript. Please include type annotations,' and to emphasize the added requirements in the new prompt. If the AI repeatedly fails with a complex request, it is also effective to break the request down and ask about the parts step by step. If you are still stuck, Osmani says you can clear the chat history and start the prompt again from scratch.