Chapter 70. Prompt Engineering as a New Skill

This chapter explains why old prompting rituals no longer work with reasoning models, and which “new communication protocol” actually yields code you can immediately compile, test, and deploy. You will learn how to replace “role theater” with precise specifications and sharply reduce the number of iterations.

You will master the four pillars of a modern prompt (context, goal, requirements, constraints), the three query types (A/B/C), and pro-level techniques: decomposition (one prompt = 50–200 lines of code), “Show–Repeat–Expand,” and explicit prohibition of deprecated APIs.

Don’t miss: after this chapter it becomes obvious why “just ask for code” is a path to tech debt.

This chapter includes ready-to-use prompt examples.

Chapter Self-Assessment

Why do reasoning models no longer need hints like “think step by step” or role prompting?
Correct answer: Reasoning models automatically build internal reasoning and have built-in expertise, so such hints don’t add knowledge, only waste tokens and can even interfere with the model’s work.
What are the four pillars that make up the anatomy of a modern prompt for reasoning models?
Correct answer: Technology context (tech stack), clear and unambiguous goal, technical requirements (thread safety, performance), and constraints (quality boundaries for results).
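The four pillars can be captured in a small helper that assembles a specification-style prompt. A minimal Python sketch, where the function name, field names, and the sample values are this example's own, not the chapter's:

```python
def build_prompt(context: str, goal: str, requirements: list[str],
                 constraints: list[str]) -> str:
    """Assemble a prompt from the four pillars: technology context,
    clear goal, technical requirements, and constraints."""
    req_lines = "\n".join(f"- {r}" for r in requirements)
    con_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Requirements:\n{req_lines}\n"
        f"Constraints:\n{con_lines}"
    )

# Illustrative usage with a made-up Qt task.
prompt = build_prompt(
    context="Qt 6.7, C++20, CMake, MSVC 2022",
    goal="Implement a thread-safe LRU cache for QPixmap thumbnails",
    requirements=["thread safety via QMutex", "O(1) get/put"],
    constraints=["no raw owning pointers", "under 150 lines"],
)
```

Even a trivial helper like this enforces the habit: a prompt missing one of the four sections simply cannot be built.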
Why does a type A (specification) prompt take more time to formulate, yet is considered more effective?
Correct answer: Detailed specification with context, goal, requirements, and API examples gives a predictable result that can be integrated into a project almost without edits on the first try.
In which situations should you use a type B (contextual-dialogical) prompt instead of type A?
Correct answer: When you need to refine an existing project: refactoring, improving architecture of overgrown classes, targeted improvements, and iterative development.
What happens if you don’t provide the model with full error context (Valgrind output, call stacks, Qt logs) when debugging multi-threaded code?
Correct answer: Analysis will be less accurate, the model won’t be able to identify specific causes of memory leaks or thread races, and problem-solving will take longer.
Why does the decomposition technique suggest breaking tasks into prompts of 50–200 lines of code each?
Correct answer: One prompt = one clear goal ensures high quality of each component, maintains control over architecture, and simplifies testing, avoiding loss of focus with overly large tasks.
How does the “Show–Repeat–Expand” technique help avoid repeating instructions when creating a series of similar components?
Correct answer: First you show a reference component, then define general rules, after which the model generates a series of components using this template without needing to repeat detailed instructions for each.
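The same idea can be sketched as a tiny generator: the first prompt carries the reference component and the general rules, and every subsequent prompt only names the next component. All identifiers below are illustrative:

```python
def show_repeat_expand(reference_code: str, rules: list[str],
                       components: list[str]) -> list[str]:
    """Build a prompt series: full instructions once, short follow-ups after."""
    rule_text = "\n".join(f"- {r}" for r in rules)
    first = (
        f"Here is a reference component:\n{reference_code}\n"
        f"General rules for the whole series:\n{rule_text}\n"
        f"Following this template, generate: {components[0]}"
    )
    rest = [f"Same template and rules as before. Now generate: {c}"
            for c in components[1:]]
    return [first] + rest
```

Note that only the first prompt pays the token cost of the reference and the rules; the follow-ups stay short.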
Why explicitly prohibit using deprecated APIs like QDesktopWidget or QApplication::desktop() in prompts?
Correct answer: AI may use deprecated or removed APIs from old Qt versions found in training data; explicit prohibition directs the model to use modern equivalents like QGuiApplication::screens().
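As a complement to prohibiting deprecated APIs in the prompt itself, the generated code can be checked mechanically afterwards. A minimal Python sketch; the deny-list covers only a few well-known Qt 5 removals and the function name is this example's own:

```python
import re

# Deny-list mapping removed/deprecated Qt APIs to modern equivalents.
DEPRECATED_QT_APIS = {
    "QDesktopWidget": "QScreen / QGuiApplication::screens()",
    "QApplication::desktop()": "QGuiApplication::primaryScreen()",
    "QRegExp": "QRegularExpression",
}

def find_deprecated(code: str) -> list[tuple[str, str]]:
    """Return (deprecated API, suggested replacement) pairs found in code."""
    hits = []
    for old, modern in DEPRECATED_QT_APIS.items():
        if re.search(re.escape(old), code):
            hits.append((old, modern))
    return hits
```

Running such a check on every AI-generated snippet catches training-data relics even when the prompt-level prohibition was ignored.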
Why doesn’t the constraint technique (limits on file size, complexity, pointer usage) hinder AI but improve code quality?
Correct answer: Constraints guide the model to create clean, efficient, and quality code, setting clear boundaries and preventing bloated, unsafe, or non-optimal solutions.
What happens if you systematically delegate tasks to AI without analyzing the generated code?
Correct answer: Engineering thinking degradation occurs: the developer loses the ability to design architecture, debug complex problems, and weigh engineering trade-offs.
What legal risks arise when using AI-generated code in commercial projects?
Correct answer: Code may turn out similar to proprietary solutions with incompatible licenses (e.g., GPL), which can lead to lawsuits, requirements to disclose sources, or product blocking.
Why does using cloud AI services create intellectual-property leak risks even when providers claim not to use your data for training?
Correct answer: Technical possibility of leakage remains through logs, caches, and intermediate storage; corporate secrets, client information, and unique solutions end up beyond your control.
When running models locally (LM Studio, Ollama), for which refactoring tasks is a reasoning model mandatory rather than a regular one?
Correct answer: For any code refactoring tasks, as regular models can’t handle them properly—they can’t perform deep code and structure analysis, unlike reasoning models like GPT-OSS, DeepSeek-R1, or QwQ.
What does the principle “AI assistants are developer capability amplifiers, not replacements” mean?
Correct answer: AI takes over routine and generates templates, but architectural decisions, critical analysis, understanding consequences, and responsibility for results remain with the human engineer.

Practical Assignments

Easy Level

Prompt Library for Refactoring
Create a personal library of 5 prompts for typical refactoring tasks in your Qt project. Choose real problematic code sections (overgrown classes, complex nested logic, deprecated APIs) and adapt templates from the chapter to your project’s specific tech stack. Test each prompt with a reasoning model and record results.
Hints: Start with type B prompts from the “Refactoring Prompts” section. Be sure to specify exact stack (Qt versions, C++, compiler). Save successful prompts in a separate file with notes on usage context. For testing, use Claude Sonnet 4.5 or ChatGPT (GPT-4). Compare results before and after refactoring: line count, cyclomatic complexity, readability.
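One possible shape for such a library file, as a Python sketch: the entry fields ("type", "stack", "notes", "results") and all sample values are placeholders of this example, not a format prescribed by the chapter:

```python
import json

# One illustrative library entry for a type B refactoring prompt.
library = [
    {
        "name": "split-god-class",
        "type": "B",
        "stack": "Qt 6.7, C++20, MSVC 2022",
        "prompt": ("Here is my MainWindow class (1200 lines). Extract the "
                   "network logic into a separate QObject subclass ..."),
        "notes": "Works best when the class's responsibilities are listed.",
        "results": {"lines_before": 1200, "lines_after": 780},
    },
]

def save_library(path: str, entries: list[dict]) -> None:
    """Persist the library so successful prompts survive the session."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2, ensure_ascii=False)
```

Keeping the before/after metrics next to each prompt makes the required comparison (line count, complexity, readability) a matter of rereading the file.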

Medium Level

Automated Review with Problem Detection
Develop an AI-based automated code-review system for your Qt project. Create a set of 3–4 specialized type C (analytical) prompts: one for thread-safety checks, a second for memory-leak detection, a third for edge cases, and a fourth for Qt best-practices compliance. Integrate these prompts into the review process via a script or CI/CD pipeline that automatically checks critical code sections before each commit.
Hints: Use prompts from the “Code Review Prompts” section. For automation, create a bash/Python script that sends code to Claude or ChatGPT API. Be sure to provide full context: Valgrind output, sanitizers, Qt logs. Configure QT_LOGGING_RULES and QT_DEBUG_PLUGINS for detailed diagnostics. Save review results in structured format (JSON/Markdown) for error pattern analysis.
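The review pipeline can be sketched roughly as follows. `send_to_model` is a deliberate placeholder to be wired to whichever Claude/ChatGPT API client you use, and the prompt texts are abbreviated stand-ins, not the chapter's templates:

```python
# Specialized type C (analytical) prompts, one per check.
REVIEW_PROMPTS = {
    "thread_safety": "Analyze this Qt/C++ code for data races ...",
    "memory_leaks": "Analyze this code for memory leaks ...",
    "edge_cases": "List unhandled edge cases in this code ...",
    "qt_best_practices": "Check this code against Qt best practices ...",
}

def build_review_request(check: str, code: str, context: str = "") -> str:
    """Combine a type C prompt with the code under review and any
    diagnostic context (Valgrind output, sanitizer reports, Qt logs)."""
    parts = [REVIEW_PROMPTS[check], "--- code ---", code]
    if context:
        parts += ["--- diagnostics ---", context]
    return "\n".join(parts)

def review_file(path: str, send_to_model) -> dict:
    """Run every check on one file. The model client is injected as a
    callable so the pipeline can be tested without network access."""
    with open(path, encoding="utf-8") as f:
        code = f.read()
    return {check: send_to_model(build_review_request(check, code))
            for check in REVIEW_PROMPTS}
```

Injecting the model client as a parameter is what lets a CI job swap in a stub for dry runs and the real API for actual reviews.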

Hard Level

Qt5→Qt6 Migration Framework with AI
Create a full-fledged framework for automated Qt5 to Qt6 project migration using reasoning models. The framework should: (1) analyze codebase and identify all incompatibilities; (2) generate type A prompts for porting modules; (3) automatically apply changes with backups; (4) convert .pro to CMakeLists.txt; (5) generate unit tests to verify behavior equivalence before and after migration; (6) create migration report with detailed changes. Test on a real medium-complexity Qt5 project (5000+ lines).
Hints: Start with prompts from the “Porting Prompts” section. Use decomposition and “Show-Repeat-Expand” techniques for large projects. For automation, create a Python tool using Claude/ChatGPT API. Critically important: create a git branch for rollback before applying changes. Generate tests using type A prompts from testing section. Use QTest::qCompare for equivalence checking. Document all architectural decisions and known limitations. Remember legal risks—check license compatibility of generated code.
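Step (1) of the framework, the incompatibility scan, might start like this Python sketch. The pattern table is a small illustrative subset of Qt5-only constructs, not an exhaustive migration list:

```python
import re
from pathlib import Path

# A few well-known Qt5 constructs removed or replaced in Qt 6.
QT5_PATTERNS = {
    r"\bQDesktopWidget\b": "use QScreen / QGuiApplication::screens()",
    r"\bQRegExp\b": "use QRegularExpression",
    r"\bQTextCodec\b": "use QStringConverter",
    r"\bqrand\s*\(": "use QRandomGenerator",
}

def scan_file(path: Path) -> list[tuple[int, str, str]]:
    """Return (line_number, matched_pattern, migration_hint) tuples."""
    findings = []
    lines = path.read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        for pattern, hint in QT5_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, pattern, hint))
    return findings

def scan_tree(root: Path) -> dict[str, list]:
    """Scan all C++ sources under root; keep only files with findings."""
    report = {}
    for src in root.rglob("*"):
        if src.suffix in {".cpp", ".h", ".hpp"}:
            hits = scan_file(src)
            if hits:
                report[str(src)] = hits
    return report
```

The resulting report is exactly the raw material for generating per-module type A porting prompts in step (2) and for the final migration report in step (6).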

💬 Join the Discussion!

Have you already tested reasoning models for refactoring? Run into unexpected results when porting from Qt5 to Qt6?

Share your prompts that gave outstanding results, talk about pitfalls of working with AI assistants in production, or ask questions about effective interaction techniques with Claude and ChatGPT!

🎯 Let’s discuss together: How do you balance development speed with AI and maintaining engineering thinking? What security measures do you apply when working with cloud models?
