Guide

Engineers & AI: Collaborating with Large Models Safely

Approx. 15 min read • Code, tests, architecture

1. Use models as reviewers, not as the source of truth

The safest way for engineers to work with LLMs is to treat them as reviewers and assistants, not as the final authority. This means asking for explanations, alternatives, and edge cases instead of blindly accepting generated code.

2. Structure prompts around diffs and constraints

When reviewing code, you get the best results by feeding the model a small, focused diff plus the relevant context (interfaces, tests, configs). Ask questions like “What could break?”, “What performance issues could this introduce?” or “How would you test this behaviour?”.
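As a minimal sketch, the pattern above can be captured in a small helper that assembles context, diff, and review questions into one prompt. The function name and question list are illustrative, not part of any particular tool's API:

```python
# Sketch of a diff-focused review prompt builder (illustrative names).

REVIEW_QUESTIONS = [
    "What could break?",
    "What performance issues could this introduce?",
    "How would you test this behaviour?",
]

def build_review_prompt(diff: str, context_files: dict[str, str]) -> str:
    """Assemble a focused prompt: context first, then the diff,
    then explicit questions so the model reviews rather than rewrites."""
    parts = ["You are reviewing a small change. Relevant context:"]
    for path, content in context_files.items():
        parts.append(f"--- {path} ---\n{content}")
    parts.append("Proposed diff:\n" + diff)
    parts.append("Please answer:")
    parts.extend(f"- {q}" for q in REVIEW_QUESTIONS)
    return "\n\n".join(parts)
```

Keeping the diff small and the questions explicit steers the model toward critique rather than wholesale rewriting.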

3. Let models propose tests before implementations

A powerful pattern is to invert the usual flow: describe the behaviour you want, then ask the model to propose a suite of tests first. Once the tests look reasonable, you can implement or refactor the code yourself, using the tests as a safety net.
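To make the flow concrete, here is a hypothetical example: the model proposes tests for a described slugify() helper, and the implementation is then written to satisfy them. Both the function and the tests are invented for illustration:

```python
# Tests-first flow: the tests describe the desired behaviour of a
# hypothetical slugify() helper; the implementation comes afterwards.

def slugify(title: str) -> str:
    """Implementation written after the tests, using them as a safety net."""
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_is_dropped():
    assert slugify("C++: a tour!") == "c-a-tour"

def test_already_clean_slug_is_unchanged():
    assert slugify("release-notes") == "release-notes"
```

If the proposed tests miss a case you care about, that gap surfaces before any implementation exists, which is exactly the point of inverting the flow.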

4. Be explicit about non‑functional requirements

Many disappointing AI code suggestions stem from missing non‑functional requirements: performance, security, readability, compatibility. Make these constraints first‑class citizens in your prompts, and you will see much more useful output.
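One way to make those constraints first-class is a template that spells them out before the code. The constraint list below is an example set, not a fixed schema:

```python
# Illustrative prompt template with explicit non-functional requirements.

CONSTRAINED_PROMPT = """\
Refactor the function below.

Constraints (all mandatory):
- Performance: must stay O(n); no extra passes over the input.
- Security: no eval/exec; validate all external input.
- Readability: follow PEP 8; keep the public signature unchanged.
- Compatibility: Python 3.9+, standard library only.

{code}
"""

def constrained_prompt(code: str) -> str:
    """Embed the code to refactor under an explicit constraint list."""
    return CONSTRAINED_PROMPT.format(code=code)
```

Stating the constraints up front costs a few lines but removes the guesswork that otherwise produces plausible-looking yet unusable suggestions.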

5. Capture reusable patterns in your own dictionary

Over time, you will notice recurring patterns: “explain this legacy function”, “suggest a safer refactor”, “brainstorm test cases for this bug”. Turning these into shared templates inside boomPrompt makes it easier for the whole engineering team to benefit, while keeping humans firmly in control of what gets merged.
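A shared template dictionary can be as simple as named format strings. This is a generic sketch; boomPrompt's own template schema may differ:

```python
# Hypothetical shared template registry keyed by pattern name.

TEMPLATES = {
    "explain-legacy": "Explain what this legacy function does, step by step:\n{code}",
    "safer-refactor": "Suggest a safer refactor of this code; preserve behaviour:\n{code}",
    "brainstorm-tests": "Brainstorm test cases for this bug report:\n{report}",
}

def render(name: str, **fields: str) -> str:
    """Fill a named template; raises KeyError for unknown names or fields."""
    return TEMPLATES[name].format(**fields)
```

Keeping the registry in version control lets the team review prompt changes the same way it reviews code, while merge decisions stay with humans.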