Robert C. Martin published "Clean Code" in 2008. Eighteen years later, AI tools can generate hundreds of lines of code in seconds. The natural question is whether the painstaking craft of writing clean code still matters when production velocity is measured in prompts per hour rather than keystrokes per minute.
The answer is an emphatic yes, and here is why: the ratio of time spent reading code to writing code has not changed. It is still roughly 10:1. AI has made writing faster, but it has not made reading, understanding, debugging, and modifying code any easier. If anything, the flood of AI-generated code has made clean code practices more essential, because more code now exists that nobody manually reasoned through line by line.
At Pepla, we use AI coding assistants daily. We also apply clean code standards rigorously. These practices are complementary, not contradictory.
Naming: The Foundation of Readability
Martin's first and most important rule: names should reveal intent. A variable named d tells you nothing. A variable named elapsedTimeInDays tells you everything. This sounds obvious, but AI-generated code routinely produces names like result, data, temp, item, and val that carry no semantic meaning.
Good naming eliminates the need for comments. Compare these two approaches:
```typescript
// Check if employee is eligible for overtime
const flag = getVal(emp, 'ot') > 0 && getDays(emp) >= 5;
```
Versus:
```typescript
const isEligibleForOvertime =
  employee.overtimeHoursThisMonth > 0 &&
  employee.workingDaysThisWeek >= 5;
```
The second version requires no comment. The code is the documentation. When reviewing AI output, renaming variables and functions to express intent is often the single highest-value edit you can make.
Naming Conventions for Teams
Consistency in naming conventions across a codebase is as important as the names themselves. At Pepla, we maintain a naming guide for each project that specifies conventions for services, repositories, DTOs, event handlers, and configuration. When AI generates code, we check that it follows these conventions before merging. It rarely does by default.
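As an illustration only (the suffixes and names below are hypothetical, not an excerpt from Pepla's actual guide), a convention entry might pin down suffixes such as Service for business logic, Repository for persistence access, and Dto for data crossing an API boundary:

```typescript
// Hypothetical naming conventions: Dto for boundary data,
// Repository for persistence, Service for business logic.

interface CreateInvoiceDto {
  customerId: string;
  amountInCents: number;
}

class InvoiceRepository {
  private readonly invoices: CreateInvoiceDto[] = [];

  save(invoice: CreateInvoiceDto): void {
    this.invoices.push(invoice);
  }

  count(): number {
    return this.invoices.length;
  }
}

class InvoiceService {
  constructor(private readonly repository: InvoiceRepository) {}

  createInvoice(dto: CreateInvoiceDto): void {
    this.repository.save(dto);
  }
}
```

The value is not in any particular suffix but in the consistency: a reviewer who sees `InvoiceService` knows immediately what layer the code belongs to.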
Functions: Small, Focused, One Level of Abstraction
Martin's rules for functions are straightforward: they should be small, do one thing, and operate at a single level of abstraction. AI-generated functions frequently violate all three rules. A single function might validate input, query the database, apply business rules, format the response, and log the result. It works, but it is unmaintainable.
The test for function size is not a line count. It is whether the function can be described with a single "to" statement: "This function validates the order request." If you need the word "and" in the description, the function does too much.
Extract until you cannot extract anymore. Then ask yourself if the remaining function reads like well-written prose. If it does, you are done.
The step-down rule is equally important: code within a function should read top-down at a consistent level of abstraction. High-level orchestration code should not be mixed with low-level string manipulation. This is one of the most common issues in AI-generated code, where the model produces a correct but flat implementation that interleaves strategic logic with tactical details.
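A sketch of what the step-down rule looks like in practice (the order-processing names below are illustrative, not taken from a real codebase): the orchestrating function reads top-down at one level of abstraction, and the tactical details live in small, single-purpose helpers.

```typescript
interface OrderRequest {
  items: { sku: string; quantity: number }[];
  customerId: string;
}

// High-level orchestration: every line reads at the same level.
function processOrder(request: OrderRequest): string {
  validateOrderRequest(request);
  const total = calculateTotalQuantity(request);
  return formatConfirmation(request.customerId, total);
}

// Low-level details, one responsibility per function.
function validateOrderRequest(request: OrderRequest): void {
  if (request.items.length === 0) {
    throw new Error(`Order for ${request.customerId} has no items`);
  }
}

function calculateTotalQuantity(request: OrderRequest): number {
  return request.items.reduce((sum, item) => sum + item.quantity, 0);
}

function formatConfirmation(customerId: string, total: number): string {
  return `Order confirmed for ${customerId}: ${total} item(s)`;
}
```

Each function passes the "to" test: "to process an order", "to validate the request", "to calculate the total quantity". None of them needs an "and".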
Comments: When and Why
The best comment is the one you did not need to write because the code is self-explanatory. But some comments are genuinely valuable:
- Legal comments: Copyright notices and licence terms
- Intent comments: Explaining why a non-obvious approach was chosen, not what the code does
- Warning comments: "This function is called by a CRON job and must complete in under 30 seconds"
- TODO comments: Temporary markers for known technical debt (with ticket numbers)
- API documentation: JSDoc, XML docs, or equivalent for public interfaces
Comments that merely rephrase the code, explaining what rather than why, are noise. AI tools generate these prolifically. A comment like // increment counter above counter++ wastes everyone's time. Strip them.
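A contrived contrast (the retry scenario below is illustrative, not from a real system): the first comment restates the code; the second records a reason the code cannot express on its own.

```typescript
// Noise: restates the code.
// let attempts = 0;  // initialise attempts to zero

// Signal: explains why, which the code cannot say on its own.
// The payment gateway intermittently returns 503s during its nightly
// maintenance window, so we retry up to three times before giving up.
const MAX_PAYMENT_ATTEMPTS = 3;

function shouldRetry(attempt: number): boolean {
  return attempt < MAX_PAYMENT_ATTEMPTS;
}
```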
Formatting: Visual Structure Matters
Code formatting is not an aesthetic preference. It is a communication tool. Consistent indentation, spacing, and grouping help readers navigate the code and understand its structure. Martin describes it as the "newspaper metaphor": the most important details at the top, with supporting details flowing downward.
In 2026, automated formatters (Prettier, Black, gofmt) handle mechanical formatting. But higher-level organisational formatting (how you group related functions, where you place imports, how you order class members) still requires human judgement. We follow a consistent ordering within files:
- Constants and type definitions at the top
- Public interface (exported functions and classes) next
- Private implementation details below
- Helper functions at the bottom
This ordering means readers can understand the module's purpose from the first screen of code without scrolling through implementation details.
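A skeleton of that ordering in a single TypeScript module (the pagination example is a placeholder, not code from the article):

```typescript
// 1. Constants and type definitions at the top.
const DEFAULT_PAGE_SIZE = 20;

interface Page<T> {
  items: T[];
  pageSize: number;
}

// 2. Public interface: what the module is for.
export function paginate<T>(
  items: T[],
  pageSize: number = DEFAULT_PAGE_SIZE,
): Page<T>[] {
  return chunk(items, pageSize).map((chunkItems) => ({
    items: chunkItems,
    pageSize,
  }));
}

// 3 and 4. Private implementation details and helpers below.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

A reader who only sees the top half of the file still learns the module's purpose and its public contract.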
Error Handling: Separate from Business Logic
Error handling is one of the hardest things to get right, and AI-generated code frequently handles it poorly. Common anti-patterns include swallowing exceptions silently, catching overly broad exception types, mixing error handling with business logic, and returning null instead of throwing.
Clean error handling principles:
- Use exceptions rather than error codes. Exceptions separate the happy path from error handling, making both easier to read.
- Create domain-specific exception types. A PaymentDeclinedException is more useful than a generic RuntimeException with a string message.
- Do not return null. Null return values force every caller to add null checks. Use Optional types, empty collections, or the Null Object pattern instead.
- Fail fast. Validate inputs at system boundaries and throw immediately. Do not let invalid data propagate deep into your system before failing.
- Log with context. When catching an exception, include the operation being performed, the relevant identifiers, and the full exception chain.
When reviewing AI-generated error handling, the most common fix is adding specificity. Generic catch blocks need to become specific. Generic log messages need identifiers and context. "Something went wrong" needs to become "Failed to process payment for order #12345: card declined by issuer."
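A minimal sketch of that fix, assuming a hypothetical payment flow: a domain-specific exception that carries identifiers in its message, plus fail-fast validation at the boundary.

```typescript
// Domain-specific exception instead of a generic Error with a vague message.
class PaymentDeclinedException extends Error {
  constructor(
    public readonly orderId: string,
    public readonly reason: string,
  ) {
    super(`Failed to process payment for order ${orderId}: ${reason}`);
    this.name = "PaymentDeclinedException";
  }
}

// Fail fast at the boundary, keeping the identifiers in the message.
function chargeOrder(orderId: string, amountInCents: number): string {
  if (amountInCents <= 0) {
    throw new PaymentDeclinedException(orderId, "non-positive amount");
  }
  return `charged ${amountInCents} for ${orderId}`;
}
```

A caller catching PaymentDeclinedException gets the order ID and the decline reason without parsing strings, and the log line writes itself.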
When to Accept AI Output and When to Refactor
Not every piece of AI-generated code needs refactoring. The decision should be based on the code's expected lifespan and change frequency:
- Accept as-is: One-off scripts, prototypes, proof-of-concept code, and throwaway utilities. If it works and nobody will maintain it, do not polish it.
- Light refactoring: Standard feature implementations with clear scope. Rename variables, extract a function or two, add proper error handling. This takes minutes and pays dividends immediately.
- Significant refactoring: Core domain logic, shared libraries, and infrastructure code. These deserve the full clean code treatment because they will be read, modified, and depended upon for years.
- Rewrite: When the AI-generated approach is architecturally wrong, structurally mismatched with your codebase, or fundamentally at odds with your conventions. Use the AI output as a specification of the behaviour you want, then implement it properly.
The judgement of when to refactor versus when to accept is itself an engineering skill. It requires understanding the codebase, anticipating change, and making pragmatic decisions about where to invest effort. This judgement is exactly what separates a software engineer from a prompt operator.
Clean code is not about perfection. It is about professionalism. It is about leaving the codebase better than you found it, whether the code was written by you, a colleague, or a machine. The principles Robert C. Martin articulated in 2008 remain the best framework we have for making that happen.




