Software development has always balanced speed with quality. The faster you can get code out the door, the more competitive you become, but speed without validation is a recipe for risk (or even destruction some would argue). Pull requests are more frequent that ever and code changes are constant, so the challenge is to write code quickly and at the same time make sure those changes are reviewed, validated, and shipped without bottlenecks.
The focus of new generation of tools is on removing friction in the code review process: enabling pull request creation and stacking, managing them through a centralized inbox, leveraging AI-assisted reviews, and merging to production without teams blocking each other’s progress.
AI is making code generation easier than ever. But writing code is just one aspect. The real work is to orchestrate the process from prompt to production, with validation and human oversight at every step.
The Evolving Role of Code Review
Code review has long been a cornerstone of modern software engineering. Every serious development team does it, not only because it’s a best practice, but also because it’s embedded in company culture and, in many industries, mandated by compliance frameworks like SOC 2. Yet the purpose and nature of code review has changed over the decades.
Back in the day, review meant getting engineers together around printed sheets of code and looking for bugs line by line. Then there were desk checks, where a coworker would come by in person to look over the changes. With the rise of Git and platforms like GitHub, the process became formalized through pull requests, originally a way for open-source contributors to request that maintainers review and merge their “untrusted” changes. Over time, this workflow spread to closed-source teams as well, cementing the pull request as the central space where engineers evaluate changes, test functionality, and discuss design choices before merging to production.
The advent of AI-generated code has complicated this picture. Historically and logically, reviewers could assume the author had read and understood the code before submitting it. Now, in some cases, the pull request is the very first time a human lays eyes on it. When a model like GitHub Copilot or Claude produces an implementation in seconds, the human engineer may not have written, or even deeply examined, the logic.
This makes the review step more critical than ever.
From Bug Catching to Context Sharing
While early code reviews were all about finding errors, modern reviews serve a broader purpose. If a bug surfaces only because a reviewer spotted it during manual inspection, something has already failed upstream as automated testing and the author’s own checks should have caught it.
Instead, reviews are now about:
- Sharing context across the team so everyone understands the change and its rationale.
- Shaping architecture by debating the design direction before it becomes entrenched in the codebase.
- Highlighting unseen constraints such as decisions made in prior meetings or related technical dependencies.
- Maintaining cohesion so each change aligns with the overall product vision.
The pull request has become the central hub for collaboration, an ongoing conversation about not just “what does this code do?” but “how does this code fit into what we’re building?”
In an AI-assisted reality, this emphasis on context will only grow. Models will likely surpass humans at catching routine bugs and even generate their own tests, but they won’t understand product vision, long-term trade-offs, or organizational priorities unless humans guide them. That guidance happens in the review. It’s where engineers, and increasingly AI agents, negotiate correctness and purpose.
The New Challenge is to Maintain Context Without Writing the Code
Although AI improves code quality and reduces bugs, humans remain indispensable. The true value isn't just to produce code, but also to understand which questions to ask, which trade-offs to weigh, and how each change fits into the overall product and organizational goals.
However, if engineers aren't writing much of the code themselves, how do they stay connected to the system's changing structure? Traditionally, understanding a codebase came naturally. New team members would begin by fixing bugs, making minor improvements, and gradually absorbing the context, how modules interacted, where vulnerable points existed, and why certain architectural decisions had been made.
In an AI-assisted workflow, passive context accumulation is no longer guaranteed. A developer may be asked to review or troubleshoot code they did not write and had little interaction with. When a manager reviews code after months away from hands-on development, they may struggle to provide meaningful feedback due to a lack of attention to detail. A developer may use AI models like Claude to generate a feature and then review incremental changes without understanding the original logic.
The challenge in both cases is that without ongoing, hands-on engagement, context deteriorates. And context is precisely what is required for meaningful, high-stakes reviews.
Toward New Practices for Context Retention
If writing code is no longer the primary vehicle for understanding a system, teams will need new habits and tools to replace what used to happen automatically. This might include regular code walkthroughs, structured reading sessions, or even dedicated time for “conversing” with the codebase like querying recent changes, exploring architectural diagrams, and debating technical direction.
Code review can help maintain context, but it’s not enough on its own. By the time critical review work is needed, the reviewer should already have a working mental model of the system. AI can generate large, complex changes in minutes, therefore building and maintaining that mental model will become one of the most important, and most overlooked, skills in software engineering.
How AI Is Changing the Developer Experience
For many engineers, the joy of development comes from solving problems through the act of coding, treating it like a craft, with its own language, structure, and elegance. AI is now altering that experience. While it won’t remove the human role, it is changing how engineers spend their time.
Today, even without AI, a significant portion of engineering work isn’t “hands on keyboard” writing code. It’s everything that happens around it: getting changes through continuous integration, discussing designs with teammates, merging updates, monitoring staging environments, handling incidents, rolling back when necessary, and deciding what comes next. AI can speed up code creation, but the surrounding activities remain and require human oversight.
As AI speeds up code changing, engineers will spend more time managing those changes so they integrate smoothly, function as intended, and align with strategic objectives. In this sense, the developer experience is going towards orchestration rather than pure authorship. The skill will lie in directing the process, not just in writing every line.
This shouldn’t be considered as a loss. Problem-solving, invention, and delivering value to users have always been the real essence of engineering. The code itself is just a medium. Just as engineers no longer write in assembly for every project, future workflows may rely less on manual coding and more on prompting, reviewing, and guiding AI output to the desired outcomes.
The Role of Stacked Workflows
One development that is already taking place is the increased importance of stacked workflows. Large pull requests have always been inconvenient to review, error-prone in continuous integration, and risky to rollback. AI has not changed this. If anything, it has made small, incremental changes more valuable.
Stacking enables engineers to work on multiple changes in parallel without becoming stalled. A developer can make a pull request, let it go through review and testing, and then immediately start another one on top of it. When code generation is fast, as it is with AI, this approach becomes critical. Instead of submitting a 5,000-line pull request that is nearly impossible to review, developers (or AI agents) can divide work into smaller, reviewable checkpoints. These can be tested, validated, and merged in order or in batches, reducing risk and keeping progress moving.
Stacked workflows have been around for more than a decade, but AI-assisted coding is increasing their relevance. For AI agents capable of producing large amounts of code in minutes, stacking is required for safe, efficient, and collaborative development.
The Accountability Problem in Code Reviews
In the context of developer tools, this accountability gap has practical implications. Consider a security vulnerability such as a SQL injection or an insecure IDOR (Insecure Direct Object Reference). If an AI-assisted code review lets such a flaw through, who is responsible?
You can’t “fire” an AI. You likely can’t sue the AI vendor for a single missed vulnerability in a massive codebase. As a result, end-users and organizations will still expect engineers to be accountable for the code, no matter how automated the review process becomes. This reality is why, at least for now, AI is better positioned as an assistant rather than a final authority.
Human Oversight Is Still Non-Negotiable
Even the most advanced AI code reviewers, those capable of detecting minor issues, enforcing style guidelines, and providing pre-review commentary, cannot replace human judgment.
A human reviewer looks at more than just the difference. They provide contextual understanding of the product's direction, risk tolerance, and architectural trade-offs. They can determine if a failure is acceptable or catastrophic. AI lacks the broader perspective.
Security exacerbates the problem. Large language models are vulnerable to prompt injection attacks, which involve inserting crafted comments or instructions into code changes to trick the AI into overlooking or even approving dangerous vulnerabilities. A human reviewer would quickly detect malicious intent, whereas an AI might not.
Why the Bar for AI in Software Security Is So High
Generally speaking a software environment is inherently hostile. Bad actors are constantly looking for ways to exploit systems. This is different from AI in self-driving cars or voice assistants, where most interactions are benign.
Because online systems face continuous, targeted attacks, any AI reviewer must be exceptionally resilient before it can be trusted with production-critical or security-sensitive decisions. Until AI can reliably handle that level of malicious creativity, its role will remain assistive rather than authoritative in high-stakes development environments.
The Engineer’s Role Is Evolving, Not Disappearing
Despite the frequent claims circulating on social media that AI will soon replace engineers, the reality is far more nuanced.
As AI-generated code becomes commonplace, the scope of potential vulnerabilities will expand. Engineers will need to ensure that AI-assisted development does not inadvertently introduce new points of failure. This includes addressing emerging threats like prompt injection and other unconventional attack vectors that traditional development practices never had to account for.
If we could bring a software engineer from 40 years ago into today’s environment, they might see our tools and processes as nearly science fiction. Likewise, the next few decades will bring changes that are just as transformative for the current generation of developers.
How Solwey Can Help
Building tech products isn’t easy. But it is doable especially if you approach it with clarity, focus, and the right mindset.
If you’re unsure where to start, we at Solwey can help you formulate a plan. Just tell us about your challenges and what’s holding you back. We can guide you through finding a solution, whether that means optimizing existing tools or building something new.
Our personalized service involves working closely with you to understand your particular challenges and developing solutions that are suited to your specific requirements, rather than the other way around.
With a strong background in custom software development, we bring industry expertise to every project, delivering software that not only works, but works for you. Whether you work in finance, healthcare, retail, or manufacturing, our industry-specific solutions are tailored to the specifics of your field.
You don't have to sacrifice price to get exceptional service. Our competitive pricing structure ensures that you receive high-quality custom software without breaking the bank. With our agile processes, we can deliver results faster, allowing you to respond quickly to market demands or operational changes.
We place a high value on dependability and customer support. We will be there for you from start to finish, and beyond. Our team is committed to providing seamless support, ensuring that your software runs smoothly and your business runs more efficiently.
Allow us to be your trusted partner in driving your digital transformation. Choose Solwey for quick, adaptable, and dependable software solutions that will keep you ahead of the competition.
