
By 2025, AI wasn’t just helping developers write code faster.
It was reshaping how changes moved through teams.
Refactors touched shared services. Infrastructure files were updated across repositories. Dependency changes happened in batches instead of one at a time. In some environments, those changes advanced as long as automated tests cleared.
Nothing about the process looked reckless. The pipeline still ran. Reviews still happened. Deployments still followed policy.
What changed was clarity.
Teams could see what changed. They could not always explain why it changed, how far the impact reached, or who fully understood the downstream consequences.
That’s where secure-by-default stops being a security slogan and starts becoming an architectural requirement.
If AI participates in how changes move toward production, the workflow itself has to carry accountability. Otherwise you are depending on memory, speed, and trust under pressure.
The first wave of AI adoption inside engineering teams felt controlled.
Developers used it to draft a function, clarify a section of unfamiliar code, or speed up a test suite.
The output still moved through the same checkpoints teams trusted. A human reviewed it. Approval was required before anything advanced.
From the outside, the process looked unchanged.
That’s what made it easy to embrace.
The shift happened when AI stopped being a drafting tool and started interacting with the delivery system itself.
Teams began allowing AI to restructure shared components, adjust infrastructure configuration, update dependencies across services, and apply refactors that touched multiple parts of the system at once.
In some environments, those changes advanced automatically if automated checks passed.
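In many pipelines, that gate amounts to little more than the sketch below. The names are illustrative, not any particular CI system's API; the point is what the gate does not record: who or what authored the change, why it was made, or who understood its reach.

```python
# Illustrative sketch of an "advance on green" gate. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    passed: bool

def merge_and_deploy(change_id: str) -> None:
    # Stand-in for the real promotion step (merge, build, deploy).
    print(f"promoting {change_id}")

def advance_if_green(change_id: str, checks: list[Check]) -> bool:
    """Promote the change as soon as every automated check passes."""
    if all(c.passed for c in checks):
        merge_and_deploy(change_id)
        return True
    return False
```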
GitHub’s expansion of Copilot into enterprise workflows made this visible. What began as suggestions evolved into structured changes applied directly within repositories.
The scope widened. The pace increased. The system moved faster than any single person could fully hold in context.
At that point, AI was no longer just helping someone write code. It was influencing what ultimately reached production.
When that becomes true, security can’t remain something verified at the end of a release cycle. It has to be embedded in how change is governed from the start.
Software risk used to be framed around defects. Bugs that surfaced under load. Vulnerabilities that slipped through review. Performance regressions that showed up after release.
Teams built testing and review processes around those categories.
AI changes where the pressure lands.
The harder problem is not whether the system functions. It’s whether the path of change can be clearly reconstructed.
In environments where AI is integrated into the development workflow, releases can remain stable while visibility erodes.
Automated checks pass. Nothing obvious breaks. Weeks later, a small inconsistency appears somewhere downstream.
The technical issue can usually be corrected.
What slows the room down is the question no one can answer quickly: how exactly did this move through the system?
Which changes were introduced intentionally. Which were generated. Who fully understood the scope at the time. Whether the approval reflected comprehension or velocity.
That is a different category of risk.
When teams lose the ability to clearly account for change, security conversations stop being about code quality and start being about accountability.
When clarity starts to erode, most teams respond by leaning more heavily on review. The assumption is simple: if more humans look at the change, the risk is contained.
That assumption held when the volume and scope of change were relatively predictable.
Reviewers could understand the intent, trace the impact, and evaluate the tradeoffs without carrying an unreasonable cognitive load.
AI alters that balance.
As its output scales, the size and reach of changes expand in ways that are technically valid but harder to fully absorb.
Automated checks confirm functionality. The formal approval process remains intact. What becomes thinner is the depth of understanding behind that approval.
The signature is still there. The comprehension is not always proportional to the scope.
This is not a failure of diligence. It is a mismatch between the rate of change and the limits of human context.
Review was designed for a world where changes were introduced at a pace people could comfortably reason through.
When AI increases both the speed and the breadth of modification, review alone no longer guarantees accountability.
When teams recognize that AI has introduced new risk, the instinct is often to add controls on top of what already exists.
More logging. Additional audits. New policy language. Governance committees.
Those measures can create the appearance of oversight, but they don’t solve the underlying problem if AI has already been allowed to move through the system without enforced provenance.
If intent is not captured at the moment of change, it cannot be reconstructed later. Activity logs show what happened.
They rarely explain why it happened or whether the scope of impact was fully understood at the time.
Rollback plans may exist on paper, but they do not restore clarity.
This tension has surfaced most visibly in regulated environments, where compliance reviews require teams to explain the reasoning behind technical decisions.
Organizations can often demonstrate the sequence of changes. What becomes harder is demonstrating the chain of understanding behind those changes.
Security teams are then asked to explain gaps that originated in the workflow itself.
Security cannot be bolted on after AI is already participating in delivery.
If the workflow does not enforce traceability and accountability by design, no amount of retroactive oversight will fully restore it.
Secure-by-default is not about adding more tools.
It’s about designing the workflow so certain guarantees exist before anyone is under pressure.
Intent is preserved at the moment a change is introduced. Approvals reflect actual scope, not surface validation.
The ability to reverse course is established before anything moves forward. The record survives the handoff between human judgment and automated systems.
In that environment, AI can participate without erasing accountability.
If a system cannot clearly indicate where a change originated, how it was evaluated, and what boundary authorized it to advance, the problem is not a missing policy. It is a missing design constraint.
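One way to read that constraint concretely: the change record itself has to carry provenance, and the gate has to refuse anything incomplete. The sketch below is an illustration under assumptions (the ChangeRecord type, its field names, and the may_advance gate are all hypothetical), not a description of any existing platform.

```python
# Illustrative sketch: a change record that carries provenance, and a gate
# that refuses to advance a change unless the record is complete.
# Field names are assumptions for illustration, not any platform's schema.
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    change_id: str
    origin: str                 # "human", "ai-assisted", or "ai-generated"
    intent: str                 # why the change was made, captured at creation
    declared_scope: list[str]   # services/paths the change claims to affect
    approved_by: str | None     # the human who reviewed the full scope
    approved_scope: list[str] = field(default_factory=list)
    rollback_plan: str | None = None

def may_advance(record: ChangeRecord) -> bool:
    """Allow the change to move forward only if accountability is intact."""
    if not record.intent or not record.origin:
        return False    # intent and origin must exist at the moment of change
    if record.approved_by is None:
        return False    # a named human authorized the boundary
    if set(record.declared_scope) - set(record.approved_scope):
        return False    # approval must cover the actual scope, not a subset
    if record.rollback_plan is None:
        return False    # reversibility established before anything moves
    return True
```

The specific fields matter less than the shape: the workflow, not the reviewer's memory, refuses to advance a change whose origin, intent, scope, approval, and rollback path are not on record.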
Some organizations have recognized this early. Stripe’s approach to integrating AI into sensitive domains has been deliberate and constrained.
AI is introduced where inputs are structured and outcomes are measurable. Human approval is required at defined boundaries. Reversibility is engineered into the process before expansion is allowed.
The constraint is intentional.
Security is not something reviewed after the fact. It is part of how movement is governed.
We believe that business is built on transparency and trust. We believe that good software is built the same way.
Secure-by-default is simply what that belief looks like when AI is allowed to participate in delivery.
When AI is allowed to influence what reaches production without clear boundaries, ownership begins to blur.
If a change creates downstream impact, responsibility cannot be answered with “the model generated it.” Someone configured the system. Someone approved the boundary. Someone accepted the risk of automation at that layer.
In incident response, those distinctions matter. Boards, regulators, and customers are not asking which model produced the output. They are asking who understood the scope of the change and why it was allowed to advance.
Ownership does not survive by default in AI-assisted workflows. It has to be intentionally designed.
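A minimal way to make that ownership explicit is to declare boundaries and their accountable humans before automation touches them, in the spirit of a CODEOWNERS file. The mapping and helper below are hypothetical illustrations, not a specific tool's interface.

```python
# Illustrative sketch: ownership boundaries declared up front, so every change
# that crosses a boundary has a named accountable human.
BOUNDARY_OWNERS = {
    "payments/": "alice",       # hypothetical owner of the payments service
    "infra/terraform/": "bob",  # hypothetical owner of infrastructure config
}

def required_approvers(changed_paths: list[str]) -> set[str]:
    """Return the humans who must approve before the change can advance."""
    owners = set()
    for path in changed_paths:
        for prefix, owner in BOUNDARY_OWNERS.items():
            if path.startswith(prefix):
                owners.add(owner)
    return owners

# Example: an AI-generated refactor touching both boundaries needs both owners.
print(required_approvers(["payments/ledger.py", "infra/terraform/vpc.tf"]))
# prints a set containing 'alice' and 'bob'
```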
This is where many teams reach for tools. New scanners. Additional policies. Dedicated AI governance platforms.
Tools can surface activity. They cannot enforce clarity.
Whether AI participation is legible or opaque depends on how the system is structured. The architecture determines whether teams can explain their own decisions or rely on distributed trust under pressure.
That distinction is not about vendor selection. It is about system design.
AI did not introduce risk into software delivery. It exposed how much of that risk was being managed informally.
For years, teams relied on shared context, disciplined review, and steady velocity to maintain accountability. That worked because the rate and scope of change were human-scale.
AI expands both.
When systems are allowed to evolve at machine speed, clarity cannot remain dependent on memory or distributed understanding. It has to be embedded in how change is introduced, evaluated, and advanced.
Secure-by-default is not a security preference in that environment. It is the mechanism that keeps ownership intact when automation scales.
Teams that recognize this will not abandon AI. They will constrain it intentionally. They will design workflows that preserve traceability before they need it. They will treat explainability as part of delivery, not an afterthought to it.
In 2026, the dividing line will not be between teams that use AI and teams that do not.
It will be between teams that can clearly account for how their systems evolve — and teams that cannot.
