AI with Criteria: Our Manifesto
By Samuel Granja
In January 2026, we sat down to answer an uncomfortable question: What is Sancrisoft's official stance on artificial intelligence?
It wasn't a rhetorical question. Our developers were already using Claude, Copilot, and Cursor daily. Some clients were asking whether "AI wrote their code." And across the industry, we saw two extremes playing out, neither of which represented us.
The conversation happening across the software industry, documented in depth in our guide on how AI is transforming software development, makes one thing clear: there is no neutral position. Every engineering team is choosing how to relate to AI, whether they articulate it or not. We decided to articulate ours.
The Two Extremes We Reject
The first extreme is paralyzing fear. Companies that ban AI tools out of concern about code leaks, loss of control, or dependency on the machine. They understand some of the risks. But they pay for that caution in slow delivery, developer frustration, and a widening gap against teams that are moving faster with discipline.
The second extreme is what the industry has started calling vibe coding. Developers who let the agent run unsupervised, accept generated code without reading it, and push straight to the repository. It looks like productivity. Research on AI-assisted development consistently shows that AI tools can significantly accelerate output, but that acceleration becomes a liability when it outpaces human understanding of what is being built. Vibe coding is Russian roulette with the client's codebase.
Neither extreme represents us. So we decided to write our own position.
The Process of Creating the Manifesto
The irony is that we used AI to define how we use AI.
We opened a conversation with Claude and laid out our real experience: what had worked, what worried us, and what we had learned in months of adoption across client projects. Claude helped articulate, structure, and refine. But the values came from us.
That process was, in itself, a demonstration of exactly what we wanted to communicate. AI proposes, we approve or reject. We are the architects. AI is the tool. IBM's framework for AI ethics calls this "human-centered AI": keeping human judgment at the center of consequential decisions while allowing AI to handle execution. That is precisely the dynamic we were trying to name.
The result was 10 principles we call "AI with Criteria."
The 10 Principles
1. We are the code reviewers.
AI proposes, we approve or reject. Never the other way around. This is not a philosophical preference; it is a hard operational rule. Effective code review is the mechanism that keeps human understanding anchored to every line that enters a codebase. No generated code bypasses it.
2. Every line is our responsibility.
Use AI for everything, without restrictions. But if AI wrote it and you cannot explain what it does, it is not ready yet. Responsibility does not transfer to the model. It stays with the engineer who approved the merge.
3. AI is not competition, it's leverage.
A tool that multiplies your productivity, not one that replaces you. The professional software engineering skillset (architectural thinking, requirements analysis, risk judgment) becomes more valuable when AI handles execution, not less.
4. No code gets pushed without human review. Ever.
The repository is our territory. This principle connects directly to security standards that exist precisely because automated tools, however capable, do not carry the contextual awareness to catch every class of vulnerability. Human review does.
5. Active supervision, not passive.
We use agents to accelerate, but the git diff gets reviewed completely before any push. There is a meaningful difference between "I trust the agent" and "I reviewed what the agent produced." We do the second. Always.
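At the command line, "active supervision" is a concrete ritual, not a mood. Here is a minimal, self-contained sketch: the file, the edit, and the repository are all simulated stand-ins for an agent's work, but the review step is the one we actually run before every push.

```shell
# A minimal sketch of the review ritual. The repo, file, and "agent edit"
# below are simulated for illustration; the review commands are the point.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name Dev

# baseline: code that already shipped
echo "export const retries = 3;" > config.ts
git add . && git commit -qm "baseline"

# simulate an agent modifying the working tree
echo "export const retries = 5;" > config.ts

# the non-negotiable step: read the complete diff before any push
git diff --stat   # overview: which files changed, and how much
git diff          # full line-by-line review, no skimming
```

The habit the sketch encodes: nothing leaves the working tree until every hunk in `git diff` has been read and understood by a human.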
6. When something isn't right, we redirect.
"Don't use that approach, use this one instead." We direct, AI executes. This requires engineering judgment: knowing when a generated pattern introduces technical debt, when an architecture choice contradicts an earlier decision, when the output is technically correct but strategically wrong.
7. Vibe coding is Russian roulette.
Letting the agent work unsupervised is not productivity. It is irresponsibility transferred to the client. The output might work today. Under load, under an edge case, under a security audit: that is where the absence of human understanding shows up.
8. We approach AI from curiosity, not fear.
We ask: What if I create a skill that gives me context automatically? What if I build one that uploads tickets to Jira in bulk? What if I automate the thing that costs us four hours every sprint? Fear closes those questions. Curiosity opens them.
9. The most powerful question: "What if...?"
That curiosity is what leads us to build extraordinary things, with AI as the tool.
10. The future belongs to those who use AI with criteria.
Not to those who fear it, nor to those who blindly trust it. To those who use it with criteria. That sentence is not a tagline. It is the thesis of everything above.
From Theory to Practice
One week after writing the manifesto, we built something that did not exist in our library.
We asked ourselves: what if we could convert a UI interaction video into functional React components automatically? It sounded absurd on the surface. A video shows pixels moving. Code requires structure, logic, states. The conceptual gap looked wide.
But the question nagged at us. So we ran the experiment.
In two days we had a working skill: Video-to-UI. It downloads a video, extracts frames with ffmpeg, analyzes visual patterns, generates specifications, and produces Next.js components with Framer Motion animations, ready to drop into a web development workflow.
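The front of that pipeline can be sketched in a few commands. Everything below is illustrative, not the skill's actual implementation: the filenames and sampling rate are assumptions, and a synthetic test clip stands in for a real downloaded UI video.

```shell
# A hedged sketch of the video-download and frame-extraction steps.
# Filenames and the 2 fps sampling rate are illustrative assumptions.

# 1) a short synthetic clip stands in for the downloaded UI interaction video
ffmpeg -loglevel error -y -f lavfi \
  -i testsrc=duration=2:size=320x240:rate=10 interaction.mp4

# 2) sample frames at 2 fps -- dense enough to catch hover/idle/active
#    state transitions without flooding the analysis step
mkdir -p frames
ffmpeg -loglevel error -i interaction.mp4 -vf fps=2 frames/frame_%04d.png

ls frames
```

From there, the extracted frames feed the visual-analysis and spec-generation steps; the sampling rate is a tunable trade-off between catching fast transitions and keeping the frame set small.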
Did it work perfectly on the first try? No. We had to iterate on state detection (hover vs. idle vs. active), element proportions, and SVG z-order. Each iteration required reviewing the generated code, understanding why it produced the wrong output, and redirecting. The architecture decision to use frame extraction over video stream analysis came from an engineer, not the model.
That is principles 8 and 9 in action: curiosity leading to experimentation, bounded by the active supervision that principles 1 through 7 establish.
For a more direct example of these principles under production pressure, read how our team applied them to build HIPAA-compliant MFA for a telehealth platform in four days, including a complete architectural pivot halfway through. That case study is the manifesto made concrete.
Why We're Sharing This
We do not believe our manifesto is "the truth" about AI in software development. It is our position. It works for us.
We are sharing it because the conversation matters. The industry needs more nuance between "ban everything" and "automate everything." It needs frameworks for thinking, not just tools.
Our 11-month engagement with Venice.ai, building a privacy-first AI platform from mobile to backend, ran on these principles. Every line reviewed. Every architectural decision documented. Every experiment supervised. The AI agents accelerated the work. The engineers owned it.
If you are a development team evaluating how to adopt AI, these 10 principles can serve as a starting point. Adapt them. Question them. Find where they do not fit your context, and articulate why. That process of articulation is itself valuable.
And if you're a client evaluating development partners, this is what you can expect from us: we use AI to accelerate, but we never lose control. Every line that enters your code is our responsibility. That's non-negotiable.
The Closing We Chose
After several iterations with Claude helping us pressure-test the language, we found the phrase that captures it all:
"The future doesn't belong to those who fear AI, nor to those who blindly trust it. It belongs to those who use it with criteria."
That is the Sancrisoft way. AI with Criteria.
What This Looks Like in a Partnership
If you are a CTO evaluating AI adoption for your engineering team, a VP of Engineering who needs a nearshore partner with discipline around AI tooling, or a technical founder who wants development velocity without surrendering visibility into the code, this is the kind of team you are looking for.
Our engineers in Medellín work in your timezone, use AI daily, and operate under these 10 principles on every engagement. We move fast. We review everything. And we can explain every architectural decision in the codebase.
The full picture of how these principles connect to a structured development process is in our AI software development workflow guide, the operational complement to this manifesto.
When you're ready, schedule a consultation with our team. No pitch, no pressure: just an honest conversation about how AI can accelerate your development without creating the liabilities that vibe coding leaves behind.