CRITICAL AI
FOR HIGH-RISK DOMAINS

Neural-symbolic AI for environments where failure has a cost. 100% local, fully explainable, audit-ready — built for EU AI Act Articles 13–14, while the rest of the industry asks for extensions.

Let's Talk

AI demos are easy.
Critical AI is the hard part.

Hallucinations on the production line. Data leaving your infrastructure. A model nobody can explain to a regulator. Three reasons most AI dies before deployment in regulated environments — and Critical AI is what's left when you can't afford any of them.

What you get

Neural-symbolic, surgically editable

HLM (Hopfield Layer Modeling) architectures let you edit trained models surgically — no retraining required. Fix one behaviour without breaking the rest. Beyond Transformers, built for recoverability in domains where a redeploy isn't an option.
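To make the idea concrete, here is a minimal, hedged sketch of surgical editing in a generic modern Hopfield-style associative memory (softmax retrieval over stored patterns). This is an illustration of the principle only, not the HLM architecture itself: one stored pattern is overwritten directly, with no gradient steps and no retraining, while the others stay intact.

```python
import numpy as np

# Generic modern Hopfield-style memory: patterns stored as rows of `memory`,
# retrieval is a softmax-weighted average of stored patterns.
# "Surgical edit" = overwrite one stored row in place. Illustrative only;
# not a reproduction of the proprietary HLM architecture described above.

beta = 8.0  # inverse temperature; higher means sharper, more one-hot retrieval

def retrieve(memory: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Softmax attention over stored patterns, returns the retrieved vector."""
    weights = np.exp(beta * (memory @ query))
    weights /= weights.sum()
    return weights @ memory

rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 16))                 # four stored "behaviours"
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

query = memory[2] + 0.05 * rng.normal(size=16)    # noisy cue for pattern 2
before = retrieve(memory, query)                  # retrieves close to pattern 2

# Surgical edit: overwrite only the faulty slot. Slots 0, 1, 3 are untouched.
memory[2] = -memory[2]
after = retrieve(memory, query)                   # same cue, changed behaviour
```

The edit is a single row write: retrieval for the same cue changes because that one slot changed, and nothing else in the memory is touched, which is the recoverability property the paragraph describes.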

EU AI Act, ready before the deadline

Articles 13 and 14 demand transparency and human oversight for high-risk AI. The deadline just moved to 2027–2028 because most systems can't meet the requirements yet. Yours can — designed for compliance from day one.

Sovereignty by design

100% on-premise or air-gapped deployment. No data leaving your infrastructure, no cloud dependencies, no third-party inference. For organisations that can't afford a vendor breach at the heart of their AI strategy.

Explainable in production

No hallucinations on the production line. Full audit trails for every inference. When a regulator or executive asks "why did it decide that?", you have a real, traceable answer that holds up in the meeting.

Where I've done this before

Most of what I've built isn't public. Here are a few of the industries it's lived inside.

Offshore

Critical infrastructure on oil platforms. Software faults aren't measured in bug reports — they're measured in lives, budgets, and environmental impact. The systems stayed up.

Manufacturing

Tier-one automotive suppliers. Production lines where quality and uptime are in the contract. Delivered on both.

Today

Governance tooling and audit trails for clients whose AI has to survive regulatory review. It does.

Currently running

Four projects I'm building under Cognitive Tech. Each one started because a client needed it and nothing on the market solved it the way it needed to be solved.

qriton.com

AI governance and orchestration for regulated industries. The infrastructure that turns AI experiments into production systems auditors will sign off on.

igov.ro

Civic transparency platform for Romania. Government data made readable — for citizens who want to understand it and journalists who need to cite it.

confychat.com

Real-time collaboration and communication infrastructure. Low-latency, end-to-end, built for teams that can't rely on consumer tools.

airadiohost.com

Automated aggregation and broadcast of AI development news. A radio station that reads the frontier — runs unattended, twenty-four hours a day.

Marius Dima

Founder & Principal Engineer — Cognitive Tech Projects

Over two decades of engineering experience delivering mission-critical software in environments where failure has material consequences — offshore energy operations, tier-one automotive manufacturing, and AGI research at a global supplier laboratory.

Specialised in explainable AI, neural-symbolic architectures, energy-based models, and governance systems designed for EU AI Act compliance. Holds three patents in AI governance and pattern recognition. Published author and regular speaker on explainable AI and regulatory compliance, based in the European Union and working globally.

Client engagements are direct and hands-on. You work with one principal engineer backed by twenty years of domain context — not a consultancy team. I write the code, ship the system, and produce the documentation your audit committee and regulators require to approve deployment.

On regulation, critical decisions, and what comes next.

The European Union is bringing AI regulation that current models can't always meet, amid debate, deadlines, and postponements. The high-risk provisions, originally set for August 2026, were just pushed to December 2027 for stand-alone systems and August 2028 for embedded ones. Beneath the noise, the EU is also offering a clear picture of what European AI should look like next. We are not there yet.

In critical decision-making, mistakes are unaffordable. Explainability and control have to be present at every step of the operation.

Creativity will be handled by flat architectures such as Transformers. The harder, world-grounded tasks need architectures built for them — energy-based models. That is exactly the focus of Qriton Technologies: the foundation for the next level of Critical AI.
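A toy example of what "energy-based" means in practice, hedged and generic (the quadratic energy and variable names are illustrative, not Qriton's models): an energy-based model scores states with an explicit energy function and answers queries by descending toward low-energy states, so every inference step is an inspectable move on a defined surface rather than an opaque forward pass.

```python
import numpy as np

# Minimal energy-based model sketch (generic illustration only).
# Inference = minimising an explicit energy function by gradient descent.

def energy(x: np.ndarray, mu: np.ndarray) -> float:
    """Quadratic energy: lowest at the learned prototype `mu`."""
    return 0.5 * float(np.sum((x - mu) ** 2))

def descend(x: np.ndarray, mu: np.ndarray, lr: float = 0.1, steps: int = 200) -> np.ndarray:
    """Gradient descent on the energy surface: each step is explicit and auditable."""
    for _ in range(steps):
        x = x - lr * (x - mu)   # gradient of the quadratic energy w.r.t. x
    return x

mu = np.array([1.0, -2.0, 0.5])   # stands in for a trained model's minimum
x0 = np.zeros(3)                   # initial guess / partial observation
x_star = descend(x0, mu)           # converges to mu; energy(x_star, mu) ~ 0
```

The point of the sketch is the audit property: the answer is whatever minimises a stated energy, and every intermediate step can be logged and inspected.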

For open innovation, the Energy Language for common-sense AI operations is now public. First models to follow. hlm.qriton.com →

How did we get here?

Seventy years of AI, told through the aesthetics of the web — from the 1956 terminal, through GeoCities, into today. A detour from the pitch.

Start the conversation.

For organisations deploying AI in regulated environments — defence, pharma, energy, automotive, and civic infrastructure. Whether you are preparing for regulatory review, closing documentation gaps, or moving a proof-of-concept into production, I welcome a direct conversation about your project.