Marius Dima

Building AI systems that explain themselves. Neural-symbolic architectures for critical systems where trust isn't optional.

Background

20 years: extreme industrial automation → automotive manufacturing processes & conversational AI → neural-symbolic AI
If it can't explain itself, it shouldn't run your factory. Or your democracy.
3 published patents. Speaking: MIT, AI Summit London, How to Web
Currently: Qriton (trustable AI) + iGov.ro (tools for transparent governance)

Core Focus

Neural-symbolic AI architectures that merge deep learning with logical reasoning
EU AI Act compliance frameworks for high-risk systems
Explainable decision-making where transparency is non-negotiable
Energy-based models for anomaly detection in critical infrastructure

Projects

Book

Lut Absolut
A novel exploring the absolute threshold between systems and chaos. Available in Romanian from Librăria Delfin.
View Book →

Connect