Our AI Governance Policy is aligned with the EU AI Act and EU product safety law, which require that high-stakes decisions affecting safety remain under meaningful human control. We also recognise that when complex, safety-critical information flows from the AI to the human engineer, this mode of interaction introduces a cognitive bias (automation bias): the human begins to over-rely on the AI and stops scrutinising what it produces. Our policy therefore rests on the following assumptions:
Only humans possess True Intelligence (as we know it); hence, "AI" is a misnomer.
AI is merely a highly sophisticated tool for manipulating complex text and other information.
AI must only be used for auxiliary tasks in the context of safety analysis and safety-related decisions.
Human engineers must perform all core safety analysis tasks and remain solely responsible for all safety-critical decisions.
For core safety tasks, information must always flow from the human to the AI and never the other way around; i.e., AI is used only for consistency and completeness verification.
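The direction-of-flow rule above can be made concrete in tooling. The following is a minimal sketch, assuming a hypothetical provenance-tagging scheme: every artifact records whether a human or an AI authored it, and a gate admits AI-authored content only as auxiliary verification feedback, never as an input to core safety analysis. All names (`Artifact`, `gate`, the task categories) are illustrative, not part of the policy itself.

```python
from dataclasses import dataclass
from enum import Enum


class Author(Enum):
    HUMAN = "human"
    AI = "ai"


class Task(Enum):
    CORE_SAFETY_ANALYSIS = "core_safety_analysis"   # human-only per policy
    CONSISTENCY_CHECK = "consistency_check"          # auxiliary, AI-assisted


@dataclass(frozen=True)
class Artifact:
    content: str
    author: Author


class PolicyViolation(Exception):
    pass


def gate(artifact: Artifact, task: Task) -> Artifact:
    """Admit an artifact to a task only if it complies with the policy:
    core safety analysis accepts human-authored input only; AI output is
    admissible solely as auxiliary verification feedback."""
    if task is Task.CORE_SAFETY_ANALYSIS and artifact.author is Author.AI:
        raise PolicyViolation(
            "AI-authored content may not enter core safety analysis"
        )
    return artifact


# A human-authored hazard log passes the gate for core safety work.
human_hazard_log = Artifact("Hazard H-12: loss of braking ...", Author.HUMAN)
gate(human_hazard_log, Task.CORE_SAFETY_ANALYSIS)

# An AI-produced consistency report is admissible as auxiliary feedback.
ai_report = Artifact("Requirement R-3 conflicts with R-7", Author.AI)
gate(ai_report, Task.CONSISTENCY_CHECK)

# The same AI report is rejected as an input to core safety analysis.
try:
    gate(ai_report, Task.CORE_SAFETY_ANALYSIS)
except PolicyViolation as e:
    print("rejected:", e)
```

In a real toolchain such a gate would sit in the review workflow (e.g. blocking merge of AI-authored edits to safety case documents); the sketch only shows the shape of the check.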