AI Safety & Transparency Engineer
Division: DATUM, Impac Exploration Services
Location: Remote; Oklahoma City, OK; Houston, TX
Type: Full-Time
Build AI That Earns Trust by Deserving It
We're not building black boxes. At DATUM, our AI makes decisions about critical infrastructure where "trust me, it works" isn't good enough. When our models predict something, operators need to understand why. When they make recommendations, engineers need to see the reasoning.
We need someone who believes AI transparency isn't a nice-to-have—it's the whole game. Someone who gets excited about making complex models explainable without dumbing them down. Who sees safety not as constraints, but as design principles that make AI actually useful in the real world.
What You'll Build
• Explainability frameworks that show why models make specific predictions (see the sketch after this list)
• Safety systems that catch edge cases before they reach production
• Transparency tools that build operator confidence in AI decisions
• Testing protocols for AI in high-stakes industrial environments
• Interpretability methods for complex multimodal models
• Trust metrics that actually measure what matters
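Explainability frameworks can start small. Below is a minimal sketch of per-feature attribution via permutation importance, in Python with scikit-learn; the drilling-flavored feature names and synthetic data are illustrative assumptions, not DATUM's actual models or schema.

```python
# A minimal sketch of per-feature attribution via permutation importance.
# Feature names ("weight_on_bit", "rpm", ...) are hypothetical
# drilling-telemetry examples, not a real schema.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["weight_on_bit", "rpm", "mud_flow_rate", "torque"]

# Synthetic stand-in for sensor telemetry: the target depends mostly
# on the first two features.
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature and measure how much the model's score degrades;
# larger drops mean the model leans on that feature more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```

The design point: the explanation comes from measuring the model's behavior, not from a separate narrative about it.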
Your North Star
• If an operator can't understand it, we haven't finished building it
• The best AI safety happens at design time, not as band-aids
• Transparency means glass box, not black box with documentation
• Industrial AI has different stakes than consumer AI—act like it
• Trust is earned through understanding, not compliance checkboxes
• The most ethical AI is AI that actually gets used safely
The Technical Challenge
• Make deep learning interpretable without sacrificing performance
• Build explainability for decisions that combine physics, sensors, and ML
• Create safety frameworks for environments where "undo" doesn't exist (a minimal sketch follows this list)
• Design transparency for users who are experts in drilling, not data science
• Develop testing protocols for conditions you can't replicate in a lab
• Balance "why did it do that?" with "what should I do now?"
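For flavor on the "no undo" bullet above, here is a minimal sketch of a design-time safety envelope in Python. The recommendation type, RPM limits, and confidence floor are hypothetical placeholders, not real operating limits.

```python
# A minimal sketch of a design-time safety envelope wrapped around a
# model that recommends a drilling setpoint. Bounds are illustrative.
from dataclasses import dataclass

@dataclass
class Recommendation:
    setpoint_rpm: float
    confidence: float  # model's self-reported confidence in [0, 1]

@dataclass
class EnvelopeDecision:
    accepted: bool
    reason: str

def check_envelope(rec: Recommendation,
                   rpm_limits: tuple[float, float] = (40.0, 220.0),
                   min_confidence: float = 0.7) -> EnvelopeDecision:
    """Reject recommendations outside physical limits or below a
    confidence floor, so edge cases are caught before production."""
    lo, hi = rpm_limits
    if not lo <= rec.setpoint_rpm <= hi:
        return EnvelopeDecision(False, f"setpoint {rec.setpoint_rpm} rpm "
                                f"outside envelope [{lo}, {hi}]")
    if rec.confidence < min_confidence:
        return EnvelopeDecision(False, f"confidence {rec.confidence:.2f} "
                                f"below floor {min_confidence}")
    return EnvelopeDecision(True, "within envelope")

print(check_envelope(Recommendation(setpoint_rpm=310.0, confidence=0.9)))
print(check_envelope(Recommendation(setpoint_rpm=120.0, confidence=0.9)))
```

Checks like this sit in front of the model at design time, so a bad recommendation is rejected with a stated reason rather than silently applied.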
You're Our Person If
• You've made complex AI systems interpretable for non-technical users
• You believe safety and performance aren't trade-offs
• You can translate between ML researchers and field operators
• You see AI ethics as engineering challenges, not philosophy debates
• You've built explainability that actually explains
• You understand that industrial safety is different from consumer safety
Especially If
• You've worked on high-stakes AI (healthcare, autonomous systems, finance)
• You've built interpretability for multimodal or time-series models
• You understand both ML theory and human factors
• You've designed AI systems that passed rigorous safety audits
• You can make uncertainty quantification intuitive (see the sketch below)
• You've turned "responsible AI" from concept to code
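On the uncertainty-quantification point above, one intuitive framing: a small bootstrap ensemble whose disagreement is translated into plain language instead of raw variance. All data, thresholds, and wording below are illustrative assumptions.

```python
# A minimal sketch of intuitive uncertainty quantification: a bootstrap
# ensemble produces a prediction spread, which is then phrased in
# operator terms. All numbers are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=400)

# Train each ensemble member on a bootstrap resample of the data.
members = []
for seed in range(20):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(GradientBoostingRegressor(random_state=seed)
                   .fit(X[idx], y[idx]))

x_new = rng.normal(size=(1, 3))
preds = np.array([m.predict(x_new)[0] for m in members])
mean, spread = preds.mean(), preds.std()

# Translate the spread into language a driller can act on at 3 AM.
if spread < 0.1:
    verdict = "high agreement across models; recommendation is stable"
else:
    verdict = "models disagree; treat this recommendation with caution"
print(f"prediction {mean:.2f} +/- {spread:.2f} -> {verdict}")
```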
Why This Role Matters
Your work enables:
• Operators to trust AI recommendations with million-dollar consequences
• Regulatory acceptance of AI in critical infrastructure
• Our research team to push boundaries while staying grounded
• A new paradigm for industrial AI that's open, not opaque
This isn't about checking compliance boxes. It's about building AI that deserves to be trusted with decisions that matter.
The Reality
Industrial AI safety is uncharted territory. You'll:
• Define standards that don't exist yet
• Build explainability for users who've never trusted algorithms
• Create safety frameworks for physics-based problems
• Navigate between "move fast" and "don't break things that matter"
• Translate academic interpretability research to industrial reality
Growth Path
Start: Build transparency into our existing models.
Six months: Define how industrial AI safety should work.
One year: Publish frameworks others adopt.
When companies realize they need industrial AI ethics expertise, you'll have written the playbook they're trying to follow.
The Challenge
You'll make AI that's powerful enough to transform industries and transparent enough for a roughneck to trust it at 3 AM. You'll prove that explainable doesn't mean weak, and safe doesn't mean slow.
Ready to Open the Black Box?
Show us explainability work that actually helped real users. Tell us about safety systems you've built for complex environments. Share your vision for industrial AI that's both powerful and trustworthy.
We're looking for someone who sees "it's too complex to explain" as a challenge, not an excuse.