Wanted to share a milestone update on something we've been building at inXsol — xAPI IRL (In Real Life) (https://xapiirl.com/), a platform that uses AI speech recognition to listen to live, unscripted human interactions and generate xAPI statements from them.
The idea is straightforward even if the execution isn't: point a listener at a real conversation — a training session, a meeting, radio comms, a classroom — and let a team of AI agents figure out what learning interactions are happening, who's involved, and what competencies are being demonstrated. Then map all of that to xAPI Profiles, Shareable Competency Definitions, and Learning Metadata terms — or dynamically scaffold new ones when the frameworks don't yet exist.
We've hit what I'd call the 90% point.
Elon Musk talks about the "series of 9's" in the evolution of Full Self-Driving — it's relatively straightforward to get to 90% capability, but getting to 99% takes the same effort all over again. Then 99.9%. Then 99.99%. Each additional nine is as hard as everything before it combined.
That's exactly how this feels. The core pipeline works. We can record a session, transcribe it with speaker identification, detect learning interactions with confidence scores, extract competencies against IEEE 1484.20.3, and generate pending xAPI statements ready to send to an LRS. We've built domain-specific listener plugins — EMS training, K-12 classroom, corporate conference, webinar — each one is a collection of AI agents whose personalities and detection behaviors can be tuned right through the app interface.
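To give a feel for what comes out of that pipeline, here's a rough sketch of a pending statement assembled from one detected interaction. The verb IRI, activity ID, and the extension key carrying the detection confidence are placeholders for illustration, not the exact vocabulary the platform emits:

```python
# Illustrative only: the shape of a pending xAPI statement before review
# and dispatch to an LRS. All IRIs and the confidence extension key are
# placeholders, not our production vocabulary.
from datetime import datetime, timezone

pending_statement = {
    "actor": {
        "name": "Trainee 2",  # resolved from speaker identification
        "account": {"homePage": "https://example.org/roster", "name": "trainee-2"},
    },
    "verb": {
        "id": "https://example.org/verbs/demonstrated",  # would map to a profile verb
        "display": {"en-US": "demonstrated"},
    },
    "object": {
        "id": "https://example.org/competencies/scene-size-up",  # competency reference
        "definition": {"name": {"en-US": "Scene size-up"}},
    },
    "context": {
        "extensions": {
            # assumption: detection confidence rides along as a context extension
            "https://example.org/extensions/detection-confidence": 0.87,
        },
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
```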
It's a moonshot, but we wanted to start somewhere.
What we're learning: Defined vocabularies and established xAPI Profiles are the easier path — the agents perform well when they have a clear framework to match against. But the interesting frontier is when they encounter interactions outside existing structures. That's where the platform tries to extend or create competency scaffolding on the fly, and that's where having a human in the loop makes all the difference. The AI can learn, but it learns faster and more accurately with domain experts guiding it.
This all works in tandem with tlatoolbox.com, which serves as the companion profile server and SCD registry — xAPI Profiles, Shareable Competency Definitions, and Learning Metadata terms can be created, edited, and inspected there. What xAPI IRL discovers flows back into TLA Toolbox as reusable, refinable resources.
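As a rough illustration of that round trip, this is the kind of draft record xAPI IRL might hand back to the registry when an interaction doesn't match an existing competency. The field names are invented for this sketch and are not the normative IEEE 1484.20.3 schema:

```python
# Illustrative only: a draft competency proposal flowing back to TLA Toolbox
# for expert review. Field names and IDs are placeholders.
draft_competency = {
    "id": "https://example.org/competencies/draft/impromptu-triage-handoff",
    "name": {"en-US": "Impromptu triage handoff"},
    "description": {"en-US": "Proposed from an unmatched interaction; needs domain-expert review."},
    "status": "draft",
    "source": {
        "session": "ems-training-session-042",  # hypothetical session identifier
        "detectionConfidence": 0.62,
    },
}
```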
Why I'm posting this here: I'd love to connect with anyone interested in collaborating on pushing toward the next 9. Whether you're working in a specific training domain, building xAPI profiles, developing competency frameworks, or just curious about where AI meets learning standards — there's a lot of room to explore together. The platform is multi-tenant, so standing up a new domain is really about configuring a new agent personality and giving it the right vocabulary to work with.
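For a sense of what "configuring a new agent personality" could look like in practice, here's a hypothetical sketch. The keys and values are invented for illustration; the real settings live in the app interface:

```python
# Hypothetical sketch of standing up a new domain/listener; these keys are
# illustrative, not the app's actual configuration schema.
new_domain = {
    "tenant": "fire-academy",
    "listener": "radio-comms",
    "agents": [
        {
            "role": "interaction-detector",
            "personality": "conservative",  # favors precision over recall
            "min_confidence": 0.75,
        },
        {
            "role": "competency-mapper",
            "vocabulary": [
                "https://example.org/profiles/fire-ground-ops",  # xAPI Profile to match against
            ],
            "scaffold_when_unmatched": True,  # propose new competency definitions for review
        },
    ],
}
```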
Happy to demo what we have, answer questions, or just hear your thoughts on the approach. The 90% proves the concept. The next 9's prove the product.
— Henry