Bridging Competency Definitions and Evidence: Introducing the Rubric Assertion Profile (RAP)

Henry Ryng


The Problem I'm Trying to Solve

I've been working with IEEE Shareable Competency Definitions (SCDs) and xAPI for evidence-based competency assertion, and I keep hitting the same challenge: there's a gap between the definition of a competency (especially one with a rubric) and the operational rules for asserting that someone has achieved it based on real xAPI evidence.

The TLA standards ecosystem now has strong coverage of three concerns. IEEE 1484.20.3-2022 (SCD) defines competencies and their rubric structures—it tells us WHAT we're assessing and what proficiency looks like. IEEE 2881-2025 (LMT) describes learning resources and events and formally links them to competencies via teaches and assesses properties—it tells us WHERE competencies attach to learning content. xAPI profiles define how to capture learning evidence as statements—they give us reusable patterns for HOW activities generate evidence.
But what's missing is the fourth concern: WHEN and under what conditions do we assert that someone has achieved a specific rubric criterion level, based on patterns of xAPI statements in an LRS?

This becomes especially challenging when:
  • xAPI profiles for a domain don't exist yet or are incomplete
  • Evidence spans multiple learning experiences (different registrations)
  • We need granular progress indicators, not just binary "achieved/not achieved"
  • We want to prototype competency assertion quickly without waiting for formal profile development
Existing tools like CASS handle assertion processing—they can evaluate evidence and produce competency claims. But there is no declarative configuration standard for defining assertion rules: what triggers an evaluation, how confidence is calculated, what evidence patterns map to what rubric levels. Each implementation hardcodes its own logic.

The Proposed Solution: Rubric Assertion Profile (RAP)

I'm proposing a new JSON-LD data model called a Rubric Assertion Profile (RAP) that acts as a bridge between SCD rubric structures and xAPI evidence streams. Think of it as augmenting SCD rubric items with xAPI-specific assertion logic, without modifying the SCD definitions themselves.

RAP is to competency assertion engines what an xAPI Profile is to xAPI statement generation: a declarative configuration that tells the engine what to do.

What's in a RAP?

A RAP is a portable JSON-LD file that contains:
  1. References to SCD rubric components by URI — Specifically:
    • scd:CompetencyDefinition instances (the competency being assessed)
    • scd:RubricCriterion instances (the evaluative dimensions—e.g., "Safe Handling Procedure")
    • scd:RubricCriterionLevel instances (the proficiency thresholds—e.g., "Proficient" with score 3)

      These reference the same globally unique URIs/IRIs that the SCD standard requires, ensuring RAP consumes the standard's own identifiers rather than inventing a parallel addressing scheme.
  2. Assertion rules — Logic for evaluating xAPI evidence:
    • Statement templates and patterns (can reference existing xAPI profiles)
    • Thresholds (e.g., "completed 5 of 7 activities")
    • Sequences (e.g., "must complete A before attempting B")
    • Multi-registration handling (evidence across multiple courses/experiences)
    • Confidence calculation methods (how scores, completion counts, and qualitative evidence combine into a confidence value)
  3. References to xAPI profiles — When formal profiles exist, RAP leverages their statement templates and patterns. When they don't, RAP can bootstrap patterns from actual LRS data, with a pathway to contribute those patterns back to formal profile development.

  4. References to LMT metadata (optional) — When operating within a TLA ecosystem, RAP can reference IEEE 2881-2025 metadata in the Experience Index (XI) to discover which CompetencyDefinitions are relevant for a given learning activity, rather than duplicating that activity-to-competency mapping internally.
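
To make the four components above concrete, here is a minimal sketch of what a RAP's top-level structure might look like, expressed as a Python dict for readability. Every URI, property name, and the `rap:` vocabulary below is a hypothetical illustration for discussion purposes; no such vocabulary has been published.

```python
import json

# Hypothetical RAP sketch.  The "rap:" terms and all example.org URIs
# are invented for illustration -- they are not part of any spec.
rap = {
    "@context": {
        "rap": "https://example.org/rap/vocab#",  # hypothetical RAP namespace
        "scd": "https://example.org/scd/vocab#",  # placeholder for the SCD namespace
    },
    "@type": "rap:RubricAssertionProfile",
    "rap:forCompetency": {"@id": "https://example.org/competencies/safe-handling"},
    "rap:assertionRules": [
        {
            "@type": "rap:ThresholdRule",
            # Bound to specific SCD rubric components by URI (item 1 above)
            "rap:forCriterion": {"@id": "https://example.org/rubrics/safe-handling/procedure"},
            "rap:forLevel": {"@id": "https://example.org/rubrics/safe-handling/procedure/proficient"},
            # A "completed 5 of 7 activities" style threshold (item 2 above)
            "rap:requiredCompletions": 5,
            "rap:candidateActivities": 7,
            "rap:confidenceMethod": "rap:completionRatio",
        }
    ],
}

print(json.dumps(rap, indent=2))
```

The point of the sketch is only that the rules are data, not code: an assertion engine could load this file and evaluate it without any domain-specific logic compiled in.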

How It Works in Practice

Development workflow:
  1. Start with an SCD that includes a rubric (CompetencyFramework → CompetencyDefinition → RubricCriterion → RubricCriterionLevel)
  2. Check if relevant xAPI profiles exist:
    • If YES: Link to existing statement templates and patterns
    • If NO: Use TLA Toolbox to query/mock an LRS, analyze statements, and infer patterns
  3. Use the statement crafter to inject sample statements and test assertion logic
  4. Build assertion rules in the RAP editor, binding each rule to specific scd:RubricCriterion and scd:RubricCriterionLevel URIs
  5. Export as a portable JSON-LD file
  6. Use Audit/Suggest feature to propose additions to existing xAPI profiles or new profile creation
Runtime workflow:
  1. Learning activities emit xAPI statements to an LRS
  2. Assertion engine loads relevant RAP(s)
  3. Engine evaluates evidence against RAP rules
  4. Emits "progressed" xAPI statements when partial thresholds are met (event-driven, reduces polling load). These statements reference the scd:RubricCriterionLevel URI to indicate which level was reached, using verbs from the ADL vocabulary (e.g., progressed) or a RAP-defined extension vocabulary.
  5. Emits final "achieved"/"mastered" statement when all rubric criteria are satisfied at or above the mastery threshold defined by the CompetencyDefinition's competencyLevel property
  6. All assertions link back to the evidence trail via xAPI statement references
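
The runtime loop above can be sketched roughly as follows, for a single threshold rule. This is a simplified illustration under stated assumptions: the statement shapes are trimmed-down xAPI statements, and the rule format is the hypothetical "completed N of M activities" threshold from earlier, not a defined RAP schema.

```python
# Rough sketch of the runtime workflow: evaluate LRS statements against
# one hypothetical threshold rule and emit a "progressed" statement when
# the partial threshold is met.  Statement shapes are simplified.
PROGRESSED = "http://adlnet.gov/expapi/verbs/progressed"
COMPLETED = "http://adlnet.gov/expapi/verbs/completed"

def evaluate(statements, rule):
    """Apply a 'completed N of M activities' rule to xAPI statements."""
    completions = {
        s["object"]["id"]
        for s in statements
        if s["verb"]["id"] == COMPLETED and s["object"]["id"] in rule["activities"]
    }
    confidence = len(completions) / len(rule["activities"])
    if len(completions) >= rule["required"]:
        # A full engine would emit "achieved" once ALL criteria are
        # satisfied at or above the mastery threshold (step 5 above).
        return {
            "verb": {"id": PROGRESSED},
            # References the scd:RubricCriterionLevel URI reached (step 4 above)
            "object": {"id": rule["level_uri"]},
            "result": {"extensions": {"confidence": confidence}},
        }
    return None

rule = {
    "activities": {f"https://example.org/act/{i}" for i in range(7)},
    "required": 5,
    "level_uri": "https://example.org/rubrics/safe-handling/procedure/proficient",
}

# Five of the seven candidate activities completed
statements = [
    {"verb": {"id": COMPLETED}, "object": {"id": f"https://example.org/act/{i}"}}
    for i in range(5)
]

assertion = evaluate(statements, rule)
print(assertion["result"]["extensions"]["confidence"])  # 5 of 7 completed
```

Because the rule fires as soon as the threshold is crossed, the engine can react to each incoming statement (event-driven) instead of polling the LRS on a schedule.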

How This Fits With Existing Specs

IEEE SCD (1484.20.3-2022): RAP builds on SCDs without modifying them. It references CompetencyDefinition, RubricCriterion, and RubricCriterionLevel instances by their URIs, adding operational assertion logic as a separate concern. The SCD standard defines the rubric structure and mastery thresholds; RAP defines the rules for determining when those thresholds have been met based on evidence.
IEEE LMT (2881-2025): The LMT standard's teaches and assesses properties (defined in the LRMI namespace, with scd:CompetencyDefinition as rangeIncludes) establish which competencies are associated with which learning activities. RAP can consume this mapping from the Experience Index rather than requiring each RAP file to redeclare activity-to-competency relationships. When 2881 metadata is available, RAP focuses purely on evidence evaluation; when it isn't, RAP can include its own competency mappings as a fallback.

xAPI Spec: RAP is fully compatible with xAPI. When profiles exist, RAP references their statement templates and patterns. When they don't, RAP provides a practical path forward that can later contribute to formal profile development. RAP's output (progress and assertion statements) follows xAPI conventions.
CASS: RAP complements CASS rather than replacing it. CASS provides the assertion processing engine and the assertion storage/query infrastructure. RAP provides the declarative configuration that tells CASS (or any assertion engine) when to create assertions, how to calculate confidence, and which SCD rubric components to evaluate. Think of RAP as the configuration file that an assertion processor like CASS loads to know what to do.

JSON-LD: RAP uses JSON-LD for portability, semantic linking, and alignment with both IEEE and xAPI ecosystem practices. Both IEEE standards (SCD and LMT) use RDF-based models with URI/IRI identification; RAP's use of JSON-LD ensures its references to those URIs are machine-resolvable.

The Four Concerns

The key design principle is separation of concerns across the standards ecosystem:

Concern | Standard/Spec | What It Answers
WHAT competencies exist and what proficiency looks like | IEEE 1484.20.3 (SCD) | CompetencyDefinition + Rubric + RubricCriterionLevel
WHERE competencies attach to learning content | IEEE 2881-2025 (LMT) | LearningResource/Event teaches/assesses CompetencyDefinition
HOW evidence is captured | xAPI Profile | Statement templates, patterns, verb vocabulary
WHEN to assert achievement | RAP (proposed) | Assertion rules, thresholds, confidence calculation

No existing standard covers the fourth concern declaratively. RAP aims to fill that gap.

Diagrams

I've created a set of architectural diagrams showing:
  1. Conceptual relationships between SCD, RAP, LMT, and xAPI profiles
  2. RAP development workflow (with both "use existing profile" and "bootstrap from data" paths)
  3. Runtime competency assertion process
  4. Complete data flow and file relationships

Why I'm Sharing This

I'm building this into the TLA Toolbox platform as a reference model, but before I get too far down the path, I want to:
  1. Get feedback — Does this approach make sense? Are there obvious problems I'm missing?
  2. Learn from the community — Am I reinventing something that already exists? Are there existing specs or approaches that already solve this problem? (I've looked at CASS's assertion processing and IEEE LOM Classification, but I may be missing something.)
  3. Invite collaboration — If this resonates with others facing similar challenges, I'd love to work together on refining the model
  4. Contribute back — The goal is to create something that can inform future standards work, not to create a proprietary silo

Open Questions I'm Wrestling With
  • Should RAP be proposed as an extension to the IEEE SCD spec, or remain a separate but linked specification? (The SCD standard's own extensibility mechanism via Application Profile Schemas could potentially accommodate RAP-style rules, but assertion logic may be too far outside SCD's scope.)
  • How should we handle versioning when SCDs, LMT metadata, and xAPI profiles all evolve independently?
  • What's the right level of expressiveness for assertion rules? (Too simple = not useful; too complex = nobody uses it)
  • How do we balance the need for quick prototyping with the rigor needed for formal standards?
  • Should RAP define its own vocabulary for progress/assertion verbs, or strictly reuse existing ADL and cmi5 verbs?
  • Are there existing competency assertion engines beyond CASS that already define assertion rules declaratively?

What I'm Looking For

Constructive criticism: Tell me what's wrong with this approach. What am I missing?

Pointers to existing work: If someone has already solved this problem, I'd love to learn about it rather than reinvent wheels.

Use case validation: Does this address real problems you've encountered, or am I solving a problem that doesn't exist?

Technical feedback: Especially around the JSON-LD structure, multi-registration handling, the event-driven assertion model, and the relationship between RAP and 2881 metadata in the XI.

Collaboration opportunities: If this resonates with your work, let's talk about how we might work together.

Bottom Line

I'm trying to make evidence-based competency assertion practical today while staying aligned with standards and creating a pathway to contribute back to formal specifications. RAP is my attempt to bridge the gap between competency definitions and xAPI evidence in a way that's flexible, portable, and standards-friendly.

The TLA ecosystem has strong standards for defining competencies (SCD), describing learning metadata (LMT), and capturing evidence (xAPI). What it lacks is a declarative way to configure the assertion logic that connects evidence to competency claims. RAP aims to be that missing piece—not by replacing anything that exists, but by filling the gap between definition and operational evaluation.

I'm fully prepared to be told this already exists, that I'm approaching it wrong, or that there's a better way. That's why I'm here—to learn and, hopefully, to contribute something useful to the community.

Looking forward to your thoughts!


Sorting out...

IEEE Standards

  • IEEE 1484.12.1-2020: Learning Object Metadata
  • IEEE 1484.20.1-2007: RDCEO (Inactive)
  • IEEE 1484.20.3-2022: Shareable Competency Definitions

The Three Standards

1. IEEE 1484.12.1 LOM Category 9: Classification

Purpose: Metadata IN the learning object
When: Created at authoring time
Format: XML (in SCORM packages, IMS Content Packages)
Says: "This course teaches/assesses these competencies"
Example:
<classification>
  <purpose>competency</purpose>
  <taxonpath>
    <source>IEEE 1484.20.3</source>
    <taxon>
      <id>comp-uri</id>
      <entry>Competency Name</entry>
    </taxon>
  </taxonpath>
</classification>
Used by: SCORM, LMS, Learning Object Repositories

2. Schema.org AlignmentObject

Purpose: Web-based alignment metadata
When: Created/published externally
Format: JSON-LD, RDFa, Microdata
Says: "This activity URL aligns to this competency URI"
Example:
{
  "@type": "CreativeWork",
  "url": "activity-url",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "teaches",
    "targetUrl": "comp-uri"
  }
}
Used by: CASS, modern web platforms, OER

3. CASS Assertion Processing

Purpose: Dynamic assertion creation
When: Runtime (when xAPI statements arrive)
Format: Internal algorithm + assertions
Says: "Based on evidence, this person holds this competency"
Process:
  1. xAPI statement arrives
  2. Look up alignment (from LOM or AlignmentObject)
  3. Extract competency URI
  4. Create assertion
Example Output:
{
  "@type": "Assertion",
  "subject": "person-pk",
  "competency": "comp-uri",
  "confidence": 0.85
}
Used by: CASS, TLA ecosystem

Quick Comparison Table

Feature | IEEE LOM Classification | Schema.org AlignmentObject | CASS Assertions
Standard | IEEE 1484.12.1 | W3C Schema.org | CASS Project
Year | 2002 | ~2013 | 2015+
Location | Inside learning object | External metadata | Runtime creation
Format | XML | JSON-LD | Internal
When Created | Authoring time | Publishing time | Runtime
Purpose | Declare competencies | Map alignments | Create assertions
Common Use | SCORM packages | CASS alignments | Competency claims

Complete Workflow: All Three Working Together

STEP 1: AUTHORING (IEEE LOM)
  • Author creates SCORM course
  • Embeds LOM metadata with Classification category
  • Specifies which competencies the course teaches
  • Result: Learning object has metadata
STEP 2: PUBLISHING (Schema.org AlignmentObject)
  • LMS imports SCORM package
  • Reads LOM Classification
  • Creates/publishes alignment in CASS
  • Maps activity URL to competency URI
  • Result: Alignment registered
STEP 3: LEARNING (xAPI)
  • Learner completes course
  • LMS emits xAPI statement with result/score
  • Statement stored in LRS
  • Result: Evidence recorded
STEP 4: ASSERTION CREATION (CASS)
  • CASS xAPI Adapter sees statement
  • Looks up alignment to find competency
  • Extracts confidence from score
  • Creates assertion
  • Result: Competency asserted
STEP 5: QUERY (CASS Assertion Processor)
  • System asks: "Does person have competency X?"
  • CASS retrieves assertion
  • Applies roll-up rules if needed
  • Returns: TRUE/FALSE with confidence
  • Result: Competency verified
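
The assertion-creation step (STEP 4 above) can be sketched like this. The alignment table and the data shapes are simplified stand-ins for CASS's actual internals, and the URIs are invented for illustration:

```python
# Simplified stand-in for the assertion-creation step: look up the
# activity's registered alignment, extract the competency URI, and turn
# the statement's scaled score into a confidence value.
alignments = {
    # activity URL -> competency URI, as registered in STEP 2
    "https://example.org/course/forklift-101":
        "https://example.org/competencies/safe-handling",
}

def create_assertion(statement, alignments):
    competency = alignments.get(statement["object"]["id"])
    if competency is None:
        return None  # no alignment registered for this activity
    return {
        "@type": "Assertion",
        "subject": statement["actor"]["mbox"],
        "competency": competency,
        "confidence": statement["result"]["score"]["scaled"],
    }

stmt = {
    "actor": {"mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "https://example.org/course/forklift-101"},
    "result": {"score": {"scaled": 0.85}},
}

print(create_assertion(stmt, alignments))
```

Note what is hardcoded here: the decision to use the scaled score as confidence, and the choice to assert on every completion. Those are exactly the per-implementation choices a RAP would lift into declarative configuration.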

Where RAP (Rubric Assertion Profile) Fits In

THE GAP:
  • IEEE LOM tells WHAT competencies
  • Schema.org tells WHERE alignments are
  • CASS processes assertions
  • BUT: No standard for WHEN/HOW to create assertions
  • No configuration format for assertion rules
  • No way to define confidence calculation
RAP AIMS TO FILL THIS GAP:
  • Defines WHEN to create assertions (triggers)
  • Defines HOW to calculate confidence
  • Defines WHERE to get competency URI
  • Declarative (no coding required)
  • Works WITH all three standards
 