Bridging Competency Definitions and Evidence: Introducing the Rubric Assertion Profile (RAP)

Henry Ryng

I've been working with IEEE Sharable Competency Definitions (SCDs) and xAPI for evidence-based competency assertion, and I keep hitting the same challenge: there's a gap between the definition of a competency (especially one with a rubric) and the operational rules for asserting that someone has achieved it based on real xAPI evidence.

IEEE SCDs do a great job defining competencies and their rubric structures—they tell us WHAT we're assessing. xAPI profiles define how to capture learning evidence as statements—they give us reusable patterns for HOW activities generate evidence. But what's missing is the connective tissue: how do we map the achievement of specific rubric items or levels to patterns of xAPI statements in an LRS?

This becomes especially challenging when:
  • xAPI profiles for a domain don't exist yet or are incomplete
  • Evidence spans multiple learning experiences (different registrations)
  • We need granular progress indicators, not just binary "achieved/not achieved"
  • We want to prototype competency assertion quickly without waiting for formal profile development

The Proposed Solution: Rubric Assertion Profile (RAP)

I'm proposing a new JSON-LD data model called a Rubric Assertion Profile (RAP) that acts as a bridge between SCDs and xAPI evidence streams. Think of it as "ornamenting" or extending SCD rubric items with xAPI-specific assertion logic.

What's in a RAP?

A RAP is a portable JSON-LD file that contains:
  1. Links to SCD rubric items - References to specific competency definitions and their rubric structure
  2. Assertion rules - Logic for evaluating xAPI evidence:
    • Statement templates and patterns (can reference existing xAPI profiles)
    • Thresholds (e.g., "completed 5 of 7 activities")
    • Sequences (e.g., "must complete A before attempting B")
    • Multi-registration handling (evidence across multiple courses/experiences)
  3. References to xAPI profiles - When formal profiles exist, RAP leverages them; when they don't, RAP can bootstrap patterns from actual LRS data
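
To make this concrete, here's a minimal sketch of what a RAP file might look like. Every IRI, type name, and rule keyword below is a placeholder I'm using for discussion rather than settled design; the cmi5 profile and template references are likewise only illustrative:

Code:
{
  "@context": "https://example.org/rap/context.jsonld",
  "@id": "https://example.org/raps/widget-maintenance-level2",
  "@type": "RubricAssertionProfile",
  "competency": "https://example.org/scd/competencies/widget-maintenance",
  "rubricItem": "https://example.org/scd/rubrics/widget-maintenance#level-2",
  "xapiProfiles": ["https://w3id.org/xapi/cmi5"],
  "assertionRules": [
    {
      "@type": "ThresholdRule",
      "description": "Complete at least 5 of the 7 practice activities",
      "statementTemplate": "https://w3id.org/xapi/cmi5#completed",
      "minimumCount": 5,
      "acrossRegistrations": true
    },
    {
      "@type": "SequenceRule",
      "description": "Safety briefing must precede hands-on practice",
      "before": "https://example.org/activities/safety-briefing",
      "after": "https://example.org/activities/hands-on-practice"
    }
  ],
  "onPartial": { "emit": "progressed" },
  "onSatisfied": { "emit": "achieved" }
}

The rule types mirror the bullets above: a threshold rule that counts matching statements (optionally across registrations) and a sequence rule that orders two activities.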

How It Works in Practice

Development workflow:
  1. Start with an SCD that includes a rubric
  2. Check if relevant xAPI profiles exist:
    • If YES: Link to existing statement templates and patterns
    • If NO: use query tools or the TLA Toolbox to query or mock an LRS, analyze the statements, and infer patterns (see the sketch after this list)
  3. Use the statement crafter to inject sample statements and test assertion logic
  4. Build assertion rules in the RAP editor
  5. Export as a portable JSON-LD file
  6. Use the Audit/Suggest feature to propose additions to existing profiles, or the creation of new ones
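
For the "if NO" branch in step 2, the bootstrapped output could be as simple as an inline pattern inferred from observed statements, carried inside the RAP until a formal profile exists. The shape below is hypothetical:

Code:
{
  "@type": "InlinePattern",
  "inferredFrom": "lrs-analysis",
  "match": {
    "verb": "http://adlnet.gov/expapi/verbs/completed",
    "activityType": "http://adlnet.gov/expapi/activities/assessment"
  },
  "observedCount": 412
}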

Runtime workflow:
  1. Learning activities emit xAPI statements to an LRS
  2. Assertion engine loads relevant RAP(s)
  3. Engine evaluates evidence against RAP rules
  4. Emits "progressed" statements when partial thresholds are met (event-driven, reduces load)
  5. Emits final "achieved"/"mastered" statement when all rubric criteria are satisfied
  6. All assertions link back to the evidence trail
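
To illustrate steps 4 and 5, a partial-progress statement emitted by the engine might look like the following. The verb is ADL's "progressed"; the result extension for rubric progress is my own assumption, not an established vocabulary:

Code:
{
  "actor": { "mbox": "mailto:learner@example.org" },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/progressed",
    "display": { "en-US": "progressed" }
  },
  "object": { "id": "https://example.org/scd/competencies/widget-maintenance" },
  "result": {
    "extensions": {
      "https://example.org/rap/extensions/rubric-progress": {
        "rubricItem": "https://example.org/scd/rubrics/widget-maintenance#level-2",
        "criteriaMet": 3,
        "criteriaTotal": 5
      }
    }
  }
}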

How This Fits With Existing Specs

IEEE SCD Spec: RAP builds on SCDs without modifying them. It references SCD competency and rubric IDs, adding operational assertion logic as a separate concern.

xAPI Spec: RAP is fully compatible with xAPI. When profiles exist, RAP references them. When they don't, RAP provides a practical path forward that can later contribute to formal profile development.

JSON-LD: RAP uses JSON-LD for portability, semantic linking, and alignment with both IEEE and xAPI ecosystem practices.

The key innovation is separation of concerns:

  • SCD = WHAT (competency definition + rubric structure)
  • RAP = HOW (assertion rules + evidence evaluation)
  • xAPI Profile = Reusable patterns (when available)

Why I'm Sharing This

I'm building this into the TLA Toolbox platform, but before I get too far down the path, I want to:
  1. Get feedback - Does this approach make sense? Are there obvious problems I'm missing?
  2. Learn from the community - Am I reinventing something that already exists? Are there existing specs or approaches that already solve this problem?
  3. Invite collaboration - If this resonates with others facing similar challenges, I'd love to work together on refining the model
  4. Contribute back - The goal is to create something that can inform future standards work, not to create a proprietary silo

Open Questions I'm Wrestling With

  • Should RAP be proposed as an extension to the IEEE SCD spec, or remain a separate but linked specification?
  • How should we handle versioning when both SCDs and xAPI profiles evolve?
  • What's the right level of expressiveness for assertion rules? (Too simple = not useful; too complex = nobody uses it)
  • How do we balance the need for quick prototyping with the rigor needed for formal standards?
  • Are there existing competency assertion engines that already do something similar?

What I'm Looking For

Constructive criticism: Tell me what's wrong with this approach. What am I missing?

Pointers to existing work: If someone has already solved this problem, I'd love to learn about it rather than reinvent wheels.

Use case validation: Does this address real problems you've encountered, or am I solving a problem that doesn't exist?

Technical feedback: Especially around the JSON-LD structure, multi-registration handling, and the event-driven assertion model.

Collaboration opportunities: If this resonates with your work, let's talk about how we might work together.

Diagrams

I've created a set of architectural diagrams showing:

  1. Conceptual relationships between SCD, RAP, and xAPI profiles
  2. RAP development workflow (with both "use existing profile" and "bootstrap from data" paths)
  3. Runtime competency assertion process
  4. Complete data flow and file relationships
Attached: rap1_5.png, rap25.png, rap3-5.png, rap4.png

Bottom Line

I'm trying to make evidence-based competency assertion practical today while staying aligned with standards and creating a pathway to contribute back to formal specifications. RAP is my attempt to bridge the gap between competency definitions and xAPI evidence in a way that's flexible, portable, and standards-friendly.

I'm fully prepared to be told this already exists, that I'm approaching it wrong, or that there's a better way. That's why I'm here—to learn and, hopefully, to contribute something useful to the community.

Looking forward to your thoughts!
 
Sorting out how the related standards fit together...

IEEE Standards

  • IEEE 1484.12.1-2020: Learning Object Metadata
  • IEEE 1484.20.1-2007: Reusable Competency Definitions (RCD, adopted from IMS RDCEO; inactive)
  • IEEE 1484.20.3-2022: Sharable Competency Definitions (SCD)

W3C / Schema.org

  • Schema.org AlignmentObject (educationalAlignment)

CASS

  • CASS (Competency and Skills System) assertion processing


The Three Standards

1. IEEE 1484.12.1 LOM Category 9: Classification

Purpose: Metadata IN the learning object
When: Created at authoring time
Format: XML (in SCORM packages, IMS Content Packages)
Says: "This course teaches/assesses these competencies"
Example:
Code:
<classification>
  <purpose>competency</purpose>
  <taxonpath>
    <source>IEEE 1484.20.3</source>
    <taxon>
      <id>comp-uri</id>
      <entry>Competency Name</entry>
    </taxon>
  </taxonpath>
</classification>
Used by: SCORM, LMS, Learning Object Repositories

2. Schema.org AlignmentObject

Purpose: Web-based alignment metadata
When: Created/published externally
Format: JSON-LD, RDFa, Microdata
Says: "This activity URL aligns to this competency URI"
Example:
Code:
{
  "@type": "CreativeWork",
  "url": "activity-url",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "teaches",
    "targetUrl": "comp-uri"
  }
}
Used by: CASS, modern web platforms, OER

3. CASS Assertion Processing

Purpose: Dynamic assertion creation
When: Runtime (when xAPI statements arrive)
Format: Internal algorithm + assertions
Says: "Based on evidence, this person holds this competency"
Process:
  1. xAPI statement arrives
  2. Look up alignment (from LOM or AlignmentObject)
  3. Extract competency URI
  4. Create assertion
Example Output:
Code:
{
  "@type": "Assertion",
  "subject": "person-pk",
  "competency": "comp-uri",
  "confidence": 0.85
}
Used by: CASS, TLA ecosystem

Quick Comparison Table

| Feature      | IEEE LOM Classification | Schema.org AlignmentObject | CASS Assertions   |
|--------------|-------------------------|----------------------------|-------------------|
| Standard     | IEEE 1484.12.1          | W3C Schema.org             | CASS Project      |
| Year         | 2002                    | ~2013                      | 2015+             |
| Location     | Inside learning object  | External metadata          | Runtime creation  |
| Format       | XML                     | JSON-LD                    | Internal          |
| When Created | Authoring time          | Publishing time            | Runtime           |
| Purpose      | Declare competencies    | Map alignments             | Create assertions |
| Common Use   | SCORM packages          | CASS alignments            | Competency claims |

Complete Workflow: All Three Working Together

STEP 1: AUTHORING (IEEE LOM)
  • Author creates SCORM course
  • Embeds LOM metadata with Classification category
  • Specifies which competencies the course teaches
  • Result: Learning object has metadata
STEP 2: PUBLISHING (Schema.org AlignmentObject)
  • LMS imports SCORM package
  • Reads LOM Classification
  • Creates/publishes alignment in CASS
  • Maps activity URL to competency URI
  • Result: Alignment registered
STEP 3: LEARNING (xAPI)
  • Learner completes course
  • LMS emits xAPI statement with result/score
  • Statement stored in LRS
  • Result: Evidence recorded
STEP 4: ASSERTION CREATION (CASS)
  • CASS xAPI Adapter sees statement
  • Looks up alignment to find competency
  • Extracts confidence from score
  • Creates assertion
  • Result: Competency asserted
STEP 5: QUERY (CASS Assertion Processor)
  • System asks: "Does person have competency X?"
  • CASS retrieves assertion
  • Applies roll-up rules if needed
  • Returns: TRUE/FALSE with confidence
  • Result: Competency verified
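
For reference, the statement produced in STEP 3 and consumed in STEP 4 might look like this (the IRIs and registration UUID are illustrative). The scaled score is what the adapter can map to the 0.85 confidence shown earlier:

Code:
{
  "actor": { "account": { "homePage": "https://lms.example.org", "name": "person-pk" } },
  "verb": { "id": "http://adlnet.gov/expapi/verbs/completed", "display": { "en-US": "completed" } },
  "object": { "id": "https://example.org/activities/widget-course" },
  "result": { "completion": true, "success": true, "score": { "scaled": 0.85 } },
  "context": { "registration": "6f0f4c5e-9f3a-4d2b-8a77-0d5c1a2b3c4d" }
}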

Where RAP (Rubric Assertion Profile) Fits In

THE GAP:
  • IEEE LOM tells WHAT competencies
  • Schema.org tells WHERE alignments are
  • CASS processes assertions
  • BUT: No standard for WHEN/HOW to create assertions
  • No configuration format for assertion rules
  • No way to define confidence calculation
RAP AIMS TO FILL THIS GAP:
  • Defines WHEN to create assertions (triggers)
  • Defines HOW to calculate confidence
  • Defines WHERE to get competency URI
  • Declarative (no coding required)
  • Works WITH all three standards
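
Sketching what that missing configuration format could look like (every name here is hypothetical), a single declarative rule might bundle the trigger, the competency lookup, and the confidence calculation:

Code:
{
  "@type": "AssertionRule",
  "trigger": {
    "verb": "http://adlnet.gov/expapi/verbs/completed",
    "activityType": "http://adlnet.gov/expapi/activities/course"
  },
  "competencySource": "alignment-lookup",
  "confidence": { "from": "result.score.scaled", "default": 0.5 },
  "assertWhen": "confidence >= 0.8"
}

An engine could load a file like this and apply it without custom code, which is the "declarative, no coding required" property above.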

 