Commit 11ea9eb: Initial commit of assessment criteria ADRs

20. Where in the codebase should CBE assessment criteria go?
============================================================

Context
-------

Competency Based Education (CBE) requires that the LMS be able to track learners' mastery of competencies through assessment criteria. For example, to demonstrate mastery of the Multiplication competency, a learner might need to earn 75% or higher on Assignment 1 or Assignment 2. The association of the competency, the threshold, the assignments, and the logical OR operator together makes up the assessment criteria for the competency. Course Authors and Platform Administrators need a way to set up these associations in Studio so that outcomes can be calculated as learners complete their materials. This is an important prerequisite for displaying competency progress dashboards to learners and staff, and for making Open edX the platform of choice for institutions using the CBE model.

Decisions
---------

CBE Assessment Criteria, Student Assessment Criteria Status, and Student Competency Status values should go in the openedx-learning repository. There are broader architectural goals to refactor as much code as possible out of the edx-platform repository and into openedx-learning, where it can be designed to be easy for plugin developers to use.

More specifically, all code related to adding Assessment Criteria to Open edX will live in ``openedx-learning/openedx_learning/apps/assessment_criteria``.

This keeps a single cohesive Django app for authoring the criteria and for storing learner status derived from those criteria, which reduces cross-app dependencies and simplifies migrations and APIs. It also keeps Open edX-specific models (users, course identifiers, LMS/Studio workflows) out of the standalone ``openedx_tagging`` package and avoids forcing the authoring app to depend on learner runtime data. The tradeoff is that authoring and runtime concerns live in the same app; if learner status needs to scale differently or be owned separately in the future, a split into a dedicated status app can be revisited. Alternatives that externalize runtime status to analytics services or split repos introduce operational and coordination overhead that is not justified at this stage.

Rejected Alternatives
---------------------

1. The edx-platform repository

   - Pros: This is where all data currently associated with students is stored, so it would match the existing pattern and reduce integration work for the LMS.
   - Cons: The intention is to move core learning concepts out of edx-platform (see `0001-purpose-of-this-repo.rst`_), and keeping it there makes reuse and pluggability harder.

2. All code related to adding Assessment Criteria to Open edX goes in ``openedx-learning/openedx_learning/apps/authoring/assessment_criteria``

   - Pros:

     - Tagging and assessment criteria are part of content authoring workflows, as is all of the other code in this directory.
     - All other elements using the Publishable Framework are in this directory.

   - Cons:

     - We want each package of code to be independent, and this would separate assessment criteria from the tags that they depend on.
     - Assessment criteria also include learner status and runtime evaluation, which do not fit cleanly in the authoring app.
     - The learner status models in this feature would have a ForeignKey to ``settings.AUTH_USER_MODEL``, which is a runtime/learner concern. If those models lived under the authoring app, the authoring app would have to import and depend on the user model, forcing an authoring-only package to carry learner/runtime dependencies. This creates unwanted coupling.

3. New Assessment Criteria content tables go in ``openedx-learning/openedx_learning/openedx_tagging/core/assessment_criteria``, and new Student Status tables go in ``openedx-learning/student_status``

   - Pros:

     - Keeps assessment criteria in the same package as the tags that they depend on.

   - Cons:

     - ``openedx_tagging`` is intended to be a standalone library without Open edX-specific dependencies (see `0007-tagging-app.rst`_); assessment criteria would violate that boundary.
     - Splitting Assessment Criteria and Student Statuses into two apps would require cross-app foreign keys (e.g., status rows pointing at criteria/tag rows in another app), migration ordering and dependency declarations to ensure tables exist in the right order, and shared business logic or APIs for computing/updating status that must live in one app but reference models in the other.

4. Split assessment criteria and learner statuses into two apps inside ``openedx-learning/openedx_learning/apps`` (e.g., ``assessment_criteria`` and ``learner_status``)

   - Pros:

     - Clear separation between authoring configuration and computed learner state.
     - Could allow different storage or scaling strategies for status data.

   - Cons:

     - Still introduces cross-app dependency and coordination for a single feature set.
     - May be premature for the POC; adds overhead without a proven need.

5. Store learner status in a separate service

   - Pros:

     - Scales independently and avoids write-heavy tables in the core app database.
     - Could potentially reuse existing infrastructure for grades.

   - Cons:

     - Introduces eventual consistency and more integration complexity for LMS/Studio views.
     - Requires additional infrastructure and operational ownership.

6. Split authoring and runtime into separate repos/packages

   - Pros:

     - Clear ownership boundaries and independent release cycles.

   - Cons:

     - Adds packaging and versioning overhead for a tightly coupled domain.
     - Increases coordination cost for migrations and API changes.
Lines changed: 47 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,47 @@
1+
21. How should versioning be handled for CBE assessment criteria?
2+
=================================================================
3+

Context
-------

Course Authors and/or Platform Administrators will enter the assessment criteria rules in Studio that learners must meet in order to demonstrate competencies. Depending on the institution, these Course Authors or Platform Administrators may hold a variety of job titles, including Instructional Designer, Curriculum Designer, Instructor, LMS Administrator, Faculty, or other Staff.

Typically, only one person is responsible for entering assessment criteria rules in Studio for each course, though this person may change over time. However, entire programs could have many different Course Authors or Platform Administrators with this responsibility.

Typically, institutions and instructional designers do not change the mastery requirements (assessment criteria) for their competencies frequently. However, historical audit logging of changes within Studio can be valuable to those who have mistakenly made changes and want to revert, or to those who want to experiment with new approaches.

Currently, Open edX always displays the latest edited version of content in the Studio UI and the latest published version of content in the LMS UI, despite having more robust version tracking on the backend (Publishable Entities). Publishable Entities for Libraries is currently inefficient for large nested structures because all children are copied any time a parent is updated.

Authoring data (criteria definitions) and runtime learner data (status) have different governance needs: the former is long-lived and typically non-PII, while the latter is user-specific, can be large (learners × criteria/competencies × time), and may require stricter retention and access controls. These differing lifecycles can make deep coupling of authoring and runtime data harder to manage at scale. Performance is also a consideration: computing or resolving versioned criteria for large courses could add overhead in Studio authoring screens or LMS views.

Decision
--------

Defer assessment criteria versioning for the initial implementation. Store only the latest authored criteria and expose the latest published state in the LMS, consistent with current Studio/LMS behavior. This keeps the initial implementation lightweight and avoids the publishable framework's known inefficiencies for large nested structures. The tradeoff is that there is no built-in rollback or audit history; adding versioning later will require data migration and careful choices about draft vs. published defaults.

Rejected Alternatives
---------------------

1. Each model carries version, status, and audit fields

   - Pros:

     - Simple and familiar pattern (version + status + created/updated metadata)
     - Straightforward queries for the current published state
     - Can support rollback by marking an earlier version as published
     - Stable identifiers (``original_ids``) can anchor versions and ease potential future migrations

   - Cons:

     - Requires custom conventions for versioning across related tables and nested groups
     - Lacks the shared draft/publish APIs and immutable version objects that other authoring apps can reuse
     - Not necessarily consistent with existing patterns in the codebase (though these are already not very consistent)

2. Publishable framework in openedx-learning

   - Pros:

     - First-class draft/published semantics with immutable historical versions
     - Consistent APIs and patterns shared across other authoring apps

   - Cons:

     - Inefficient for large nested structures because all children are copied for each new parent version
     - Requires modeling criteria/groups as publishable entities and wiring Studio/LMS workflows to versioning APIs
     - Adds schema and migration complexity for a feature that does not yet require full versioning

3. Append-only audit log table (event history)

   - Pros:

     - Lightweight way to capture who changed what and when
     - Enables basic rollback by replaying or reversing events

   - Cons:

     - Requires custom tooling to reconstruct past versions
     - Does not align with existing publishable versioning patterns
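
The reconstruction cost called out for alternative 3 can be made concrete with a small sketch (hypothetical names; illustrative only, not proposed code): each change is an append-only event, and recovering a past version means replaying the log up to a sequence number.

```python
# Sketch of the rejected append-only audit log approach (alternative 3).
# Names are illustrative. Each edit is recorded as an event; recovering the
# criteria as of an earlier point requires replaying the log -- the custom
# reconstruction tooling noted in the cons.
from dataclasses import dataclass


@dataclass(frozen=True)
class CriteriaEvent:
    seq: int       # monotonically increasing sequence number
    author: str    # who made the change
    field: str     # which field changed
    value: object  # new value for that field


def reconstruct(events: list[CriteriaEvent], up_to_seq: int) -> dict:
    """Replay events in order to rebuild the criteria state at a past point."""
    state: dict = {}
    for event in sorted(events, key=lambda e: e.seq):
        if event.seq > up_to_seq:
            break
        state[event.field] = event.value
    return state


log = [
    CriteriaEvent(1, "ida", "threshold", 70),
    CriteriaEvent(2, "ida", "operator", "OR"),
    CriteriaEvent(3, "lee", "threshold", 75),  # later correction
]
```

Replaying with ``up_to_seq=2`` yields the pre-correction state, while ``up_to_seq=3`` yields the latest; every consumer that needs "the criteria as of time T" has to run this replay.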

22. How should CBE assessment criteria be modeled in the database?
==================================================================

Context
-------

Competency Based Education (CBE) requires that the LMS be able to track learners' mastery of competencies through assessment criteria. For example, to demonstrate mastery of the Multiplication competency, a learner might need to earn 75% or higher on Assignment 1 or Assignment 2. The association of the competency, the threshold, the assignments, and the logical OR operator together makes up the assessment criteria for the competency. Course Authors and Platform Administrators need a way to set up these associations in Studio so that outcomes can be calculated as learners complete their materials. This is an important prerequisite for displaying competency progress dashboards to learners and staff, and for making Open edX the platform of choice for institutions using the CBE model.

To support these use cases, we need to model these rules (assessment criteria) and their associations to the tag/competency to be demonstrated and to the object or objects (course, subsection, unit, etc.) used to assess competency mastery. We also need to leave flexibility for a variety of rule types, as well as groupings, so that different combinations of objects can form a variety of pathways by which learners demonstrate mastery of a competency.

Additionally, we need to track each learner's progress toward competency demonstration as results arrive for their work on objects associated with the competency via assessment criteria.

Decision
--------

1. Update ``oel_tagging_taxonomy`` to add a new ``taxonomy_type`` column whose value can be ``"Competency"`` or ``"Tag"``.

2. Add a new database table, ``oel_assessment_criteria_group``, with these columns:

   1. ``id``: unique primary key
   2. ``parent_id``: the ``oel_assessment_criteria_group.id`` of the group that is the parent of this one
   3. ``oel_tagging_tag_id``: the ``oel_tagging_tag.id`` of the tag representing the competency that is mastered when the assessment criteria in this group are demonstrated
   4. ``course_id``: the nullable ``course_id`` to which all of the child assessment criteria's associated objects belong
   5. ``name``: string
   6. ``ordering``: the evaluation sequence number for this criteria group; it defines the evaluation order among siblings and enables short-circuit evaluation
   7. ``logic_operator``: ``"AND"``, ``"OR"``, or null; it determines how children are combined at a group node

   Example: A root group uses ``"OR"`` with two child groups.

   - Child group A (``ordering=1``) requires ``"AND"`` across Assignment 1 and Assignment 2.
   - Child group B (``ordering=2``) requires ``"AND"`` across the Final Exam and viewing prerequisites.
   - If group A evaluates to true, group B is not evaluated.

3. Add a new database table, ``oel_assessment_criteria``, with these columns:

   1. ``id``: unique primary key
   2. ``assessment_criteria_group_id``: foreign key to the Assessment Criteria Group id
   3. ``oel_tagging_objecttag_id``: the tag/object association id
   4. ``oel_tagging_tag_id``: the ``oel_tagging_tag.id`` of the tag representing the competency that is mastered when this assessment criterion is demonstrated
   5. ``object_id``: the ``object_id`` found via ``oel_tagging_objecttag_id``, duplicated here to maximize query efficiency; it points to the course, subsection, unit, or other content used to assess mastery of the competency
   6. ``course_id``: the nullable ``course_id`` to which the object associated with the tag belongs
   7. ``rule_type``: ``"View"``, ``"Grade"``, or ``"MasteryLevel"`` (only ``"Grade"`` will be supported for now)
   8. ``rule_payload``: a JSON payload keyed by ``rule_type``, to avoid freeform strings. Examples:

      1. ``Grade``: ``{"op": "gte", "value": 75, "scale": "percent"}``
      2. ``MasteryLevel``: ``{"op": "gte", "level": "Proficient"}``

4. Add constraints and indexes to keep denormalized values aligned and queries fast.

   1. Enforce that ``oel_assessment_criteria.oel_tagging_tag_id`` matches the ``oel_assessment_criteria_group.oel_tagging_tag_id`` for its ``assessment_criteria_group_id``.
   2. Enforce that ``oel_assessment_criteria.object_id`` matches the ``object_id`` referenced by ``oel_tagging_objecttag_id``.
   3. Add indexes for common lookups:

      1. ``oel_assessment_criteria_group(oel_tagging_tag_id, course_id)``
      2. ``oel_assessment_criteria(assessment_criteria_group_id)``
      3. ``oel_assessment_criteria(oel_tagging_objecttag_id, object_id)``
      4. ``student_assessmentcriteriastatus(user_id, assessment_criteria_id)``
      5. ``student_assessmentcriteriagroupstatus(user_id, assessment_criteria_group_id)``
      6. ``student_competencystatus(user_id, oel_tagging_tag_id)``

5. When a completion event (graded, completed, mastered, etc.) occurs for an object, determine and track where the learner stands in earning the competency. To reduce how often calculations must run, tables hold the results at each level.

   1. Add a new database table, ``student_assessmentcriteriastatus``, with these columns:

      1. ``id``: unique primary key
      2. ``assessment_criteria_id``: foreign key pointing to the assessment criteria id
      3. ``user_id``: foreign key pointing to ``user_id`` in the ``auth_user`` table (presumably the learner's id, although it appears that staff can receive grades as well)
      4. ``status``: ``"Demonstrated"``, ``"AttemptedNotDemonstrated"``, or ``"PartiallyAttempted"``
      5. ``timestamp``: when the student's assessment criteria status was set

   2. Add a new database table, ``student_assessmentcriteriagroupstatus``, with these columns:

      1. ``id``: unique primary key
      2. ``assessment_criteria_group_id``: foreign key pointing to the assessment criteria group id
      3. ``user_id``: foreign key pointing to ``user_id`` in the ``auth_user`` table (presumably the learner's id, although it appears that staff can receive grades as well)
      4. ``status``: ``"Demonstrated"``, ``"AttemptedNotDemonstrated"``, or ``"PartiallyAttempted"``
      5. ``timestamp``: when the student's assessment criteria group status was set

   3. Add a new database table, ``student_competencystatus``, with these columns:

      1. ``id``: unique primary key
      2. ``oel_tagging_tag_id``: foreign key pointing to the Tag id
      3. ``user_id``: foreign key pointing to ``user_id`` in the ``auth_user`` table (presumably the learner's id, although it appears that staff can receive grades as well)
      4. ``status``: ``"Demonstrated"`` or ``"PartiallyAttempted"``
      5. ``timestamp``: when the student's competency status was set

.. image:: images/AssessmentCriteriaModel.png
   :alt: Assessment Criteria Model
   :width: 80%
   :align: center
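
The evaluation behavior described in the decision (sibling ``ordering``, ``logic_operator`` combination, and short-circuiting) can be sketched in plain Python. All names here are illustrative; the real implementation would operate on the Django models and learner grade data described above:

```python
# Sketch of evaluating an assessment criteria group tree (hypothetical names).
# Children are evaluated in `ordering` sequence; AND stops at the first False
# and OR stops at the first True, matching the short-circuit behavior above.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    object_id: str      # content used to assess mastery
    rule_type: str      # only "Grade" is supported for now
    rule_payload: dict  # e.g. {"op": "gte", "value": 75, "scale": "percent"}
    ordering: int = 0

    def evaluate(self, grades: dict[str, float]) -> bool:
        assert self.rule_type == "Grade"
        score = grades.get(self.object_id, 0.0)
        if self.rule_payload["op"] == "gte":
            return score >= self.rule_payload["value"]
        raise ValueError(f"unsupported op: {self.rule_payload['op']}")


@dataclass
class CriteriaGroup:
    name: str
    logic_operator: str  # "AND" or "OR"
    ordering: int = 0
    children: list = field(default_factory=list)  # Criterion or CriteriaGroup

    def evaluate(self, grades: dict[str, float]) -> bool:
        for child in sorted(self.children, key=lambda c: c.ordering):
            result = child.evaluate(grades)
            if self.logic_operator == "AND" and not result:
                return False  # short-circuit: one failure sinks an AND group
            if self.logic_operator == "OR" and result:
                return True   # short-circuit: one success satisfies an OR group
        return self.logic_operator == "AND"


# The example from the decision: a root OR over two AND groups.
gte75 = {"op": "gte", "value": 75, "scale": "percent"}
group_a = CriteriaGroup("A", "AND", 1, [
    Criterion("assignment_1", "Grade", gte75, 1),
    Criterion("assignment_2", "Grade", gte75, 2),
])
group_b = CriteriaGroup("B", "AND", 2, [
    Criterion("final_exam", "Grade", gte75, 1),
])
root = CriteriaGroup("Multiplication", "OR", 1, [group_a, group_b])
```

Note that this sketch flattens the schema for brevity: criteria and child groups sit in one ``children`` list rather than being linked through ``parent_id`` and foreign keys, and learner grades are passed in as a plain mapping.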

Rejected Alternatives
---------------------

1. Add a generic ``oel_tagging_objecttag_metadata`` table to assist with a pluggable metadata concept. This table would have foreign keys to each metadata table (currently only ``assessment_criteria_group`` and ``assessment_criteria``) as well as a type field indicating which metadata table is being pointed to.

   - Pros:

     - Centrally organizes metadata associations in one place

   - Cons:

     - Adds overhead to retrieve specific metadata

   .. image:: images/AssessmentCriteriaModelAlternative.png
      :alt: Assessment Criteria Model Alternative
      :width: 80%
      :align: center

2. Split rule storage into per-type tables (for example, ``assessment_criteria_grade_rule`` and ``assessment_criteria_mastery_rule``) instead of a single JSON payload.

   - Pros:

     - Provides stricter schemas and validation per rule type

   - Cons:

     - Increases table count and join complexity as new rule types are added
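
Because the chosen design stores rule parameters in a single ``rule_payload`` JSON column, shape validation must happen in application code rather than in the database schema (the tradeoff behind rejected alternative 2). A minimal sketch of such a validator (hypothetical helper, not part of the ADR):

```python
# Sketch of application-level validation for `rule_payload`, keyed by
# `rule_type` (hypothetical helper). The rejected per-type-table alternative
# would instead push these shape checks into the database schema.
ALLOWED_PAYLOAD_KEYS = {
    "Grade": {"op", "value", "scale"},  # {"op": "gte", "value": 75, "scale": "percent"}
    "MasteryLevel": {"op", "level"},    # {"op": "gte", "level": "Proficient"}
}


def validate_rule_payload(rule_type: str, payload: dict) -> None:
    """Raise ValueError if the payload does not fit the rule type's shape."""
    if rule_type not in ALLOWED_PAYLOAD_KEYS:
        raise ValueError(f"unknown rule_type: {rule_type}")
    unexpected = set(payload) - ALLOWED_PAYLOAD_KEYS[rule_type]
    if unexpected:
        raise ValueError(f"unexpected keys for {rule_type}: {sorted(unexpected)}")


validate_rule_payload("Grade", {"op": "gte", "value": 75, "scale": "percent"})
```

Adding a new rule type under this design means adding one entry to the key table and handling it in evaluation code, with no new database tables or joins.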