Ed-Fi Request for Comment 15: ASSESSMENT OUTCOMES API
Product: Ed-Fi Data Standard v2.2
Affects: Ed-Fi ODS / API
Obsoletes: RFC 8 - ASSESSMENT OUTCOMES MANAGEMENT API
Obsoleted By: --
Status: Active Proposal for Release
August 31, 2018
The Assessment Outcomes API describes a REST API surface that enables the exchange of assessment metadata and student assessment results between disparate, geographically separated systems operated by different organizations. The API resource models are derived from and consistent with the underlying Ed-Fi Unifying Data Model v2.2.
The Ed-Fi Assessment API provides a blueprint for a source system (the provider) to manage a core set of assessment data on a target system (the consumer) using RESTful APIs. In this data exchange architecture, the provider implements an API client, which uses HTTP/S requests and RESTful patterns to manage API resources on the consumer system, which implements the API definition itself (see Figure 1).
Figure 1. Overview of API and API client architecture
This is often thought of as a "push" model, as the provider pushes data to the consumer, who has implemented the API surface.
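As a concrete sketch of the "push" pattern, the snippet below builds (but does not send) an HTTP POST that a provider's API client might use to push one record to the consumer's API surface. The base URL, bearer token, resource path, and payload fields are illustrative assumptions, not a verbatim Ed-Fi API schema.

```python
import json
import urllib.request

# Hypothetical consumer endpoint and bearer token; real values would come
# from the consumer's deployment and its OAuth token endpoint.
BASE_URL = "https://consumer.example.edu/api/v2.0"
TOKEN = "example-bearer-token"

def build_push_request(resource, payload):
    """Build an HTTP POST request that pushes one resource to the consumer."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/{resource}",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
    )

req = build_push_request("studentAssessments", {
    "studentReference": {"studentUniqueId": "604822"},        # illustrative IDs
    "assessmentReference": {"assessmentIdentifier": "READ-DIAG-G3"},
    "administrationDate": "2018-08-31",
})
# The client would then send req with urllib.request.urlopen(req) and
# treat a successful creation response as a completed push.
```

In this model the provider owns the request lifecycle end to end: it authenticates, serializes the resource, and interprets the consumer's response.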
While this architecture can be understood as the provider transferring data to a consumer system, the API interactions are often more sophisticated. Field evidence from real-world use within the Ed-Fi community suggests that using such an API places a broader set of responsibilities on the provider and consumer systems, such as regularly auditing the state of data synchronization and addressing gaps in synchronization when they occur.
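One way a provider might discharge the synchronization-auditing responsibility described above is to periodically compare the record identifiers each side holds. This is a minimal sketch; the fetch step and the choice of natural key are assumptions for illustration.

```python
def audit_sync(provider_ids, consumer_ids):
    """Compare the resource identifiers each system holds and report gaps.

    provider_ids / consumer_ids are sets of record identifiers (for example,
    studentAssessment identifiers) fetched from each system beforehand.
    """
    missing_on_consumer = provider_ids - consumer_ids   # never pushed, or push failed
    orphaned_on_consumer = consumer_ids - provider_ids  # removed at source, not propagated
    return {
        "missing_on_consumer": sorted(missing_on_consumer),
        "orphaned_on_consumer": sorted(orphaned_on_consumer),
        "in_sync": not missing_on_consumer and not orphaned_on_consumer,
    }

report = audit_sync({"SA-1", "SA-2", "SA-3"}, {"SA-2", "SA-3", "SA-9"})
# report flags "SA-1" as missing on the consumer and "SA-9" as orphaned there.
```

A real audit would then re-push the missing records and resolve the orphans according to local policy.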
The Ed-Fi Unifying Data Model (UDM)
Purpose of the UDM
The Ed-Fi UDM provides a logical model and data dictionary that define the semantics (entities, properties, and definitions) and the structure (relationships between entities, including cardinality) of all data within all Ed-Fi published standards. The UDM is "unifying" in the sense that it ensures the compatibility of the structure and meaning of data across all Ed-Fi standards.
The UDM is concerned principally with data directly related to student outcomes and results that can be used by teachers and others to make instructional decisions to improve student performance.
Consistent with that goal, the assessment data exchange models described in these standards are therefore focused on capture and delivery of results (see the outlined section of Figure 2).
Figure 2. Focus of the Ed-Fi Assessment API Standards
Use Cases Out-of-Scope for the UDM
Given that focus of the Ed-Fi UDM and data exchange standards, the exchanges enabled in the model described here are not designed to deliver on the following functions and use cases:
- Portability of assessment instruments (e.g., test forms, question item banks, scoring algorithms) between LMS or similar systems
- Managing operations of assessments in real time (e.g., the amount of time the student has to perform an assessment)
- Gathering derivative data to assist with the improvement of assessment instruments or to provide a microscopic view into student interactions (e.g., what elements of an interactive question a student clicked on)
Version of the UDM for this Standard
This standard uses the Ed-Fi Unifying Data Model v2.2. All entities, properties, and relationships are derived from that logical model and must observe those semantics and structure to be considered a faithful implementation of this standard.
Background on the Ed-Fi Assessment Domain
Hierarchical and Recursive Model
Many assessments are multi-tier in the sense that they provide multiple scores or result sets for each assessment. An example would be a single "reading" assessment that tested multiple skill areas, such as "Reading Comprehension," "Accuracy and Fluency," "Phonemic Awareness," and so on. In the Ed-Fi model, the top-level assessment is an Assessment and the skill areas are ObjectiveAssessments. This structure is recursive, so that there can be any number of levels of ObjectiveAssessments.
Once the student takes the assessment, the results are captured in the StudentAssessment and StudentObjectiveAssessment, each of which has references back to its parent entity. Finally, for assessments that report item level results, there are AssessmentItems and StudentAssessmentItems. (See Figure 3 for a partial UML representation of the Ed-Fi Assessment domain model.)
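The recursive shape described above can be sketched as a nested payload. The field names below echo the entity names in the text and are illustrative, not a verbatim Ed-Fi API schema.

```python
# Illustrative Assessment with two tiers of ObjectiveAssessments.
assessment = {
    "assessmentIdentifier": "READ-DIAG-G3",
    "assessmentTitle": "Grade 3 Reading Diagnostic",
    "objectiveAssessments": [
        {
            "identificationCode": "READ-COMP",
            "description": "Reading Comprehension",
            "objectiveAssessments": [  # recursion: objectives nest to any depth
                {
                    "identificationCode": "READ-COMP-LIT",
                    "description": "Literal Comprehension",
                    "objectiveAssessments": [],
                },
            ],
        },
        {
            "identificationCode": "FLUENCY",
            "description": "Accuracy and Fluency",
            "objectiveAssessments": [],
        },
    ],
}

def count_objectives(node):
    """Walk the recursive ObjectiveAssessment tree and count every node."""
    children = node.get("objectiveAssessments", [])
    return len(children) + sum(count_objectives(c) for c in children)
```

Because the nesting is uniform, a consumer can process any depth of skill areas with the same traversal logic.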
Figure 3. Principal entities in Ed-Fi Assessment domain
Learning Standards and Learning Objectives
Critical for use of student assessment results is defining for stakeholders what student knowledge, skills, and other competencies are being assessed. In the Ed-Fi assessment model, LearningStandards and LearningObjective play this role.
- LearningStandard represents governed sets of academic standards shared between organizations. These would include state-published or endorsed academic standards (such as the Common Core State Standards), or sets of academic standards published by other organizations designed to coordinate cross-sector activity (an example might be the Next Generation Science Standards).
- LearningObjective represents academic standards or similar guidelines on student competency development that are generally applicable to, and generally governed by, a single organization. They capture cases in which a school district may want a more elaborate or slightly different set of academic standards than the state's, and it is for this reason that a LearningObjective can map to a LearningStandard.
Both LearningStandards and LearningObjectives are hierarchical as well; this matches the tiered pattern seen frequently in academic standards.
ObjectiveAssessments can map to LearningStandards or LearningObjectives. As noted above, a mapping to a LearningObjective may also signify a mapping to a LearningStandard, if the LearningObjective is so mapped. AssessmentItems can be mapped to LearningStandards.
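The transitive mapping described above (an ObjectiveAssessment that points at a LearningObjective may thereby also point at a LearningStandard) can be sketched as follows. The lookup table and field names are hypothetical stand-ins for resources fetched from the API.

```python
# Hypothetical lookup built from /learningObjectives resources: which
# LearningStandard, if any, each LearningObjective maps to.
objective_to_standard = {
    "LO-DISTRICT-READ-1": "CCSS.ELA-LITERACY.RL.3.1",  # mapped to a shared standard
    "LO-DISTRICT-READ-2": None,                        # purely local, no mapping
}

def resolve_standard(objective_assessment):
    """Return the LearningStandard an ObjectiveAssessment maps to,
    either directly or transitively through its LearningObjective."""
    direct = objective_assessment.get("learningStandardId")
    if direct:
        return direct
    local_objective = objective_assessment.get("learningObjectiveId")
    return objective_to_standard.get(local_objective)
```

A consumer can use this kind of resolution to aggregate results against shared standards even when providers report against local objectives.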
Figure 4. Ed-Fi Assessment domain representation of learning standards
Example Use Cases
The architecture covered by this model of data exchange is intended to serve the following example use cases. Note that these use cases are illustrative, not exhaustive: they outline a few high-value use cases and do not cover all possible scenarios.
- School X has a highly personalized learning process where student mastery is assessed against learning standards on an ongoing basis, sometimes even multiple times during the school day. These mastery levels for each student are recorded in an LMS, assessment, or gradebook system that teachers use to monitor progress to inform lesson planning and individual student "playlist" assignments. In addition to teacher-delivered instruction and informal assessment, the school relies on external curriculum systems for content delivery. Those systems provide "exit tickets" that describe the student's mastery level against a learning standard. The information from the curriculum tools needs to travel to the LMS, assessment, or gradebook system as soon as possible after a student completes a learning unit.
- A school district uses an online diagnostic system to measure English-language learner capabilities as part of a larger system of identification and support for ELL students. Diagnostic sessions in which the ELL screening assessment is administered are scheduled ad hoc as students enroll in the district or are otherwise identified. When a student completes a screening assessment, the resulting data needs to travel to a case-management system and to the district SIS system.
- A provider of interim benchmark assessments (typically given each grading period) needs to transfer results to school districts in machine-readable format. The school districts typically aggregate this data alongside other data for the purposes of assessing progress towards overall yearly goals. The individual student-level data is also included on quarterly report cards and published to parent portals. Assessment results at an item or learning standard level, both in aggregate and at a student level, are used by teachers for planning. The district wants this data to be available to produce the types of reports and visualizations outlined above and uses the Ed-Fi ODS to power the application and reporting tools.
API Resources and Interactions
This API standard is designed to allow applications to read and write assessment data through a secure REST interface.
API implementers and clients are expected to follow all guidelines in the Ed-Fi API Design and Implementation Guidelines. These include requirements relating to errors, authentication, security, and other aspects of API usage and implementation. Any MUSTs from that document are considered required to conform to this standard. If the requirements in this document differ from those in the Guidelines, this document takes precedence.
An OpenAPI definition of the REST interface is provided below. Consumer implementations wishing to conform to this standard are expected to implement the paths and resources described in that OpenAPI specification. In addition, providers must accurately follow the semantics in the Ed-Fi UDM.
- /assessments: This entity represents a tool, instrument, process, or exhibition composed of a systematic sampling of behavior for measuring a student's competence, knowledge, skills, or behavior. An assessment can be used to measure differences in individuals or groups and changes in performance from one occasion to the next.
- /objectiveAssessments: This entity represents subtests that assess specific learning objectives.
- /assessmentItems: This entity represents one of many single measures that make up an assessment.
- /learningObjectives: This entity represents identified learning objectives for specified objective assessments.
- /studentAssessments: This entity represents the analysis or scoring of a student's response on an assessment. The analysis results in a value that represents a student's performance on a set of items on a test.
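A studentAssessments record ties results back to the assessment hierarchy. The payload sketch below shows that shape; the property names echo the entity names above and are illustrative assumptions, not a verbatim Ed-Fi API schema.

```python
# Illustrative studentAssessments payload: overall scores plus per-objective
# results, each referencing its parent ObjectiveAssessment.
student_assessment = {
    "studentReference": {"studentUniqueId": "604822"},          # illustrative IDs
    "studentAssessmentIdentifier": "SA-2018-0831-604822",
    "assessmentReference": {"assessmentIdentifier": "READ-DIAG-G3"},
    "administrationDate": "2018-08-31",
    "scoreResults": [
        {"assessmentReportingMethod": "Scale score", "result": "212"},
    ],
    "studentObjectiveAssessments": [
        {
            "objectiveAssessmentReference": {"identificationCode": "READ-COMP"},
            "scoreResults": [
                {"assessmentReportingMethod": "Raw score", "result": "14"},
            ],
        },
    ],
}
```

The references mirror the parent/child structure of the Assessment domain, so a consumer can join results to their assessment metadata without any out-of-band keys.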
Note: Additional provider responsibilities include reconciling records following errors or other transport problems, implementing data quality checks, and logging and surfacing errors to users.
Note: For example, it can be helpful for teachers looking at the results of a test to be able to see the test questions, or to understand whether a student was provided one or more accommodations for the test itself.