The Ed-Fi ODS / API contains endpoints that allow client applications to send XML data files through the API for bulk loading. This is useful for a number of scenarios. Bulk loading is often the easiest way to populate a new instance of the ODS / API. In addition, some implementations only require periodic uploads from clients. Bulk loading is useful for these "batch loading" scenarios.
This article provides overview and technical information to help platform hosts and client developers use the bulk load endpoints.
Note that platform hosts have an alternate way of bulk loading files directly from disk (i.e., not through the API) using the Ed-Fi Console ODS Bulk Loader. See the article How To: Use the Ed-Fi Console ODS Bulk Loader for more information.
A bulk operation can include thousands of records across multiple files to insert and update.
A few key points about the API surface worth understanding are:
- Clients must post a representation of the files to be uploaded. This includes the format, the interchange type, and the file size. We'll look at an example below.
- Clients must send bulk data that conforms to the Ed-Fi Data Standard XML definitions. See the Interchange Schema section of the Ed-Fi Data Standard documentation for details.
- Error Handling. Records are parsed and operations are executed as individual transactions. This means that one failing entity record will not fail the entire batch. The errors from individual transactions are logged. The error log can be inspected per bulk operation.
- API Profiles do not impact bulk data loads. Platform hosts can optionally implement API Profiles to create data policies for the API JSON endpoints. Those policies do not affect data sent through bulk load services.
Before we dive into the details, it's useful to understand the differences between the transactional operations of the API surface and the bulk load services discussed in this article. The following table provides a compare/contrast of the major differences:
| Transactional API Surface | Bulk Load Services |
| --- | --- |
| JSON | Ed-Fi Data Standard XML |
| Synchronous responses | Asynchronous responses |
| Near real-time, as data is changing in client applications | For initial load or batch mode updates |
| Full range of create, read, update, and delete operations | Upsert (i.e., create and update) only |
| Create and retrieve UniqueIds | No ability to create or retrieve UniqueIds |
Platform Setup and Testing
This section outlines the basics of setting up and testing bulk loading through the ODS / API surface.
- Microsoft Message Queue (MSMQ). Bulk load services work against Microsoft message queues, and the console workers share the same internal logic. The internal logic is covered by unit tests that verify the ability to process messages from one queue to the next.
- Smoke Testing. A "smoke test" is typically all that is required for these services. Platform hosts basically perform a bulk upload operation as outlined in this article, and verify that the data hits the ODS (either by inspecting the data tables directly or calling the API surface to search for the information loaded).
- Troubleshooting. When troubleshooting the services, the bulk worker and upload services can be temporarily stopped, and their associated message queues examined for unprocessed messages. Turning on the associated service should eventually clear the service's source queue. If there are messages building up in either of the queues, the problem is typically one of credentials, and an appropriate error will be in the event log. Proper credentialing of the services is covered in the deployment documentation. See, e.g., the sandbox deployment information and production deployment information in the Platform Developers' Guide.
Client Walkthrough Example
This walkthrough demonstrates the sequence of operations clients use to load bulk data via the API. We'll use an XML file with student data as an example.
The high-level sequence of operations from the client is as follows:

1. Create the bulk operation.
2. Upload the XML files in one or more chunks.
3. Commit the upload.
4. Check the operation status.

Detail on each step follows.
Step 1. Create the Operation
POST a representation of the files to be uploaded to the bulk operations endpoint. Include an `uploadFiles` entry for every file you're including: the `format` should always be "text/xml", the `interchangeType` should be the type of interchange, and the `size` is the total size in bytes of the file you're uploading. You can easily get the file size using `new FileInfo(filePath).Length`, or the `Length` property of the file stream if you're opening a file stream to send it up.
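As an illustration, building that file representation can be sketched in Python (the article's SDK samples are .NET; the field names `uploadFiles`, `format`, `interchangeType`, and `size` come from the description above, while the interchange type value shown is only a placeholder):

```python
import os


def build_create_operation_payload(file_paths, interchange_type="InterchangeStudent"):
    """Build the JSON body for creating a bulk operation.

    Field names (uploadFiles, format, interchangeType, size) follow the
    article's description; the default interchange type here is only a
    placeholder for whichever interchange you are actually loading.
    """
    return {
        "uploadFiles": [
            {
                "format": "text/xml",
                "interchangeType": interchange_type,
                # Total size in bytes of the file to be uploaded
                "size": os.path.getsize(path),
            }
            for path in file_paths
        ]
    }
```

The resulting dictionary would then be serialized to JSON and sent as the body of the POST.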
Sample response (note the status code):

From the response, you can obtain the overall operation id (the root `id`) as well as an individual `fileId` for each file to be uploaded; these values are used in the steps that follow.
Step 2. Upload XML Files
For each file to upload, take the returned `fileId` and submit the file as one or more "chunks." Each chunk of the file can be up to 150 MB. The attached example file is small enough to be submitted as a single chunk.
POST the file to the upload endpoint, where `fileId` is the value returned from creating the bulk operation, `offset` is the current offset in the file (starting at 0), and `size` is the actual size of the chunk being uploaded. This POST must be submitted as `multipart/form-data` with the binary data streamed in the body. An easy way to do this correctly is to use (or deconstruct) the code provided in the generated SDK for the UploadsApi, as it handles submitting the appropriate headers and data.
The following is an example `HttpRequest` with headers and embedded XML:
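If it helps to see the wire shape, here is a minimal Python sketch of assembling a `multipart/form-data` body for a single chunk. The generated SDK handles this for you; the field name and filename used below are placeholders, not values mandated by the API:

```python
import uuid


def build_multipart_body(chunk: bytes, field_name: str = "file", filename: str = "chunk.xml"):
    """Assemble a minimal multipart/form-data body for one chunk.

    Returns (content_type_header_value, body_bytes). This only
    illustrates the wire format; the generated SDK's UploadsApi
    produces the headers and body for you.
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n"
        f"\r\n"
    ).encode("utf-8")
    tail = f"\r\n--{boundary}--\r\n".encode("utf-8")
    return f"multipart/form-data; boundary={boundary}", head + chunk + tail
```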
The expected response is a status code of `201`, with no body.
Repeat this process until the entire file has been uploaded, adding the size of each chunk to the `offset` value for the next upload. For example, when submitting two 300-byte chunks, the first `offset` would be 0, the second would be 300, and both would have a `size` of 300.
The following is example code for handling a large file:
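A minimal Python sketch of that chunking logic follows (the original sample is .NET; `post_chunk` here is a hypothetical caller-supplied function standing in for the actual `multipart/form-data` POST, e.g. via the generated SDK's UploadsApi):

```python
# 150 MB per-chunk limit, per the constraint described above
MAX_CHUNK_SIZE = 150 * 1024 * 1024


def iter_chunks(path, chunk_size=MAX_CHUNK_SIZE):
    """Yield (offset, bytes) pairs for a file, reading one chunk at a time."""
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield offset, chunk
            # Next chunk's offset is the previous offset plus this chunk's size
            offset += len(chunk)


def upload_file(post_chunk, path, file_id, chunk_size=MAX_CHUNK_SIZE):
    """Upload every chunk via post_chunk(file_id, offset, data).

    post_chunk is a caller-supplied function that performs the actual
    POST for one chunk; this sketch only manages offsets and sizes.
    """
    for offset, chunk in iter_chunks(path, chunk_size):
        post_chunk(file_id, offset, chunk)
```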
Step 3. Commit the Upload
For each file, after finishing the upload, take the `fileId` and commit the upload. The `fileId` is the same one the chunks were uploaded to. The expected response is a `202 Accepted` with no body.
Step 4. Check Status
At this point, the upload is complete from the client's perspective, and the bulk operation will be processed on the server asynchronously. Once the commit command is received, the operation is pushed to a queue that triggers the actual processing. Status can be checked at any time by performing a GET to the bulk operation's status endpoint, where `bulkOperationId` is the `id` sent back from the original creation of the operation.
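Because processing is asynchronous, clients typically poll this endpoint until the operation settles. A minimal polling sketch, assuming a caller-supplied `get_status` function that performs the GET and returns the operation's `status` string:

```python
import time


def wait_for_completion(get_status, operation_id, poll_seconds=5, timeout_seconds=3600):
    """Poll the bulk operation's status until it reaches a terminal state.

    get_status(operation_id) is a caller-supplied function that performs
    the GET and returns the status string ("Started", "Completed",
    "Error", ...). Returns the terminal status, or raises on timeout.
    """
    status = "Unknown"
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status(operation_id)
        if status in ("Completed", "Error"):
            return status
        # Still queued or in progress; wait before polling again
        time.sleep(poll_seconds)
    raise TimeoutError(f"Bulk operation {operation_id} still {status!r} after {timeout_seconds}s")
```

The poll interval and timeout are arbitrary here; tune them to the size of your loads.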
On the happy path, after committing all the files, the `status` should be `Started`, as in this example:
Once the operation is done processing, the `status` should be `Completed`, as in this example:
If any of the data elements fail to load correctly, the `status` will come back as `Error`, as in this example:
An `Error` status doesn't necessarily mean that every record failed to load. To see which parts failed, perform a GET against `/bulkoperations/{operationId}/exceptions/{fileId}?offset=0&limit=50` to retrieve up to 50 exceptions per file at a time. Adjust the `offset` and `limit` parameters to page through the exceptions until you've received them all.
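That paging loop can be sketched as follows, where `get_page` is a hypothetical caller-supplied function that performs the GET for one page and returns the list of exception records:

```python
def fetch_all_exceptions(get_page, operation_id, file_id, limit=50):
    """Page through a file's exceptions until the last page is reached.

    get_page(operation_id, file_id, offset, limit) is a caller-supplied
    function that performs one GET and returns a list of exception
    records. A page shorter than the limit means there are no more.
    """
    exceptions = []
    offset = 0
    while True:
        page = get_page(operation_id, file_id, offset, limit)
        exceptions.extend(page)
        if len(page) < limit:
            return exceptions
        # Advance the offset by one full page and fetch the next one
        offset += limit
```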
Additional Resources
This section contains a few additional resources related to bulk loading through the API:
- Bulk Operation Endpoint Documentation. The endpoints used in the example above are documented on the Ed-Fi API sandbox instance; see the "Other" API section.
- API Client Boot Camp Documents. The Ed-Fi Alliance hosted a "boot camp" training session for API client developers that included a walkthrough of bulk loading. An instructional overview and training materials are available online.
- Ed-Fi Tracker / JIRA. The Ed-Fi Alliance's issue tracking system is a good resource for fine points and troubleshooting specific implementation issues. See, e.g., the discussion on Tracker ticket ODS-820.
- Deployment Documentation. The Deployment section in the Platform Developers' Guide has additional information that platform hosts and operations teams may find useful.