# Best Practices: Large Dataset Loading

This guide is for API partners who regularly load large volumes of financial data from Asperion, for example to power data warehouse pipelines or Power BI reports built on financial mutations. Following these patterns will minimize API load, reduce sync times, and keep your data consistent.
## Choosing Your Approach
### A: Aggregated Balances
Use this when you only need period-level balances per general ledger account — no individual transaction details required.
- Simpler implementation
- Lower data volume per request
- Ideal for reporting dashboards and P&L summaries
- No incremental sync logic needed
→ Use for: Power BI balance reports, period summaries
The /v1/ledgergrouped endpoint returns balances grouped by period and general ledger account. This is the fastest way to obtain financial summaries without processing individual mutations. Refer to the Swagger UI for full parameter documentation.
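As a minimal sketch, a request URL for /v1/ledgergrouped can be assembled like this. Note that the query parameter names used here (Bookyear, metaOnly) are illustrative assumptions based on the other endpoints in this guide; consult the Swagger UI for the parameters /v1/ledgergrouped actually accepts.

```python
from urllib.parse import urlencode, quote

BASE_URL = "https://api-sandbox.asperion.nl/v1/ledgergrouped"

def build_ledgergrouped_url(bookyear: int, meta_only: bool = False) -> str:
    # NOTE: 'Bookyear' is an assumed parameter name for this endpoint;
    # check the Swagger UI for the real parameter list.
    params = {"metaOnly": str(meta_only).lower(), "Bookyear": bookyear}
    # quote_via=quote percent-encodes reserved characters (e.g. %20, %3A)
    # the same way the curl examples in this guide do.
    return f"{BASE_URL}?{urlencode(params, quote_via=quote)}"

url = build_ledgergrouped_url(2023)
print(url)
```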
### B: Incremental Transaction Sync
Use this when you need full journal entry detail. After an initial full load, only changed entries are retrieved on each subsequent sync.
- Efficient delta syncs after initial load
- Full transaction-level detail available
- Supports deletion tracking (Action field)
- Best for data warehouses and audit trails
→ Use for: Data warehouses, analytics pipelines
## Step 1: Retrieve Changed Journal Entries
```shell
# Fetch journal entries changed since a specific date/time
curl -X 'GET' \
  'https://api-sandbox.asperion.nl/v1/journalentrylog?fields=JournalEntryId&metaOnly=false&ChangedDate_from=2023-02-01%2015%3A00%3A00' \
  -H 'accept: application/json' \
  -H 'X-Tenant-Id: 100003' \
  -H 'Authorization: Bearer <token>'
```
| Parameter | Type | Description |
|---|---|---|
| ChangedDate_from | datetime | ISO 8601 timestamp. Returns entries changed on or after this date/time. |
| fields | string | Comma-separated list of fields to return. Use JournalEntryId to minimise response size. |
| metaOnly | boolean | Set to false to include result data alongside pagination metadata. |
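The request above can also be built programmatically. This sketch shows how to produce the same query string, including the percent-encoded timestamp (`%20` for the space, `%3A` for the colons) that the curl example uses; the base URL and parameter names are taken directly from the table above.

```python
from datetime import datetime
from urllib.parse import urlencode, quote

BASE_URL = "https://api-sandbox.asperion.nl/v1/journalentrylog"

def build_changelog_url(changed_from: datetime) -> str:
    params = {
        "fields": "JournalEntryId",   # request only the id to minimise response size
        "metaOnly": "false",          # include result data alongside pagination metadata
        # Format as 'YYYY-MM-DD HH:MM:SS'; quote_via=quote percent-encodes
        # the space as %20 and the colons as %3A, matching the curl example.
        "ChangedDate_from": changed_from.strftime("%Y-%m-%d %H:%M:%S"),
    }
    return f"{BASE_URL}?{urlencode(params, quote_via=quote)}"

url = build_changelog_url(datetime(2023, 2, 1, 15, 0, 0))
print(url)
```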
**Tip: Store your sync timestamp.** Record the date/time of each successful sync and use it as the ChangedDate_from value on the next run, so you only retrieve entries changed since then.
## Step 2: Update Your Local Dataset
| Action Value | Meaning | What to do |
|---|---|---|
| 1 | Added / Updated | Delete the local entry (if present), then re-fetch the full details from /v1/ledgerlines. |
| 2 | Deleted | Delete the corresponding entry from your local dataset. No re-fetch needed. |
**Tip: Always delete before re-inserting.** For entries with Action = 1, remove the existing local entry before loading the re-fetched lines, so your dataset never contains duplicated or stale ledger lines.
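The update rules above can be sketched as a small routine. The record shape (JournalEntryId, Action) follows Step 1; representing the local dataset as a dict keyed by journal entry id is an illustrative choice, not a requirement.

```python
def apply_changelog(local, log_entries):
    """Apply journalentrylog records to a local dataset.

    local       -- dict mapping JournalEntryId -> list of ledger lines
    log_entries -- iterable of dicts with 'JournalEntryId' and 'Action'
    Returns the JournalEntryIds whose details must be re-fetched.
    """
    to_refetch = []
    for entry in log_entries:
        jid = entry["JournalEntryId"]
        # Both actions start by deleting the local copy (if present).
        local.pop(jid, None)
        if entry["Action"] == 1:       # Added / Updated: re-fetch details
            to_refetch.append(jid)
        # Action == 2 (Deleted): deletion above is all that is needed.
    return to_refetch

local = {10: ["line-a"], 11: ["line-b"], 12: ["line-c"]}
log = [
    {"JournalEntryId": 10, "Action": 1},   # updated: delete, then re-fetch
    {"JournalEntryId": 11, "Action": 2},   # deleted: delete only
]
refetch = apply_changelog(local, log)
print(refetch)         # [10]
print(sorted(local))   # [12]
```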
## Step 3: Fetch Updated Journal Entry Details
For each JournalEntryId with Action = 1, fetch the full ledger lines. Filter by JournalEntryId and Bookyear. Use the largest practical pagesize to reduce the number of HTTP round-trips.
```shell
# Fetch ledger lines for a specific journal entry
curl -X 'GET' \
  'https://api-sandbox.asperion.nl/v1/ledgerlines?metaOnly=false&JournalEntryId=1&Bookyear=2023&pagesize=10000' \
  -H 'accept: application/json' \
  -H 'X-Tenant-Id: 100003' \
  -H 'Authorization: Bearer <token>'
```
| Parameter | Default | Maximum | Notes |
|---|---|---|---|
| pagesize | 10,000 | 100,000 | Set to maximum for large datasets to minimise HTTP calls. |
| JournalEntryId | — | — | Filter to a single journal entry. Required for per-entry fetches. |
| Bookyear | — | — | The fiscal year. Required in combination with JournalEntryId. |
**Pagination:** Pagination metadata is returned in the meta section of each response. Check meta.totalCount and the current page to determine whether additional pages need to be requested.
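A pagination loop based on the description above can be sketched as follows. The meta field name (totalCount) comes from this guide; the fetch_page callable is a stand-in for the actual HTTP request, and the response shape is an assumption.

```python
import math

def fetch_all(fetch_page, pagesize=10_000):
    """Collect all pages of a paginated endpoint.

    fetch_page(page, pagesize) stands in for the real HTTP call and is
    assumed to return {"meta": {"totalCount": N}, "data": [...]}.
    """
    rows, page = [], 1
    while True:
        resp = fetch_page(page, pagesize)
        rows.extend(resp["data"])
        total_pages = math.ceil(resp["meta"]["totalCount"] / pagesize)
        if page >= total_pages:
            break
        page += 1
    return rows

# Fake fetcher simulating 25 rows with pagesize 10 (i.e. 3 pages).
dataset = list(range(25))
def fake_fetch(page, pagesize):
    chunk = dataset[(page - 1) * pagesize : page * pagesize]
    return {"meta": {"totalCount": len(dataset)}, "data": chunk}

rows = fetch_all(fake_fetch, pagesize=10)
print(len(rows))   # 25
```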
## Step 4: Handle Period 0 (Year-End Closing Entries)
Year-end closing entries are booked in Period 0. These entries are not included in the JournalEntryLog, so they will not be picked up by the standard incremental sync flow described above.

1. On a scheduled basis (e.g. daily or weekly around year-end), delete all Period 0 entries from your local dataset for each relevant book year.
2. Re-fetch all Period 0 ledger lines using /v1/ledgerlines filtered by Period=0 and the applicable Bookyear.
3. Increase the refresh frequency during and shortly after the year-end period, when closing entries are most likely to change.
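The delete-then-replace refresh in steps 1 and 2 can be sketched like this. The field names (Period, Bookyear) follow the /v1/ledgerlines filters named above; the list-of-dicts representation of the local dataset is an illustrative assumption.

```python
def refresh_period_zero(local_lines, fetched_period0, bookyear):
    """Replace all Period 0 lines for one book year with a fresh fetch.

    local_lines     -- local ledger lines, dicts with 'Period'/'Bookyear' keys
    fetched_period0 -- freshly fetched Period 0 lines for this book year
                       (stand-in for a /v1/ledgerlines?Period=0 response)
    """
    # Step 1: delete all existing Period 0 entries for this book year.
    kept = [line for line in local_lines
            if not (line["Period"] == 0 and line["Bookyear"] == bookyear)]
    # Step 2: insert the re-fetched Period 0 lines.
    return kept + list(fetched_period0)

local = [
    {"Period": 0, "Bookyear": 2023, "Amount": 100},   # stale closing entry
    {"Period": 3, "Bookyear": 2023, "Amount": 50},    # regular entry, kept
]
fresh = [{"Period": 0, "Bookyear": 2023, "Amount": 120}]
updated = refresh_period_zero(local, fresh, 2023)
print(len(updated))   # 2
```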
## Step 5: Performance Considerations
Follow these guidelines to build a robust, future-proof integration that performs well at scale.

- **Use large page sizes:** Use a larger pagesize to reduce the total number of HTTP requests required for large datasets.
- **Batch your requests:** Group fetches by book year and process them in batches. Avoid making one HTTP request per journal entry; fetch multiple entries per call where possible.
- **Design for rate limits:** Design your integration to handle rate limiting gracefully. Implement exponential back-off for retries.
- **Persist your sync state:** Always save the ChangedDate_from timestamp after a successful sync. Never re-fetch the full dataset unless a full reload is explicitly needed.
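The back-off and sync-state guidelines can be sketched together as follows. The state file name, the JSON layout, and the use of RuntimeError as a stand-in for a rate-limited (429) response are all illustrative assumptions.

```python
import json
import pathlib
import tempfile
import time

def retry_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential back-off (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:                 # stand-in for a 429 response
            if attempt == max_attempts - 1:
                raise                        # give up after the last attempt
            sleep(base_delay * 2 ** attempt)

def save_sync_state(path, timestamp):
    """Persist the ChangedDate_from value for the next sync run."""
    pathlib.Path(path).write_text(json.dumps({"ChangedDate_from": timestamp}))

def load_sync_state(path):
    return json.loads(pathlib.Path(path).read_text())["ChangedDate_from"]

# Simulate two rate-limited responses followed by success.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda s: None)
print(result)   # ok

state_file = pathlib.Path(tempfile.gettempdir()) / "asperion_sync_state.json"
save_sync_state(state_file, "2023-02-01 15:00:00")
print(load_sync_state(state_file))   # 2023-02-01 15:00:00
```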