Best Practices: Large Dataset Loading

Optimized patterns for API partners loading large volumes of financial data — for data warehouses, Power BI reports, and similar analytics integrations.

 

This guide is intended for API partners who regularly load large datasets from Asperion — for example, to power data warehouse pipelines or Power BI reports built on financial mutations. Following these patterns will minimize API load, reduce sync times, and keep your data consistent.

 

Choosing Your Approach

A: Aggregated Balances

Use this when you only need period-level balances per general ledger account — no individual transaction details required.

GET /v1/ledgergrouped


→ Use for: Power BI balance reports, period summaries

The /v1/ledgergrouped endpoint returns balances grouped by period and general ledger account. This is the fastest way to obtain financial summaries without processing individual mutations. Refer to the Swagger UI for full parameter documentation.
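As a minimal sketch, a request to this endpoint can be assembled like the curl examples further down in this guide. Note that the `Bookyear` parameter and the default of `metaOnly=false` are assumptions here; consult the Swagger UI for the authoritative parameter list.

```python
from urllib.parse import urlencode

# Base URL mirrors the sandbox host used elsewhere in this guide.
BASE_URL = "https://api-sandbox.asperion.nl/v1/ledgergrouped"

def ledgergrouped_url(bookyear: int, metaOnly: bool = False) -> str:
    """Build the request URL for period-level balances per GL account.

    The Bookyear parameter is a hypothetical example; check the Swagger UI
    for the parameters your subscription actually supports.
    """
    params = {"Bookyear": bookyear, "metaOnly": str(metaOnly).lower()}
    return f"{BASE_URL}?{urlencode(params)}"

print(ledgergrouped_url(2023))
# https://api-sandbox.asperion.nl/v1/ledgergrouped?Bookyear=2023&metaOnly=false
```

Send the resulting URL with the same `X-Tenant-Id` and `Authorization` headers shown in the curl examples below.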

 

B: Incremental Transaction Sync

Use this when you need full journal entry detail. After an initial full load, only changed entries are retrieved on each subsequent sync.

GET /v1/journalentrylog


→ Use for: Data warehouses, analytics pipelines

 

Step 1: Retrieve Changed Journal Entries

GET /v1/journalentrylog

Use the JournalEntryLog endpoint to fetch all journal entries that have changed since your last synchronization. Filter by ChangedDate_from to retrieve only records modified after a given timestamp. Store this timestamp after each successful sync to use as the starting point for the next run.
 
# Fetch journal entries changed since a specific date/time
curl -X 'GET' \
  'https://api-sandbox.asperion.nl/v1/journalentrylog?fields=JournalEntryId&metaOnly=false&ChangedDate_from=2023-02-01%2015%3A00%3A00' \
  -H 'accept: application/json' \
  -H 'X-Tenant-Id: 100003' \
  -H 'Authorization: Bearer <token>'

 

Parameter         Type      Description
ChangedDate_from  datetime  ISO 8601 timestamp. Returns entries changed on or after this date/time.
fields            string    Comma-separated list of fields to return. Use JournalEntryId to minimise response size.
metaOnly          boolean   Set to false to include result data alongside pagination metadata.

Store your sync timestamp
After each successful run, persist the timestamp of the sync so you can pass it as ChangedDate_from on your next call. Use UTC timestamps for consistency.
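The sync-state handling above can be sketched as follows. The state file name is a hypothetical placeholder; use whatever storage (file, database row, key-value store) fits your pipeline.

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for the persisted sync state.
STATE_FILE = Path("last_sync.txt")

def load_changed_date_from(default: str = "2000-01-01 00:00:00") -> str:
    """Return the timestamp to pass as ChangedDate_from on the next call.

    On the very first run there is no stored state, so fall back to a
    date far enough in the past to trigger a full load.
    """
    if STATE_FILE.exists():
        return STATE_FILE.read_text().strip()
    return default

def save_sync_timestamp() -> str:
    """Persist the current UTC time after a successful sync run."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    STATE_FILE.write_text(now)
    return now
```

In practice it is safer to capture the timestamp at the *start* of a run and persist it only after the run succeeds, so records that change while the sync is in progress are picked up on the next run instead of being missed.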

Step 2: Update Your Local Dataset

For every JournalEntryId returned in the log, apply the following logic to keep your local data in sync. The Action field tells you whether an entry was added/updated or deleted.

Action  Meaning          What to do
1       Added / Updated  Delete the local entry (if present), then re-fetch the full details from /v1/ledgerlines.
2       Deleted          Delete the corresponding entry from your local dataset. No re-fetch needed.


Always delete before re-inserting
Even for Action = 1, remove the existing local record first before inserting the freshly fetched data. This prevents duplicate or stale records in your dataset.
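The delete-before-insert rule can be captured in a single dispatch function. The three handlers (`delete_local`, `fetch_lines`, `insert_local`) are hypothetical stand-ins for your own data-access layer:

```python
# Action values as returned by /v1/journalentrylog.
ADDED_OR_UPDATED = 1
DELETED = 2

def apply_log_entry(entry: dict, delete_local, fetch_lines, insert_local) -> str:
    """Apply one JournalEntryLog record to the local dataset.

    delete_local(id)  -- remove the entry from local storage (no-op if absent)
    fetch_lines(id)   -- fetch full ledger lines from /v1/ledgerlines
    insert_local(row) -- insert the freshly fetched data locally
    """
    journal_entry_id = entry["JournalEntryId"]
    # Always delete first, so re-inserted entries never duplicate stale rows.
    delete_local(journal_entry_id)
    if entry["Action"] == ADDED_OR_UPDATED:
        insert_local(fetch_lines(journal_entry_id))
        return "reloaded"
    return "deleted"
```

Because the delete happens unconditionally, the same code path handles both actions and can safely be re-run after a partial failure.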

Step 3: Fetch Updated Journal Entry Details

GET /v1/ledgerlines

For each JournalEntryId with Action = 1, fetch the full ledger lines. Filter by JournalEntryId and Bookyear. Use the largest practical pagesize to reduce the number of HTTP round-trips.
 
# Fetch ledger lines for a specific journal entry
curl -X 'GET' \
  'https://api-sandbox.asperion.nl/v1/ledgerlines?metaOnly=false&JournalEntryId=1&Bookyear=2023&pagesize=10000' \
  -H 'accept: application/json' \
  -H 'X-Tenant-Id: 100003' \
  -H 'Authorization: Bearer <token>'

 

Parameter       Default  Maximum  Notes
pagesize        10,000   100,000  Set to maximum for large datasets to minimise HTTP calls.
JournalEntryId                    Filter to a single journal entry. Required for per-entry fetches.
Bookyear                          The fiscal year. Required in combination with JournalEntryId.

Pagination
Pagination metadata is returned in the meta section of each response. Check meta.totalCount and the current page to determine if additional pages need to be requested.
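Assuming `meta.totalCount` reports the total number of matching records, the number of requests needed follows directly from the page size:

```python
import math

def total_pages(total_count: int, pagesize: int) -> int:
    """Number of pages needed to fetch total_count records."""
    return math.ceil(total_count / pagesize) if total_count else 0

# e.g. meta.totalCount = 250,000 with the maximum pagesize of 100,000
# requires 3 requests.
print(total_pages(250_000, 100_000))  # 3
```

Loop from page 1 up to this count, re-checking `meta.totalCount` on each response in case the dataset changes mid-sync.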

Step 4: Handle Period 0 — Year-End Closing Entries

Year-end closing entries are booked in Period 0. These entries are not included in the JournalEntryLog, so they will not be picked up by the standard incremental sync flow described above.

⚠️ Period 0 is not tracked in JournalEntryLog
Year-end closing entries in Period 0 bypass the change log mechanism. You must handle these separately to avoid stale or missing data in your dataset.

Recommended approach for Period 0
  1. On a scheduled basis (e.g. daily or weekly around year-end), delete all Period 0 entries from your local dataset for each relevant book year.
  2. Re-fetch all Period 0 ledger lines using /v1/ledgerlines filtered by Period=0 and the applicable Bookyear.
  3. Increase the refresh frequency during and shortly after the year-end period, when closing entries are most likely to change.
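The delete-and-refetch steps above can be sketched as follows. The `Period` filter name matches the recommendation above, and the two helpers (`delete_local`, `fetch`) are hypothetical stand-ins for your own storage and HTTP layers:

```python
from urllib.parse import urlencode

def period0_url(bookyear: int, pagesize: int = 100_000) -> str:
    """Request URL for all Period 0 ledger lines of one book year."""
    params = {"Period": 0, "Bookyear": bookyear, "pagesize": pagesize,
              "metaOnly": "false"}
    return "https://api-sandbox.asperion.nl/v1/ledgerlines?" + urlencode(params)

def refresh_period0(bookyears, delete_local, fetch):
    """Replace all local Period 0 data for each relevant book year."""
    for year in bookyears:
        delete_local(year)        # step 1: drop all local Period 0 rows
        fetch(period0_url(year))  # step 2: re-fetch and re-insert them
```

Run this on its own schedule (step 3 above), independent of the incremental JournalEntryLog sync.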

Step 5: Performance Considerations

Follow these guidelines to build a robust, future-proof integration that performs well at scale.

Use Large Page Sizes: Request the largest practical pagesize to reduce the total number of HTTP requests for large datasets.

Batch Your Requests: Group fetches by book year and process them in batches. Avoid making one HTTP request per journal entry — fetch multiple entries per call where possible.

Design for Rate Limits: Design your integration to handle rate limiting gracefully. Implement exponential back-off for retries.

Persist Your Sync State: Always save the ChangedDate_from timestamp after a successful sync. Never re-fetch the full dataset unless a full reload is explicitly needed.
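The back-off guideline can be sketched as a small retry wrapper. How you detect a rate-limit response (e.g. HTTP 429) depends on your HTTP client, so this sketch simply retries on any exception raised by `call`:

```python
import random
import time

def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `call` with exponential back-off and proportional jitter.

    `call` is a zero-argument function that raises an exception when the
    request fails (for example on a rate-limit response).
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Delay doubles each attempt; jitter spreads out retry bursts.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

With the default `base_delay` of one second, successive retries wait roughly 1-2s, 2-3s, 4-5s, and so on.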

 

Disclaimer:
The functions available to you are determined by the type of subscription taken out. Some of the functions described here may therefore not be available to you and/or may work slightly differently than described here.

Although the utmost care has been taken in compiling the information in this document, and every effort has been made to keep it as complete and accurate as possible, Asperion cannot guarantee the correctness, completeness, or specific applicability of the information published and/or requested in this document. Asperion disclaims all liability for any direct, indirect, or consequential damage, loss of profit, or business interruption that may result from the use of this document. Any use of, or reliance on, information from this document is entirely at the user's own risk. Texts may be changed without prior notice.

No part of this publication may be reproduced, stored in an automated retrieval system, or made public in any form or by any means, whether by print, photocopy, microfilm, or any other method, without the prior written permission of Asperion.

© Copyright - Asperion® and the Asperion logo are both trade names and registered trademarks of Asperion Software B.V.