This page describes how integrations retrieve full Export Item data after determining which items require processing. Export Item data represents the data layer of the export workflow. It contains the complete accounting payload required to create entries in the target Accounting System.

Implementation

See the corresponding how-to article for API usage and step-by-step instructions.

Purpose

  • Retrieve full accounting data for Export Items
  • Provide all fields required for processing and mapping
  • Ensure consistent and complete payloads for downstream workflows

Control Layer vs Data Layer

Export processing is split into two distinct responsibilities:
  • Control Layer: identify items and track status
  • Data Layer (this step): retrieve full accounting payload
This step operates only on data retrieval.

Fetching Export Item Data

Use the Export Items endpoint with the jobId of the claimed Export Job.
GET /v3/export-items?job_id={jobId}

Data Returned

Each Export Item includes all information required for processing, including:

Core Transaction Data

  • transaction type and subtype
  • transaction date
  • notes and metadata

Amounts

  • supplier currency
  • wallet currency
  • net and gross values

Accounting Entry Lines

  • GL account information
  • tax codes and rates
  • line-level amounts

Supplier / Vendor Data

  • supplier details (name, country, category)
  • vendor details (IDs, external references)

Organisational Context

  • user
  • team
  • cost allocation metadata

Attachments

  • file URLs
  • file types and sizes

Bookkeeping Metadata

  • bookkeeping method (e.g. journal, payable)
  • additional accounting attributes
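To make the categories above concrete, a hypothetical Export Item could be shaped like the following. This is an illustrative sketch only; apart from `accountingEntryLines[].account` and `bookkeeping`, which appear elsewhere on this page, every field name here is an assumption, not the actual API schema:

```python
# Illustrative Export Item shape; NOT the real API schema.
item = {
    "transactionType": "expense",           # core transaction data (assumed name)
    "transactionDate": "2024-05-01",
    "amounts": {"net": 100.0, "gross": 125.0, "supplierCurrency": "EUR"},
    "accountingEntryLines": [
        {"account": "6000", "taxCode": "VAT25", "amount": 125.0},
    ],
    "supplier": {"name": "Acme GmbH", "country": "DE"},
    "attachments": [{"url": "https://example.com/receipt.pdf", "type": "pdf"}],
    "bookkeeping": None,                    # null unless Vendor Tagging is enabled
}
```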

Processing Model

Export Item data is returned as a collection. Each item must be processed individually.
items = fetchExportItems(jobId)

for item in items:
    process(item)
The integration is responsible for applying accounting logic to each item.

Pagination

Export Items may be returned across multiple pages. Integrations must retrieve all pages before beginning processing.

Example Pagination Response

{
  "pagination": {
    "hasNextPage": true,
    "endCursor": "<cursor>"
  }
}

Example Pseudocode

items = []
cursor = null

do:
    response = fetchItems(cursor)
    items.extend(response.data)
    cursor = response.pagination.endCursor
while response.pagination.hasNextPage
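In runnable Python, the pagination loop might look like this. `fetch_items` is a stand-in for the real API call; the `data` and `pagination` field names follow the example response shown on this page:

```python
def fetch_all_items(fetch_items):
    """Collect Export Items across all pages before processing.

    `fetch_items(cursor)` is a stand-in for the real API call; it returns
    a dict with `data` and `pagination` keys as in the example response.
    """
    items = []
    cursor = None
    while True:
        response = fetch_items(cursor)
        items.extend(response["data"])
        pagination = response["pagination"]
        cursor = pagination["endCursor"]
        if not pagination["hasNextPage"]:
            break
    return items

# Stubbed two-page example standing in for the API:
pages = {
    None: {"data": [1, 2], "pagination": {"hasNextPage": True, "endCursor": "c1"}},
    "c1": {"data": [3], "pagination": {"hasNextPage": False, "endCursor": None}},
}
all_items = fetch_all_items(lambda cursor: pages[cursor])
```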

Before Processing: Validate Each Item

Before beginning downstream processing, run two quick checks on each item and immediately mark any that fail. This avoids attempting to record items in your AS that are guaranteed to fail. For each item check:
  1. Required configurations are present — the journal is configured and required accounts are mapped for this item
  2. Expense GL Account (Category) is present: accountingEntryLines[].account is populated.
If either check fails, mark the item as failed with failureReasonType: "missing_configuration" and skip it. Items that pass both checks are ready for processing. See How to Update Export Items for how to report failed items back to Pleo.
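A sketch of these two checks, under stated assumptions: `journal_configured` stands in for your integration's own lookup of journal and account-mapping configuration, and the helper name is hypothetical:

```python
MISSING_CONFIGURATION = "missing_configuration"

def validate_item(item, journal_configured: bool):
    """Return (ok, failure_reason) for the two pre-processing checks.

    `journal_configured` is a stand-in for check 1 (journal configured
    and required accounts mapped for this item).
    """
    if not journal_configured:
        return False, MISSING_CONFIGURATION
    lines = item.get("accountingEntryLines", [])
    # Check 2: every entry line must carry an expense GL account (category).
    if not lines or any(not line.get("account") for line in lines):
        return False, MISSING_CONFIGURATION
    return True, None
```

Items failing either check would be reported back with `failureReasonType: "missing_configuration"` rather than sent to the AS.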

Consistency Guarantees

Export Item data is tied to a specific Export Job.
  • Data reflects the state at the time the job was created
  • Items within a job should be treated as a consistent batch
  • Re-fetching data during recovery should return the same logical dataset

Relationship to Control Layer

This step must follow the control layer:
  1. Retrieve Export Job Items (control layer)
  2. Identify items to process (e.g. pending)
  3. Fetch full Export Item data (this step)
The integration may choose to:
  • fetch all data upfront, or
  • fetch and process incrementally

Resumability & Recovery

If the integration restarts:
  • Re-fetch Export Item data using the same jobId
  • Use control layer state to determine which items still require processing
This ensures:
  • idempotent processing
  • safe retries
  • consistent outputs
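The recovery flow above can be sketched as a filter over re-fetched data, using control-layer status to skip items already handled. The `id` and status field names here are illustrative assumptions:

```python
def items_to_process(control_items, data_items):
    """After a restart, re-fetch Export Item data for the same jobId and
    keep only items the control layer still marks as pending.

    `control_items` maps item id -> status (e.g. "pending", "done");
    field names are assumptions for illustration.
    """
    pending_ids = {item_id for item_id, status in control_items.items()
                   if status == "pending"}
    return [item for item in data_items if item["id"] in pending_ids]

remaining = items_to_process(
    {"a": "done", "b": "pending"},
    [{"id": "a"}, {"id": "b"}],
)
```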

Idempotency Considerations

To ensure safe retries:
  • Use accountingEntryId as the unique identifier
  • Avoid duplicate entry creation in the Accounting System
  • Coordinate with control layer status tracking
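One way to apply these points, as a minimal sketch: track which `accountingEntryId` values have already been written to the AS and skip repeats on retry. `create_entry` is a stand-in for your AS/ERP write, not a real API:

```python
def record_once(item, created_ids: set, create_entry):
    """Create an AS entry only if this accountingEntryId is new.

    `created_ids` holds accountingEntryIds already written; `create_entry`
    is a stand-in for the actual Accounting System write.
    """
    entry_id = item["accountingEntryId"]
    if entry_id in created_ids:
        return False  # safe retry: entry already exists, do nothing
    create_entry(item)
    created_ids.add(entry_id)
    return True

created, calls = set(), []
first = record_once({"accountingEntryId": "e1"}, created, calls.append)
second = record_once({"accountingEntryId": "e1"}, created, calls.append)
```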

Expected Outcome

After completing this step:
  • Full accounting payloads for all Export Items have been retrieved
  • Pagination has been resolved
  • Data is ready for processing in the Accounting System
  • The workflow remains resumable and consistent

Upstream Dependencies

  • Export Job Items retrieved (control layer)
  • Items to process identified (e.g. pending)
  • Export Job is in_progress

Downstream Dependencies

  • AS/ERP processing workflow
  • Export Item status updates
  • Export Job completion

FAQs

Why am I getting a MISSING_CONTRA_ACCOUNTS error?
Export API v3 requires a contra account to be configured for each entity’s default currency. If any entity’s default currency does not have a contra account mapped, calls to /v3/export-items will fail with this error. See How to Resolve MISSING_CONTRA_ACCOUNTS for steps to fix this.
Why is the bookkeeping field null?
The bookkeeping field is only populated when the Vendor Tagging feature is enabled and a bookkeeping method was assigned to the expense before it was queued. Without Vendor Tagging enabled, bookkeeping will always be null. This is expected behaviour, not an error. When bookkeeping is null, use the supplier field on the export item for bookkeeping purposes instead. See How to Enable Vendor-Based Bookkeeping for steps to enable it.