Working with global company registers means dealing with instability. Registers have maintenance windows, rate limits, and unexpected downtime. Topograph abstracts this complexity, but our approach to reliability differs depending on which endpoint you use. We make deliberate trade-offs between speed and robustness.
Monitor real-time system status at status.topograph.co.

The speed vs. robustness trade-off

Same endpoint (POST /v2/company), different behavior by mode:
| Mode | Priority | Strategy | Retry behavior |
|---|---|---|---|
| mode: "onboarding" | Speed | Fail fast | Strict short timeouts. Minimal retries—if the register is slow, we return quickly rather than wait. |
| Verification (default) | Robustness | Deliver eventually | Longer retries—up to 1 hour for transient register outages. |
The legacy POST /v2/onboarding route follows the same fast-path strategy but is deprecated; prefer mode: "onboarding". See Data retrieval modes.
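The mode switch is just a field on the request body. A minimal sketch of building the two payload variants (the companyId and country field names here are illustrative assumptions, not the documented schema):

```python
def build_company_request(company_id: str, country: str, fast: bool = False) -> dict:
    """Build a POST /v2/company payload. fast=True selects the
    speed-optimized onboarding mode; otherwise the default
    verification mode applies. Field names are assumed for illustration."""
    body = {"companyId": company_id, "country": country}
    if fast:
        # Onboarding mode: strict timeouts, minimal retries, fail fast.
        body["mode"] = "onboarding"
    # Verification mode is the default, so no "mode" key is needed.
    return body
```

Because verification is the default, omitting the mode field entirely gives you the robust path.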

Retry strategies

Onboarding mode (mode: "onboarding")

Designed for real-time user flows where latency matters most.
  • Timeout: Hard 10-second deadline per datapoint. Datapoints that do not finish in time fail with error code onboarding_timeout. Any partial data that arrived before the deadline is still returned.
  • Retries: Minimal. We do not retry transient register errors that would delay the response significantly.
  • Priority: Speed over completeness. If you still need the datapoint, retry the request without mode: "onboarding" to fetch it the slow, robust way.

Verification mode (default POST /v2/company)

Designed for back-office processes where data completeness is critical.
  • Timeout: Long. We accept that some requests might take minutes if the register is struggling.
  • Retries: Exponential backoff. For scheduled register maintenance or temporary outages, we may retry for up to 1 hour.
  • Priority: Eventual success over immediate failure.
Because verification requests can be long-running, we strongly recommend using webhooks to receive results asynchronously.

Error handling

When things go wrong, we provide standardized error codes. However, the meaning of an error can depend on the endpoint context.

Common error codes

| Code | Error | Description | Action |
|---|---|---|---|
| 400 | Bad Request | Invalid parameters or data points | Fix your request payload |
| 401 | Unauthorized | Invalid or missing API key | Check your credentials |
| 402 | Payment Required | Insufficient credits | Top up your account balance |
| 404 | Not Found | Company not found in the source | Verify the ID or search again |
| 406 | Not Acceptable | Country/feature not supported | Check our coverage map |
| 429 | Too Many Requests | You hit your rate limit | Slow down your requests |
| 500 | Internal Error | Server-side issue | Contact support if persistent |
| 503 | Service Unavailable | Upstream register failure | Onboarding mode: often a final failure. Verification mode: we are already retrying for you |
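One way a client might fold the table above into its error handling, including the mode-dependent meaning of 503. This is an illustrative sketch, not an official SDK:

```python
def next_action(status_code: int, mode: str = "verification") -> str:
    """Map a Topograph error status code to a suggested client reaction.
    The 503 split follows the mode semantics described in the table."""
    if status_code in (400, 401, 402, 404, 406):
        return "fix_request"   # client-side problem: correct payload/credentials
    if status_code == 429:
        return "back_off"      # rate limited: slow down before retrying
    if status_code == 503 and mode == "verification":
        return "wait"          # Topograph is already retrying the register for you
    if status_code in (500, 503):
        return "retry_later"   # transient failure (onboarding 503 is often final)
    return "contact_support"   # unexpected code
```

The key asymmetry: in verification mode a 503 means retries are already happening server-side, so an immediate client-side retry only adds load.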

Handling “in progress” status

For the verification endpoint, data retrieval is asynchronous: you will initially receive a 200 OK response in which the datapoint's status inside dataStatus is set to in_progress.
{
  "request": {
    "dataStatus": {
      "dataPoints": {
        "company": {
          "status": "in_progress"
        }
      }
    }
  }
}
Do not treat this as an error. It means we have accepted your request and are working on it (including handling any necessary retries). Wait for the webhook.
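A small helper for telling "still working" apart from a finished result, using the dataStatus shape shown above:

```python
def is_in_progress(response: dict) -> bool:
    """True if any datapoint in a /v2/company response is still being
    retrieved (status "in_progress"). Such responses are not errors;
    the final result arrives via webhook."""
    points = response["request"]["dataStatus"]["dataPoints"]
    return any(dp.get("status") == "in_progress" for dp in points.values())

# The example response from above:
sample = {"request": {"dataStatus": {"dataPoints": {"company": {"status": "in_progress"}}}}}
assert is_in_progress(sample)  # accepted and being worked on: wait for the webhook
```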

Handling register instability

If you receive a failed status in your webhook payload with an error like “register unavailable”:
  1. Onboarding mode: This is usually final. Ask the user to enter details manually or try again later.
  2. Verification mode: We tried for up to 1 hour and still could not reach the register—often a prolonged outage (e.g., weekend maintenance). You can retry the request later.
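Since verification mode has already retried for up to an hour before reporting failure, client-side re-submissions should be spaced much wider than that. A hypothetical schedule (the delays are assumptions, not a documented recommendation):

```python
def retry_delay_hours(attempt: int) -> int:
    """Suggested spacing (in hours) between client re-submissions after a
    verification-mode "register unavailable" failure: 4h, 8h, 16h, capped
    at 24h. An assumed schedule; Topograph already retried for up to an
    hour before the webhook reported the failure."""
    return min(4 * 2 ** attempt, 24)
```

Wide spacing rides out prolonged outages such as weekend maintenance without hammering the API.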

Breaking changes in a register

Registries occasionally introduce breaking changes such as new API versions, schema modifications, authentication updates, or structural changes to their data formats. These changes can temporarily disrupt data retrieval until we adapt our integration.

Our response strategy

We maintain strong monitoring and agile development processes to detect and respond to registry changes quickly:
  • 24-hour recovery target: We aim to resolve breaking changes from registries within 24 business hours of detection
  • Proactive monitoring: Our systems continuously monitor register responses and API behavior to detect anomalies early
  • Rapid adaptation: When breaking changes occur, our team prioritizes fixes to restore service as quickly as possible

What this means for you

  • Temporary disruptions: If a register has a breaking change, you may see increased error rates or 503 Service Unavailable responses until we adapt. Note that you never pay for failed requests!
  • Automatic recovery: Once we’ve updated our integration, service resumes automatically; no action is required on your end
  • Status updates: Check status.topograph.co for real-time updates during incidents
If you experience persistent errors for a specific country, please contact support so we can assist in the most efficient way.