Working with global company registers means dealing with instability. Registers have maintenance windows, rate limits, and unexpected downtime. Topograph abstracts this complexity, but our approach to reliability differs depending on which endpoint you use. We make deliberate trade-offs between speed and robustness.
Monitor real-time system status at status.topograph.co.
## The speed vs. robustness trade-off
Same endpoint (`POST /v2/company`), different behavior by mode:
| Mode | Priority | Strategy | Retry behavior |
|---|---|---|---|
| `mode: "onboarding"` | Speed | Fail fast | Strict short timeouts. Minimal retries: if the register is slow, we return quickly rather than wait. |
| Verification (default) | Robustness | Deliver eventually | Longer retries, up to 1 hour for transient register outages. |
The `POST /v2/onboarding` route follows the same fast-path strategy but is deprecated; prefer `mode: "onboarding"`. See Data retrieval modes.
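To make the single-endpoint design concrete, here is a minimal sketch in Python. Only `POST /v2/company` and `mode: "onboarding"` come from these docs; the base URL, auth header, and request field names below are assumptions for illustration:

```python
import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.topograph.co"  # assumed base URL, not confirmed by these docs

def fetch_company(payload: dict, fast: bool = False) -> requests.Response:
    """Call POST /v2/company. The same endpoint serves both modes."""
    if fast:
        # Onboarding mode: the server fails fast with strict short timeouts.
        payload = {**payload, "mode": "onboarding"}
    # Client-side timeout: tight for the fast path, generous for verification,
    # which may keep working while the register struggles.
    return requests.post(
        f"{BASE_URL}/v2/company",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
        timeout=15 if fast else 180,
    )

# Illustrative payload; these field names are assumptions, not the schema.
response = fetch_company({"country": "DE", "registration_number": "HRB 12345"}, fast=True)
print(response.status_code)
```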
## Retry strategies
### Onboarding mode (`mode: "onboarding"`)
Designed for real-time user flows where latency matters most.
- Timeout: Hard 10-second deadline per datapoint. Datapoints that do not finish in time fail with error code `onboarding_timeout`. Any partial data that arrived before the deadline is still returned.
- Retries: Minimal. We do not retry transient register errors that would delay the response significantly.
- Priority: Speed over completeness. Retry without `mode: "onboarding"` if you need the datapoint and can tolerate the slower path (see the sketch below).
### Verification mode (default `POST /v2/company`)
Designed for back-office processes where data completeness is critical.
- Timeout: Long. We accept that some requests may take minutes if the register is struggling.
- Retries: Exponential backoff. For scheduled register maintenance or temporary outages, we may retry for up to 1 hour (illustrated below).
- Priority: Eventual success over immediate failure.
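For intuition, the general exponential-backoff pattern looks like the sketch below. The base delay, growth factor, and jitter are illustrative; this is not Topograph's actual retry schedule:

```python
import random

def backoff_delays(base: float = 2.0, factor: float = 2.0, budget: float = 3600.0):
    """Yield jittered, exponentially growing delays until a 1-hour budget is spent."""
    elapsed, delay = 0.0, base
    while elapsed < budget:
        wait = min(delay, budget - elapsed) * random.uniform(0.5, 1.0)  # jitter
        yield wait
        elapsed += wait
        delay *= factor  # grow the next delay

# Roughly: ~2 s, ~4 s, ~8 s, ... stopping once about an hour has elapsed.
```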
## Error handling
When things go wrong, we provide standardized error codes. However, the meaning of an error can depend on the endpoint context.

### Common error codes
| Code | Error | Description | Action |
|---|---|---|---|
| 400 | Bad Request | Invalid parameters or data points | Fix your request payload |
| 401 | Unauthorized | Invalid or missing API key | Check your credentials |
| 402 | Payment Required | Insufficient credits | Top up your account balance |
| 404 | Not Found | Company not found in the source | Verify the ID or search again |
| 406 | Not Acceptable | Country/feature not supported | Check our coverage map |
| 429 | Too Many Requests | You hit your rate limit | Slow down your requests |
| 500 | Internal Error | Server-side issue | Contact support if persistent |
| 503 | Service Unavailable | Upstream register failure | Onboarding mode: often a final failure. Verification mode: we are already retrying for you |
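A sketch of how a client might dispatch on these codes, reusing `fetch_company` from the earlier sketch. Whether a `Retry-After` header is sent, and in what format, is an assumption:

```python
import time

def call_with_handling(payload: dict, fast: bool = False) -> dict:
    """Dispatch on the status codes from the table above."""
    response = fetch_company(payload, fast=fast)
    if response.status_code == 429:
        # Rate limited: back off before retrying. Assumes Retry-After is in
        # seconds, if the server sends it at all.
        time.sleep(float(response.headers.get("Retry-After", 5)))
        response = fetch_company(payload, fast=fast)
    if response.status_code == 503 and not fast:
        # Verification mode keeps retrying upstream, so treat this as pending
        # rather than a hard failure.
        return {"status": "pending"}
    # 400/401/402/404/406/500 need a payload, credential, balance, or support
    # fix on your side; surface them to the caller.
    response.raise_for_status()
    return response.json()
```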
### Handling “in progress” status
For the verification endpoint, data retrieval is asynchronous. You will initially receive a `200 OK` response with the data status set to `in_progress`.
### Handling register instability
If you receive a `failed` status in your webhook payload with an error like “register unavailable”:
- Onboarding mode: This is usually final. Ask the user to enter details manually or try again later.
- Verification mode: We tried for up to 1 hour and still could not reach the register, which often indicates a prolonged outage (e.g., weekend maintenance). You can retry the request later (see the sketch below).
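A sketch of a webhook receiver that applies these rules, using Flask as a stand-in framework. The payload field names (`status`, `error`, `mode`) are assumptions about the webhook schema, and `schedule_retry` is a hypothetical helper:

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhooks/topograph")
def on_topograph_webhook():
    event = request.get_json(force=True)
    # Assumed payload shape: {"status": ..., "error": ..., "mode": ...}.
    error = str(event.get("error", "")).lower()
    if event.get("status") == "failed" and "register unavailable" in error:
        if event.get("mode") == "onboarding":
            # Usually final: fall back to manual entry or try again later.
            pass
        else:
            # Verification already retried for up to 1 hour; schedule a fresh
            # request for after the outage (e.g., weekend maintenance) ends.
            schedule_retry(event)
    return "", 204

def schedule_retry(event: dict) -> None:
    """Hypothetical helper: enqueue the original request for a later retry."""
    print("would retry later:", event)
```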
## Breaking changes in a register
Registries occasionally introduce breaking changes such as new API versions, schema modifications, authentication updates, or structural changes to their data formats. These changes can temporarily disrupt data retrieval until we adapt our integration.

### Our response strategy
We maintain strong monitoring and agile development processes to detect and respond to registry changes quickly:

- 24-hour recovery target: We aim to resolve breaking changes from registries within 24 business hours of detection
- Proactive monitoring: Our systems continuously monitor register responses and API behavior to detect anomalies early
- Rapid adaptation: When breaking changes occur, our team prioritizes fixes to restore service as quickly as possible
### What this means for you
- Temporary disruptions: If a register has a breaking change, you may see increased error rates or `503 Service Unavailable` responses until we adapt. Note that you never pay for failed requests!
- Automatic recovery: Once we’ve updated our integration, service resumes automatically; no action is required on your end
- Status updates: Check status.topograph.co for real-time updates during incidents
If you experience persistent errors for a specific country, please contact support so we can assist in the most efficient way.