The Ledger Enterprise API lets you monitor your workspace, scale your crypto operations and develop integrations with external applications.
Use our reporting endpoints to query and export transactions (transfers, DeFi & NFT, staking), accounts (balances, addresses, governance, etc.), users, and all workspace objects (whitelists, groups, entities).
Set notifications to catch workspace events (incoming or outgoing transactions, accounts, progress in governance schemes, changes to workspace objects) and automate your workflows.
You can generate reporting API keys to access our reporting and notifications endpoints.
Use our transaction endpoints to perform programmatic crypto transfers, automate your staking operations or interact with DeFi & NFT contracts at scale.
You can create and register API operators for that purpose, and assign them transaction creation or approval roles in accounts’ governance rules (transfer, staking, or smart contract interactions).
If you’ve generated reporting API keys, you can authenticate your requests to reporting and notifications endpoints on the base URL https://api.vault.ledger.com using the following headers:
X-Ledger-Workspace
X-Ledger-Reporting-API-Key
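For example, here is a minimal sketch of an authenticated reporting call in Python with the requests library; the workspace name and API key values are placeholders for your own credentials:

import requests

BASE_URL = "https://api.vault.ledger.com"

# Placeholder credentials: substitute your workspace name and reporting API key.
headers = {
    "X-Ledger-Workspace": "my-workspace",
    "X-Ledger-Reporting-API-Key": "my-reporting-api-key",
}

response = requests.get(f"{BASE_URL}/transactions", headers=headers)
response.raise_for_status()
print(response.json())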
If you’ve registered API operators, you can authenticate requests to all endpoints exposed by the container running your LAM using the following headers:
X-Ledger-API-User
X-Ledger-API-Key
If your LAM uses HashiCorp Vault as its key store, you must also pass the X-Ledger-Store-Auth-Token header along with the API user header. For more details, see how to set up HashiCorp Vault with the LAM.
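As a sketch, assuming a LAM container reachable at a placeholder address, an authenticated call from Python with the requests library could look like this (the endpoint path is only illustrative; use the endpoints exposed by your LAM):

import requests

# Base URL of the container running your LAM; the host and port are
# placeholders for wherever you deployed it.
LAM_URL = "http://localhost:8080"

headers = {
    "X-Ledger-API-User": "my-api-operator",  # placeholder operator name
    "X-Ledger-API-Key": "my-api-key",        # placeholder key
    # Only add this header if your LAM uses HashiCorp Vault as its key store:
    # "X-Ledger-Store-Auth-Token": "my-vault-token",
}

# Illustrative path only: substitute the LAM endpoint you need.
response = requests.get(f"{LAM_URL}/accounts", headers=headers)
print(response.status_code, response.json())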
The Ledger Enterprise API employs a rate limiter to help maximize its stability. You can track your rate limit status via dedicated API response headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, and Retry-After. When the rate limit is exceeded, the API returns a 420 status code with rate limitation details.
Note that rate limits are specific to your API plan; please get in touch with your Account Manager for more details.
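As an illustration, here is a minimal Python sketch (requests library, placeholder credentials) that reads the rate-limit headers and backs off when a 420 is returned:

import time
import requests

BASE_URL = "https://api.vault.ledger.com"
HEADERS = {
    "X-Ledger-Workspace": "my-workspace",         # placeholder credentials
    "X-Ledger-Reporting-API-Key": "my-api-key",
}

def get_with_backoff(path, params=None, max_attempts=5):
    """GET a reporting endpoint, waiting out the rate limit on HTTP 420."""
    for _ in range(max_attempts):
        response = requests.get(BASE_URL + path, headers=HEADERS, params=params)
        # Every response carries the current rate-limit status.
        print("Remaining this window:", response.headers.get("X-RateLimit-Remaining"))
        if response.status_code != 420:
            return response
        # Rate limited: wait for the period advertised in Retry-After.
        time.sleep(int(response.headers.get("Retry-After", "1")))
    raise RuntimeError("rate limit still exceeded after retries")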
When a dataset is too large to return in a single response, we use a pagination mechanism.
For example, let's take a look at a call to the /transactions endpoint:
GET /transactions?page=2&page_size=5
Here, the API is queried to return the second page of our queryset, each page containing 5 elements. Here is the resulting JSON:
{
  "edges": [
    {
      "cursor": 0,
      "node": {...}
    },
    {
      "cursor": 1,
      "node": {...}
    },
    {
      "cursor": 2,
      "node": {...}
    },
    {
      "cursor": 3,
      "node": {...}
    },
    {
      "cursor": 4,
      "node": {...}
    }
  ],
  "page_info": {
    "count": 73,
    "has_next_page": true
  }
}
There are two properties at the root level, edges and page_info, explained in the following sections.

The edges Property

This property contains the relevant data as a JSON array. Each element of this array is an object with two properties:

a cursor property, an integer equal to the index of the element in the current view;
a node property, which represents the actual object being queried, in this case a Transaction type (whose schema is described in our OpenAPI specification).

The page_info Property

This property gives you the total number of objects contained in this particular queryset (the count property), and lets you know whether the page you have queried is the last of a given view (the has_next_page property).
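Putting this together, a paginated fetch can simply follow has_next_page. Here is a minimal Python sketch (requests library, with the same placeholder credentials as above):

import requests

BASE_URL = "https://api.vault.ledger.com"
HEADERS = {
    "X-Ledger-Workspace": "my-workspace",         # placeholder credentials
    "X-Ledger-Reporting-API-Key": "my-api-key",
}

def iter_transactions(page_size=20):
    """Yield every transaction node, advancing pages until has_next_page is false."""
    page = 1
    while True:
        response = requests.get(
            f"{BASE_URL}/transactions",
            headers=HEADERS,
            params={"page": page, "page_size": page_size},
        )
        response.raise_for_status()
        payload = response.json()
        for edge in payload["edges"]:
            yield edge["node"]
        if not payload["page_info"]["has_next_page"]:
            break
        page += 1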
By default, results are sorted by creation date in descending order, newest objects first.
We believe this makes search results more useful, but it can also be an issue when new objects are regularly added, as this could affect pagination, creating what looks like duplicates in two successive pages.
This is particularly relevant for transactions. If you were to successively fetch the first two pages of your transactions with a page size of 20, and 20 new transactions were created between the two GET calls, then both responses would contain the exact same results.
This issue can be avoided by making further use of filters (on accounts, creation date, etc.). We also recommend that you always de-duplicate using the object ids, as in the sketch below.
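For instance, building on the iter_transactions sketch above, de-duplication could be done with a set of ids; the id field name is an assumption here, so use whichever unique identifier the Transaction schema in the OpenAPI specification exposes:

# De-duplicate across pages in case new transactions shift page boundaries
# between calls.
seen_ids = set()
for node in iter_transactions():
    if node["id"] in seen_ids:   # "id" is an assumed field name
        continue
    seen_ids.add(node["id"])
    # handle the transaction node here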