By Daniel Twigg
API development can be a minefield, and deciding how to enhance your platform can be a challenge: there is a wide variety of protocols, structures and setup options to choose from. While there is no right or wrong way to develop integration-first APIs, there are definitely aspects that merit consideration when building them to facilitate SaaS integration.
A few simple additions can set up your API to improve performance and responsiveness, and to enable new functionality. In this article, we take a look at a few suggestions for you to consider:
Updated Date/Time Field
Having an updated date/time field in your API method responses is a great advantage, especially when building integrations that retrieve only the relevant records by returning data in date order.
- Adds the ability to GET new and updated records since the last time the integration ran
- Reduces task usage as you can receive the most relevant records in bulk as a polling event
- Return data in date order
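As a minimal sketch of the idea, the helper below filters records to those changed since a cut-off and returns them oldest first. The `updated_at` field name and ISO-8601 timestamps are assumptions for illustration, not a fixed standard.

```python
from datetime import datetime

def records_updated_since(records, updated_since):
    """Return records changed after `updated_since`, oldest first.

    Assumes each record carries an ISO-8601 `updated_at` field,
    as the hypothetical API responses described here would.
    """
    cutoff = datetime.fromisoformat(updated_since)
    changed = [
        r for r in records
        if datetime.fromisoformat(r["updated_at"]) > cutoff
    ]
    # Date order lets a polling integration resume from where it left off.
    return sorted(changed, key=lambda r: r["updated_at"])

records = [
    {"id": 1, "updated_at": "2024-01-03T09:00:00+00:00"},
    {"id": 2, "updated_at": "2024-01-01T09:00:00+00:00"},
    {"id": 3, "updated_at": "2024-01-02T09:00:00+00:00"},
]
# Only records 3 and 1 changed after midday on 1 Jan, returned in date order.
print(records_updated_since(records, "2024-01-01T12:00:00+00:00"))
```

An integration would store the timestamp of the newest record it received and pass it as the cut-off on its next polling run.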
Pagination
Breaking requested data into multiple API responses gives you a fail-safe way to keep your API under control.
- Paging breaks down the number of records into set chunks (say 200 at a time). This allows you to receive, and send, records in batches (inside arrays) rather than one by one.
- For example:
- Both the requesting and receiving systems are set up with a paging of 200 records.
- GET-ting 20,000 email records in pages of 200 would use 100 tasks, and POST-ing them into the other system in pages would use another 100 tasks.
- 200 tasks used per run
- Without paging, it would use 1 task to GET all records and 20,000 tasks to POST them one by one.
- 20,001 tasks used per run
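The arithmetic above can be checked with a few lines. This sketch simply chunks a record list into fixed-size pages, mirroring what a paged API response would carry:

```python
def paginate(records, page_size=200):
    """Yield records in fixed-size pages, as a paged API would return them."""
    for start in range(0, len(records), page_size):
        yield records[start:start + page_size]

# 20,000 records in pages of 200 -> 100 GET requests plus 100 POSTs,
# instead of 1 GET plus 20,000 individual POSTs.
emails = [{"id": i} for i in range(20_000)]
pages = list(paginate(emails))
print(len(pages))     # 100
print(len(pages[0]))  # 200
```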
Event Driven vs Polling
RESTful polling methods and event-driven notifications, such as Webhooks, can live in harmony. However, for certain methods, you may need to consider which type would be best for both your architecture and your users.
This decision will be driven by the required speed that an updated/new record should be sent to a subscribed service. As well as the size of the data packet (or the number of records) being sent.
- Developers should predict the volume of records an individual integration will send and the latency its users will tolerate, then choose accordingly:
- Low volume/low latency -> Webhook
- Low volume/high latency -> Either option
- High volume/high latency -> GET with long polling cycles
- High volume/Low latency -> GET with short polling cycles
- Event-driven integrations, using webhooks, allow for low latency. However, they essentially fire the integration one record at a time, so if you expect high volumes of these events to be triggered, you should expect higher task usage.
- For integrations that will process high volumes of records each time they run, polling allows you to GET batches of records in a single request.
- This reduces the number of tasks you will use as you will be able to process records in batches.
- The trade-off is the pre-set amount of time between polling.
- High volume/Low latency use cases would need to set their polling interval time to the shortest possible amount.
Webhook Auto Setup
When adding webhook functionality to your platform you can improve user experience with an addition to your API.
By creating POST and DELETE webhook methods to your API it will be possible to automatically set a webhook for your users when they install an integration. This substantially reduces set-up requirements.
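As a sketch of the shape such endpoints might take, the in-memory registry below models a hypothetical `POST /webhooks` and `DELETE /webhooks/{id}` pair. A real API would persist subscriptions and authenticate the caller; the endpoint names and payload fields are assumptions for illustration:

```python
import uuid

class WebhookRegistry:
    """In-memory sketch of webhook create/delete endpoints."""

    def __init__(self):
        self._subscriptions = {}

    def create(self, callback_url, event):
        # POST /webhooks -> returns an id the caller stores for later removal
        webhook_id = str(uuid.uuid4())
        self._subscriptions[webhook_id] = {"url": callback_url, "event": event}
        return webhook_id

    def delete(self, webhook_id):
        # DELETE /webhooks/{id} -> True if the subscription existed
        return self._subscriptions.pop(webhook_id, None) is not None

registry = WebhookRegistry()
hook_id = registry.create("https://example.com/hooks/contact", "contact.updated")
print(registry.delete(hook_id))  # True
print(registry.delete(hook_id))  # False
```

With endpoints like these, an integration platform can subscribe on install and clean up on uninstall, with no manual webhook setup by the user.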
Authentication
Authentication is a must for any API used to transfer private account data, and it is also necessary for any multi-tenant system. While there is a range of authentication types available, we would opt for an OAuth approach.
- While not the easiest method to set up, OAuth should be your first port of call for authentication: the long-term benefits outweigh the short-term hassle.
- Standardised methodology
- One of the best user experiences
- More secure than using basic or API key authentication
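For a flavour of what the OAuth flow involves, the helper below assembles the first step, the authorization-code request URL. Only the parameter names follow the OAuth 2.0 specification (RFC 6749); the endpoint and values are placeholders:

```python
from urllib.parse import urlencode

def build_authorize_url(base_url, client_id, redirect_uri, scope, state):
    """Assemble a standard OAuth 2.0 authorization-code request URL.

    The user is sent to this URL to grant access; the API then redirects
    back to `redirect_uri` with a code to exchange for an access token.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # echoed back to guard against CSRF
    }
    return f"{base_url}?{urlencode(params)}"

url = build_authorize_url(
    "https://example.com/oauth/authorize",
    client_id="my-app",
    redirect_uri="https://app.example.com/callback",
    scope="contacts.read",
    state="xyz123",
)
print(url)
```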
Rate Limiting
Another way to manage and protect backend access through your API is rate limiting. This lets you put a limit on the number of API calls a user can make within a given time period.
Make sure this is documented, as any connecting system will have to be configured to ensure it doesn’t make more API calls than your API is set up for.
- Try to identify any limitations of your API, whether imposed for “fairness” reasons or due to hardware constraints. This means they can be set within the Cyclr Connector. This will avoid unnecessary errors occurring when data is being processed.
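One minimal way to enforce such a limit is a fixed-window counter, sketched below. This is illustrative only; production APIs often prefer sliding windows or token buckets to avoid bursts at window boundaries:

```python
import time

class FixedWindowRateLimiter:
    """Allow at most `limit` calls per `window_seconds`-second window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # Window expired: start a fresh one.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # caller would receive HTTP 429 Too Many Requests

limiter = FixedWindowRateLimiter(limit=2, window_seconds=60)
print([limiter.allow() for _ in range(3)])  # [True, True, False]
```

Whatever mechanism you choose, publish the numbers: connecting systems (such as a Cyclr Connector) can only stay under a limit they know about.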
Data Retrieval Endpoints
When retrieving data, it can be useful to offer endpoints that let the API Request decide the “width” of the data returned.
This could be done by letting the Request state the Fields to be returned for each object. For example, only the email address for each contact, or perhaps a complete list of Fields.
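The trimming itself is simple, as the sketch below shows. A `fields` query parameter on a hypothetical `GET /contacts` endpoint is an assumption here; field-selection parameter names vary between APIs:

```python
def select_fields(record, fields=None):
    """Trim a record to the requested fields.

    With no `fields` given, the full record is returned.
    """
    if fields is None:
        return dict(record)
    return {k: v for k, v in record.items() if k in fields}

contact = {"id": 7, "name": "Ada", "email": "ada@example.com", "phone": "555-0100"}
print(select_fields(contact, fields=["email"]))  # {'email': 'ada@example.com'}
```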
Data Modification Endpoints
To avoid having to check whether an object already exists, providing an “Add or Update” endpoint – also known as an upsert – is helpful. This takes care of performing the appropriate action.
It may also include some indication of whether a new object was created or an existing one updated in its Response Body.
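A sketch of upsert logic, using an in-memory dict as the store and a contact's email as its key (both assumptions for illustration):

```python
def upsert_contact(store, contact):
    """Add-or-update ("upsert"): create the contact if its email is new,
    otherwise merge the changes into the existing record.

    Returns the action taken, which a real endpoint could surface in
    its response body, e.g. {"action": "created"}.
    """
    key = contact["email"]
    action = "updated" if key in store else "created"
    store[key] = {**store.get(key, {}), **contact}
    return action

store = {}
print(upsert_contact(store, {"email": "ada@example.com", "name": "Ada"}))       # created
print(upsert_contact(store, {"email": "ada@example.com", "phone": "555-0100"})) # updated
print(store["ada@example.com"]["name"])  # Ada (earlier fields survive the update)
```

The caller never has to check for existence first, which removes a whole round trip from every sync.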
JSON seems to be the most common way to structure both the Request and Response Body, as it is clear and legible.
Documentation
As there are no global standards for API structure, APIs can vary enormously from one to another, even ones using the same protocol. This is why documentation is key to every API development project.
It is as important for internal stakeholders making new versions/iterations of your API as it is for your API users who want to work with it.
- Documenting your API helps with:
- Making your API readable to humans
- Making amendments and additions to your API
- A self-generating system such as OpenAPI is ideal
- Provides API spec in YAML/JSON formats
- Can be added to Swagger tools to easily provide interactive API documentation
- Can be used with the Cyclr Connector creation tools
Cyclr Specific Fields
Depending on the presentation layer deployment model you choose, you will need to create and make available fields to:
- Store your user’s Cyclr username
- Store your user’s Cyclr password
- Store your user’s Cyclr account ID
If you want to create accounts in Cyclr on a 1-to-1 basis, you can add all of these new fields to your User table and endpoint.
If you want to create one account in Cyclr per company/organisation in your platform, so you can have multiple users in one account, then just add the username and password fields to your User table, and the AccountID to your account/company/organisation table and endpoint.