Strategies for working with spotty APIs


For the last while I have been integrating with an API from a provider, and it has been painful for a few reasons:

  1. Spotty Documentation: the documentation is unreliable where it exists, and some of the endpoints we need are not documented at all.
    1. For example, it may say a date format is standard ISO, but when you actually hit the endpoint, Jackson fails because the response uses some other undocumented format.
  2. Unreliable Non-Production Environment: the non-production environment either lacks a non-production version of something, or it behaves completely differently from production.
  3. Unreliable Support: support staff at the API provider take ages to respond, if they respond at all.
  4. Strange and Inconsistent Responses: endpoints respond in inconsistent ways; for example, some respond with null when something is not there, while others respond with an empty JSON object {}.

This naturally makes a task (integration) that is already complex significantly more complex and time consuming. In the course of working through this, a number of approaches ended up mostly getting us unblocked:
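To illustrate the first pain point, here is a minimal sketch of the kind of date mismatch we kept hitting; the raw value and the dd/MM/yyyy HH:mm:ss pattern here are assumptions for illustration, not the provider's actual format:

```kotlin
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import java.time.format.DateTimeParseException

// the documentation claims standard ISO, but the endpoint actually returns
// something like this (a hypothetical undocumented format)
val raw = "25/12/2023 13:45:00"

// strict ISO-8601 parsing fails, just like Jackson does against the real responses
val parsedAsIso: LocalDateTime? = try {
    LocalDateTime.parse(raw)
} catch (e: DateTimeParseException) {
    null
}

// parsing succeeds once you reverse engineer the actual format
val actualFormat = DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm:ss")
val parsedForReal: LocalDateTime = LocalDateTime.parse(raw, actualFormat)
```

The fix on our side was a custom deserializer configured with the reverse-engineered pattern rather than the documented one.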

Manual Tests

These are unit tests that you run manually to hit the integration. This can be done using tags in JUnit 4/5 to mark the test as manual, and then running it explicitly under the manual profile in IntelliJ or from the CLI.

An example of how to do this in JUnit 5 is as follows:

import org.junit.jupiter.api.Tag
// ..

@Tag("manual")
class SomeAPIProvidersManualTest {
    // ... your usual test code here

    // ... see below for the details on the PropertiesFileReader and the properties file it reads

    // initialize the client here so that you do not need a full blown
    // integration test; this makes running these tests super fast
    private val someAPIProviderClient: SomeAPIProviderClient = PropertiesFileReader("/").propertiesMap.let {

        val username = it["username"]
            ?: error("There is no username configured under / for key[username]")
        val password = it["password"]
            ?: error("There is no password configured under / for key[password]")

        val properties = SomeAPIProviderLibraryProperties(
            username = username,
            password = password
        )

        SomeAPIProviderClient(
            properties = properties
        )
    }
}


Your build.gradle needs this block to configure the manual tests (note how we exclude manual tests from normal tests and unit tests from manual tests):

test {
    useJUnitPlatform {
        excludeTags 'manual'
    }
}

task manualTest(type: Test) {
    useJUnitPlatform {
        includeTags 'manual'
        excludeTags 'test'
    }
}

I .gitignore the properties file which the manual test pulls in to set up the API client. I also keep an example file which new developers can copy, rename and fill in as needed on their side.
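As a sketch, the ignored file is a plain key=value file; the keys match what the manual test above looks up, and the comment syntax matches the helper below:

```properties
# fill these in with your own credentials; never commit this file
username=your-username-here
password=your-password-here
```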

The helper used to read this file in the test class looks as follows:

class PropertiesFileReader(propertiesFilePath: String) {

    private val propertiesRegex = "([a-zA-Z\\.]+)=(.*)".toRegex()

    // reading from the classpath here is an assumption; swap in whatever
    // suits the way you store the file
    val propertiesMap: Map<String, String> =
        javaClass.getResourceAsStream(propertiesFilePath)!!
            .bufferedReader()
            .readLines()
            .filter { !it.trim().startsWith("#") }
            .mapNotNull {
                propertiesRegex.matchEntire(it)?.let { result ->
                    val (key, value) = result.destructured
                    key to value
                }
            }
            .toMap()
}
These tests are pivotal as they allow hitting an endpoint using the integration you have set up. You can see very quickly if there are issues un/marshalling, fix them and try again until you get a positive response back.


Listen-Only Wrapper

Manual tests will only get you so far, as you will not know for sure if you have covered most permutations of a request/response object. Once the manual tests are through the gate, the next step is to integrate the client into the app that will use it. As we still do not know at this stage if the integration will work, we create a wrapper around the client that takes requests and simply saves them to the database without hitting the actual API endpoint.

Business can then review the mapping. If everything looks good requests can be re-built later from this table to actually hit the API.
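A minimal sketch of such a wrapper, with an in-memory stand-in for the database table; all of the names here are hypothetical:

```kotlin
// hypothetical request type for one of the provider's CUD endpoints
data class CreateThingRequest(val name: String)

// stand-in for a real DB-backed repository that persists outbound requests
class OutboundRequestRepository {
    val saved = mutableListOf<CreateThingRequest>()
    fun save(request: CreateThingRequest) { saved.add(request) }
}

// the wrapper records what would have been sent but never hits the real API
class ListenOnlyClient(private val repository: OutboundRequestRepository) {
    fun createThing(request: CreateThingRequest): String? {
        repository.save(request) // business can review this row later
        return null              // listen-only: re-build and send for real later
    }
}

val repository = OutboundRequestRepository()
val client = ListenOnlyClient(repository)
client.createThing(CreateThingRequest(name = "example"))
```

In the real version the saved rows are what business reviews, and what gets re-built into live requests once the mapping is signed off.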

I have found that setting each create (C), update (U) and delete (D) endpoint up behind a two-state toggle is super useful. The two toggle states are:

  1. On: this means we allow hitting the API and we save the request to the DB
  2. Listen-only: we only listen/save to the DB and do not allow hitting the API

Toggles are nothing fancy: normally they are an environment variable you can inject into a properties file and change with a new deploy, or some more advanced mechanism if you are using a full blown toggle framework.
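For example, with the Spring property used below (the some.api prefix and softSwitch field), Spring Boot's relaxed binding lets an environment variable override the toggle per environment:

```properties
# environment variable override picked up by Spring Boot's relaxed binding
SOME_API_SOFT_SWITCH=LISTEN_ONLY
```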

For example, in Spring I would have an enum to represent the possible states of the toggle, with a convenience property to see if I can hit the API. The enum would look as follows:

enum class SoftSwitch(val canHitAPI: Boolean) {
    ON(canHitAPI = true),
    LISTEN_ONLY(canHitAPI = false)
}

This would then be the type for a property:

import org.springframework.boot.context.properties.ConfigurationProperties

@ConfigurationProperties(prefix = "some.api")
class SomeApiProperties {
    lateinit var softSwitch: SoftSwitch
}

Your properties file would then have the value as either ON or LISTEN_ONLY:

some.api.soft-switch=ON
Finally, to use it in the client wrapper, simply inject it in and call the convenience property on the enum to see if you can hit the API:

// ...
if (someApiProperties.softSwitch.canHitAPI) {
    // hit the API here
} else {
    // return some sort of default value
}

Whitelist Toggle State

The final strategy builds on the toggle approach described above but allows a third state: keep a whitelist that we check for every CUD request, and if the request is in the whitelist then allow it to hit the API.

This allows business testing against more dangerous APIs using their own details, where they can easily clean up after themselves when done with testing. GET requests are allowed to go all the way through to the API (this depends on the API provider and assumes their GET operations do not mutate anything or incur a cost per call).
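A sketch of how the toggle extends to the third state; since whether we can hit the API now depends on the request, the convenience property becomes a function. The names here, and the idea of keying the whitelist on a customer identifier, are assumptions:

```kotlin
enum class SoftSwitch { ON, LISTEN_ONLY, WHITELIST_ONLY }

// whether a given CUD request may hit the real API under the current toggle
fun canHitAPI(switch: SoftSwitch, customerId: String, whitelist: Set<String>): Boolean =
    when (switch) {
        SoftSwitch.ON -> true                                // fully live
        SoftSwitch.LISTEN_ONLY -> false                      // save to DB only
        SoftSwitch.WHITELIST_ONLY -> customerId in whitelist // testers only
    }
```

The request is still saved to the database in every state, so the review trail from the listen-only phase carries straight through whitelist testing.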

Once the final whitelist testing has been signed off, the toggle can be switched to fully on. The approaches described above are not perfect by any means, but given the API issues described above, this phased approach allows progress where it would otherwise be blocked or require more dangerous testing.