Query-by-POST

Just a quick one today.

Implementing a REST endpoint that does some filtering is generally pretty easy and obvious: just add filters as query string parameters. For example:

GET /api/employees?lastName=Smith

… and the response should be an HTTP 200, with a collection of employees with the last name Smith. Standard fare. You can continue to add filtering parameters and it’s all really straightforward.
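If you're curious what that looks like server-side, here's a rough ASP.NET Core sketch (the in-memory employee list is just a stand-in for a real data store); the lastName parameter binds straight from the query string:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string HireDate { get; set; }
}

[ApiController]
[Route("api/employees")]
public class EmployeesController : ControllerBase
{
    // Stand-in for a real data store.
    private static readonly List<Employee> Employees = new List<Employee>();

    // GET /api/employees?lastName=Smith
    [HttpGet]
    public IActionResult Get([FromQuery] string lastName)
    {
        // No lastName parameter means no filter.
        var matches = lastName == null
            ? Employees
            : Employees.Where(e => e.LastName == lastName).ToList();

        return Ok(matches);
    }
}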

What if you wanted to query for all employees named Smith who started before 01/01/2019? That’s more like a search than a filter. For searching, there’s a common pattern that some of my peers and I have come to call “Query-by-POST”. I can’t seem to find decent documentation on it, so I’m writing some now. It looks something like the following:

POST /api/employees/searches
{
    "lastName" : "Smith",
    "hireDate" : {
        "lessThan" : "01/01/2019"
    }
}

… and the response is:

HTTP/1.1 303 See Other
Location: https://.../searches/results/<id>

That is, you’re POSTing a new search to the API, and the API is returning a redirect to the results it created.

The Id of the search results can be anything you want. Ideally, it should actually represent some kind of resource. I’ve used an encoded list of the ids from the search results, and it worked well. If the search is computationally expensive, you can persist the results like any other resource. For the endpoint to be RESTful, though, you should get back the same resource (results) each time you call the results endpoint.
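To make the shape of this concrete, here’s a rough ASP.NET Core sketch of the searches endpoint; the EmployeeSearch DTO and the ISearchStore that runs the search and hands back a results id are assumptions for the example, not part of any framework:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class EmployeeSearch
{
    public string LastName { get; set; }
    public DateFilter HireDate { get; set; }
}

public class DateFilter
{
    public string LessThan { get; set; }
}

// Hypothetical: runs/persists the search and returns an id for its results
// (an encoded list of the matching employee ids works fine).
public interface ISearchStore
{
    string CreateResults(EmployeeSearch search);
}

[ApiController]
[Route("api/employees/searches")]
public class EmployeeSearchesController : ControllerBase
{
    private readonly ISearchStore searchStore;

    public EmployeeSearchesController(ISearchStore searchStore)
    {
        this.searchStore = searchStore;
    }

    // POST /api/employees/searches
    [HttpPost]
    public IActionResult Post(EmployeeSearch search)
    {
        var resultsId = searchStore.CreateResults(search);

        // Redirect the caller to the results resource that was just created.
        Response.Headers["Location"] = $"/api/employees/searches/results/{resultsId}";
        return StatusCode(StatusCodes.Status303SeeOther);
    }
}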

On inspection, one thing probably looks odd: you’re POSTing to the searches collection and being redirected to the search results resource… instead of just getting the search results back in the response to the original request. That’s twice as many HTTP requests as when you POST to create any other resource. Here’s why…

Normally, when you POST a resource to an endpoint like this:

POST /api/employees
{
    "id" : "",
    "firstName" : "Jane",
    "lastName" : "Smith",
    "hireDate" : "12/30/2018"
}

You will often get back a 201 with the employee object, now containing the populated Id. You’re POSTing the object to the same collection where it will end up living.

With the search endpoint, you are actually creating a request for the system to create search results for you. If it were going to return anything in the body from that endpoint, it could reasonably return your search object (with the lastName and hireDate comparison) along with the Location header, and it would still be idiomatically RESTful. Because it’s logically creating a resource somewhere else, it redirects you to it.
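From the client’s side, here’s a rough HttpClient sketch of the two requests (auto-redirects are turned off just to make both hops visible; left on, HttpClient would follow the 303 and issue the GET for you; the base address is made up):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class QueryByPostExample
{
    public static async Task RunAsync()
    {
        var handler = new HttpClientHandler { AllowAutoRedirect = false };
        using (var client = new HttpClient(handler) { BaseAddress = new Uri("https://example.com") })
        {
            var search = new StringContent(
                @"{ ""lastName"": ""Smith"", ""hireDate"": { ""lessThan"": ""01/01/2019"" } }",
                Encoding.UTF8,
                "application/json");

            // Request 1: POST the search; the API answers 303 with a Location header.
            var created = await client.PostAsync("/api/employees/searches", search);

            // Request 2: GET the results resource the API pointed us at.
            var results = await client.GetAsync(created.Headers.Location);
            var json = await results.Content.ReadAsStringAsync();
            Console.WriteLine(json);
        }
    }
}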

Hope it helps!

Simple Communications ctd…

Or, “how to force others to assume failure.”

Frequently when writing code that calls out to other services, the calls fail. For all kinds of reasons:

  • timeouts
  • network outage
  • power outage at the host
  • bad parameters
  • bad credentials
  • no permissions

… and myriad other reasons. The point is, when you’re communicating across a network, lots can go wrong.

The unfortunate part is that it often goes just fine, and that’s the problem. You end up with code that usually works, certainly in a developer’s environment, and you don’t end up with code that is required to be resilient to failures.

This is why, when I write service objects, I construct the methods such that they take callbacks for failure modes. This is my little contribution to attempt to push people into a pit of success.

It looks like this:

public class MyService : IMyService
{
    private RestServiceClient client;

    public MyService()
    {
        client = new RestServiceClient();
    }

    // Every network-bound method takes both a success and a failure callback.
    public void GetData(
        Guid id,
        Action<Data> onSuccess,
        Action<NetworkError> onFailure)
    {
        var request = RequestBuilder.GET($"/v1/employee/{id}")
            .Build();

        client.Execute<DataResponse>(
            request,
            response => onSuccess(new Data(response)),
            onFailure);
    }
}

At first, this might seem weird, or superfluous. Certainly the RequestBuilder makes this function seem simple, and then we just call out to the RestServiceClient, which looks like this:

public class RestServiceClient
{
    protected RestClient Client { get; set; }

    public void Execute<T>(
        IRestRequest request,
        Action<T> onSuccess,
        Action<NetworkError> onFailure)
        where T : new()
    {
        // Fire and forget; the callbacks are invoked when the call completes.
        ExecuteAsync<T>(request, onSuccess, onFailure);
    }

    private async Task ExecuteAsync<T>(
        IRestRequest request,
        Action<T> onSuccess,
        Action<NetworkError> onFailure)
        where T : new()
    {
        if (!CheckConnectivity())
        {
            onFailure(NetworkError.NoConnectivity());
            return;
        }

        try
        {
            var response = await Client.ExecuteTaskAsync<T>(request);
            if (IsSuccessfulStatusCode(response.StatusCode))
            {
                onSuccess(response.Data);
            }
            else
            {
                onFailure(NetworkError.From(response.StatusCode));
            }
        }
        catch (Exception exception)
        {
            onFailure(NetworkError.From(exception));
        }
    }

    protected bool IsSuccessfulStatusCode(HttpStatusCode code)
    {
        return (int)code >= 200 && (int)code <= 299;
    }
}

Here we have a simple function that calls out to an async function immediately, so that it can execute independently and call back to the onSuccess or onFailure functions when the call succeeds or fails.
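The NetworkError type isn’t shown above; a minimal sketch of what it might carry looks like the following (the real one can be whatever your error handling needs):

using System;
using System.Net;

public class NetworkError
{
    public string Message { get; private set; }
    public HttpStatusCode? StatusCode { get; private set; }
    public Exception Exception { get; private set; }

    public static NetworkError NoConnectivity()
    {
        return new NetworkError { Message = "No network connectivity." };
    }

    public static NetworkError From(HttpStatusCode statusCode)
    {
        return new NetworkError
        {
            Message = $"Request failed with status code {(int)statusCode}.",
            StatusCode = statusCode
        };
    }

    public static NetworkError From(Exception exception)
    {
        return new NetworkError
        {
            Message = exception.Message,
            Exception = exception
        };
    }
}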

So that’s a bit of code, but it’s the requirement to pass an onFailure callback that nudges developers into a pit of success; in other words, it forces them to write code that is resilient to failures. Calling code now must look like this:

var service = new MyService();
service.GetData(guid,
    onSuccess: data =>
    {
        // do what you want with the data, update the UI, etc.
    },
    onFailure: networkError =>
    {
        // do what you want with the error, try again, update the UI, etc.
    });

So now, every call to a function on a service object that goes over the network is required to pass a function that handles the failure scenario.

Using this pattern, which turns out to be fairly simple to apply, I’ve had a number of mobile and native apps behave excellently and reliably while communicating over unreliable network connections.

Hope it helps!