In functional programming, the best way to deal with latency is to model it as an effect. Confusing? I would say yes.

We all know what latency means; in fact, background tasks are a core part of almost every iOS app. In Objective-C, in the early days we used to handle latency with threads and operations; then Grand Central Dispatch was released and life became easier. In Swift, we can turn this concept into a shining, safe and stable implementation.

Futures are here to help

There are a lot of libraries providing latency and background task support in Objective-C and Swift. While in the former writing a solid wrapper from scratch can be non-trivial, in the latter we can achieve the same effect with much less effort.

Futures

The first concept we will implement is Future. What exactly is a Future?

The Scala documentation might help:

A Future is an object holding a value which may become available at some point. This value is usually the result of some other computation:

  • If the computation has not yet completed, we say that the Future is not completed.
  • If the computation has completed with a value or with an exception, we say that the Future is completed.

Completion can take one of two forms:

  • When a Future is completed with a value, we say that the future was successfully completed with that value.
  • When a Future is completed with an exception thrown by the computation, we say that the Future was failed with that exception.

In a nutshell: a Future is nothing more than a placeholder. We declare a future by giving it a computation to perform (a closure in Swift), and the Future will perform the given task, returning the result or an error later, at some point.

We have some goals to achieve: our implementation must be functional, so I want to be able to chain futures in an elegant way, or to perform any other kind of combination, without turning my code into nested closure hell.

Basically, I don’t want this:

dispatch_async(queue) {
    // Perform long running task such as API call, file download or similar
    if error {

    } else {
        dispatch_async(queue) {
            // Process returned data
            if error {

            } else {
                dispatch_async(queue) {
                    // Process final data
                }
            }
        }
    }
}

The code shown above is an example of callback hell, and it is clear what can happen when we need to chain more operations: we can easily end up with a mess.

So let’s try to achieve the following goals:

  • No callback hell
  • Handling of values and errors
  • A completion callback
  • Control over the queue that performs the task

Execution: Where?

The first thing we have to consider is the last one in the list. We need to run our code asynchronously, taking care to pick the right queue. For example: an API call returning JSON must be handled in the background, as must the JSON parsing, but once all the magic has been performed, we need to switch to the main thread to update the UI.

So, we need a context to perform every task:

public struct ExecutionContext {
    public let queue: dispatch_queue_t

    init(queue: dispatch_queue_t) {
        self.queue = queue
    }

    /// Executes the given block on the queue asynchronously.
    public func executeAsync(block: () -> ()) {
        dispatch_async(queue, block)
    }
}
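
Later snippets rely on ExecutionContext.defaultContext and on an executeAsyncWithBarrier method that the post never shows. A minimal sketch of what they could look like, assuming a global background queue as the default and a plain dispatch barrier for the second one:

extension ExecutionContext {
    /// Assumed default context: a global background queue.
    public static var defaultContext: ExecutionContext {
        return ExecutionContext(queue: dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0))
    }

    /// Assumed barrier variant, used later by onComplete.
    public func executeAsyncWithBarrier(block: () -> ()) {
        dispatch_barrier_async(queue, block)
    }
}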

Note: Wrapping dispatch_queue_t in a struct has the advantage that we can hide it and add logic around how the given task is performed. This will be useful for other features I am not going to cover in this post.

Value or Error?

The second thing to think about is how to manage failure. In Scala, Java and Objective-C we can use try/catch/finally statements, but in Swift we don’t have such a thing. The language’s creators have clearly stated that it will never be added, and I completely agree on this point. Swift is powerful enough to avoid it, so using NSError is a must.

For this, we can use a Result enum to store an Error or a Value:

public enum Result<V> {
    case Error(NSError)
    case Value(Box<V>)
    
    public init(_ e: NSError?, _ v: V) {
        if let ex = e {
            self = Result.Error(ex)
        } else {
            self = Result.Value(Box(v))
        }
    }
}

For more details about Box, you can check this answer on StackOverflow.
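
The later examples also call Result(data), Result(error!) and result.value, which the enum above does not provide yet. Here is a minimal Box plus the accessors I am assuming for the rest of the post (a sketch, not necessarily how they are defined in the real library):

/// A minimal generic box, working around the Swift limitation on
/// generic associated values in enums at the time of writing.
public final class Box<T> {
    public let value: T
    public init(_ value: T) {
        self.value = value
    }
}

extension Result {
    /// Assumed convenience initializer for the success case.
    public init(_ v: V) {
        self = Result.Value(Box(v))
    }

    /// Assumed convenience initializer for the failure case.
    public init(_ e: NSError) {
        self = Result.Error(e)
    }

    /// Assumed accessor, used later in then(): the wrapped value, if any.
    public var value: V? {
        switch self {
        case .Value(let box):
            return box.value
        case .Error:
            return nil
        }
    }
}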

The Real Thing

Now that we have a context to perform asynchronous tasks and a structure to handle values and errors, let’s code the magic part! The first thing to implement is the initializer, which of course takes the task to perform:

init(_ executor: ExecutionContext = ExecutionContext.defaultContext, _ task: Runnable) {
    self.execute(executor, task)
}
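
Runnable is not defined in the post; presumably it is just an alias for the () -> Result<T> closure type used everywhere else. Assuming that, and the defaultContext sketched earlier, constructing a future by hand looks like this:

// A trivial, hypothetical future that always succeeds with 42.
let answer = Future<Int>(ExecutionContext.defaultContext, { () -> Result<Int> in
    return Result(nil, 42)
})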

The init fires the task immediately: we want it to be performed right away, by design. Then we have to implement the execute function.

/// Execute the current task using the executor
private func execute(_ executor: ExecutionContext = ExecutionContext.defaultContext, _ task: Runnable) {
    // If the current future has a result, return
    if result != nil {
        return
    }
    
    // Execute the task, if provided
    executor.executeAsync {
        synchronized(self) {
            self.result = task()
        }
        return
    }
}

What happens in the previous code block is pretty clear: we check that the current future doesn’t already have a result, then we perform the block in the given ExecutionContext.

The result is then injected using a synchronized helper, a pattern clearly explained by Mike Ash. This is done to prevent any kind of race condition; it’s more of a precaution than a strictly necessary feature.
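
The synchronized helper itself is not included in the post; one common implementation, along the lines of Mike Ash’s write-up, simply wraps the Objective-C runtime locks (objc_sync_enter/objc_sync_exit, available through Foundation):

/// Runs the closure while holding the runtime lock associated with the
/// given object (assumed implementation).
func synchronized(lock: AnyObject, closure: () -> ()) {
    objc_sync_enter(lock)
    closure()
    objc_sync_exit(lock)
}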

Our placeholder is now filled, but we still need to react to that. To achieve this, we need a dedicated method and an array of callbacks to perform.

// We need to perform callbacks once a result is provided
internal var result: Result<T>? = nil {
    didSet {
        self.performCallbacks(result)
    }
}

// Callbacks to run
var callbacks: [SinkOf<Result<T>>] = Array<SinkOf<Result<T>>>()

internal func performCallbacks(result: Result<T>?) {
    if let res = result {
        for sink in self.callbacks {
            sink.put(res)
        }
        self.callbacks.removeAll()
    }
}

This looks pretty much crystal clear, except for that SinkOf object. What’s a sink? Googling the SinkOf type in Swift will give you back a lot of fancy and confusing answers, and I have yet to find a good one, but in a nutshell:

A Sink is the consumer side of the producer/consumer relationship.

Fancy? Let’s try to make this clearer: a sink is nothing more than a processor of values. It takes a closure during initialization and then, every time a value is pushed into it (using the put function), that closure is called with the given value.

init(_ putElement: (T) -> ())

You can see how this works in the line sink.put(res) in the previous block of code.
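
To make the idea concrete, here is a tiny standalone example using the SinkOf type from the Swift standard library:

// A sink that simply prints every value pushed into it.
var printer = SinkOf<Int>({ value in
    println("got \(value)")
})

printer.put(1)   // prints "got 1"
printer.put(2)   // prints "got 2"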

At this point the task is processed, but we still have no callbacks; it’s time to handle the result of our computation. The first callback to implement is onComplete. This callback is performed in any case, whether the block completed with a value or with an error. From this general callback we can then build the sub-cases onSuccess and onFailure.

public func onComplete(context executor: ExecutionContext = CurrentThreadExecutionContext(), callback: OnCompleCallback) -> Future<T> {
        
    let boxedCallback : Result<T> -> () = { res in
        executor.executeAsyncWithBarrier {
            callback(result: res)
            return
        }
    }
    
    if let res = self.result {
        executor.executeAsyncWithBarrier {
            callback(result: res)
        }
    } else {
        self.callbacks.append(SinkOf<Result<T>>(boxedCallback))
    }
    
    return self
}

As previously stated, the SinkOf object is initialized here with a closure. The method defined above adds the passed callback to the list of callbacks to perform once a Result is available. Using it, we can handle both a successful computation and a failure.

Of course, if the future has already been completed, we run the callback immediately.

public func onSuccess(context executor: ExecutionContext = CurrentThreadExecutionContext(), callback: OnSuccessCallback) -> Future<T> {
    self.onComplete(context: executor) { result in
        switch result {
        case .Value(let val):
            callback(val.value)
        default:
            break
        }
    }
    return self
}

public func onFailure(context executor: ExecutionContext = CurrentThreadExecutionContext(), callback: OnFailureCallback) -> Future<T> {
    self.onComplete(context: executor) { result in
        switch result {
        case .Error(let err):
            callback(err)
        default:
            break
        }
    }
    return self
}

YUUP! We are in a working state: our Future object is now able to perform a task asynchronously, handling either an error or a returned value.
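
For example, given a hypothetical someFuture of type Future<Int>, registering both callbacks now reads like this:

someFuture.onSuccess { value in
    println("completed with \(value)")
}.onFailure { error in
    println("failed with \(error.localizedDescription)")
}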

Hmm… here we have a strange function: CurrentThreadExecutionContext(). What’s this? This function returns an ExecutionContext based on the current thread, which is a smart way to have callbacks performed on the main thread to update the UI.
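
The function is not shown in the post; based on that description, a plausible sketch returns a main-queue context when called from the main thread and falls back to the default background context otherwise:

/// Assumed implementation: picks an ExecutionContext matching the thread
/// this function is called from, so UI callbacks can stay on the main queue.
public func CurrentThreadExecutionContext() -> ExecutionContext {
    if NSThread.isMainThread() {
        return ExecutionContext(queue: dispatch_get_main_queue())
    }
    return ExecutionContext.defaultContext
}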

Another nice trick: to avoid calling the Future<Int>(...) initializer (for example) directly every time, we can create a simple global function:

public func future<T>(ec: ExecutionContext = ExecutionContext.defaultContext, t: () -> Result<T>) -> Future<T> {
    return Future<T>(ec, t)
}

Now we can run code like this:

future { () -> Result<NSData> in
    var response: NSURLResponse?
    var error: NSError? = nil
    let contentRequest = NSURLRequest(URL: NSURL(string: stringURL)!)
    if let data = NSURLConnection.sendSynchronousRequest(contentRequest, returningResponse: &response, error: &error) {
        return Result(data)
    }
    return Result(error!)
}

AWESOME! But we can only run a single Future at a time… let’s chain them! We need a function that takes the current Future and runs another Future after the first one has completed. The returned Future will process the result if it is valid, or simply propagate the error of the previous one.

public func then<U>(_ executor: ExecutionContext = ExecutionContext.defaultContext, _ task: (value: T) -> Result<U>) -> Future<U> {
    let future = Future<U>()
    
    self.onComplete(){ result in
        switch result {
        case .Error(let e):
            synchronized(future) {
                future.result = Result<U>(e)
            }
        case .Value(let val):
            future.execute(executor){
                return task(value: result.value!)
            }
        }
    }
    
    return future
}

With then we can now write code like this:

future { () -> Result<NSData> in
    var response: NSURLResponse?
    var error: NSError? = nil
    let contentRequest = NSURLRequest(URL: NSURL(string: stringURL)!)
    if let data = NSURLConnection.sendSynchronousRequest(contentRequest, returningResponse: &response, error: &error) {
        return Result(data)
    }
    return Result(error!)
}.then() { (data) -> Result<Dictionary<String, AnyObject>> in
    var error: NSError? = nil
    if let dict = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: &error) as? Dictionary<String, AnyObject> {
        return Result(dict)
    }
    return Result(error!)
}.onSuccess() { processed in
    self.processAndDisplayData(processed)
}

Now we are done! Our goals are achieved!

Futures are an easy and convenient way to perform tasks asynchronously, keeping the code clean and readable.


If you are interested in the complete implementation, you can check out DeLorean, the functional reactive kit I am currently developing, along with the example iOS project.


In the next post, I will cover monadic combinators on futures (yes, a Future is a Monad) and a new type called Promise that is an extension of what I showed in this post.