Specifically, it’s a TAX you pay every time you want to separate concerns and add another layer of indirection, such as when you apply some basic DDD layering: DAOs, Domain Services, Infrastructure Services, and so on.
You want to add a detail-hiding interface, and you end up adding steps to the arrow of callbacks.
Another bad side effect of the callback TAX is that you have to spread callbacks all over the application just to be prepared for the moment an implementation goes from sync to async. For instance, say you have a DAO that hides an in-memory, sync, object cache. If you don’t prepare it for an async flow of execution, a migration to on-disk persistence propagates through all the upper layers of indirection.
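A minimal sketch of that propagation (all names are hypothetical): once the DAO’s lookup becomes callback-based, every layer above it must change its contract too, even if its own logic is untouched.

```javascript
// Sync version: an in-memory cache behind a DAO.
function UserDaoSync() {
  this.cache = { 1: 'Alice' };
}
UserDaoSync.prototype.findName = function (id) {
  return this.cache[id];
};

// Async version: on-disk persistence forces a callback-based contract...
function UserDaoAsync() {
  this.cache = { 1: 'Alice' };
}
UserDaoAsync.prototype.findName = function (id, callback) {
  var self = this;
  process.nextTick(function () {        // simulate non-blocking I/O
    callback(null, self.cache[id]);
  });
};

// ...and the domain service sitting on top of the DAO must change its
// own contract as well, although its logic is exactly the same.
function greetingSync(dao, id) {
  return 'Hello ' + dao.findName(id);
}
function greetingAsync(dao, id, callback) {
  dao.findName(id, function (err, name) {
    if (err) return callback(err);
    callback(null, 'Hello ' + name);
  });
}
```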
Async is a library that provides a means to “verticalize” arrow code:
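For instance, something like this (illustrative names throughout; a tiny hand-rolled `waterfall` stands in for the library’s `async.waterfall` so the sketch stays self-contained):

```javascript
// Arrow code: every async step pushes the body one indent to the right.
function totalNested(userId, done) {
  getUser(userId, function (err, user) {
    if (err) return done(err);
    getOrders(user, function (err, orders) {
      if (err) return done(err);
      done(null, orders.length);
    });
  });
}

// "Verticalized" in the style of async.waterfall: each task passes its
// results to the next one, and errors short-circuit to the final callback.
function waterfall(tasks, done) {
  (function next(i, args) {
    if (i === tasks.length) return done.apply(null, [null].concat(args));
    tasks[i].apply(null, args.concat([function (err) {
      if (err) return done(err);
      next(i + 1, Array.prototype.slice.call(arguments, 1));
    }]));
  })(0, []);
}

function totalVertical(userId, done) {
  waterfall([
    function (cb) { getUser(userId, cb); },
    function (user, cb) { getOrders(user, cb); },
    function (orders, cb) { cb(null, orders.length); }
  ], done);
}

// Hypothetical collaborators, synchronous for the sake of the example.
function getUser(id, cb) { cb(null, { id: id }); }
function getOrders(user, cb) { cb(null, ['a', 'b']); }
```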
The other solution, common in many technologies, is promises: Futures, Promises/A+, and so on.
Let’s look at Q promises, a very good implementation that permits chaining, with something like this:
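Along these lines (illustrative names; written here with standard promises, whose `then`/`catch` chaining follows the same Promises/A+ shape as Q’s `then`/`fail`):

```javascript
// Each then() returns a new promise, so the steps line up vertically
// instead of nesting ever deeper.
function getUser(id) {
  return Promise.resolve({ id: id, name: 'Alice' });
}
function getOrders(user) {
  return Promise.resolve(['a', 'b']);
}

getUser(1)
  .then(function (user) { return getOrders(user); })
  .then(function (orders) { return orders.length; })
  .then(function (total) { console.log('total:', total); })
  .catch(function (err) { console.error(err); });   // Q spells this .fail()
```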
It appears to be a better solution than async: cleaner, simpler. But it is also tightly coupled. I have to change all the contracts of my classes, not only the implementations, to use that promise library. Try to imagine the cost of switching from one promise library to another. That’s why the Promises/A+ specification exists. From this point of view, promises are a worse solution than async.
But both libraries (async and promises) have an added, deeper, hidden cost that can be dramatic because of the strong coupling: testability.
There are some more options, like an extensive use of nested closures, or reactive programming extensions, but I think they are solutions to different problems; in fact, I think you should not be forced into them by the technology you are programming in. In addition, they can be impractical for everyday programming.
The same is true, IMHO, for a massive use of “Tell, Don’t Ask” as an architectural style, like this one; indeed, it looks almost utopian in the vast majority of the codebases I know, and it’s not so clear that it is always the best solution, even though it is in a world of pure theory. See, for example, Martin Fowler’s related article.
All these solutions make big efforts to make the asynchronicity of node.js code easier to understand, to “mitigate” the problem while remaining respectful of the inner nature of the node.js reactor.
Let’s build a “theoretical framework” to properly name the problem. It’s perfectly feasible to imagine a “compiler” performing the following transformation, from:
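A sketch of that transformation (names are illustrative): from plain sync code to its continuation-passing equivalent.

```javascript
// What the programmer would like to write (sync style):
function getGreetingSync(id) {
  var name = findNameSync(id);
  return 'Hello ' + name;
}

// What such a "compiler" would emit (async, continuation-passing style):
function getGreeting(id, callback) {
  findName(id, function (err, name) {
    if (err) return callback(err);
    callback(null, 'Hello ' + name);
  });
}

// Hypothetical collaborators backing the two styles.
function findNameSync(id) { return 'user-' + id; }
function findName(id, callback) { callback(null, 'user-' + id); }
```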
So we can say this is a mere problem of syntax, namely syntactic noise. If we could solve the syntactic problem, the code as a whole would be perfectly portable between sync and async technologies or implementations. In fact, in general we can state that every piece of sync code can be written in async style. The contrary is not true.
Syntax encapsulation would give me the chance to switch from a sync implementation to an async one, for instance inside a DAO class, without changing anything in the client code. Not even the syntax.
Besides, in the specific case of node.js, I could completely avoid a huge (syntactical) problem with testability, and the cost of a bigger codebase.
A logical framework for the solution
Now the bigger part of the work is done: identifying a logical framework in which to achieve an optimal solution. From here it’s only a matter of finding the technology/library/framework nearest to the solution I propose.
Many technologies have a standard solution that goes in this direction. Think, for example, of C#’s async/await, Java/Scala continuations, and some CoffeeScript solutions (IcedCoffeeScript).
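In node.js the same idea can be sketched with generators: a small runner resumes the generator whenever a yielded operation completes, so the body reads like sync code while the operations stay callback-based. This is a minimal, illustrative implementation of the pattern (the one libraries like co popularized); all names are hypothetical.

```javascript
// run() drives a generator that yields "thunks", i.e. functions of the
// shape function (cb) { ... } where cb is a node-style callback.
function run(gen) {
  var it = gen();
  (function step(err, value) {
    var res = err ? it.throw(err) : it.next(value);
    if (res.done) return;
    res.value(step); // resume the generator when the thunk completes
  })(null, undefined);
}

// A hypothetical async operation, wrapped as a thunk.
function getUser(id) {
  return function (cb) { cb(null, { id: id, name: 'u' + id }); };
}

run(function* () {
  var user = yield getUser(1); // reads like sync code, runs callback-based
  console.log(user.name);
});
```

The `yield` here plays the role of C#’s `await`: the syntactic noise of the callback is confined to the runner, not spread over the client code.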
Obviously, it is non-blocking.