An Alternative to Dependency Injection Frameworks

I have a confession to make. I hate Dependency Injection (DI) frameworks.

My very first job as a Software Engineer involved working with a very complex system that powered a ~100-person hedge fund. We made extensive use of Dependency Injection… but only via Constructor or Setter Injection. We did not use any DI frameworks at all. Little did I realize how lucky I was.

I have since worked with Java code bases, much less complex in scope, but absolutely littered with DI annotations everywhere. I’ve worked with frameworks that took DI to the next level – even method parameters were injected by other methods that dynamically produced them when needed. To my untrained eye, it seemed like a colossal mess. Tracing anything took forever. Everything was implicitly linked to everything else. Maintaining the configs for every app and every test was a chore. Things that could have been simple compile-time errors flagged by my IDE instead exposed themselves as run-time errors that were a pain to debug and fix.

Is all this really necessary? Why do we need all these annotation-driven magically-wired DI frameworks?

Dependency Graphs

I went searching for an explanation, and found one from the following blog post:

The main downside is that it’s a pain to have to manually create the Config before we can create the Server. We’ve created a dependency graph here – we must create our Config first because Server depends on it. In real applications these dependency graphs can become very large and this leads to complicated logic for building all of the components your application needs to do its job.

He then goes on to give an example of a Server, which has a chain of dependencies – all of which need to be constructed in sequence by a centralized main function:

func main() {
  config := NewConfig()
  db := ConnectDatabase(config)
  personRepository := NewPersonRepository(db)
  personService := NewPersonService(config, personRepository)
  server := NewServer(config, personService)
  server.Run()
}

His point, presumably, is that managing this dependency graph from a central location can be complex and burdensome. Hence, it’s better to use a DI framework where you can specify how each dependency should be constructed, and have them all transitively invoked and initialized when needed.
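
To make the contrast concrete, here is a rough sketch of what that might look like with a framework like Guice (illustrative only – the quoted post is written in Go, and the constructors and the run() method here are assumed from its example):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Provides;

// Each @Provides method declares how one dependency is built. The injector
// resolves the graph transitively when Server is requested, so main() no
// longer has to build it by hand.
class AppModule extends AbstractModule {
  @Provides Config config() { return new Config(); }
  @Provides Database database(Config config) { return new Database(config); }
  @Provides PersonRepository personRepository(Database db) { return new PersonRepository(db); }
  @Provides PersonService personService(Config config, PersonRepository repo) { return new PersonService(config, repo); }
  @Provides Server server(Config config, PersonService service) { return new Server(config, service); }
}

public class Main {
  public static void main(String[] args) {
    // No hand-written dependency graph; the injector walks it on demand
    Guice.createInjector(new AppModule()).getInstance(Server.class).run();
  }
}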

I think that he is somewhat overstating the problems of constructor injection, but let’s assume for now that he’s right. Is there a different way to accomplish the above goal, without having to use a DI framework, and annotation-driven auto-wiring?

An Alternative

Turns out that I had run into a similar issue myself while working on some side projects. And I had solved it in a way that “resembles” a DI framework, without actually using any DI framework or advanced language constructs. I’m probably biased, but this approach appears to be far simpler, while conferring similar benefits.

Context:

  1. We want to construct and run a Server instance
  2. Server has a dependency on PersonService
  3. PersonService has a dependency on PersonRepository and Config
  4. PersonRepository has a dependency on Database
  5. Database has a dependency on the same Config as above

Suppose, as the author mentions, we do not want to use constructor injection in order to inject Config -> Database + Config -> PersonRepo -> PersonService -> Server. Suppose we want all dependencies to be lazily and transitively constructed, only when needed.

Consider the following:

public class Toolbox {
  public static Config getConfig() {...}
  public static Database getDatabase() {...}
  public static PersonRepo getPersonRepo() {...}
  public static PersonService getPersonService() {...}
}

If you have the above fully implemented, it can be trivially used to replace framework-based dependency injection. For instance, suppose you have a class that has a dependency on Database. Instead of relying on the DI framework to inject Database, you can just fetch it from the Toolbox instead.

@Inject
public PersonRepo(@Database Database db) {...}

Becomes:

public PersonRepo() { this(Toolbox.getDatabase()); }
public PersonRepo(Database db) {...}

Configuring the Toolbox

That all sounds great, but where does Toolbox.getDatabase() get its return value from? There are many possible ways to implement this, depending on your specific application and testing needs. Let’s look at a few of them.

Simplest possible option: construct a new instance every time:

public class Toolbox {
  public static Database getDatabase() {
    return DatabaseProvider.get();
  }

  private static class DatabaseProvider {
    static Database get() {
      return buildDatabase(Toolbox.getConfig());
    }
  }
}

Or if you want to reuse the same Database instance every time, you can use a singleton holder with lazy initialization:

class DatabaseProvider {
  static Database get() { return DefaultHolder.DEFAULT; }

  private static class DefaultHolder {
    private static final Database DEFAULT = buildDatabase(Toolbox.getConfig());
  }
}

And suppose you want the ability to inject custom instances, for testing purposes:

// Restrict visibility to prevent access from unexpected sources
class DatabaseProvider {
  // throws exception if already set to a different value
  // Prevents any mutations from happening after the first value is set
  static void set(Database db) {...}

  // Returns a default if not set
  static Database get() {...}
}

And if you want all this to be thread-safe, you can use AtomicReference. Or you could use a simple utility class that manages thread safety, lazy init, defaults, and immutability, in order to implement all this in just 5 lines of code.

class DatabaseProvider {
  private static final DynamicConstant<Database> INSTANCE =
    DynamicConstant.withDefault(() -> buildDatabase(Toolbox.getConfig()));

  // throws exception if instance is already set to a different value
  // Prevents any mutations from happening after the first value is set
  static void set(Database db) { INSTANCE.set(db); }

  static Database get() { return INSTANCE.get(); }
}
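
For completeness, here is one rough sketch of the AtomicReference route mentioned above, with the same write-once semantics – assuming the same buildDatabase helper as before:

import java.util.concurrent.atomic.AtomicReference;

class DatabaseProvider {
  private static final AtomicReference<Database> INSTANCE = new AtomicReference<>();

  // Throws if a different instance was already set (or the default was already built)
  static void set(Database db) {
    if (!INSTANCE.compareAndSet(null, db) && INSTANCE.get() != db) {
      throw new IllegalStateException("Database has already been set");
    }
  }

  // Returns whatever was set, or lazily builds and freezes the production default
  static Database get() {
    Database db = INSTANCE.get();
    if (db == null) {
      INSTANCE.compareAndSet(null, buildDatabase(Toolbox.getConfig()));
      db = INSTANCE.get();
    }
    return db;
  }
}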

You can customize this to fit any particular requirements you have. Thread-safety, immutability, defaults, singletons vs suppliers, injecting fakes for tests – you can implement any of these simply by customizing the DatabaseProvider implementation.

Notice that this automatically manages your dependency graph as well. When Toolbox.getDatabase() is invoked, that invokes DatabaseProvider.get(), which will then invoke Toolbox.getConfig() if needed, which might in turn transitively invoke its own dependencies via the Toolbox as well.

In this way, PersonRepo only needs to call Toolbox.getDatabase(), and all transitive dependencies are lazily initialized or constructed (if needed), in order to generate the Database instance.
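
To make that concrete, the remaining providers might look something like this (a sketch only – loadConfig is a hypothetical factory, and the constructors mirror the dependency list above):

class ConfigProvider {
  // Config is the root of the graph; it has no Toolbox dependencies of its own
  static Config get() { return DefaultHolder.DEFAULT; }

  private static class DefaultHolder {
    private static final Config DEFAULT = loadConfig();
  }

  // hypothetical factory: read from a file, environment variables, etc.
  private static Config loadConfig() {...}
}

class PersonRepoProvider {
  // Fetching Database from the Toolbox transitively builds Database and Config if needed
  static PersonRepo get() { return new PersonRepo(Toolbox.getDatabase()); }
}

class PersonServiceProvider {
  static PersonService get() {
    return new PersonService(Toolbox.getConfig(), Toolbox.getPersonRepo());
  }
}

With these in place, a single call to Toolbox.getPersonService() resolves the entire chain on first use.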

So… Service Locators?

Given the superficial similarity to Service Locators (SL), it’s easy to see why this might seem like a reincarnation of an old idea. However, there are some major differences between the approach described above and a traditional SL pattern. Differences that completely change the way the system feels and operates.

First, unlike a SL, the above approach cannot be used to request any arbitrary object. The Toolbox only has specific methods defined, such as getDatabase(), which return specific objects. You cannot simply invoke Toolbox.get(MyCustomObject.class), like you can with a SL.

This restriction might seem like a limitation. But it actually makes your code much safer. It guarantees that all Toolbox users are only using it to request objects that have been explicitly planned for and added to the Toolbox interface. It also allows programmers to easily figure out which dependencies they can safely get from the Toolbox, and which ones they have to get elsewhere.

The above also provides an additional level of safety: you can ensure that every method exposed by the Toolbox comes with a default supplier. A default supplier that eliminates any worries that the Toolbox wasn’t properly initialized prior to use. A default supplier that transitively constructs its own dependencies using the Toolbox recursively.

In fact, the right way to do it would be to define default suppliers that always return something that works and is intended for production use. This way, when running in prod, your code should never have to set any values in the Toolbox. It can simply get the lazily-constructed defaults whenever needed. The only use case for setting something in the Toolbox would be for testing purposes, when you want to inject a fake.

Lastly, a SL is designed and intended to be extremely flexible, by allowing for instance injection at any time. This can be a powerful tool, if your application needs such dynamic abilities. However, it can also lead to complex interactions and side-effects as different parts of the application interfere with each other in unintentional or non-intuitive ways.

The Toolbox approach described above isn’t expressly designed to have such capabilities. If you look at the various set methods, you can see that they are programmed to throw exceptions if they conflict with a previously set value. This means that as soon as a value is set, it is then frozen for the rest of the application’s lifespan. You can always customize this in any way you want, by changing the Provider implementation – but I would recommend enforcing some form of consistency.

Combine all of these differences, and you get something that’s completely different from a Service Locator in terms of its uses and drawbacks.

But Singletons are Bad?

With respect to Singletons, there’s little difference between the Toolbox approach above and what you would do with DI frameworks. If you want a new instance every time, you can configure the DatabaseProvider to construct a new instance every time. Alternatively, if you prefer to reuse the same instance every time because it’s designed to be reused, then you can use Singletons as shown above – something that DI frameworks like Guice explicitly support as well.
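
As a rough illustration (assuming the same buildDatabase helper is accessible in the module), the equivalent choice in Guice is a one-line scope annotation:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;

class DatabaseModule extends AbstractModule {
  // With @Singleton, Guice reuses one Database; remove the annotation and a
  // fresh instance is built for every injection point – the same trade-off the
  // DatabaseProvider variants above make explicit
  @Provides @Singleton
  Database database(Config config) {
    return buildDatabase(config);
  }
}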

How would you do Testing?

There are many ways to inject fakes, using the set methods shown above. Here’s one simple example, and by no means is it meant to be definitive.

public class TestBase {
  protected final FakeDatabase database = new FakeDatabase();

  @BeforeTest
  public void setup() {
    DatabaseProvider.set(database);
    database.reset();
  }
}

Final Thoughts

The approach described here is more complex than Constructor Injection for sure. Where possible, I would still recommend using constructor injection. But if you absolutely want something that manages the wider dependency graph for you, I personally find this approach a lot simpler, easier to follow, and more flexible, as compared to any DI framework I’ve worked with.

7 thoughts on “An Alternative to Dependency Injection Frameworks”

  1. Thank you! I’ve always felt like DI was a solution in search of a problem, and arguments for it dramatically overstate the issue of dependency graphs. DI is over-used, and makes software much more complex than it should be.


  2. DI is a pattern that is blindly being followed and replicated everywhere without paying a thought to whether it is appropriate or not. Lot of java stuff is like that. Enterprise thinking!


  3. I definitely agree that DI/IoC containers are often rather problematic and overly complicate most applications. I developed basically the same pattern described here and used it with a fairly good sized Vertx.io project. Works well, though obviously it may be a bit more work to add a new object to your ‘Toolbox’. I actually think that is a GOOD thing as it forces people to really think about dependencies and makes them much more explicit.


  4. How would this handle having to call set() with multiple values for separate test cases?
    Also, for concurrently run tests, wouldn’t they all have a dependency on the static methods of Toolbox?


    1. I’ve generally used vanilla dependency injection techniques like constructor injection for use-cases like different tests running in parallel with different injected values.

