Do tests need static analysis level max? – Matthias Noback

I recently heard this interesting question: if your project uses a static analysis tool like PHPStan or Psalm (as it should), should the tests be analysed too?

The first thing to consider: what are potential reasons for not analysing your test code?

Why not?

1. Tests are messy in terms of types

Tests may use mocks, which can be confusing for the static analyser:

$object = $this->createMock(MockedInterface::class);

The actual type of $object is an intersection type, i.e. $object is both a MockObject and a MockedInterface, but the analyser only recognizes MockObject. You may not like all those warnings about “unknown method” calls on MockObject $object, so you exclude test code from the analysis.

2. Static analysis is slow

Static analysis is slow, so you want to reduce the amount of code the analyser has to process on each run. One way to do this is to ignore all the test code.

3. Production code is more important because it gets deployed

Besides performance, another justification for excluding the test code may be that production code is more important. It has to be correct, because it will be deployed. A bug in production code is worse than a bug in test code.

Why yes?

I think we’ve covered the three major objections against analysing test code. If you have other suggestions, please let me know in the comments! Anyway, let’s tackle the objections now, because (as you may have guessed): I think we should have our test code analysed too.

1. Mock types can easily be improved

Static analysers support intersection types, so a mocked object can be annotated as MockObject & MockedInterface. They also have plugins or extensions that can derive the resulting type for you:

$object = $this->createMock(MockedInterface::class); // Derived type of $object: MockObject & MockedInterface
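
Without such an extension, you can often get the same effect with a plain docblock annotation. A minimal sketch (the interface method name is invented for illustration):

/** @var MockObject&MockedInterface $object */
$object = $this->createMock(MockedInterface::class);

$object->someInterfaceMethod(); // the analyser no longer reports an unknown method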

2. Static analysers have a cache

Both PHPStan and Psalm use a cache so they don’t have to analyse the entire code base over and over again. You’ll barely notice a difference between analysing all your code and analysing only production code (quick tip: if you run the analyser in a Docker container, make sure that the cache directory is not lost after each run; configure it to be within one of the bind-mounted volumes).
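
For example, PHPStan lets you pin the cache location in its configuration, so it can live inside a bind-mounted directory (var/ is an arbitrary choice here; Psalm has a similar cacheDirectory setting):

# phpstan.neon
parameters:
    tmpDir: var/phpstan-cache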

3. Test code is just as important as production code

Of course, it’s the production code that gets deployed, so its quality needs to be guarded. However, tests play another important role in quality assurance. If you care less about your tests, you’ll have trouble maintaining both production code and test code. Test code also deserves to be refactored, and its design needs to evolve over time. When doing so, it will be very important to get feedback from static analysers.

Additional benefits

Separately testing behavior

Statically analysing all the code before running the tests is a great way to ensure that the tests themselves don’t throw any basic errors, like a wrong number of method arguments, type mismatches, etc. This allows for a clearer definition of the role of tests versus static analysis: static analysis can tell you that your code will (most likely) run, and the tests can tell you whether the code actually implements the behavior you expect from it.

Probing

Running static analysis on your entire code base allows for different refactoring workflows too. Consider a common situation where a method needs an extra required argument. A traditional workflow is:

  • Add an optional argument to this method.
  • Find all usages of this method with your IDE.
  • On each call site, pass the new argument.
  • Finally make the argument required.

At every stage, the code and tests keep working and every step can be committed.
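
A sketch of the first and last steps of this workflow (the class, method, and argument names are invented for illustration):

// Step 1: the new argument is optional, so all existing call sites keep working
final class OrderService
{
    public function finalize(Order $order, ?Clock $clock = null): void
    {
        $clock = $clock ?? new SystemClock();

        $order->markFinalizedAt($clock->now());
    }
}

// Step 4, once every call site passes the new argument, drop the default:
//     public function finalize(Order $order, Clock $clock): void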

Another approach could be:

  • Add that extra required argument.
  • Run the static analyser and find out what no longer works.

Taking this approach, you won’t be able to stop at any time. However, it does follow the Mikado style, the first step of which is to Just Do It an

Truncated by Planet PHP, read more at the original (another 907 bytes)

PHP: Saving XHTML creates entity references – Christian Weiske

All my blog posts are XHTML, because I can load and manipulate them with an XML parser. I do that with scripts when adding IDs for better referencing, and when compiling the blog posts by adding navigation, header and footer.

The pages have no XML declaration because the W3C validator complains that

Saw <?. Probable cause: Attempt to use an XML processing instruction in html. (XML processing instructions are not supported in html.)

But when loading such an XHTML page with PHP’s SimpleXML library and generating the XML to save it, entities get encoded:

<?php
// '<p>' wrapper assumed for illustration; simplexml needs a root element
$xml = '<p>ÄÖÜ</p>';
$sx = simplexml_load_string($xml);
echo $sx->asXML() . "\n";
?>

This script generates encoded entities:

<?xml version="1.0"?>
<p>&#xC4;&#xD6;&#xDC;</p>

I found the solution for this problem in a Stack Overflow answer: you have to declare the encoding manually, despite the standard saying that UTF-8 is the default when no declaration is given.

dom_import_simplexml($sx)->ownerDocument->encoding = 'UTF-8';

Now the generated XML has proper un-encoded characters:

<?xml version="1.0" encoding="UTF-8"?>
<p>ÄÖÜ</p>
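
Putting both pieces together, a minimal end-to-end version of the fix could look like this (using the same assumed <p> wrapper as above):

<?php
$sx = simplexml_load_string('<p>ÄÖÜ</p>');
dom_import_simplexml($sx)->ownerDocument->encoding = 'UTF-8';
echo $sx->asXML() . "\n";
?>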

Book excerpt – Decoupling from infrastructure, Conclusion – Matthias Noback

This article is an excerpt from my book Advanced Web Application Architecture. It contains a couple of sections from the conclusion of Part I: Decoupling from infrastructure.

This chapter covers:

  • A deeper discussion on the distinction between core and
    infrastructure code
  • A summary of the strategy for pushing infrastructure to the sides
  • A recommendation for using a domain- and test-first approach to
    software development
  • A closer look at the concept of “pure” object-oriented programming

Core code and infrastructure code

In Chapter 1 we looked
at definitions for the terms core code and infrastructure code. What
I personally find useful about these definitions is that you can look at
a piece of code and find out if the definitions apply to it. You can
then decide if it’s either core or infrastructure code. But there are
other ways of applying these terms to software. One way is to consider
the bigger picture of the application and its interactions with
actors. You’ll find the term actor in books about user stories and use
cases by authors like Ivar Jacobson and Alistair Cockburn, who make a
distinction between:

  1. Primary actors, which act upon our system
  2. Secondary or supporting actors, upon which our system acts

As an example, a primary actor could be a person using their web
browser to send an HTTP POST request to our application. A
supporting actor could be the relational database that our application
sends an SQL INSERT query to. Communicating with both actors requires
many infrastructural elements to be in place. The web server should be
up and running, and it should be accessible from the internet. The server
needs to pass incoming requests to the application, which likely uses a
web framework to process the HTTP messages and dispatch them to the
right controllers. On the other end of the application some data may
have to be stored in the database. PHP needs to have a PDO driver
installed before it can connect to and communicate with the database.
Most likely you’ll need a lot of supporting code as well to do the
mapping from domain objects to database records. All of the code
involved in this process, including a lot of third-party libraries and
frameworks, as well as software that isn’t maintained by yourself (like
the web server), should be considered infrastructure code.

Most of the time between the primary actor sending an HTTP request to
your server, and the database storing the modified data, will be spent
by running infrastructure code and most of this code can be found in PHP
extensions, frameworks, and libraries. But somewhere between ingoing and
outgoing communication the server will call some of your own code, the
so-called user code.

User code is what makes your application special: what things can you
do with your application?
You can order an e-book. You can pay for it.
What kind of things can you learn from your application? You can see
what e-books are available. And once you’ve bought one, you can download
it. Frameworks, libraries, and PHP extensions could never help you with
this kind of code, because it’s domain-specific: it’s your business
logic.
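
To make the distinction concrete, here is a small sketch (all names are invented for illustration): the first class is core/user code, the second is infrastructure code.

// Core code: domain-specific logic, runs without any setup or IO
final class Ebook
{
    public function __construct(private int $priceInCents)
    {
    }

    public function priceForQuantity(int $quantity): int
    {
        return $this->priceInCents * $quantity;
    }
}

// Infrastructure code: communicates with a supporting actor (the database)
final class SqlEbookRepository
{
    public function __construct(private PDO $connection)
    {
    }

    public function getById(int $id): Ebook
    {
        $statement = $this->connection->prepare(
            'SELECT price_in_cents FROM ebooks WHERE id = ?'
        );
        $statement->execute([$id]);
        $row = $statement->fetch();

        return new Ebook((int) $row['price_in_cents']);
    }
}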

The following figure shows that user
code is in the middle of a lot of infrastructure code:

Even if we try to
ignore most of the surrounding infrastructure while working on and
testing user code, we’ll often find that this code is hard to work with.
That’s because the code still contains many infrastructural details. A
use case may be inseparable from the web controller that invokes it. The
use of service locators and the like prevents code from running in
isolation, or in a different context. Calls to external services require
the external service to be available when we want to locally test our
code. And so on…

If that’s the case, user code consists of a mix of infrastructure code
and core code.
The following figure shows what this looks like:

When I look at this diagram, I immediately feel
the urge to push the bits of infrastructure co

Truncated by Planet PHP, read more at the original (another 12319 bytes)

Testing your controllers when you have a decoupled core – Matthias Noback

A lot can happen in 9 years. Back then I was still advocating that you should unit-test your controllers and that setter injection is very helpful when replacing controller dependencies with test doubles. I’ve changed my mind: constructor injection is the right way for any service object, including controllers. And controllers shouldn’t be unit tested, because:

  • Those unit tests tend to be a one-to-one copy of the controller code itself. There is no healthy distance between the test and the implementation.
  • Controllers need some form of integrated testing, because by zooming in on the class-level, you don’t know if the controller will behave well when the application is actually used. Is the routing configuration correct? Can the framework resolve all of the controller’s arguments? Will dependencies be injected properly? And so on.

The alternative I mentioned in 2012 is to write functional tests for your controller. But this is not preferable in the end. These tests are slow and fragile, because you end up invoking much more code than just the domain logic.

Ports and adapters

If you’re using a decoupled approach, you can already test your domain logic using fast, stable, coarse-grained unit tests. So you definitely don’t want your controller tests to invoke that domain logic again. You only want to verify that the controller correctly invokes your domain logic. We’ve seen one approach in Talk review: Thomas Pierrain at DDD Africa, where Thomas explained how he includes controller logic in his coarse-grained unit tests. He also mentioned that it’s not “by the book”, so here I’d like to take the time to explain what by the book would look like.

Hexagonal architecture prescribes that all application ports should be interfaces. That’s because right-side ports should potentially have more than one adapter. The port, being an interface, allows you to define a contract for communicating with external services. On the left side, the ports should be interfaces too, because this allows you to replace the port with a mock when testing the left-side adapter.

Following the example from my previous post, this is a schema of the use case “purchasing an e-book”:

PurchaseEbookController and PurchaseRepositoryUsingSql are adapter classes (which need supporting code from frameworks, libraries, etc.). On the left side, the adapter takes the request data and uses it to determine how to call the PurchaseEbookService, which represents the port. On the right side there is another port: the PurchaseRepository. One of its adapters is PurchaseRepositoryUsingSql. The following diagram shows how this setup allows you to invoke the left-side port from a test runner, without any problem, while you can replace the right-side port adapter with a fake repository:
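
In code, the two ports and the SQL adapter could look roughly as follows (the method signatures are my assumption, not taken from the article):

// Left-side port: an interface, so the controller adapter can be tested against a mock
interface PurchaseEbookService
{
    public function purchaseEbook(PurchaseEbook $command): void;
}

// Right-side port
interface PurchaseRepository
{
    public function save(Purchase $purchase): void;
}

// Right-side adapter; a test can swap this for a fake repository
final class PurchaseRepositoryUsingSql implements PurchaseRepository
{
    public function save(Purchase $purchase): void
    {
        // map the entity to an SQL INSERT query...
    }
}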

Left-side adapter tests

Since the test case replaces the controller, the controller remains untested. Even though the controller should be only a few lines of code, there may be problems hiding there that will only be uncovered by exploratory (browser) testing or once the application has been deployed to production.

In this case it would help to create an adapter test for the controller. This is not a unit test, but an integrated test. We don’t invoke the controller object directly, but travel the normal route from browser request to response, through the web server, the framework, and back. This ensures that we got everything right, and leaves (almost) no room for mistakes in interpreting framework configuration (routing, security, dependency injection, etc.).

We have already established that you wouldn’t want to test domain logic again through a web request. Two things we need to solve for adapter tests then:

  1. The controller shouldn’t invoke the domain logic. Instead, we should only verify that it calls the port (which is an interface) in the right way. The pattern for this is called mocking: we need a test double that records the calls made to it and makes assertions about those calls (the number of times, and the arguments provided).
  2. We need a way to inject this mock into the controller as a constructor argument. Thi

Truncated by Planet PHP, read more at the original (another 7582 bytes)

PHP Internals News: Episode 77: fsync: Buffers All The Way Down – Derick Rethans

In this episode of “PHP Internals News” I chat with David Gebler (GitHub) about his suggestion to add the fsync() function to PHP, as well as file and output buffers.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode’s MP3 file, and it’s available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:13

Hi, I’m Derick. Welcome to PHP internals news, a podcast dedicated to explaining the latest developments in the PHP language. This is Episode 77. In this episode I’m talking with David Gebler about an RFC that he’s written to add a new function to PHP called fsync. David, would you please introduce yourself?

David Gebler 0:35

Hi, I’m David. I’ve worked with PHP professionally among other languages as a developer of websites and back end services. I’ve been doing that for about 15 years now. I’m a new contributor to PHP core, fsync is my first RFC.

Derick Rethans 0:48

What is the reason why you want to introduce fsync into the PHP language?

David Gebler 0:52

It’s an interesting question. I suppose in one sense, I’ve always felt that the absence of fsync (some interface to fsync is provided by most other high-level languages) has always been something of an oversight in PHP. But the other reason was that it was an exercise for me in familiarizing myself with PHP’s core, getting to learn the source code. It’s a very small contribution, but it’s one that I feel is potentially useful, and it was easy for me to do as a learning exercise.

Derick Rethans 1:16

How did you find learning about PHP’s internals?

David Gebler 1:19

Quite the roller coaster. The PHP internals are very arcane, I suppose I would say; it’s something that’s not particularly well documented. It’s quite an interesting challenge to get into it. I think a lot of it you have to pick up from digging through the source code, looking at what’s already been done, putting together the pieces. But there is a really great community on the internals list, and indeed elsewhere online, and I found a lot of people very helpful in answering questions and again giving feedback when I first opened my initial proof of concept PR.

Derick Rethans 1:48

Did you manage to find room 11 on Stack Overflow chat as well?

David Gebler 1:52

I did not, no.

Derick Rethans 1:53

I’ll make sure to add a link in the show notes; it’s where many of the PHP core contributors hang out quite a bit.

David Gebler 2:00

Sounds good to know for the future.

Derick Rethans 2:02

I read the RFC earlier today. And it talks about fsync, but it also talks about flush, or f-flush. What is the difference between them and what does fsync actually do?

David Gebler 2:14

That’s the question that will be on everyone’s lips when they hear about this feature being introduced into the language, hopefully. What does fsync do and what does fflush do? To understand that, we have to understand the different types of buffering an application uses when it runs on a system. So we have the application buffer, sometimes called the user space buffer, and we have the operating system kernel space buffer,

Derick Rethans 2:36

And we’re talking a

Truncated by Planet PHP, read more at the original (another 22440 bytes)
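
To illustrate the two buffer layers discussed in the episode, here is a minimal sketch (fsync() and fdatasync() eventually landed in PHP 8.1; the file name is arbitrary):

<?php

$stream = fopen('data.txt', 'w');

fwrite($stream, 'important data'); // sits in PHP's user space buffer
fflush($stream);                   // flushes PHP's buffer to the OS kernel
fsync($stream);                    // asks the kernel to flush its buffer to disk

fclose($stream);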

Does it belong in the application or domain layer? – Matthias Noback

Where should it go?

If you’re one of those people who make a separation between an application and a domain layer in their code base (like I do), then a question you’ll often have is: does this service go in the application or in the domain layer? It sometimes makes you wonder if the distinction between these layers is superficial after all. I’m not going to write again about what the layers mean, but here is how I decide if a service goes into Application or Domain:

Is it going to be used in the Infrastructure layer? Then it belongs in the Application layer.

Dependency rule V2

I like to follow the Dependency rule: layers can have only inward dependencies. This ensures that the layers are decoupled. The Infrastructure layer can use services from the Application layer. The Application layer can use Domain layer services. In theory, Infrastructure could use services from Domain, but I’d rather not allow that. I want the Application layer to define a programming interface/API that can be used by the Infrastructure layer. This makes the Domain layer, including the Domain model, an implementation detail of the Application layer. Which I think is rather cool. No need to do anything with aggregates, or domain events; as long as everything can be hidden behind the Application-as-an-interface.

Making this a bit more concrete, consider the use case “purchasing an e-book”. In code it will be represented as a PurchaseEbookController, living in Infrastructure, which creates a PurchaseEbook command object, which it passes to the PurchaseEbookService. This service is an application service, living in the Application layer. The service creates a Purchase entity and saves it using the PurchaseRepository, living in the Domain layer. At runtime, a PurchaseRepositoryUsingSql will be used, living in Infrastructure, which implements the PurchaseRepository interface.

Why is the PurchaseRepository interface in Domain? Because it won’t and shouldn’t be used directly from Infrastructure (e.g. the controller). The same goes for the Purchase entity. It should only be created or manipulated in controlled ways by application services. But from the standpoint of the Infrastructure layer, we don’t care if the application service uses an entity, a repository interface, or any other design pattern. As long as it does its job in a decoupled way. That is, it’s not coupled to specific infrastructure, neither by code nor by the need for it to be available at runtime.

Why is the application service in the Application layer? Because it’s called directly from the controller, which is Infrastructure. The application service itself is part of the API defined by the Application layer.
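
A compact sketch of this arrangement (the method and constructor details are assumptions):

// Application layer: the service called by the controller (Infrastructure)
final class PurchaseEbookService
{
    public function __construct(
        private PurchaseRepository $purchaseRepository // Domain layer interface
    ) {
    }

    public function handle(PurchaseEbook $command): void
    {
        // Domain layer: the entity is created and manipulated here only
        $purchase = Purchase::forEbook($command->ebookId(), $command->buyerId());

        $this->purchaseRepository->save($purchase);
    }
}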

Application-as-an-interface or ApplicationInterface?

This gets us to an interesting possibility, which is somewhat experimental: we can define the API that the Application layer offers to its surrounding infrastructure as an actual interface. E.g.

namespace Application;

interface ApplicationInterface
{
    public function purchaseEbook(PurchaseEbook $command): void;

    /**
     * @return EbookForList[]
     */
    public function listAvailableEbooks(): array;
}

That second method, listAvailableEbooks(), is an example of a view model that could be made accessible via the ApplicationInterface as well.

I think the-application-as-an-interface is a nice design trick to force Infrastructure to be decoupled from Application and Domain code. Infrastructure, like controllers, can only invoke Application behavior via this interface. Another advantage is that creating acceptance tests for the application becomes really easy. You only need the ApplicationInterface in your test and you can run commands and queries against it, to prove that it behaves correctly. You can also create better integration tests for your left-side, or input adapters, because you can replace the entire core of your application by mocking a single interface. I’ll leave a discussion of the options for another article.
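
For instance, an acceptance test might only ever see this interface (Application and InMemoryPurchaseRepository are hypothetical classes standing in for the real wiring):

final class ApplicationTest extends TestCase
{
    public function testItListsAvailableEbooks(): void
    {
        $application = $this->createApplication();

        self::assertIsArray($application->listAvailableEbooks());
    }

    private function createApplication(): ApplicationInterface
    {
        // The real application services, wired with in-memory fakes for
        // the right-side ports: no framework, no database involved
        return new Application(new InMemoryPurchaseRepository());
    }
}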

Excerpt from PHP for the Web: Error handling – Matthias Noback

This is an excerpt from my book PHP for the Web. It’s a book for people who want to learn to build web applications with PHP. It doesn’t focus on PHP programming, but shows how PHP can be used to serve dynamic web pages. HTTP requests and responses, forms, cookies, and sessions. Use it all to build a CRUD interface and an authentication system for your very first web application.

Chapter 11: Error handling

As soon as we started using a PHP server to serve .php scripts in Chapter 2 we had to worry about errors and showing them in the browser.
I mentioned back then that you need to make a distinction between the website as it is still running on your own computer and the website as it is running on a publicly accessible server.
You may find that people talk about this distinction in different ways.
When you’re working on your website on your own computer you’re running it “locally” or on your “development server”.
When it runs on a publicly accessible server it has been “deployed” to the “production server”.
We use different words here because these are different contexts or environments and there will be some differences in server configuration and behavior of the website depending on whether it runs locally or on the production server.
In this chapter we’ll improve the way our website handles errors and we’ll make this dependent on the environment in which the website runs.

Producing an error

Before we can improve error handling, let’s create a file that produces an error, so we can see how our website handles it.
Create a new script in pages/ called oops.php.
Also add it to the $urlMap in index.php so we can open the page in the browser:

$urlMap = [
    '/oops' => 'oops.php',
    // ...
];

This isn’t going to be a real page, and we should remove it later, but we just need a place where we can freely produce errors.
The first type of error we have to deal with is an exception.
You can use an exception to indicate that the script can’t do what it was asked to do.
We already saw one back in Chapter 9, in the function load_tour_data().
The function “throws” an exception when it is asked to load data for a tour that doesn’t exist:

function load_tour_data(int $id): array
{
    $toursData = load_all_tours_data();

    foreach ($toursData as $tourData) {
        if ($tourData['id'] === $id) {
            return $tourData;
        }
    }

    throw new RuntimeException('Could not find tour with ID ' . $id);
}

In oops.php we’ll also throw an exception to see what that looks like for a user:

<?php throw new RuntimeException('Something went wrong');

Start the PHP server if it isn’t already running:

php -S 0.0.0.0:8000 -t public/ -c php.ini

Then go to http://localhost:8000/oops.
You should see the following:

Fatal error: Uncaught RuntimeException

The reason the error shows up on the page is that we have set the PHP setting display_errors to On in our custom php.ini file.
We loaded this file using the -c command-line option.

Seeing error messages on the screen is very useful for a developer like yourself: it will help you fix issues quickly.
But it would be quite embarrassing if this showed up in the browser of an actual visitor of the website.

Using different configuration settings in production

Once the website has been deployed to a production environment, the display_errors setting should be Off.
While developing locally you can simulate this by using multiple .ini files.
The php.ini file we’ve been using so far will be the development version.
We’ll create another .ini file called php-prod.ini that will contain settings that mimic the production environment.

; Show no errors in the response body
display_errors = Off
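
One way to try out the production behavior locally (a sketch, reusing the -c option shown earlier) is to start the server with this file:

php -S 0.0.0.0:8000 -t public/ -c php-prod.ini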

When you supply multiple php.ini files

Truncated by Planet PHP, read more at the original (another 18479 bytes)

Talk review: Thomas Pierrain at DDD Africa – Matthias Noback

As a rather unusual pastime for the Saturday night I attended the third Domain-Driven Design Africa online meetup. Thomas Pierrain a.k.a. use case driven spoke about his adaptation of Hexagonal architecture. “It’s not by the book,” as he said, but it solves a lot of the issues he encountered over the years. I’ll try to summarize his approach here, but I recommend watching the full talk as well.

Hexagonal architecture

Hexagonal architecture makes a distinction between the use cases of an application, and how they are connected to their surrounding infrastructure. Domain logic is represented by pure code (in the FP sense of the word), surrounded by a set of adapters that expose the use cases of the application to actual users and connect the application to databases, message queues, and so on.

The strict separation guarantees that the domain logic can be tested with isolated tests (“unit tests”, or “acceptance tests”, which run without needing any IO). The adapter code will be tested separately from the domain logic, with adapter tests (“integration tests”). Finally, when connecting domain logic and adapters, the complete running application can be tested with end-to-end tests or exploratory tests.

Pragmatic hexagonal architecture

Thomas notes that in practice, developers like to write unit tests and acceptance tests, because they are fast, and domain-oriented. Adapter tests are boring (or hard) to write, and so they are often neglected. Thomas noticed that the adapters are where most of the bugs reside. So he stretches acceptance tests to also include part of the left-side adapters and part of the right-side adapters. Only the actual IO gets skipped, because the penalty is too high: it would make the test slow, and unpredictable.

I think it’s somewhat liberating to consider this an option. I’ve also experimented with tests that leave out the left-side adapter but include several right-side adapters. I always felt somewhat bad about it; it wasn’t ideal. Indeed, it may still not be ideal in some situations, but at least it gives you some options when you’re working on a feature.

I find I often don’t test left-side adapters on their own, so they are a nice place for mistakes to hide until deployment. Being able to make them part of an acceptance test is certainly a way to get rid of those mistakes. However, the standard arguments against doing this still hold up. Your acceptance tests become tied to the delivery mechanism. By invoking your web controller in an acceptance test, you’re coupling it to framework-specific and web-specific classes. This is going to be a long-term maintenance issue.

The same goes for the right-side adapters. If we are going to test part of those adapters in an acceptance test, the test will in fact end up being coupled to implementation logic, or a specific database technology. Thomas mentions that only the “last mile”, the actual IO, will be skipped. I took this to mean that your test may, for instance, use the real repository, but not provide the real database connection to it. This, again, seems like a very valuable technique. It saves some time creating a separate adapter test for the repository. However, this also comes at the price of increased coupling. The acceptance test verifies that a certain query will be sent to the database, but this will only be useful as long as we’re using this particular database.
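
As a sketch of that “last mile” idea (all names invented): the real repository runs, but the connection handed to it is a fake that merely records queries.

interface Connection
{
    public function execute(string $sql, array $parameters): void;
}

final class FakeConnection implements Connection
{
    /** @var array<int, string> */
    public array $statements = [];

    public function execute(string $sql, array $parameters): void
    {
        $this->statements[] = $sql; // record instead of talking to a database
    }
}

// In the acceptance test: the real repository, minus the real IO
$connection = new FakeConnection();
$repository = new PurchaseRepositoryUsingSql($connection);
// ...run the use case, then assert on $connection->statements...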

Thomas explains that we can reduce the coupling issue by making assertions at a higher abstraction level, but even then, the acceptance tests being tied to specific technologies like that greatly reduces the power that came with hexagonal architecture: the ability to swap adapters, or experiment with alternative adapters, while leaving the tests intact. On the other hand, it is cool to have the option to write fewer types of tests and cover more or less the same ground.

Concerns

Although the concept of an acceptance test gets stretched a bit, it still doesn’t invoke any IO, which means it still mostly follows the hexagonal architecture approach, where we should be able to replace left-side adapters with our test runner, and replace right-side adapters with some fake adapters. However, when an ac

Truncated by Planet PHP, read more at the original (another 1759 bytes)

Successful refactoring projects – The Mikado Method – Matthias Noback

You’ve picked a good refactoring goal. You are prepared to stop the project at anytime. Now how to determine the steps that lead to the goal?

Bottom-up development

There is an interesting similarity between refactoring projects, and regular projects, where the goal is to add some new feature to the application. When working on a feature, I’m always happy to jump right in and think about what value objects, entities, controllers, etc. I need to build. Once I’ve written all that code and I’m ready to connect the dots, I often realize that I have created building blocks that I don’t even need, or that don’t offer a convenient API. This is the downside of what’s commonly called “bottom-up development”. Starting to build the low-level stuff, you can’t be certain if you’re contributing to the higher-level goal you have in mind.

Refactoring projects often suffer from the same problem. When you start at the bottom, you’ll imagine some basic tasks you need to perform. I find this kind of work very rewarding. I feel I’m capable of creating a value object. I’m capable of cleaning up a bit of old code. But does it bring me any closer to the goal I set? I can’t say for sure.

Top-down development

A great way to improve feature development is to turn the process around: start with defining the feature at a higher level, e.g. as a scenario that describes the desired behavior of the application at large, and at the same time tests if the application exposes this behavior (see Behavior-Driven Development). When you take this top-down approach, you’ll have less rework, because you’ll be constantly working towards the higher-level goal, formulated as a scenario.

The Mikado Method

For refactoring projects the same approach should be taken. Formulate what you want to achieve, and start your project from there. This has been described in great detail in the book The Mikado Method, by Ola Ellnestam and Daniel Brolund. I read it a few years ago, so I might not be completely faithful to the actual method here. What I’ve taken from it is that you have to start at the end of the refactoring project. I’ll give an example from my current project, where the team has decided they want to get rid of Doctrine ORM as a dependency. This seems like a daunting task. But Mikado can certainly help here.

The first thing to do is to actually remove the Doctrine ORM dependency: composer remove doctrine/orm. Commit this change, run the tests and of course you’ll get an error: could not find class Doctrine\ORM\... in file xxx.php on line y. So now you know that before you can remove the doctrine/orm package you have to ensure that file xxx.php does not use that class from the Doctrine\ORM namespace anymore. This is called a prerequisite; something we need to do first, before we can even think about doing the big change. We now have to revert the commit, because it’s not a commit we should keep; we broke the application.
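
One iteration of this experiment could look like this on the command line (a sketch; the test runner invocation is an assumption):

composer remove doctrine/orm
git commit -am "Remove doctrine/orm"

vendor/bin/phpunit
# error: could not find class Doctrine\ORM\... in file xxx.php

# Note "stop using Doctrine\ORM in xxx.php" as a prerequisite, then revert:
git revert --no-edit HEAD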

This may seem like a stupid exercise, but it’s still very powerful because it’s an empirical method. Instead of guessing what needs to be done to achieve the end goal, you are now absolutely sure what needs to be done. In this example, it may be quite obvious that removing a package means you can no longer use classes from that package, but in other situations it’s less obvious.

The next step is to rewrite file xxx.php in such a way that it no longer uses those classes from Doctrine\ORM. This may require a bit of work, like rewriting the mapping logic. You can define all those tasks as prerequisites too.

When you’re done with any of the prerequisites, and the tests pass, you can commit and merge your work to the main branch. Everything is good. Of course, you still have all those other classes that use Doctrine\ORM classes, but at least there’s one less. You are closer to your end goal.

You can stop at any time

Being able to commit and merge smaller bits of work multiple times a day means that Mikado is compatible with the rule that you should be able to stop at any time. You can (and should) use short-lived branches with Mikado. Everything you do is going to get you closer to your goal, but also doesn’t break the project in any way.

With Mikado it actually feels like with every commit you’re gaining XP so you can unlock new levels for your application

Truncated by Planet PHP, read more at the original (another 1515 bytes)