I have the same experience. Integration tests are the best. They test only what really matters and allow you to keep flexibility over implementation details.
When your TDD approach revolves around integration tests, you have complete freedom to add, remove and shift around internal components. Having the flexibility to keep moving around the guts of a system to bring it closer to its intended behavior is what software engineering is all about.
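A minimal sketch of what this looks like in practice, using a hypothetical `Cart` class as the system under test (the class name and methods are illustrative, not from the discussion). The test asserts only externally observable behavior, so the internal storage could be swapped from a list to a dict or a database without touching the test:

```python
class Cart:
    """Hypothetical system under test; its internals are invisible to the test."""

    def __init__(self):
        # Implementation detail: could just as well be a dict, or a DB table.
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_cart_total_reflects_added_items():
    # Only the public contract is asserted; no peeking at self._items.
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.25)
    assert cart.total() == 13.75
```

Because the assertion never touches `_items`, any refactor that preserves the `add`/`total` contract leaves this test green.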
This is also how evolution works; the guts and organs of every living creature were never independently tested by nature.
Nature only cares about very high-level functionality; can this specimen survive long enough to reproduce? Yes. Ok, then it works! It doesn't matter that this creature has ended up with intestines which are 1.5 meters long; that's an implementation detail. The specimen is good because it works within all imposed external/environmental constraints.
That's why there are so many different species in the world; when a system has well defined external requirements, it's possible to find many solutions (of varying complexity) which can perfectly meet those requirements.
I'd like to watch you running around trying everything when an integration test fails and you don't know which part of the codebase caused the failure, while I just run all the specs and pinpoint the exact unit of code that is causing the problem.
That wouldn't happen, because each of my integration test cases is properly isolated from the others. If a specific test case starts failing, I just disable all the other test cases and start debugging the affected code path; it usually takes me only a few minutes to identify even the most complex problems. Also, because I use a TDD approach, I usually have a pretty good idea which of my recent changes could have introduced the issue.
Unit tests, on the other hand, are useless at identifying complex issues like race conditions; they're only good at detecting issues that are already obvious to a skilled developer.
* I restart the system from scratch between test cases, but if that's too time-consuming (i.e. more than a few milliseconds), I just clear the entire system state between test cases instead. If the system is truly massive and clearing the mock state between test cases takes too long, I break it up into smaller microservices, each with its own integration tests.
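The reset-between-cases workflow above can be sketched with the standard-library `unittest` module; the `STATE` dict and `place_order` function are hypothetical stand-ins for whatever shared system state the real tests would touch:

```python
import unittest

# Hypothetical in-memory "system state" shared by the code under test.
STATE = {"orders": []}


def place_order(item):
    """Toy system-under-test function that mutates shared state."""
    STATE["orders"].append(item)
    return len(STATE["orders"])


class OrderIntegrationTests(unittest.TestCase):
    def setUp(self):
        # Clear the whole system state before every test case, so a failure
        # in one case can never bleed into another.
        STATE["orders"].clear()

    def test_first_order_gets_id_1(self):
        self.assertEqual(place_order("book"), 1)

    def test_each_case_starts_from_a_clean_state(self):
        # Passes only because setUp wiped the state other cases left behind.
        self.assertEqual(place_order("pen"), 1)
```

With this shape, disabling every case but the failing one (as described above) is just a matter of skipping or commenting out test methods; the clean-state guarantee holds either way.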
Apologies, sir! When I said unit tests, I meant a specific unit of code (in the TDD manner).
I misunderstood your integration tests. I understood them as something like checking against the end-user contract. If so, then without TDD it won't be possible to track down a bug as easily as with TDD.