This is where we are now: we know how to make designs testable, because effort has gone into that.
What we have lost is the ability to distinguish good from bad designs in other key respects. One indicator I use is the number of test cases needed for coverage. A poorly separated design forces you to test many combinations, because the factors haven't been isolated.
Another indicator of good design I've noticed is how easy it is to add a specific fix or feature without having to think hard about how many places you'll need to change. Interesting point about the number of test cases. I tend to end up with several combinations because I lean more on integration tests for CRUD-type applications (a test configuration type/class comes in handy there; see the sketch below).
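Roughly what I mean by a test configuration type, sketched in Python with hypothetical names (not tied to any particular framework): one small class bundles the knobs a scenario varies, so each integration test reads as a named configuration rather than a pile of positional arguments.

```python
from dataclasses import dataclass

# Hypothetical configuration type for integration tests of a 'send' feature.
@dataclass(frozen=True)
class SendTestConfig:
    media_type: str = "text/plain"
    group_recipient: bool = False
    with_attachment: bool = False

# Each scenario overrides only the dimension it cares about.
BASELINE = SendTestConfig()
HTML_TO_GROUP = SendTestConfig(media_type="text/html", group_recipient=True)
```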
As an example, take a 'send' function that accepts a sender, content/media type, and recipients, where the recipient can be a single contact or a group. One shouldn't have to check that every combination of content/media type works with both single and group recipients and every other option. If the design is decomposed along orthogonal lines, roughly K+L+N+M tests plus a handful of overall edge cases should suffice, rather than K×L×N×M, where the dimensions have K, L, N, and M possibilities respectively. If the implementation structure isn't cleanly separated, the full cross product of test cases is needed.
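To make that concrete, here's a rough sketch (all names hypothetical, assuming a simple messaging API) of splitting 'send' so that recipient handling and content encoding are separate dimensions, each testable on its own; the composition then only needs a few tests of its own.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Contact:
    address: str

@dataclass
class Group:
    members: list  # list of Contact

Recipient = Union[Contact, Group]

def resolve_recipients(recipient: Recipient) -> list:
    # One dimension: single contact vs. group. Tested on its own,
    # independent of content type or sender.
    if isinstance(recipient, Group):
        return recipient.members
    return [recipient]

def encode_content(content: str, media_type: str) -> bytes:
    # Another dimension: content/media type. Tested on its own.
    if media_type == "text/plain":
        return content.encode("utf-8")
    if media_type == "text/html":
        return f"<html><body>{content}</body></html>".encode("utf-8")
    raise ValueError(f"unsupported media type: {media_type}")

def deliver(sender: Contact, to: Contact, payload: bytes) -> None:
    ...  # transport layer; stubbed or faked in tests

def send(sender: Contact, content: str, media_type: str, recipient: Recipient) -> None:
    # The composition needs only a handful of tests, since each factor
    # is already covered in isolation.
    payload = encode_content(content, media_type)
    for contact in resolve_recipients(recipient):
        deliver(sender, contact, payload)
```

With this shape, the recipient tests and the media-type tests add up instead of multiplying, which is the K+L+... rather than K×L×... point above.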