Let's talk about testing: about unit and integration tests, which I hope everyone here likes to write.
Despite their many benefits, tests have two fundamental problems: they have no architecture at all (quality), and nobody knows what the right quantity is.
Tests have a cost. Let's imagine a dialog between a manager and a developer:
— John, how many hours will it take to implement?
— Approximately one week: 1.5 days for the task and 3.5 days to write tests.
So why do people write tests? Let's try to build a mental model: we delivered a component, our beta testers found a bug, and we wrote a test scenario for it. The next time we deliver the same component, we already know about the bad scenario and have automation to check it. So far so good. Next, we try to predict future bugs by writing lots of tests to cover bad scenarios. We deliver better components, users are happy. Profit. (Despite all the pros, this approach does not scale: every new component brings us back to reality.)
So tests are useful: they are a sort of agreement between a flexible codebase and loyal users (one fellow called tests «double-entry bookkeeping», which, for anyone who knows accounting, can be very apt). Every programmer writes tests with good intentions: a black-box component on one side, tests with YES/NO expectations on the other. At the same time, every lazy programmer bravely keeps his or her codebase untouchable (the design, or even just a function signature), because otherwise all the tests would need to be rewritten. A good developer can start from the tests and then continue with the function implementation (hello, TDD). But sometimes we find interesting artifacts in our codebase: properties with names like this._threadsCount /* used in tests; do not delete */ or Table.testEnvironment /* set to true in the tests */. For now, it all works out: business brings money, developers write tests. Let's talk about quantity.
Let's write a one-line function square(x) { return x * x; } and test it to keep coverage at 100%: expect(square(1)).to.be.eq(1). A responsible programmer can go further and write more basic tests, simply to cover the common cases and feel safer making changes in the codebase. Makes sense. But let's think for a moment: how many tests would you want to write for the function groupBy(list, fn)? I hope at least 16, roughly 4x the length of the groupBy implementation itself.
OK, that's unit tests; what about integration tests? With integration tests we have at least two components, and where we have components, we have relations. Not to mention the problems with all the fakes in a multi-component system, relations exponentially increase the number of states that need to be tested. I hope you knew that already. Despite all these problems, integration tests are a very useful tool (but very expensive, flaky, and slow, and I hope you knew that already too). There are many more problems, dear readers, but you can easily discover them yourself just by thinking through different approaches to testing systems, so I leave them as exercises (really, just try it). That's all for now; see you in the next topic.