What is the actual purpose of re-running unit tests throughout development?

Date: 2018-04-20 16:01:33

Tags: unit-testing tdd

I have read over and over that it is important to create unit tests for all (or at least most) of your methods and to run them repeatedly throughout development. At first this made perfect sense to me, but now that I am implementing these tests myself, I am less sure. From what I can see, once you get a test to pass, it will pass forever, because all the data it uses is mocked. I feel like there is something I am not getting.

Suppose you write a method like this:

/* Verifies an email address (just for illustration, not robust code) */
bool VerifyEmail(string email) {
    return Regex.IsMatch(email, @"^\w+@\w+\.com$");
}

Perhaps you would write a unit test like this:

/* Again, not robust, just for illustration */
void TestVerifyEmail() {
    var testCases = new Dictionary<string, bool> {
        { "fake@fake.com", true },
        { "fake@!!!.com", false },
        { "@fake.com", false },
        { "fake@fake.cme", false }
    };

    foreach (string email in testCases.Keys) {
        Test.Assert(VerifyEmail(email) == testCases[email]);
    }
}

Unless you go and change the test cases, the result of the test function will never change, no matter what happens to the rest of the code, because VerifyEmail() is isolated.

This is a deliberately simple case, but in most unit-testing examples I have seen, even the ones that do not run in a vacuum, the tests always use fully mocked data, so the result can never change unless the test itself changes.

Given that the result never changes, what is the point of running unit tests over and over? Since every test puts the code block it is testing into an isolated environment with mocked data, the unit test will pass every single time.

I can see writing unit tests when you first create the code, to make sure it works the way you intend, as in TDD. But once that is done, what is the point of ever running them again later?

3 Answers:

Answer 0 (score: 0)

Ideally, unit tests are not written to verify that the code you wrote works; they are written to verify that the requirements are met. If the unit test suite contains test coverage for every possible requirement (both positive and negative cases), then once enough code has been written to pass every test, your project is done. The benefit is that if additional requirements are added later, someone can refactor the project to add the extra code, and as long as every unit test still passes, the original requirements are still met.
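The "tests as executable requirements" idea can be sketched in Python, mirroring the question's email example (the function and test names here are illustrative, not taken from any answer):

```python
import re

def verify_email(email):
    # Requirement: word-character local part and domain, ".com" TLD.
    return re.match(r"^\w+@\w+\.com$", email) is not None

# Each case encodes one requirement, positive or negative.
REQUIREMENTS = {
    "fake@fake.com": True,   # well-formed address is accepted
    "fake@!!!.com": False,   # non-word domain is rejected
    "@fake.com": False,      # missing local part is rejected
    "fake@fake.cme": False,  # wrong TLD is rejected
}

def test_verify_email():
    for email, expected in REQUIREMENTS.items():
        assert verify_email(email) == expected, email
```

Re-running test_verify_email after any later change to verify_email immediately tells you whether one of the encoded requirements has regressed; the tests outlive the original implementation, not just the original development session.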

Answer 1 (score: 0)

You have a valid point. In theory, you could use dependency information to re-run only the tests that might break because of a new change. In practice, though, we don't. Why not? Mainly because we don't trust ourselves to write the dependencies down correctly. Remember, we are talking about functional dependencies, not just include-headers. I am not aware of a tool that generates these automatically.

Answer 2 (score: 0)

It may help to keep in mind that there are two common definitions of "unit test". Both share the constraints that tests should be fast, deterministic, and isolated from each other.

There is a school that adds an additional constraint: the system under test should be isolated from all other collaborators in the system. But that definition isn't universal -- in the "Chicago style" it's normal for the system under test to be a composite made from many different parts.

Martin Fowler:

As xunit testing became more popular in the 2000's the notion of solitary tests came back, at least for some people. We saw the rise of Mock Objects and frameworks to support mocking. Two schools of xunit testing developed, which I call the classic and mockist styles. One of the differences between the two styles is that mockists insist upon solitary unit tests, while classicists prefer sociable tests.

When private implementation details of the system under test can be refactored into separate parts, it becomes less cost effective to track which tests are dependent on which fragments of production code.

Furthermore, you are regularly going to be running some unit tests; at a minimum, after each refactoring you should be checking that you didn't introduce a regression, so you should be running all of the tests that depend on the code you just changed.

BUT, we're talking about tests that are fast and isolated from one another. Given that you are already taking a moment to run some tests, the marginal costs of running "more" tests are pretty small.

Of course, small isn't zero; I believe that in most cases developers use a weighting strategy to determine how often a test needs to be run: run the tests that are likely to detect a problem more often than those that aren't, conditioned on what part of the code base you are actively working on.
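One way to picture such a weighting strategy is as a scoring function over the suite. This is purely illustrative (the scoring rule and all names are made up; real tools typically combine coverage data with recent-failure history):

```python
def schedule(tests, changed_module):
    """Order tests so the likeliest detectors of a problem run first.

    `tests` is a list of (name, module, recent_failures) tuples.
    The scoring rule is a stand-in, not a real tool's heuristic.
    """
    def score(t):
        name, module, recent_failures = t
        relevance = 2 if module == changed_module else 0
        return relevance + recent_failures
    return sorted(tests, key=score, reverse=True)

suite = [
    ("test_verify_email", "auth", 0),
    ("test_signup_flow", "auth", 2),
    ("test_report_export", "reports", 0),
]

# Working on "auth": its tests, especially recently failing ones, come first.
print([name for name, _, _ in schedule(suite, "auth")])
# → ['test_signup_flow', 'test_verify_email', 'test_report_export']
```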