Mocking timers when unit testing a communication library with timeouts

I have to maintain a rather large library implementing a communication protocol with several layers. Fortunately, there are thousands of unit tests covering many different situations.



I have problems with the unit tests that probe for timeouts within the library. The timeouts are in the range of a few seconds, which is unacceptably long for unit testing.



What we have already done is implement a single, central clock service that can be configured to run faster by a certain factor. This makes the tests complete much faster. But we cannot choose the factor too large, because then some tests start to fail randomly when the performance of the test machine varies. It is also a nightmare to debug.
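
For illustration, such a speed-factor clock might look like the sketch below. The names here (ITimeService, ScaledTimeService) are assumptions made up for the sketch, not the actual types from the library:

using System;

public interface ITimeService
{
    DateTime CurrentTime { get; }
}

public sealed class ScaledTimeService : ITimeService
{
    private readonly DateTime _origin = DateTime.UtcNow;
    private readonly double _factor;

    public ScaledTimeService(double factor) => _factor = factor;

    // Real elapsed time is stretched by the factor, so with factor == 10
    // a 5-second timeout expires after 0.5 s of wall-clock time; the tests
    // still race the real machine, which is where the flakiness comes from.
    public DateTime CurrentTime =>
        _origin + TimeSpan.FromTicks(
            (long)((DateTime.UtcNow - _origin).Ticks * _factor));
}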



I feel that it is wrong to use any kind of independently running clock for testing timeouts. It would be better if time were completely under the control of the unit test.



Does anyone have an idea how to implement this, ideally without having to change too much in the library code, and in a way that is easy to add to the unit tests?



The library code with the timeout is structured more or less like this:



var startTime = TimeService.CurrentTime;
while (TimeService.CurrentTime < startTime + timeout)
{
    // do something
    Thread.Sleep(50); // or so
}


and it would be difficult to change this structure.
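
One direction I am considering, sketched under the assumption that the Thread.Sleep calls can be redirected through the same TimeService (all type names besides TimeService are made up for the sketch): the production implementation delegates to the real clock and really blocks, while a test implementation advances its own virtual time instead.

// Extends the hypothetical ITimeService from the sketch above with a
// Sleep method, so the library's blocking call can be redirected.
public interface ITestableTimeService : ITimeService
{
    void Sleep(int milliseconds);
}

// Production implementation: real clock, real blocking.
public sealed class SystemTimeService : ITestableTimeService
{
    public DateTime CurrentTime => DateTime.UtcNow;
    public void Sleep(int milliseconds) => System.Threading.Thread.Sleep(milliseconds);
}

// Test implementation: time only moves when the test (or a Sleep call)
// advances it, so a test probing a 5-second timeout completes immediately
// and deterministically, independent of machine performance.
public sealed class VirtualTimeService : ITestableTimeService
{
    private readonly object _gate = new object();
    private DateTime _now = new DateTime(2018, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public DateTime CurrentTime
    {
        get { lock (_gate) return _now; }
    }

    // Instead of blocking, jump virtual time forward by the requested amount.
    public void Sleep(int milliseconds)
    {
        lock (_gate) _now = _now.AddMilliseconds(milliseconds);
    }

    // Lets a test step straight past a timeout boundary.
    public void Advance(TimeSpan delta)
    {
        lock (_gate) _now += delta;
    }
}

With something like this, the loop keeps its structure; the only library change would be replacing Thread.Sleep(50) with TimeService.Sleep(50), and each test then decides exactly when and by how much time moves.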










.net unit-testing time

edited Nov 11 at 9:25
asked Nov 11 at 7:31
Klaus Gütter

  • nodatime covers duration in some of its test cases, and it looks like some of them express the expected value as an int64 to avoid converting the datetime back. (See the FakeClock sketch after these comments.)
    – lloyd
    Nov 11 at 10:09

  • As for the random failure of certain test cases depending on the machine's performance, one way of covering this is to re-run the test case multiple times, or to somehow distinguish failing test cases from errors in a test case.
    – lloyd
    Nov 11 at 10:14

  • @lloyd Yes, re-running could help reduce false negatives, but I'm not a big fan of repeating tests until they finally succeed (it might also hide real problems). It also does not really allow me to reduce the run time of the tests substantially.
    – Klaus Gütter
    Nov 11 at 10:58

  • MS took a different approach and eliminated their flaky tests. Salesforce has the concept of resource pools. You could separate out these tests, execute them sequentially (assuming you are currently executing them in parallel), and ensure they release all resources at the end of execution.
    – lloyd
    Nov 11 at 13:58

  • @lloyd Thank you for the interesting links! But I still hope that the tests can be made more deterministic by replacing the free-running clock with something under the control of the test script.
    – Klaus Gütter
    Nov 11 at 14:18
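
Regarding the Noda Time pointer in the first comment: as far as I know, the NodaTime.Testing package ships a FakeClock along exactly these test-controlled lines. A minimal usage sketch, assuming NodaTime 2.x:

using NodaTime;
using NodaTime.Testing;

// FakeClock implements IClock but only moves when told to.
var clock = new FakeClock(Instant.FromUtc(2018, 11, 11, 0, 0));
Instant start = clock.GetCurrentInstant();

clock.AdvanceSeconds(5);  // the test controls the passage of time

Duration elapsed = clock.GetCurrentInstant() - start;
// elapsed is exactly Duration.FromSeconds(5), deterministically.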














