{"id":1664,"date":"2015-02-05T23:34:15","date_gmt":"2015-02-05T21:34:15","guid":{"rendered":"http:\/\/blog.ulrichard.ch\/?p=1664"},"modified":"2015-02-05T23:34:15","modified_gmt":"2015-02-05T21:34:15","slug":"code-coverage-for-c","status":"publish","type":"post","link":"https:\/\/ulrichard.ch\/blog\/?p=1664","title":{"rendered":"Code coverage for C++"},"content":{"rendered":"<p>Ever since I started writing automated tests, I have wondered how complete the coverage was. Of course you have a feeling which parts are better covered than others. For some legacy code you might prefer not to know at all. I used to think test coverage was easy to measure for a language running on a VM, such as Java, but hard for C++. Some things are not as hard as you think once you give them a try.<\/p>\n<p>The thing that triggered my interest was the <a href=\"https:\/\/coveralls.io\/r\/ddemidov\/vexcl\" target=\"_blank\" rel=\"noopener\">coveralls<\/a> badge on the <a href=\"https:\/\/github.com\/ddemidov\/vexcl\" target=\"_blank\" rel=\"noopener\">readme page of vexcl<\/a>. By following it through, I learned that coveralls merely presents the results that gcov generates. Some more research showed which compiler and linker flags I needed to use. In addition, I found out that lcov&#8217;s genhtml can generate nice human-readable HTML reports, while gcovr writes machine-readable XML reports. So the following is really all that needs to be added to your CMakeLists.txt:<\/p>\n<pre class=\"brush: bash; gutter: false; first-line: 1\">OPTION(CODE_COVERAGE       \"Generate code coverage reports using gcov\" OFF)\n\nIF(CODE_COVERAGE)\n    SET(CMAKE_C_FLAGS          \"${CMAKE_C_FLAGS} -fprofile-arcs -ftest-coverage\")\n    SET(CMAKE_CXX_FLAGS        \"${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage\")\n    SET(CMAKE_EXE_LINKER_FLAGS \"${CMAKE_EXE_LINKER_FLAGS} -fprofile-arcs -ftest-coverage\")\n\n    FILE(WRITE ${PROJECT_BINARY_DIR}\/coverage.sh \"#! 
\/bin\/sh\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"lcov --zerocounters --directory . --base-directory ${MyApp_MAIN_DIR}\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"lcov --capture --initial --directory . --base-directory ${MyApp_MAIN_DIR} --no-external --output-file MyAppCoverage\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"make test\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"lcov --no-checksum --directory . --base-directory ${MyApp_MAIN_DIR} --no-external --capture --output-file MyAppCoverage.info\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"lcov --remove MyAppCoverage.info '*\/UnitTests\/*' '*\/modassert\/*' -o MyAppCoverage_filtered.info\\n\")\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"genhtml MyAppCoverage_filtered.info\\n\")\n\n    FILE(APPEND ${PROJECT_BINARY_DIR}\/coverage.sh \"gcovr -o coverage_summary.xml -r ${MyApp_MAIN_DIR} -e '\/usr.*' -e '.*\/UnitTests\/.*' -e '.*\/modassert\/.*' -x --xml-pretty\\n\")\n\n    ADD_CUSTOM_TARGET(CODE_COVERAGE bash ${PROJECT_BINARY_DIR}\/coverage.sh\n                        WORKING_DIRECTORY ${PROJECT_BINARY_DIR}\n                        COMMENT \"run the unit tests with code coverage and produce an index.html report\"\n                        SOURCES ${PROJECT_BINARY_DIR}\/coverage.sh)\n    SET_TARGET_PROPERTIES(CODE_COVERAGE PROPERTIES\n        FOLDER \"Testing\"\n    )\n\nENDIF(CODE_COVERAGE)<\/pre>\n<p>The resulting HTML page is very detailed and highlights the untested lines in your source files in red.<br \/>\nFrom the produced XML file it&#8217;s easy to extract the overall coverage percentage, for example. You could use this figure to fail your nightly builds when it decreases.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ever since I wrote automated tests, I wondered how complete the coverage was. 
Of course you have a feeling which parts are better covered than others. For some legacy code you might prefer not to know at all. But I thought test coverage was something easy to do with a language running on a VM [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7,1],"tags":[43,62,221],"class_list":["post-1664","post","type-post","status-publish","format-standard","hentry","category-software","category-uncategorized","tag-c","tag-coverage","tag-testing"],"_links":{"self":[{"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1664","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1664"}],"version-history":[{"count":0,"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1664\/revisions"}],"wp:attachment":[{"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1664"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1664"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ulrichard.ch\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1664"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}