Now you have tests in your project. But how can you be sure the tests cover all the code you need? It would be painful to find a bug because you didn't realize that part of the code wasn't tested yet.

Enter Coverage.py. Here's the description from the site: "Coverage.py is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not."

Basic usage

To use coverage, run it on top of your test command. For Django, it would be:

coverage run manage.py test

It pretty much replaces the python command when you're running your tests, so it's easy to figure out.

Once it's finished, you can see the report by running this command:

coverage report -m

It will print out the report in your terminal.

Or if you want to see it in HTML format, use:

coverage html

By default, the HTML report will be available in the htmlcov directory, and you can open it in your browser. In the HTML format, you can easily see which parts of the code aren't covered yet.

Integration with your CI

Surely one of the reasons you have tests in your project is Continuous Integration. With coverage, you can now make sure only builds above a certain coverage threshold will pass.

For example, if your current coverage is 90% and you add code without tests, the coverage will drop below 90%. You can set the CI to fail when that happens.
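To see why the number drops, here's the arithmetic sketched in Python (the 100-statement project and the 20 new statements are made-up numbers for illustration):

```python
# Hypothetical numbers: a project with 100 statements, 90 of them executed by tests.
covered = 90
total = 100
print(f"before: {100 * covered / total:.1f}%")  # 90.0% -- passes --fail-under=90

# Add 20 new statements without any tests: covered stays the same, total grows.
total += 20
print(f"after: {100 * covered / total:.1f}%")  # 75.0% -- CI fails
```

Note that coverage can only go back up by adding tests for the new code (or by excluding it from the analysis).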

To do that, use this command:

coverage report -m --fail-under=90

When the coverage drops below 90%, the command will exit with code 2 (which means there was an error).

In your CI settings, run the command after you run the tests. It would roughly look like this:

coverage run manage.py test && coverage report -m --fail-under=90

Configuration files

To make running the command simpler, you could use a config file. By default, it'll look for .coveragerc in the directory where you run the coverage command.

Here's a simple config file so you don't need to type --fail-under=90 every time:

[run]
branch = True
source =

[report]
fail_under = 90

Some notes

One of the more common questions when talking about coverage is what the ideal coverage number is. Personally, I suggest trying to go for 100%. Of course, there are some parts of your code that can't or don't need to be tested. You should instead ignore those parts when running coverage.
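Besides omitting whole files, coverage.py also lets you exclude individual lines by marking them with a `# pragma: no cover` comment, its default exclusion marker. A small made-up example:

```python
def add(a, b):
    # Normal code: this should be covered by tests.
    return a + b

def debug_dump(data):  # pragma: no cover
    # Hard-to-test debugging helper: excluded from the coverage report,
    # so it won't drag the percentage below the threshold.
    print(f"DEBUG: {data!r}")

print(add(2, 3))  # prints 5
```

Use this sparingly; it's easy to hide genuinely untested code behind the pragma.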

You could add the ignored files in your config file:

[run]
branch = True
source =
omit =
    */tests/*

[report]
fail_under = 90

You'll notice I have the tests as the omitted files, because we want to analyze the code, not the tests themselves.
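As an aside, the omit entries are file-path wildcard patterns. Here's a quick sketch using Python's fnmatch, whose wildcard rules are close to how coverage.py matches these patterns (the file paths and the `*/tests/*` pattern are just illustrative):

```python
from fnmatch import fnmatch

# Hypothetical file paths from a Django project.
paths = ["myapp/views.py", "myapp/tests/test_views.py"]

# A pattern like */tests/* keeps test files out of the analysis.
for path in paths:
    print(path, "->", "omitted" if fnmatch(path, "*/tests/*") else "analyzed")
```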