Push Your Tests Down: Part 1 – Testing Levels

In this series, I want to touch on one of the biggest traps people fall into with test automation: writing too many high-level tests. I made this painful mistake myself. I struggled with constant test failures and spent many hours troubleshooting things that weren’t even problems in production. They were just bad (flaky) tests. I finally found my way out of that mess, and hopefully I can help you do the same.

For the beginners here, I’ll start with the levels you can write tests at and why lower-level tests are more valuable. I’ll show you why testing through the UI layer is so painful, and how to push higher-level tests down.

I’ll try to keep each post short and to the point. Without further ado, here is part 1.

Levels of Testing

There are many different ways to test your code, but they can all be boiled down into three main categories or levels.

FYI: The names of the levels may differ depending on who you are talking to, but the underlying concepts are the same. It seems no one can agree on the best names for these things.

Unit Tests

Unit tests are the lowest-level tests. They are very focused and usually cover only a few lines of code. They run completely in-memory and never access external resources like the disk or network, so they are very fast. Because of this, you can run them often and get very fast feedback on errors. I run mine every few minutes. When they fail, I can usually just hit CTRL-Z to undo my last few changes and they are passing again. No need to debug!

Even when they fail later (when I can’t just hit CTRL-Z), the problem is usually obvious. A unit test covers only a few lines of code, so I don’t have to look far for the problem. For the same reason, there are only a few things that could actually cause the test to fail, so unit tests don’t fail that often.

Unit Tests are very low cost. Easy to write. Easy to maintain.
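To make this concrete, here is a minimal sketch of a unit test in C# using NUnit (the Calculator class and its Add method are made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // Runs entirely in memory: no disk, network, or database access.
        var calculator = new Calculator();

        var result = calculator.Add(2, 3);

        Assert.AreEqual(5, result);
    }
}
```

Note how little code the test touches: if it fails, the problem is either in Add or in the test itself.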

Integration Tests

Integration tests validate the interactions of multiple classes with each other and with the environment (the disk, network, databases, etc.). These tests are inherently slower than unit tests, so you can’t run them as often. This means it may take longer before you realize you have introduced an error.

Plus, since they cover much more code, there are more reasons these tests can fail, so they tend to fail more often than unit tests. And when they do, sifting through more code means it takes longer to figure out where the problem is.

Integration tests take more effort to build and maintain than unit tests. They are harder to debug, and take longer to identify issues.
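As a sketch of what this looks like (the SettingsStore and Settings classes are hypothetical), an integration test might exercise real file I/O:

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class SettingsStoreTests
{
    [Test]
    public void Save_ThenLoad_RoundTripsTheme()
    {
        // Touches the real file system, so this test is slower and has
        // more ways to fail (permissions, paths, encoding) than a
        // pure in-memory unit test.
        var path = Path.Combine(Path.GetTempPath(), "settings-test.json");
        var store = new SettingsStore(path);

        store.Save(new Settings { Theme = "dark" });
        var loaded = store.Load();

        Assert.AreEqual("dark", loaded.Theme);

        File.Delete(path);
    }
}
```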

UI or End-to-End Tests

These tests are sometimes called “functional” tests as well. They test the fully deployed system as a black box, just like a user would use it. These usually interact directly with the UI.

These tests exercise all the code in the system, from the UI down to the database. They also exercise any third-party resources or external systems. So if anything in that chain breaks, the test will fail. And because so many things can cause a failure, it’s often very hard to determine what actually caused it. I often find myself sifting through log files and logging in to remote servers to figure out what the heck broke. Not fun.

These tests are also the slowest to run, so they aren’t run very often at all. That means when you introduce an error, it may be a long time before you realize it. By then you have moved on to something else, so it takes additional effort to get your head back around the problem before you can debug it.

These tests are brittle, and very difficult to maintain. They have the highest cost of all the types of tests you can write. They do have value, but that value comes at a price.
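For comparison, here is a sketch of a UI test using Selenium WebDriver in C# (the URL and element IDs are hypothetical):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class LoginPageTests
{
    [Test]
    public void ValidCredentials_LandOnDashboard()
    {
        // Drives a real browser against a deployed system, so this test
        // depends on the browser, the network, the web server, the
        // application code, and the database all working at once.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("https://myapp.local/login");
            driver.FindElement(By.Id("username")).SendKeys("demo");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("login-button")).Click();

            Assert.IsTrue(driver.Title.Contains("Dashboard"));
        }
    }
}
```

Any one of those moving parts can make this test fail, which is exactly why these tests are so expensive to keep green.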

The Obvious Conclusion

So, given what you just read, where is the most valuable place to focus your testing efforts? Yeah, I don’t even need to give you the answer. It’s pretty self-evident:

Always test at the lowest level possible.

When you have tests at a high level, it’s best to “push them down” as far as you can. That’s what this series is all about.

So, take a look at your tests. Where have you focused all your efforts? Are you fighting to keep the tests running? Is there any correlation between the two? If so, stay tuned, we’ll look at ways to fix this.

Posted in Testing

Effective Test Automation Presentation – Keep Austin Agile 2014

This weekend I gave a presentation at the Keep Austin Agile conference called “Effective Test Automation”.

Effective Test Automation from Chris Edwards on Vimeo.

Keynote file

Slides as PDF


It’s easy to write tests, but it’s not so easy to maintain them over time. Tests should not be a drain on your productivity, they should enhance it. However, many teams struggle just to keep their tests running—sacrificing time they could be spending developing valuable new features. How can we avoid these pitfalls? What practices and principles are effective? Which ones lead to productivity drain? This talk seeks to answer those questions and more by separating the effective practices and principles from the ineffective ones.

Topics covered:

  • Where should we focus testing efforts? Unit Tests, Integration Tests or UI Tests?
  • How can we best use mocks and stubs without creating fragile tests?
  • How can we instill a culture of testing?
  • How should I handle test data in our tests?
  • How can I use Fluent Data Builders and Anonymous Data to simplify testing?
  • And much more.
Posted in Agile, Mocking, Presentations, Principles, Test-Driven Development, Testing

Automatically Ignoring Untrusted SSL Certificates in Firefox Using WebDriver and C#

TL;DR – You can tell WebDriver to automatically ignore untrusted SSL certificates on Firefox by setting the “webdriver_assume_untrusted_issuer” preference to false in the Firefox profile.

We recently ran into an issue where our tests were failing because Firefox was showing the “This Connection is Untrusted” window. Firefox was complaining that our SSL certificate was not from a trusted source (this happens when you use self-signed certs for development). Here is the screen we were seeing:

The "This Connection Is Untrusted" page Firefox was showing

Googling the issue brought up a lot of solutions for Java, but none that worked for C#. We called it a night, and the next morning, when I got in, my coworkers Jason Bilyeu and Carl Cornett had solved the issue. They found that if you set the “webdriver_assume_untrusted_issuer” preference to false in the Firefox profile, Firefox will ignore the cert.

Here is the code:

var profile = new FirefoxProfile();
profile.SetPreference("webdriver_assume_untrusted_issuer", false);
return new FirefoxDriver(profile);

I hope this helps anyone who has had issues with this like we did.

Posted in .NET, Acceptance Testing, C#, Testing, WebDriver

Acceptance Criteria vs. Acceptance Tests: Minimize your documentation

To know when a story is done, we create a set of acceptance tests (story tests). When they pass, we know the story is complete and functions as expected.

The process we follow for defining our acceptance tests consists of two steps.

  1. We first come up with high level acceptance criteria.
  2. Right before, or during development, we use the criteria to define the actual acceptance tests.

You’ll notice that we make a distinction between Acceptance Criteria and the Acceptance Tests. Here are the definitions as I see them:

  • Acceptance Criteria is the minimal documentation that ensures a sufficient implementation of acceptance tests.
  • Acceptance Tests are the detailed specification of the system’s behavior for all meaningful scenarios, used to assert its correctness.

Acceptance Criteria

  • Ensures the team knows when they are done
  • Ensures the team does not forget important edge cases and considerations
  • Produced through collaboration of the developers, testers and product owners (3 amigos)
  • Created prior to development, during planning phase
  • Expressed at a high level (conceptual, not detailed)
  • Expressed in whatever form works best for the team…keep it minimal
  • Considers edge cases and failure scenarios
  • Keep it concise (minimum documentation needed by team…may be more for one team, less for another)

Acceptance Criteria is the minimal amount of documentation you need to specify the expected behavior of the feature and any edge cases that need to be considered. Agile favors working software over comprehensive documentation. This is an expression of that principle. We don’t flesh out every possible example, but provide “just enough” documentation to ensure the correct tests get written. The amount of documentation necessary may vary from team to team. We use a simple bulleted list to capture edge cases and things we need to consider. If your team is distributed, you may need more documentation. Teams that are co-located with testers and product owners (like us) need less.

Acceptance Tests

  • Defines behavior of system
  • Ensures the feature works as expected
  • Code implemented by developers and testers
  • Test definition can include product owners or customers if you are using a DSL that decouples the definition of a test from its implementation code (like with Gherkin or FitNesse)
  • Test definition can happen before development if using a DSL as mentioned above
  • Test implementation occurs during development (ideally in a test-first manner)
  • Tests are usually implemented in code, but if testing manually (hopefully only rarely), the “implementation” can be a list of steps/expectations

The Acceptance Tests are the fully detailed expression of the tests: their implementation. We use SpecFlow (the .NET equivalent of Cucumber), which allows us to specify Given-When-Then scenarios in the Gherkin syntax. We work with our product owners to create the actual test definitions in Gherkin, then developers pair with testers to implement them. When a test is not feasible to automate, we still write the Gherkin and simply follow it as a list of steps to run the test manually.
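For example, a scenario for a hypothetical login feature might read like this in Gherkin:

```gherkin
Feature: User login

  Scenario: Locked-out user cannot log in
    Given a user account that has been locked out
    When the user attempts to log in with valid credentials
    Then the login is rejected
    And the user is told the account is locked
```

The Given-When-Then definition is readable by product owners, while the step implementations behind it are written in code by developers and testers.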

I have found it useful to conceptually separate the documentation of the tests (Acceptance Criteria) from the implementation (the actual Acceptance Tests). It helps me to remember not to over-specify the tests. As long as the documentation is enough to ensure sufficient tests will be implemented, I don’t need to add more detail.

Posted in Acceptance Testing, Agile, Testing

The Effective Developer – Passion

This post begins a series of topics I will be posting for a book I am writing entitled “The Effective Developer”. The topics come from my Padawan to Jedi presentation, which I have given at the Austin Code Camp for the last two years. There seemed to be a lot of curiosity around the topics I presented, so I decided to put that knowledge down on paper to benefit a wider audience. I hope you all find it useful.

I do ask one thing. If you read this, and have any feedback, please post it in the comments. I welcome any constructive criticism you may have. Is there something else I can add? Should I remove something? Please let me know… Thanks.


The effective developer is passionate about his work. He loves what he does and is therefore driven to do it well.

If you really want to be great at something, you have to love it. How can you be motivated to excel at something you don’t enjoy? A healthy passion provides a wellspring of motivation. It drives you to do your best, and constantly improve your best. Just think of the advantage this gives you.

You see, we tend to do the things we enjoy. We think about them, read about them, and practice them–because we like them; we call them hobbies. A passionate developer’s hobby is his job. Because he loves it, he is driven to do it well. As you can guess, I am passionate about software development. I love reading a good tech book or blog, writing code or writing this book. These activities sharpen my skills, but they don’t feel like work. I enjoy them and they come naturally to me. I love what I do and I do what I love. This is the biggest secret to my success.

Sidebar: A warning about passion!

Please be aware that passion can lead to an unbalanced life. It’s easy to spend too much time on something you love doing. Resist that temptation. Don’t neglect the important areas of your life, like family, friends, church, etc. These are essential for happiness, and they are far more important than work. It’s tempting to believe that happiness can come from work alone. However, that kind of happiness is fleeting; it’s a lie; burnout and sadness soon take its place.

I will never forget what a wise friend once told me. He said, “I work to live, I don’t live to work”. This should be true for all of us.

When I interview developers, one of the most important things I look for is passion. I will hire a passionate developer even if they are lacking technically, because I know their passion more than compensates for their deficiencies. Because passion can have such a profound effect on a developer’s ability to learn and grow, I believe it is one of the strongest assets a developer can have.

Posted in The Effective Developer