Microservices Presentation from Keep Austin Agile 2015 Conference

On Friday I gave this presentation on Microservices at the Keep Austin Agile 2015 conference in Austin, TX. Below you will find the video and slides as well.

I presented Microservices as a solution that solves some very difficult problems, but it does so by swapping those problems for other problems that are easier to solve. To use Microservices, your organization has to be mature enough to solve the problems Microservices introduce. It’s not a “free lunch,” as Benjamin Wootten would say. I cover several patterns and practices that help in solving these problems, as well as common anti-patterns and pitfalls that I have fallen into.

Microservices – Practices, Patterns & Pitfalls from Chris Edwards on Vimeo.

The audio quality is pretty bad. For some reason, my laptop wasn’t set to record audio, so I had to use the audio from my back-up recorder…but it’s not that great. My apologies.

Here are some useful resources (that were in the presentation) to learn more about Microservices:


Push Your Tests Down: Part 1 – Testing Levels

In this series, I want to touch on one of the biggest traps people fall into with test automation: writing too many high-level tests. I have made this painful mistake myself: I struggled with constant test failures and spent many hours troubleshooting things that weren’t even problems in production. They were just bad (flaky) tests. I finally found my way out of that mess, and hopefully I can help you do the same.

For the beginners here, I’ll start with what levels you can write tests at and why lower level tests are more valuable. I’ll show you why testing through the UI layer is so painful, and how to push higher level tests down.

I’ll try to keep each post short and to the point. Without further ado, here is part 1.

Levels of Testing

There are many different ways to test your code, but they can all be boiled down to three main categories, or levels.

FYI: The names of the levels may differ based on who you are talking to, but the underlying concepts are the same. Seems no one can agree on what the best names are for these things.

Unit Tests

Unit tests are the lowest level tests. These tests are very focused and usually only test a few lines of code. They run completely in-memory and never access external resources like disk or network so they are very fast. Because of this, you can run them often and get very fast feedback on errors. I run mine every few minutes. When they fail, I can usually just hit CTRL-Z to undo my last few changes and they are passing again. No need to debug!

Even when they fail later (when I can’t just hit CTRL-Z), the problem is usually obvious. There are only a few lines of code under test, so I don’t have to look far for the problem. For the same reason, there are only a few things that could actually cause the test to fail, so unit tests don’t fail that often.

Unit Tests are very low cost. Easy to write. Easy to maintain.
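To make this concrete, here’s a sketch of the kind of unit test I mean (the PriceCalculator class is made up just for this example, and I’m assuming NUnit). It runs entirely in memory and exercises only a few lines of code:

using NUnit.Framework;

// A tiny class under test, invented purely for illustration.
public class PriceCalculator
{
    public decimal CalculateTotal(decimal amount) =>
        amount > 100m ? amount * 0.90m : amount;
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Applies_a_ten_percent_discount_to_orders_over_100_dollars()
    {
        // Runs entirely in memory: no disk, network or database access.
        var calculator = new PriceCalculator();

        var total = calculator.CalculateTotal(150m);

        // Only a few lines of production code are exercised, so a failure is easy to localize.
        Assert.AreEqual(135m, total);
    }
}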

Integration Tests

Integration tests validate the interactions of multiple classes with each other and with the environment (the disk, network, databases, etc.). These tests are inherently slower than unit tests, so you can’t run them as often. This means it may take longer before you realize you introduced an error.

Plus, since they cover much more code, there are more reasons these tests can fail, so they tend to fail more often than unit tests. And when they do, sifting through more code means it takes longer to figure out where the problem is.

Integration tests take more effort to build and maintain than unit tests. They are harder to debug, and when they fail it takes longer to identify the issue.
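For contrast, here’s a sketch of a small integration test (again, the ReportWriter class is made up and I’m assuming NUnit). It touches the file system, which is exactly what makes it slower and gives it more ways to fail:

using System.IO;
using NUnit.Framework;

// A small class under test, invented for illustration: it writes a report to disk.
public class ReportWriter
{
    public void Write(string path, string contents) => File.WriteAllText(path, contents);
}

[TestFixture]
public class ReportWriterIntegrationTests
{
    [Test]
    public void Writes_the_report_to_disk()
    {
        var path = Path.Combine(Path.GetTempPath(), "report.txt");

        new ReportWriter().Write(path, "hello");

        // Touching the file system makes this slower, and gives it more ways to fail
        // (permissions, locked files, a full disk) than an in-memory unit test.
        Assert.AreEqual("hello", File.ReadAllText(path));

        File.Delete(path);
    }
}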

UI or End-to-End Tests

These tests are sometimes called “functional” tests as well. They test the fully deployed system as a black box, just like a user would use it. These usually interact directly with the UI.

These tests exercise all the code in the system, from the UI down to the database. They also exercise any third-party resources and external systems. So if anything in that chain breaks, the test will fail. And when a test does fail, because so many things could have caused it, it’s often very hard to determine what went wrong. I often find myself sifting through log files and logging in to remote servers to figure out what the heck broke. Not fun.

These tests are also the slowest to run, so they aren’t run very often at all. When you introduce an error, it may be a long time before you realize it. By then you have moved on to something else, and it takes additional effort to get your head back around the problem so you can debug it.

These tests are brittle, and very difficult to maintain. They have the highest cost of all the types of tests you can write. They do have value, but it comes with a cost.
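To show what that cost looks like, here’s a sketch of a UI test written with Selenium WebDriver (the URL and element IDs are made up). Notice how much of the system has to be up and working just for it to pass:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class LoginEndToEndTests
{
    [Test]
    public void User_can_log_in()
    {
        // Needs a deployed application, a browser, the network and the database;
        // a failure in any one of them will fail this test.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("https://myapp.example.com/login"); // hypothetical URL
            driver.FindElement(By.Id("username")).SendKeys("testuser");   // hypothetical element IDs
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("login-button")).Click();

            Assert.IsTrue(driver.FindElement(By.Id("welcome-message")).Displayed);
        }
    }
}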

The Obvious Conclusion

So, given what you just read, where is the most valuable place to focus your testing efforts? Yeah, I don’t even need to give you the answer. It’s pretty self-evident:

Always test at the lowest level possible.

When you have tests at a high level, it’s best to “push them down” as far as you can. That’s what this series is all about.

So, take a look at your tests. Where have you focused all your efforts? Are you fighting to keep the tests running? Is there any correlation between the two? If so, stay tuned, we’ll look at ways to fix this.


Effective Test Automation Presentation – Keep Austin Agile 2014

This weekend I gave a presentation at the Keep Austin Agile conference called “Effective Test Automation”.

Effective Test Automation from Chris Edwards on Vimeo.

Keynote file
Slides as PDF


It’s easy to write tests, but it’s not so easy to maintain them over time. Tests should not be a drain on your productivity; they should enhance it. However, many teams struggle just to keep their tests running—sacrificing time they could be spending developing valuable new features. How can we avoid these pitfalls? What practices and principles are effective? Which ones lead to productivity drain? This talk seeks to answer those questions and more by separating the effective practices and principles from the ineffective ones.

Topics covered:

  • Where should we focus testing efforts? Unit Tests, Integration Tests or UI Tests?
  • How can we best use mocks and stubs without creating fragile tests?
  • How can we instill a culture of testing?
  • How should we handle test data in our tests?
  • How can we use Fluent Data Builders and Anonymous Data to simplify testing? (See the sketch after this list.)
  • And much more.
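Since fluent data builders come up in the talk, here’s a quick sketch of the idea (the Customer class and its defaults are made up for illustration): the builder supplies sensible “anonymous” defaults so each test only spells out the data it actually cares about.

// A hypothetical domain class, used only for illustration.
public class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
    public bool IsPreferred { get; set; }
}

// Fluent builder: sensible "anonymous" defaults, overridden only where a test cares.
public class CustomerBuilder
{
    private string _name = "Anonymous Customer";
    private string _country = "US";
    private bool _isPreferred;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }
    public CustomerBuilder Preferred() { _isPreferred = true; return this; }

    public Customer Build() =>
        new Customer { Name = _name, Country = _country, IsPreferred = _isPreferred };
}

// Usage in a test: only the detail the test cares about (preferred status) is spelled out.
// var customer = new CustomerBuilder().Preferred().Build();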

Automatically Ignoring Untrusted SSL Certificates in Firefox Using WebDriver and C#

TL;DR – You can tell WebDriver to automatically ignore untrusted SSL certificates on Firefox by setting the “webdriver_assume_untrusted_issuer” preference to false in the Firefox profile.

We recently ran into an issue where our tests were failing because Firefox was showing the “This Connection is Untrusted” window. Firefox was complaining that our SSL certificate was not from a trusted source (this happens when you use self-signed certs for development). Here is the screen we were seeing:

The "This Connection Is Untrusted" page Firefox was showing

Googling the issue brought up a lot of solutions for Java, but none that worked for C#. We called it a night, and the next morning, when I got in, my coworkers Jason Bilyeu and Carl Cornett had solved the issue. They found that you can set the “webdriver_assume_untrusted_issuer” preference to “false” in the Firefox profile and it will ignore the cert.

Here is the code:

var profile = new FirefoxProfile();
profile.SetPreference("webdriver_assume_untrusted_issuer", false); // ignore untrusted SSL certs
return new FirefoxDriver(profile);

I hope this helps anyone who has had issues with this like we did.


Acceptance Criteria vs. Acceptance Tests: Minimize your documentation

To know that a story is done, we create a set of acceptance tests (story tests); when they pass, we know the story is complete and functions as expected.

The process we follow for defining our acceptance tests consists of two steps:

  1. We first come up with high level acceptance criteria.
  2. Right before or during development, we use the criteria to define the actual acceptance tests.

You’ll notice that we make a distinction between Acceptance Criteria and Acceptance Tests. Here are the definitions as I see them:

  • Acceptance Criteria is the minimal documentation that ensures a sufficient implementation of acceptance tests.
  • Acceptance Tests are the detailed specification of the system’s behavior for all meaningful scenarios, used to assert its correctness.

Acceptance Criteria

  • Ensures the team knows when they are done
  • Ensures the team does not forget important edge cases and considerations
  • Produced through collaboration of the developers, testers and product owners (3 amigos)
  • Created prior to development, during the planning phase
  • Expressed at a high level (conceptual, not detailed)
  • Expressed in whatever form works best for the team…keep it minimal
  • Considers edge cases and failure scenarios
  • Keep it concise (minimum documentation needed by team…may be more for one team, less for another)

Acceptance Criteria is the minimal amount of documentation you need to specify the expected behavior of the feature and any edge cases that need to be considered. Agile favors working software over comprehensive documentation. This is an expression of that principle. We don’t flesh out every possible example, but provide “just enough” documentation to ensure the correct tests get written. The amount of documentation necessary may vary from team to team. We use a simple bulleted list to capture edge cases and things we need to consider. If your team is distributed, you may need more documentation. Teams that are co-located with testers and product owners (like us) need less.
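To illustrate (with a made-up story, not one of ours), the acceptance criteria for a “reset password” story might be captured as nothing more than a list like this:

  • User receives a reset email when they request a password reset
  • Reset link expires after 24 hours
  • Expired or already-used links show a clear error message
  • Password rules (length, complexity) are enforced on the new password
  • Requesting a reset for an unknown email address does not reveal whether the account exists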

Acceptance Tests

  • Defines behavior of system
  • Ensures the feature works as expected
  • Code implemented by developers and testers
  • Test definition can include product owners or customers if you are using a DSL that decouples the definition of a test from its implementation code (like with Gherkin or FitNesse)
  • Test definition can happen before development if using a DSL as mentioned above
  • Test implementation occurs during development (ideally in a test-first manner)
  • Tests are usually implemented in code, but if testing manually (hopefully only rarely), the “implementation” can be a list of steps/expectations

The Acceptance Tests are the fully detailed expression of the tests: their implementation. We use SpecFlow (the .NET equivalent of Cucumber), which allows us to specify Given-When-Then scenarios in the Gherkin syntax. We work with our product owners to create the actual test definitions in Gherkin, then developers pair with testers to implement them. When a test is not feasible to automate, we still write the Gherkin and can simply follow it as a list of steps to run the test manually.
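For anyone who hasn’t seen SpecFlow, here’s a minimal sketch of what that looks like (a made-up feature, not one of our real stories): the Gherkin scenario is the shared test definition, and the C# step bindings are its implementation.

using NUnit.Framework;
using TechTalk.SpecFlow;

// The Gherkin scenario (the shared test definition, written with the product owner):
//
//   Scenario: Preferred customers get free shipping
//     Given a preferred customer
//     When they place an order for $50
//     Then the shipping cost is $0
//
// The step bindings below are its implementation.
[Binding]
public class ShippingSteps
{
    private bool _isPreferred;
    private decimal _shippingCost;

    [Given(@"a preferred customer")]
    public void GivenAPreferredCustomer() => _isPreferred = true;

    [When(@"they place an order for \$(\d+)")]
    public void WhenTheyPlaceAnOrder(decimal amount) =>
        _shippingCost = _isPreferred ? 0m : 5.99m; // hypothetical shipping rule

    [Then(@"the shipping cost is \$(\d+)")]
    public void ThenTheShippingCostIs(decimal expected) =>
        Assert.AreEqual(expected, _shippingCost);
}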

I have found it useful to conceptually separate the documentation of the tests (Acceptance Criteria) from the implementation (the actual Acceptance Tests). It helps me to remember not to over-specify the tests. As long as the documentation is enough to ensure sufficient tests will be implemented, I don’t need to add more detail.
