Let's do a deep dive into some of the aspects of Cascade.


Cascade is made up of a number of components that work together to get the job done. The general procedure is to scan for scenarios, permute scenarios together into journeys and then filter those journeys into the final test set.

Once the test set is defined, we call the global lifecycle methods, instantiate separate objects for each test, share fields between them, and then call the test and scenario lifecycle methods as the test executes, notifying JUnit and the reporter as we go.

Finally we generate a test report and dispose of everything.

All of these functions are encapsulated within Strategy components, each responsible for a particular piece of the process.

  • Scanner
  • JourneyGenerator
  • ConstructionStrategy
  • TestExecutor
  • FilterStrategy
  • CompletenessStrategy
  • Reporter

Scanner - Implementations of this interface are responsible for finding class implementations that contain scenarios.

  • ReflectionsClasspathScanner - Cascade has one implementation of this Scanner that uses the Reflections library to scan the classpath for scenarios.

JourneyGenerator - Implementations of this interface take a set of scenarios and generate journeys from them. This class is the real workhorse of Cascade.

  • StepBackwardsFromTerminatorsJourneyGenerator - Cascade comes with one implementation of the journey generator based on an algorithm that works backwards from terminators.

ConstructionStrategy - This interface is implemented by components that create tests from a list of scenarios. This component is primarily engaged with instantiating all test artefacts, sharing fields between them according to the @Demands and @Supplies annotations and then calling the lifecycle methods on the control file.

  • StandardConstructionStrategy - Cascade comes with one implementation of the construction strategy.

TestExecutor - Implementations of this interface integrate with whatever testing framework is currently in use. This class also shares fields between steps during the execution of the test, fires any step handlers that are configured (more on this later) and fires callbacks on the reporter for report generation.

  • StandardTestExecutor - Cascade comes with only one implementation that interfaces with JUnit.

FilterStrategy - Implementations of this class apply filtering rules to the set of journeys that the JourneyGenerator generates. This component is the mechanism that interprets and implements the filtering logic specified by the @FilterTests annotation. The journey generation logic progressively applies the filtering strategy as journeys are generated in order to manage the volume of potential tests.

  • StandardFilterStrategy - Cascade comes with only one implementation.

CompletenessStrategy - Implementations of this class should enforce the logic described in the chapter on Completeness. In summary, it reduces the set of tests so that they retain some completeness properties. This is only possible by considering the set of tests as a whole, so this strategy runs after the initial test set has been generated.

  • StandardCompletenessStrategy - Cascade comes with only one implementation.

Reporter - Implementations of this strategy generate reports for the test run. Implementations of this strategy receive notifications during test execution in order to progressively generate output.

  • DisableReporter - No reports will be generated.
  • HtmlReporter - The default reporter is the html reporter. This reporter generates a set of files that are appropriate for retaining as part of a CI pipeline.

See the @Factory annotation for details on how to override these strategies.


Test construction is one of the gems of Cascade.

Consider the problem: you don't have a single artefact that defines the test, so you don't have a single artefact that can define all the dependent starting state.

Instead we have many scenario classes that will be almost randomly sequenced together, admittedly according to some rules. And each scenario contributes a portion of the starting state.

As you may have read on the topic of Using Cascade, the typical context within which Cascade operates requires it to set up starting state for a test. This means talking to stubs and databases immediately prior to executing a test, so that when the test executes, the data is there to support the scenarios in the current test.

How do we organise and collate that data in order to do this?

The answer to this is in the @Demands and @Supplies annotations.

The construction process is as follows:

  1. Instantiate control file and scenario objects.
  2. Collect supplied fields from instances (control file first).
  3. Inject demanded fields into all instances.
  4. Execute Given methods on scenario objects.
  5. Collect supplied fields from instances.
  6. Inject demanded fields into all instances.
  7. Execute Setup methods on control file.
  8. Inject demanded fields into all instances.

As you can see, the collection and injection of fields happens many times, in order to be as flexible as possible.
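The sharing mechanism can be sketched with plain reflection. Everything below is a stand-in defined for illustration (including the two annotations), not Cascade's own types:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.*;

// Stand-in annotations, defined here for illustration only.
@Retention(RetentionPolicy.RUNTIME) @interface Supplies {}
@Retention(RetentionPolicy.RUNTIME) @interface Demands {}

class FieldSharingSketch {

    static class LoginScenario {
        @Supplies String username = "anne";
    }

    static class ControlFile {
        @Demands String username;
    }

    // Collect @Supplies values keyed by field name, then inject them into
    // every @Demands field with a matching name.
    static void share(List<Object> instances) throws IllegalAccessException {
        Map<String, Object> supplied = new HashMap<>();
        for (Object instance : instances) {
            for (Field field : instance.getClass().getDeclaredFields()) {
                if (field.isAnnotationPresent(Supplies.class)) {
                    field.setAccessible(true);
                    supplied.put(field.getName(), field.get(instance));
                }
            }
        }
        for (Object instance : instances) {
            for (Field field : instance.getClass().getDeclaredFields()) {
                if (field.isAnnotationPresent(Demands.class) && supplied.containsKey(field.getName())) {
                    field.setAccessible(true);
                    field.set(instance, supplied.get(field.getName()));
                }
            }
        }
    }
}
```

Note that the variable name is the key, which is why naming drives the mappings.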

Take note of the following implications:

  • The variable name is what drives the mappings.
  • Variables supplied by Given methods cannot be demanded by other Given methods.
  • Given methods are executed in journey order.

So how about an example?

This example is taken from the first example application included with the Cascade Sources.

The sample application runs two login scenarios: the first scenario is successful while the second is not. The login step can be viewed here and is exemplified as:

public interface Login {

    public class SuccessfulLogin implements Login {

        @Supplies
        private String username = "anne";

        @Supplies
        private String password = "other";

        // ...
    }
}
The Successful Login Scenario (part)

The control file then takes over and sets up the stubs and database with this data. In the case of my sample application, the subject offers an endpoint to set all starting state. You can see that here but I've pulled out the relevant sections:

public class OnlineBankingTests {

    @Demands
    private String username;

    @Demands
    private String password;

    public void setup(Journey journey) {

        Map<String, String> user = new HashMap<>();
        user.put("username", username);
        user.put("password", password);

        // setupUser is a hypothetical helper standing in for the elided
        // REST call that POSTs the starting state to the subject system
        Response response = setupUser(user);

        assertEquals(200, response.getStatus());
    }
}
The Control File (part)

So the scenario Supplies the username and password of the user. The control file Demands the two fields and collates that data into a REST call that it makes on the subject system. In a more robust example the control file might make multiple REST and database calls to setup this state.

In the sample application, I have now set up a user and their password.

When it comes time to drive the user journey through this process using Selenium, all I have to do is have the Successful Login scenario enter the username and password it Supplied into the login fields and submit the form.

In the case of the FailedLogin scenario, the scenario should enter a password that does not match the password it supplied.

You can have another look at the Login example here.


Cleanup is much simpler.

  1. All the Clear methods are called on all instances.
  2. The Teardown method is called on the control file instance.

The only point to bear in mind is that the Clear methods are executed in journey order.

In the case of exceptions, these methods are not guaranteed to run.

And that concludes the Construction and Cleanup processes for a single test. Report generation happens after all tests are complete.

Test Execution

So onto actually running the tests, which is the point of it all.

Let's go over what has happened so far.

  1. Cascade scans the classpath for scenarios.
  2. Then it generates journeys by linking the scenarios together into ordered lists.
  3. Filtering occurs according to some rules.
  4. Cascade then starts to iterate through each journey.
  5. Scenarios are instantiated, fields are shared and the setup method on the control file is called.

We are ready now to run a test.

This chapter covers how a test is actually executed. But I will just outline what happens after the test executes now. (It seems like a good time.)

  1. We cleanup after each test.
  2. Cascade then writes the results of the test to a report file.
  3. If we have another test to execute, we run that.
  4. Finally the test report is generated.

Running a test

Each test is a list of instantiated scenarios that Cascade has shared fields between.

Cascade iterates through the list of scenarios in order, and calls their lifecycle methods. The primary methods are the When and Then methods. Cascade calls the When method, which contains an action. And then it calls the Then method which contains validation.

Test Structure

At this point you can plainly see the journey-like nature of these tests. They really do walk the subject system through many state transitions, validating along the way.
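The executor loop can be sketched as follows. The Scenario interface here is a simplification for illustration; Cascade actually finds the When and Then methods by annotation rather than through an interface:

```java
import java.util.List;

// A simplified stand-in for a step: one action, one validation.
interface Scenario {
    void when(); // drives a transition on the subject system
    void then(); // validates the resulting state
}

class ExecutorSketch {
    // Walk the journey in order, calling when() and then() on each step.
    static void run(List<Scenario> journey) {
        for (Scenario step : journey) {
            step.when();
            step.then();
        }
    }
}
```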

Let's take a closer look at the lifecycle methods.

The When method

@When
public void when() {
    // drive a transition on the subject system
}

The When Method

The When method is annotated with the @When annotation.

There must be exactly one When method and it must take no arguments. The method name can be anything you like.

The When method should contain code that drives a transition on the state machine of the subject system. In the example web applications, this would be a form submission or the clicking of a link.

It is a really good idea to put in a wait mechanism of some kind to pause Cascade at this point, until the subject system has completed the transition. You can see examples of this in the examples.
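As a sketch, a wait mechanism can be as simple as polling a condition with a timeout. This helper is an illustration, not part of Cascade's API:

```java
import java.util.function.BooleanSupplier;

class Waits {
    // Poll the condition until it holds or the timeout expires.
    static void waitUntil(BooleanSupplier condition, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("subject system did not complete the transition in time");
            }
            Thread.sleep(25);
        }
    }
}
```

In a Selenium-based When method, the condition would typically check that the page has reached its next state before the method returns.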

The Then method

@Then
public void then() {
    // validate the resulting state
}

The Then Method

The Then method is annotated with the @Then annotation.

There must be only one method and it must take no arguments. The method name is irrelevant.

The Then method should contain code that validates that a particular state has been achieved by the subject state machine.


Interspersed between the When and Then methods are a number of calls to JUnit and to the Reporter strategy. These calls allow each system to progressively update the state of the test run as the tests are executing.

There are also error callbacks, so that in the event of an exception being thrown, those systems are notified.

You can add your own systems that watch tests in the form of Step Handlers.

The Step Handlers

Handlers are registered callbacks that fire as the tests execute. You configure handlers by implementing the Handler interface and then attaching the class using an annotation.

public class Login {
    // the WaitASecond handler is attached to this step with an annotation
}

Configuring a Step Handler

This Step Handler will cause Cascade to pause for a second at the Login step.

You can configure handlers to execute at the step level as in this example, or you can configure them at the test level, by including the annotation in the control file. If a handler is configured at the test level, it applies to every step.

There are two lifecycle methods, the When and Then methods, so handlers can be configured in three possible places, by using the correct annotation.

You might have wondered why I suggested pausing Cascade in the When method so that Cascade is synchronized with the subject system when an action occurs, and not the Then method since the Then method immediately follows the When. The handlers are the reason. For example you might want to take a screenshot of the web page immediately after the action but before the validation. If you paused in the Then, your screenshot handler would be subject to race conditions.

So let's get to writing a handler. Here is an example:

public class WaitASecond implements Handler {

    public void handle(Object step) {
        try {
            Thread.sleep(1000); // pause for one second at this step
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
An Example Handler

As you can see, it is really straightforward. This example is very simple though. A much more sophisticated example would be to take a screenshot of the web page using Selenium after each action.

The handlers support the @Supplies and @Demands annotations.

You can see a complete example here in the example application included with the Cascade sources.

Test Generation

In this chapter I'm going to elaborate on how journeys are created from step files. Some of this is really heavy going, and you don't need to read this section. The only reason I've included it is that some people will question whether Cascade is safe to use for test generation. At times, the whole concept will appear to be Random.

It is not Random! The algorithm is deterministic, and consequently reliable. If you are worried about coverage, conflating that issue with how Cascade seemingly randomly generates tests is conflating two separate issues. Coverage is a problem that exists for all test frameworks. In fact, Cascade is better at handling this problem than most, as it keeps test code organised.

The algorithm itself is quite complicated, as it is necessary to deal with cycles in your step definitions, orphaned steps and implicit terminators. The test generator embodies much of what Cascade is.

Having said that, you can write your own. If you do, look carefully at what the current one does as it implements some pretty significant functionality.

The Journey Generator implements this interface:

public interface JourneyGenerator {
    List generateJourneys(List<Scenario> scenarios, Class<?> controlClass, Filter filter);
}

The Journey Generator Interface

The Journey Generator accepts these parameters.

  • scenarios <List> - This list of scenario objects holds all the references to the step files.
  • controlClass <Class> - This parameter is a reference to the controlClass class definition.
  • filter <Filter> - This filter object is an implementation of the FilterStrategy which implements the logic for enforcing the @FilterTests annotations in the control file.

The Standard Test Generation Algorithm

  1. Scenarios are sorted by the name of the step file class.

    The generation of tests is determined by the ordering of the step files. Sorting here insulates the test generator from any ordering sensitivity in the classpath scanner.

  2. Explicit Terminators are found.

    These are step files that are annotated with the @Terminator annotation.

  3. Re-entrant Terminators are found.

    These are step files that are annotated with the @ReEntrantTerminator annotation.

  4. Implicit Terminators are found.

    These are step files that have no following step files declared. In other words, they are never mentioned in a @Step annotation.

  5. Terminators are sorted by class name.

    This step is possibly redundant at this point, since the test generator is single threaded currently.

  6. Journeys are generated by looping through all terminators and calling the trail generator function.

    The trail generator function is a recursive function that appends scenarios on to the end of a working list of scenarios that is a journey.

    1. The trail is tested to see whether it already includes the current scenario. If it does, an infinite loop error is generated.
    2. The trail generator function is called for each eligible preceding scenario.
    3. The current scenario is tested to see whether it is the start of a journey. If it is, it is applied to the composite filter and, if it passes, it is added to the set of journeys and the trail generator function returns.

      The composite filter needs some elaboration.

      The composite filter composes other filters that are ordered. The first filter is the OnlyRunWith filter that implements the @OnlyRunWith logic. Following that is a specialised filter that is used for finding orphaned cycles. Then comes the filter supplied as an argument to the test generator, which should implement the @FilterTests logic. Finally there is the redundant filter test. This final filter builds up a journey image. I won't describe that here, as it is described later. The essential point to note is that each journey that is not matched by the journey image is accepted as a valid journey and is added to the journey image. This has the effect that later journeys are likely to be culled if they are redundant.

  7. Invalid scenarios that form orphaned cycles are tested for.

    Cascade will error if scenarios are found not to be connected to a journey start. These are journeys that are in orphaned cycles. Cascade cannot find a journey start point for these journeys, so they would never run.

  8. Journeys are tested for redundancy.

    The test generator performs a final check that no journey is composed within another.

    The algorithm for this final check needs some description.

    For each journey, a journey image is constructed of all journeys except the current one. The journey image is a summary of all the journeys that have been added to it. Once complete it contains all scenarios that appear at stage 1 for all journeys, and for stage 2 and so on until the image terminates at the last stage which contains only the scenarios of the journey that is the longest.

    The journey image can then be tested for whether it contains the journey in question. If the image covers the current journey, that journey is discarded.

    Note that this algorithm is order dependent in two senses. Firstly, a journey that is composed within another, but offset so that the stages do not match numerically, is not considered to be composed. Secondly, the first journeys that are tested are culled first, meaning that later journeys are more likely to remain in the set.

  9. Each journey is initialised.
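The backward walk in step 6 can be sketched as follows. This is a simplification for illustration, assuming the scenario graph is given as a map from each scenario to its possible predecessors, and omitting the composite filter:

```java
import java.util.*;

class TrailSketch {

    // Walk backwards from each terminator, prepending scenarios until a
    // journey start is reached.
    static List<List<String>> generate(Map<String, List<String>> predecessors,
                                       Set<String> starts,
                                       Set<String> terminators) {
        List<List<String>> journeys = new ArrayList<>();
        for (String terminator : new TreeSet<>(terminators)) { // sorted for determinism
            walk(terminator, new ArrayDeque<>(), predecessors, starts, journeys);
        }
        return journeys;
    }

    private static void walk(String scenario, Deque<String> trail,
                             Map<String, List<String>> predecessors,
                             Set<String> starts, List<List<String>> journeys) {
        if (trail.contains(scenario)) {
            throw new IllegalStateException("infinite loop through " + scenario);
        }
        trail.addFirst(scenario); // prepend: we are walking backwards
        if (starts.contains(scenario)) {
            journeys.add(new ArrayList<>(trail)); // a complete journey, start to terminator
        } else {
            for (String previous : predecessors.getOrDefault(scenario, List.of())) {
                walk(previous, trail, predecessors, starts, journeys);
            }
        }
        trail.removeFirst();
    }
}
```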


As you can see from that description of the current algorithm, the Journey Generator has these responsibilities.

  • It generates lists of scenarios.
  • It delegates journey filtering to the filter strategy supplied, which implements the @FilterTests logic.
  • It finds journey initiators and terminators, by looking at step files with the @Terminator, @ReEntrantTerminator and @Step annotations (with no arguments).
  • It enforces the @OnlyRunWith logic.
  • It generates a practical set of journeys in a reasonable time period.

If you prefer to look at code, you can view it here.

The Test Generation is Order Sensitive

If you look carefully at this algorithm, you will realise that it is order sensitive. The order of the step files passed to the Journey Generator affects the generation of journeys. You will probably think at first that if the algorithm merely permutes the step files, then the same number of journeys will be generated. But the algorithm culls journeys as well. And it does so based on the journeys that have already been generated. Depending on the order of journeys generated, they are culled differently.

The algorithm attempts to generate a best-effort-minimal-set. A concept of Redundancy is defined that considers the ordinal value of steps to be significant. If a journey is entirely composed within another journey, but is executed later in order in the larger journey, then that journey is not considered to be composed within the other journey.
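The journey image test behind this notion of redundancy can be sketched like this (a simplified illustration, not the shipped implementation):

```java
import java.util.*;

// Each stage of the image holds every scenario seen at that stage across the
// journeys added so far.
class JourneyImageSketch {

    private final List<Set<String>> stages = new ArrayList<>();

    void add(List<String> journey) {
        for (int i = 0; i < journey.size(); i++) {
            if (stages.size() <= i) {
                stages.add(new HashSet<>());
            }
            stages.get(i).add(journey.get(i));
        }
    }

    // A journey is covered only if every scenario matches at the same stage,
    // so a journey composed in another but offset is not considered covered.
    boolean covers(List<String> journey) {
        if (journey.size() > stages.size()) {
            return false;
        }
        for (int i = 0; i < journey.size(); i++) {
            if (!stages.get(i).contains(journey.get(i))) {
                return false;
            }
        }
        return true;
    }
}
```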

There is another order sensitive issue as well. The algorithm implements a double pass culling procedure. As journeys are generated they are culled according to the set of journeys that have been already accepted. Then at the end of the algorithm, we perform the same culling procedure, but play the journeys through the culling procedure in reverse order.

So the point is really that the set of journeys generated is not the perfect set you might imagine. I could change the algorithm to correct these deficiencies, but I've decided not to, as the extra processing involved is considerable, and the way the current algorithm works is likely to offer the most value for the least amount of processing time.


Completeness is the true power of Cascade.

Using more traditional methods, black box tests or journeys are basically stories that are defined in either Tests or Story Files. These artefacts read pretty much like stories, in that they define a number of actions followed by validation that execute against the subject system.

There is typically a separate artefact for each journey or story. Over time there can be a great many artefacts.

Cascade is really all about finding clever ways to construct these journeys. By doing so we address these questions.

  • What have I missed?

    This question is not that trivial. You have composed state transitions into an unstructured data form. Finding gaps means going to all the unstructured data and drawing parallels between them, identifying commonality and trying to identify what is missing from the resulting picture. A developer or tester performs this action, so it is labour intensive and subject to human error.

    Cascade allows you to define all the scenarios for a step in the same place. This reduces the scope under consideration, which allows you to identify gaps more easily. The test artefacts are structured so it's easy to investigate relationships.

  • What is redundant?

    This is the other side of the coin to what is missing. This isn't trivial either, as these kinds of tests are very expensive. Any redundancy can have a great effect on the time taken to execute the tests, which impacts the number of development iterations that occur, which in turn affects how efficient developers are and the stability of the code base when the subject system is finally deployed.

    The test generation algorithms reduce redundancy.

By structuring the test code, we have happened upon a really interesting feature of Cascade:

We can define what we mean by Complete.

You can make an active decision about the level of completeness you want for a given test set. You can then generate different test sets for different Continuous Integration pipelines.

So Cascade defines the following levels of Completeness.

  • UNRESTRICTED - This level of completeness accepts all journeys that the Journey Generator offers.
  • SCENARIO_COMPLETE - This level filters the set of journeys such that a minimal set of journeys is found, but all Scenarios are included in the set of journeys at least once. A Scenario here is an implementation of a Step class that contains a Then method.
  • TRANSITION_COMPLETE - Here, the set of journeys that result are a minimal set of journeys such that each Transition is included at least once. A Transition here is an implementation of a step class that contains a When method.
  • STATE_COMPLETE - The State Complete set of journeys is a minimal set of journeys where only one Scenario class is included for each State that is defined. A State definition here is most commonly the definition of a class or interface that has a @Step annotation.
@CompletenessLevel(SCENARIO_COMPLETE) // argument form assumed for illustration
public class OnlineBankingTests {
    // ...
}

Configuring Completeness Level

Completeness is configured via the @CompletenessLevel annotation on the control file. It is optional, so if you leave it out, Cascade defaults to UNRESTRICTED.

The Completeness Strategy runs after the Journey Generator has run. This means that completeness is determined by the test set after the tests have been filtered.

The Completeness Algorithm

Here, I am going to describe the algorithm used to find the minimal set of journeys.

  1. The identifier producer function is determined based on the Completeness level.

    The identifier producer function is passed to the histogram generator which immediately follows.

  2. A histogram is generated for each State, Transition or Scenario based on the identifier producer function.

    The histogram is in some way an aggregation of all the steps that have been defined. Based on different identifiers, the step files are aggregated in different ways.

    For example, if two steps inherit from the same interface that has the @Step annotation, then if State Complete is defined, they end up in the same bucket. If Scenario Complete is defined, they end up in different buckets if they each have different Then methods.

  3. Each identifier is assigned an order number based on the number of step files that have been collated underneath it.

    Each identifier has a bucket. Buckets that have the same number of step files assigned to them receive the same order number. The order number is assigned from the smallest bucket to the largest, so that step files, in buckets with order numbers that are smaller, are more common than step files in buckets with larger order numbers.

  4. Each journey then has its value calculated based on the order number of the buckets to which the step files that belong to the journey have been assigned.

    This means that journeys with high values contain step files that are not very common.

  5. The most valuable journey is identified and extracted to the result set.

    And the buckets that hold the step files in that journey, have their order numbers set to zero.

  6. The algorithm repeats from step 4, where each journey's value is calculated.

    Since some buckets have had their order number set to zero, step files that are in the result set contribute nothing to the valuation of the remaining journeys.

  7. The algorithm stops when all remaining journeys have a value of zero.
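Steps 3 to 7 can be sketched as follows. This is an illustration under my reading of the algorithm (in particular, that larger buckets receive smaller order numbers, so that rare steps contribute more value), not the shipped implementation:

```java
import java.util.*;

class CompletenessSketch {

    // Step 3: distinct bucket sizes are ranked so the largest bucket gets
    // order number 1; equal-sized buckets share a number.
    static Map<String, Integer> orderNumbers(Map<String, Integer> bucketSizes) {
        List<Integer> sizesDescending = new ArrayList<>(new TreeSet<>(bucketSizes.values()));
        Collections.reverse(sizesDescending);
        Map<String, Integer> orders = new HashMap<>();
        for (Map.Entry<String, Integer> entry : bucketSizes.entrySet()) {
            orders.put(entry.getKey(), sizesDescending.indexOf(entry.getValue()) + 1);
        }
        return orders;
    }

    // Steps 4-7: repeatedly extract the most valuable journey, zeroing the
    // order numbers of its steps, until every remaining journey is worth zero.
    static List<List<String>> minimalSet(List<List<String>> journeys,
                                         Map<String, Integer> orderNumbers) {
        Map<String, Integer> order = new HashMap<>(orderNumbers);
        List<List<String>> result = new ArrayList<>();
        while (true) {
            List<String> best = null;
            int bestValue = 0;
            for (List<String> journey : journeys) {
                int value = 0;
                for (String step : journey) {
                    value += order.getOrDefault(step, 0);
                }
                if (value > bestValue) {
                    bestValue = value;
                    best = journey;
                }
            }
            if (best == null) {
                break; // all remaining journeys have a value of zero
            }
            result.add(best);
            for (String step : best) {
                order.put(step, 0); // these steps are now covered
            }
        }
        return result;
    }
}
```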

And that's really all that needs to be said on Completeness.


But of course, Completeness isn't the only advantage. We have taken the trouble to structure our test artefacts by modeling the subject system's state machine. We can now generate reports that include diagrams of the state machine.

The Expected State Model

We have an alternative state model in terms of all the fields we Supply and Demand in our test artefacts. As we execute our tests, we update this model so that steps can reference it to obtain the expected value for any assertions they make.

We do this in the example application. I've extracted the relevant code below. This code sets up a standing order in the online banking example.

public class SetupStandingOrder {

    @Demands
    private WebDriver webDriver;

    @Demands
    public List<StandingOrder> standingOrders;

    @When
    public void when() {
        enterText(webDriver, "[test-input-description]", "magazine subscription");
        //enter more fields

        standingOrders.add(new StandingOrder().setDescription("magazine subscription"));
    }

    @Then
    public void then() {
        for (int row = 0; row < standingOrders.size(); row++) {
            assertStandingOrderRow(webDriver, row, standingOrders.get(row));
        }
    }
}
Updating the Expected State Model

As you can see in the example, we have created a standing order with a description. We add a new standing order object to the Expected State Model by adding to a field that this step Demands.

Cascade is aware of these fields since it shares them between components. And since Cascade is aware of those fields, it can include them in the test reports.

Which brings us to renderers.


You don't get the nice presentation for free. When Cascade generates the reports, it iterates through all the scenarios and persists all the fields shared between the artefacts. It does so either through a default Jackson serializer or by using a renderer.

public interface StateRenderingStrategy {
    boolean accept(Object value);
    String render(Object value);
}
The StateRenderingStrategy Interface

There are two different rendering strategies.

The StateRenderingStrategy persists the current values for all the fields into html. It is intended to produce a static view of the expected state.

You can see an example used by Cascade here, to render a list of strings.
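As an illustration, a renderer for lists of strings might look like this. The renderer class is hypothetical (the one shipped with Cascade may differ); the interface is restated so the sketch is self-contained:

```java
import java.util.List;

// The interface from above, restated so this sketch compiles on its own.
interface StateRenderingStrategy {
    boolean accept(Object value);
    String render(Object value);
}

// Hypothetical renderer: turns a list into an html bullet list.
class ListOfStringsRenderer implements StateRenderingStrategy {

    public boolean accept(Object value) {
        return value instanceof List;
    }

    public String render(Object value) {
        StringBuilder html = new StringBuilder("<ul>");
        for (Object item : (List<?>) value) {
            html.append("<li>").append(item).append("</li>");
        }
        return html.append("</ul>").toString();
    }
}
```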

public interface TransitionRenderingStrategy {
    boolean accept(Object value);
    Object copy(Object value);
    String render(Object value, Object copy);
}
The TransitionRenderingStrategy Interface

The TransitionRenderingStrategy receives a before and after picture of the state transition that just occurred. This allows it to generate html describing the transition from one state to another.

Writing Transition Rendering Strategies can be a bit more complex. There are some examples in the sample applications as in here. They tend to be very customized to the data structure involved.
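As a sketch, a transition renderer for lists might report the items added during the transition. This is a hypothetical example, assuming the copy is taken before the transition runs; the interface is restated so the sketch compiles on its own:

```java
import java.util.ArrayList;
import java.util.List;

// The interface from above, restated so this sketch compiles on its own.
interface TransitionRenderingStrategy {
    boolean accept(Object value);
    Object copy(Object value);
    String render(Object value, Object copy);
}

// Hypothetical strategy: the copy is the "before" picture, the value is the
// "after" picture, and the html reports what was added in between.
class AddedItemsRenderer implements TransitionRenderingStrategy {

    public boolean accept(Object value) {
        return value instanceof List;
    }

    public Object copy(Object value) {
        return new ArrayList<>((List<?>) value); // snapshot before the transition
    }

    public String render(Object value, Object copy) {
        List<Object> added = new ArrayList<>((List<?>) value);
        added.removeAll((List<?>) copy);
        return "<p>added: " + added + "</p>";
    }
}
```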

So how do you go about applying a renderer? There are two methods of doing so, and they appear to be very similar.

public class TakeScreenshot implements Handler {

    @Supplies(stateRenderer = ScreenshotStateRendering.class)
    private String screenshot;

    // some code
}
Annotating the Screenshot Field

The first method is to annotate a field. There is an example of this in the TakeScreenshot handler, which you can view in its entirety here. I've included the relevant section above.

When you configure the renderer this way, the renderer applies to the field based on its name. The accept methods in the interfaces are not used. In the example, I always return true, since I know the method is never called.

You would specify the renderer as an argument to the @Supplies annotation.

public class OnlineBankingTests {
    // @StateRenderingRule annotations are declared here
}

Configuring a Renderer in the Control File

The other method is to configure the renderer in the control file.

When you use this method, the renderer uses the accept methods defined within the renderers in order to determine which renderer will render which field.

You would make use of the @StateRenderingRule annotation to specify the general renderers.