Before it wrecks itself, Codebots checks itself

by Lana Brindley, Nov 24, 2017

When humans write code, they also need to write tests to ensure their code works properly and doesn't cause problems in other pieces of code that have already been written. The same is true when codebots write code: they also need to write tests.

The reason we test things is to ensure that what we expect to happen is what is actually happening. A test passes if what we expect to happen actually happens, and it fails if something different happens.

Codebots use a system called 'model-based testing'. When you give a codebot instructions for the code you want it to write, the first thing it does is create two models. One model is used to write the code, and the other is used to test that code and make sure it works properly. This is related to the fourth unbreakable rule of codebots: codebots write both the development and testing targets.

The different types of tests

Codebots are pretty smart, but when it comes to testing they usually need a bit of help from humans. This is especially important in situations where a human is writing code alongside a codebot. In very general terms, a codebot can create and automatically run tests for the 'back-end' system (all the bits that make your application work), but a human needs to write tests for the 'front-end' pieces (all the bits that you can see and interact with in your application). This will vary depending on the complexity of the application you are creating, how much of the code has been written by humans, and whether or not the codebots have seen this type of thing before.

Once all the tests have been created, they are run against the code, and the results are placed on the dashboard for you to monitor. You can also use the Traceability Matrix to see each user story against each test, so you can evaluate how complete each work task is.

Automatically written tests

These are tests that the codebot can create itself, based on the information you have given it. The codebot writes models, which are then used to create both the code and the tests.

Manually written tests

These are tests that you create to augment the automatically generated tests. You can use the Scenario Builder in the Codebots interface to drag and drop elements to create your tests, or you can write them directly yourself.

User acceptance tests (UATs)

These are a type of manually created test that seeks to understand how users interact with the application. They cannot be automated, because they look at uniquely human factors, like whether a colour is hard to read, or whether a button is difficult to find. It is important that you do user acceptance tests to make sure your application is easy for people to use, especially people with a disability or impairment, but codebots cannot generate these tests.

Writing great tests

For the automatically generated tests, you don't need to do anything: the bots will do it all for you and display the results in your Dashboard.

For manually created tests, including user acceptance tests (UATs), you will need to build them in the Scenario Builder. This section discusses how to construct a testing scenario, and the kinds of things you need to consider testing.

When you begin thinking about the tests you will require for your project, start by considering the user stories you created in the Stories Backlog. What will a user need to do with your system to achieve the goals you discussed in your epics and user stories? They will need to be able to do things like log in, click buttons to make things happen, enter and delete information, view content on a screen, and change content. Each of these interactions is considered to be a 'scenario' that requires testing. As an example, we're going to assume that the interaction you want to test is a user clicking on a button labelled "Create tracking number" and expecting a new tracking number to be displayed on the screen. 

Each scenario is expressed as a series of steps—things like entering text, clicking buttons, or loading pages—that a user must go through in that scenario. Each step in a scenario must be tested and pass for the entire scenario to pass. In our example, the steps are logging in to the system, clicking on a button, and waiting for a response. Things like buttons and text fields are generally referred to as 'elements', and each element has a unique identification code so that the system knows which one you mean.

Tests are written in a standardised format, to ensure you get all the details you need. This format can be expressed as a sentence:
Given [a particular situation], when [I complete these steps], then [I expect this result].
Once you've filled in the blanks, you end up with a test that looks something like this:
Given I log in to the system, when I click on the element labelled "Create tracking number" in section "body", then verify that the field labelled "Tracking Number" contains a value.
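This Given/When/Then pattern is the same one used by behaviour-driven development tools such as Cucumber, where scenarios are saved as plain-text 'feature files'. Written that way, our example might look something like the sketch below (the exact step wording the Scenario Builder expects may differ):

    Feature: Tracking numbers

      Scenario: Create a new tracking number
        # Given: the situation the user starts in
        Given I log in to the system
        # When: the task the user performs
        When I click on the element labelled "Create tracking number" in section "body"
        # Then: the expected result, which is what gets checked
        Then verify that the field labelled "Tracking Number" contains a value
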
Let's look at each of those statements in turn:

Given: This is the environment or situation that your user is in. They might need to be logged in to the system, looking at a particular page, or using a particular function.

When: These are the tasks your user needs to perform. Things like clicking buttons, writing in text boxes, or selecting options from menus. There can be more than one step here, depending on how complicated your test scenario needs to be (there's a multi-step sketch after this list).

Then: This is what your user will expect to happen, and it's what is tested. If the thing your user expects to happen in this step actually happens, then your test will pass. If something else happens, then it will fail.
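To see how several When steps fit together, here is a slightly bigger sketch for the same hypothetical tracking system. The step phrasing and element labels here are illustrative, not taken from the Codebots platform:

    Feature: Tracking numbers

      Scenario: Find an existing tracking number
        Given I log in to the system
        # Multiple When steps run in order; every one of them must pass
        When I enter "TN-1234" into the field labelled "Search"
        And I click on the element labelled "Search" in section "body"
        Then verify that the section "results" contains the text "TN-1234"

If any single step fails, the whole scenario fails, and the details on your Dashboard will show you which step broke.
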
Sometimes, despite our best efforts, tests will fail. In this case, don't panic! Failing tests are a part of software development: no one can be perfect all the time. If you see a failing test in your Codebots interface, you can view the details to find out why the test failed, which gives you the information you need to fix whatever is wrong.
The Codebots platform will be released in 2018. Click here for early access.