Testing with mocha, chai, and puppeteer
In this article I will outline how I am using Puppeteer with Mocha (a JavaScript test runner) and Chai (an assertion library we will use to state our expectations).
Puppeteer, for those who are unfamiliar, is a Node library for driving "headless Chrome" (or more specifically Chromium, the open-source version of the Chrome web browser). We are going to use Mocha to run the tests, Puppeteer to drive the browser, and Chai to check expectations against what we see inside Puppeteer.
I have used a variety of test frameworks in the past, particularly in PHP, and was never really enthusiastic about any of them. That changed because of the things I enjoy about Puppeteer, which you will see documented below.
Testing the happy path...
Before we get any further it is worth noting that end-to-end tests are typically very brittle, so we are going to focus on testing the happy path: what we think the user will be doing in the majority of cases. After that, we will add tests to catch the edge cases we may not uncover until we work through some of the site's initial workflows.
With that in mind, if you want to test a site that already exists and doesn't have any tests, you are in the right place. Puppeteer is ideal for adding tests to projects that are already live, because the tests run in a standard browser.
Puppeteer: what is so good about it?
Some key features set Puppeteer apart from the other tools out there. Here is a quick summary:
1. Very simple installation
A specific Chromium (open-source Chrome) version gets downloaded along with the test suite, so you always know exactly which browser version is running the tests (especially if you pin the version in package.json). Because that browser binary is large, make sure your node_modules folder is in your .gitignore file to prevent clogging your repo with it.
Your teammates will simply "npm i" and then they can run your tests. Zero config. Beautiful.
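If you want to pin an exact version rather than accept minor updates, drop the caret from the puppeteer entry in package.json (the version number below is only an example):

"dependencies": {
  "puppeteer": "1.0.0"
}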
2. Less limiting than other libraries
The request library is often used for automated testing, but it is limiting. Having a full browser at your disposal lets you test UI elements that render late, render incorrectly, or don't render at all when all you have is the raw HTML response.
Puppeteer also has an option to turn headless mode off, which lets you watch the tests run in real time if you don't quite trust what "headless mode" is doing.
So unlike many other tools, it can fully test Vue.js, React, Angular, and any other dynamic site. You just "wait" for selectors to appear at the right time and then test the page like any other (note: waiting on a selector can be a bit flaky, so I often wait based on time instead).
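For example, a dynamic page can be handled roughly like this (the '#app-loaded' selector, URL path, and timings are invented for illustration):

// wait until the framework has actually rendered the element we care about
await page.goto(host + '/dashboard', {timeout: 0});
await page.waitForSelector('#app-loaded', {timeout: 5000});
// ...or, if waiting on the selector proves flaky, just wait a fixed amount of time
// await page.waitFor(2000);
expect(await page.$eval('#app-loaded', e => e.textContent)).to.include('Welcome');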
You won't be installing awkward add-ons for basic functionality. You get it all by default.
3. Streamlined use of selectors
- These are the selectors you already know from JavaScript. You can query any element and then perform actions on it (click, touch, keyup, etc.). There are actions described throughout the API that help with this part.
If you are just learning these skills in JavaScript they are all immediately transferable to other forms of work. As a business owner, I love this.
After writing a few tests you will see that they are very repetitive. Your workflow boils down to using the dev tools inspector to select an element, right-clicking it, choosing Copy > Copy selector, and dropping the result into your boilerplate pattern. Super easy. The rest is just mastering the basics of (ES6+) JavaScript.
- It is very quick to begin writing tests. Most of the time you just need one selector on your rendered content that points to the thing to click/touch/etc., and a second selector (or more) to test the result you want to happen. You can find these by opening the Chrome inspector (Control-Shift-J on most systems), clicking the arrow icon in the toolbar, and selecting the HTML element you want. When the element is highlighted in the inspector, right-click it, choose Copy > Copy selector, and paste the result into your test.
4. Async bonus
- A side effect of writing tests with Puppeteer is that you become very aware of when you need async and await, as well as promises and, for me especially, Promise.all() (sketched below). These things become second nature after a while, and that knowledge will benefit you when coding other JavaScript projects in the future.
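As a rough illustration, using the same checkout button that appears in the examples further down:

// sequential: waitForNavigation only starts listening after the click has
// resolved, so a fast navigation can slip past it and leave the await hanging
await page.click('#edit-checkout');
await page.waitForNavigation({timeout: 0});

// concurrent: start listening for the navigation and click at the same time,
// then continue once both have finished
await Promise.all([
  page.waitForNavigation({timeout: 0}),
  page.click('#edit-checkout'),
]);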
Remembering what we are not testing...
Finally, before we begin it is important to note that Puppeteer is Chrome-focused. It shouldn't be used for browser-compatibility testing; for that, keep using tools like Selenium, which automate interaction across browsers through an abstraction layer rather than native JavaScript.
Things you cannot test:
The output of a generated PDF. You can click a link and open the PDF (it goes into a new tab), but I haven't bothered trying to switch context to that tab to do anything with it. PDFs are a known limitation.
OS-level features. On macOS, for example, "Select All" is handled by the operating system rather than the browser, so you can't rely on select-all behavior working the same way in every environment.
Getting setup (package.json, .gitignore)
Okay, enough talk. Let's get setup with some defaults so that our team can begin writing puppeteer-based tests!
Requirements:
- package.json (generate one with npm init)
- install our 4 packages: puppeteer, dotenv, mocha, chai
- edit the file to add the following:
- scripts pointing to the files
- relative path to the mocha binary (so users do not need to install mocha globally)
- .gitignore to prevent committing the browser in node_modules to git
- tests folder with your tests
Bonus points if you make these all devDependencies so they do not roll out into your production builds. In my case this package was not tied to another application, so there was no distinction between dev and prod.
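If you do want to keep them out of production installs, the only change is the flag on the install command:

npm i --save-dev puppeteer dotenv mocha chai

They will then land under "devDependencies" in package.json and be skipped by npm install --production.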
Example package.json file:
{
  "name": "mocha-tests",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test-courses": "node_modules/mocha/bin/mocha tests/courses.js || exit 0",
    "test-content": "node_modules/mocha/bin/mocha tests/content.js || exit 0"
  },
  "author": "Joe Somebody",
  "license": "GPL-2.0",
  "dependencies": {
    "chai": "^4.1.2",
    "dotenv": "^5.0.0",
    "mocha": "^5.0.0",
    "puppeteer": "^1.0.0"
  }
}
This package.json file does the following:
- It allows us to "npm install" to get all the dependencies for the tests: chai, dotenv (we use this for variable storage), mocha, and puppeteer.
If you used "npm init" to create the package.json file, you can add our dependencies like so:
npm i puppeteer dotenv mocha chai
You will need to create the two "scripts" entries manually, since they are project-specific. After installation, users can simply run the following to execute the site's "courses" tests contained in tests/courses.js:
npm run test-courses
... and that will run:
node_modules/mocha/bin/mocha tests/courses.js, using the Chromium browser that Puppeteer installed in node_modules
or alternatively, run the content tests from tests/content.js:
npm run test-content
... and that will run:
node_modules/mocha/bin/mocha tests/content.js, using the Chromium browser that Puppeteer installed in node_modules
Example .gitignore file
The .gitignore file will keep the puppeteer browser from getting committed to your project repo, saving a ton of potentially wasted space in your repos.
node_modules/
variables.env
Creating a test template (tests/content.js, variables.env)
Create the file tests/content.js for some tests:
// load your custom settings from your variables.env file (one level up from tests/)
require('dotenv').config({path: require('path').resolve(__dirname, '..', 'variables.env')});

// dependencies
const expect = require('chai').expect;
const puppeteer = require('puppeteer');

// pointer to our browser tab so it will persist between tests
let page;
const host = process.env.MOCHA_TEST_HOST;

// the test suite
describe('My test suite', function () {
  this.timeout(10000); // Useful when testing really slow Drupal sites

  // open a new browser tab and set the page variable to point at it
  before(async function () {
    global.expect = expect;
    global.browser = await puppeteer.launch({headless: false});
    page = await browser.newPage();
    await page.setViewport({width: 1187, height: 1000});
  });

  // close the browser when the tests are finished
  after(async function () {
    await page.close();
    await browser.close();
  });

  // @todo tests go here!!!
});
To run it:
- go to the root of your project folder
- ensure there is a variables.env file in that folder containing: MOCHA_TEST_HOST=http://example.com
- then run "npm run test-content", per the name you set in package.json
The test will load the browser, pop open a window, and then close it because there are no tests yet! No failures == success!
If you cannot run the tests:
- Run "npm install" first.
- If that doesn't work, make sure you have node.js installed.
Example variables.env file:
MOCHA_TEST_HOST=http://example.com
Command to run the tests:
npm run test-content
Set "test-content" to run a specific file in your package.json.
Sample tests (add to tests/content.js)
These all go in your tests/content.js file, where we left that @todo in the example above.
PS. You can nest the describe/it blocks if you want!
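For example, dropped into the @todo spot of the template above, a nested structure might look like this (the group names are arbitrary):

describe('anonymous visitor', function () {
  it('placeholder until a real test is written', function () {
    expect(true).to.eql(true);
  });
});

describe('logged-in user', function () {
  it('placeholder until a real test is written', function () {
    expect(true).to.eql(true);
  });
});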
1. Check the page title.
The first example just looks at the metadata coming back from Puppeteer to check the page title:
it('homepage loads and has correct page title', async function () {
  const [response] = await Promise.all([
    page.goto(host, {timeout: 0}),
    page.waitForNavigation({timeout: 0}),
  ]);
  expect(await page.title()).to.eql('Crazy page that takes a lot of time to load');
});
This test is nice and quick because we aren't parsing any HTML.
2. User clicks the checkout button
This one continues on from the previous example, using the page we already have loaded.
Note that click and waitForNavigation can happen in either order inside Promise.all. Nothing proceeds until both the click and the load of the next page are complete.
it('proceeds to checkout', async function () {
  const [response] = await Promise.all([
    page.waitForNavigation({timeout: 0}),
    page.click('#edit-checkout'),
  ]);
  expect(await page.title()).to.eql('Checkout | My Awesome Webstore');
});
3. User logs into the application
This example loads the login form for a CMS, inputs the data, and submits it.
it('can login', async function () {
  await page.goto(host + '/user', {timeout: 0});
  await page.type('#edit-name', 'joesomebody');
  await page.type('#edit-pass', 'password1');
  const [response] = await Promise.all([
    page.waitForNavigation({timeout: 0}),
    page.click('#edit-submit'),
  ]);
  let result = await page.$eval('.field-group-htabs-wrapper', e => e.innerHTML);
  expect(result).to.include('Add a New Class');
});
The test passes if the user successfully logs in, sees some HTML with the class "field-group-htabs-wrapper", and that HTML contains the string 'Add a New Class'. If many HTML elements use this class, the first one is used for the test.
This test can be somewhat problematic because we are doing two page loads within the same test (first the /user page, then the page that comes back after clicking submit). Keeping your tests to one request will make things smoother and reduce the need to explicitly set timeouts everywhere. You probably won't run into this problem outside of slow CMS testing, though.
4. Just force it to waitFor a second...
This is a very brittle example, but it does a few new things. Rather than waiting for a page to load, it simply waits 1 second (1000 ms):
it('selects an item from a slow AJAX select field', async function () {
  const [response1] = await Promise.all([
    page.click('#edit-field-class-level-und'),
    page.waitFor(1000),
  ]);
  const [response2] = await Promise.all([
    page.keyboard.press('ArrowDown'),
    page.waitFor(1000),
  ]);
  await page.keyboard.press('Enter');
  // @todo we should have an expect here, to test for a result
});
In this example we also make use of the virtual keyboard that Puppeteer provides. After one second we presume the AJAX content has loaded, press the down arrow, wait one more second, and then press Enter. We don't actually test for anything here, so the test would "pass" regardless of what happens unless we add an expect.
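To close the loop on that example, an assertion could look roughly like this (the check is only a guess; a real test would compare against whatever value your select field should end up with):

// read the value the select field ended up with and make sure something was chosen
const selected = await page.$eval('#edit-field-class-level-und', el => el.value);
expect(selected).to.not.eql('');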
Creating tests from here on
Now that you have seen a few examples, your workflow mostly comes down to reusing them.
To start a new project:
- Run npm init, create a tests folder, update the package.json scripts, and add a .gitignore file.
To create a new set of tests in a new test file:
- Use the example above or copy an existing file in /tests and then update package.json to add it into the "scripts" list.
- Populate variables.env with any new re-usable settings
To create a new test within an existing test file (such as content.js above), which is mostly a copy-and-paste adventure now:
- copy 5-6 lines of code from elsewhere in the file (or from examples above)
- paste the lines of code elsewhere in the file
- update the "it" line to describe what you are testing
- update the "expect" line to reflect the result you want
- update the selector(s) you will be using in your test:
- in chrome or chromium, go to the page you are testing manually, and use the inspector to select what you want your test to click,
- right-click on the selected element (i.e., the highlighted code in the inspector) and choose Copy > Copy selector
- paste that over top of the first selector in the example code
In other words, to find a selector: while looking at the page in Chrome, press Control-Shift-J to launch the inspector, click the arrow icon in the corner, then click the element on the page you want to use in your test. The inspector will highlight the corresponding HTML. Right-click that HTML, choose Copy > Copy selector, and paste the result into your test.
Occasionally the copied selector is too specific. If it is very long, I can often remove pieces from the middle, which keeps it generic enough while still targeting the first matching element. It depends on what garbage HTML you are dealing with. More often than not, simply copying a selector over "just works", but sometimes you will have to edit it.
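As a made-up illustration (both selectors below are invented), a copied selector and a hand-trimmed version might look like this:

// straight from Copy > Copy selector: tied to the exact page structure
await page.click('#block-system-main > div > div.view-content > div:nth-child(1) > h3 > a');
// trimmed by hand: usually still specific enough to hit the same element
await page.click('.view-content h3 a');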
Things to watch out for
Waiting for a div to appear using waitFor didn't seem to work for us, so we stuck to waiting based on time or on waitForNavigation (page load). Not ideal; perhaps it has since been fixed, but I'm not sure of the current status.
Many requests will run in parallel unless you explicitly prevent them from doing so. This is why I provided examples with Promise.all, and with sequential uses of Promise.all.
Mocha expects each test to finish quickly (only a couple of seconds by default), so it is often simpler to test one page at a time rather than loading multiple pages within a single test. That can feel too granular at times, and occasionally a step won't have any result to assert on (or even need one).
// for the times when you have a few tests to run
// in sequence, but while they are running, there
// is not really anything to test.
expect(true).to.eql(true);
// or maybe we just didn't write the test yet:
// expect(true).to.eql(false);
Delays are sometimes necessary for text input (it also makes the tests more fun to watch if you disable headless mode). Some captcha systems flag "input too fast", so slowing the typing down may be required.
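Puppeteer's type() accepts a delay option (milliseconds between keystrokes), so slowing the input down is one line; the 100 ms value here is just a starting point to tune per form:

// type at roughly human speed instead of pasting the whole string instantly
await page.type('#edit-name', 'joesomebody', {delay: 100});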
"Select All" is an operating-system specific function and will differ on different platforms. May not be testable.
PDF rendering cannot be tested in great detail.
Refactoring ideas... Puppeteer as a migration source!
It should be pretty obvious that all of this could just as easily be a web scraper rather than a test suite. My team has been using Puppeteer in other contexts to extract data for extract-transform-load migrations. Puppeteer provides a quick way to get a real, rendered DOM with consistent selectors. It is a huge time-saver when the alternative is doing dozens of database joins.
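A minimal scraping sketch, with a hypothetical listing path and selectors, looks almost identical to a test with the expect() swapped out for data collection:

// collect the title and link of every teaser on a listing page
await page.goto(host + '/articles', {timeout: 0});
const rows = await page.$$eval('.view-content h3 a', links =>
  links.map(a => ({title: a.textContent.trim(), href: a.href}))
);
console.log(JSON.stringify(rows, null, 2));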