[2.0] Build: Add browser-based testing to find broken reference example sketches (and more) #1280

@nbogie

Description

There are 800+ distinct reference pages on the website (before counting translations). A large percentage of these pages have multiple sketches. That's a lot to test.

Add automated browser-based testing (with e.g. Cypress, Puppeteer, or Playwright) to find broken example sketches in reference pages (and in tutorials and examples).

How to find sketch errors:

It should be possible to find a good percentage of errors by simply loading each page and scanning the console for errors: this will catch uncaught exceptions, JS syntax errors, misspelled variables, missing assets, and more.
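As a rough illustration, that scan could be sketched with Playwright along the lines below. The helper names, the two-second settle time, and the example URL are assumptions for illustration, not a worked-out design:

```javascript
// Pure helper: given scan results, return the URLs that logged errors.
function failingPages(results) {
  return results.filter((r) => r.errors.length > 0).map((r) => r.url);
}

// Load one page and collect console errors plus uncaught exceptions.
async function scanPage(browser, url) {
  const page = await browser.newPage();
  const errors = [];
  page.on('console', (msg) => {
    if (msg.type() === 'error') errors.push(msg.text());
  });
  page.on('pageerror', (err) => errors.push(String(err)));
  await page.goto(url, { waitUntil: 'networkidle' });
  await page.waitForTimeout(2000); // let setup()/draw() run and surface runtime errors
  await page.close();
  return { url, errors };
}

// Scan a list of URLs in one headless browser session.
async function scanAll(urls) {
  const { chromium } = await import('playwright'); // assumes `npm i playwright`
  const browser = await chromium.launch();
  const results = [];
  for (const url of urls) results.push(await scanPage(browser, url));
  await browser.close();
  return results;
}

// Example usage (URL is illustrative):
// scanAll(['https://p5js.org/reference/p5/ellipse/'])
//   .then((results) => console.log(failingPages(results)));
```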

This isn't a proposal to automate testing of sketch interactions, nor, for now, to test that the sketches do what they're supposed to do. Automated comparison of canvas snapshots could follow later, but isn't being proposed here.

False positives

  • Sketches which need camera or microphone permission may fail.
  • A few sketches intentionally log errors to the console (e.g. relating to FES).
  • WebGPU and other features might not be available in headless browser implementations.

So long as these don't break the entire test run, and we can sort and filter the test results, we can likely deal with them without changes at source. However, we could potentially tag examples at source where specific automated-test issues are expected, or special test steps are required. (Or, worst case, we could hand-maintain a list of pages to be filtered out of the results.)
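The hand-maintained list could be as simple as a set of patterns applied when reporting. A hypothetical sketch (the specific patterns, reasons, and message formats below are made up for illustration):

```javascript
// Hand-maintained allowlist of expected failures. Entries match either a
// page URL or a console-message pattern; all entries here are illustrative.
const expectedFailures = [
  { pagePattern: /\/reference\/p5\.Camera\//, reason: 'needs camera permission' },
  { messagePattern: /p5\.js says/, reason: 'intentional FES example output' },
];

// True if this url/message pair matches any allowlist entry.
function isExpected(url, message) {
  return expectedFailures.some(
    (e) =>
      (e.pagePattern && e.pagePattern.test(url)) ||
      (e.messagePattern && e.messagePattern.test(message))
  );
}

// Strip expected errors from scan results, keeping only surprising failures.
function unexpectedErrors(results) {
  return results
    .map((r) => ({
      url: r.url,
      errors: r.errors.filter((msg) => !isExpected(r.url, msg)),
    }))
    .filter((r) => r.errors.length > 0);
}
```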

CI?

This tooling would be useful whether it gets run regularly as part of CI or just used as an occasionally-run, manually-triggered review tool. I propose we don't worry about delivering a CI solution initially. That said, it's good to keep an eye on potential CI issues during development.

Possible negatives

  • Used without awareness, such tools can generate spikes of accidental traffic against production.

    • Even if this causes the server / CDN no significant grief, it may get the user throttled or IP-banned.
    • There should be a work item here making clear the potential issues and under what conditions the script(s) should be run against staging/production environments.
  • Automated clients mess with analytics. As I understand it we don't have much/any, but we should take care that the tests identify themselves via user agent; any analytics should then be able to filter that traffic out.

  • Having some end-to-end testing in place may lessen the drive to test closer to the source.
    We can and should also test sketches closer to the source (e.g. in the p5.js repo build, by extracting @example blocks from JSDoc and checking them for valid JS syntax), as it's faster and the feedback is much more immediate at the point the bug is created. That testing solution is significantly more complex, though. The two approaches are not mutually exclusive, and each has its own advantages and disadvantages.
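The source-side idea could start as small as the sketch below: pull example code out of a JSDoc comment and check it parses. The `<code>` regex and the Function-constructor parse are assumptions for illustration, not the actual p5.js build tooling (and `new Function` wraps the source in a function body, so it won't accept top-level `await`):

```javascript
// Extract example source from a JSDoc comment's <code>...</code> blocks,
// stripping the leading " * " of each comment line. Illustrative only.
function extractExamples(jsdocComment) {
  const matches = jsdocComment.match(/<code>([\s\S]*?)<\/code>/g) || [];
  return matches.map((m) =>
    m.replace(/<\/?code>/g, '').replace(/^\s*\*\s?/gm, '').trim()
  );
}

// Return a syntax-error description, or null if the source parses.
// The Function constructor parses the source without executing it.
function syntaxErrorsIn(source) {
  try {
    new Function(source);
    return null;
  } catch (err) {
    return String(err);
  }
}
```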

Other potential uses

This proposed approach to sketch testing will also unavoidably encounter (and should log) unrelated website errors in the console.

The test platform could later be used to look for:

  • 404s
  • bad external links
  • missing translations
  • images missing alt text, etc.
  • sketches missing description()
  • pages or sketches which are particularly noisy with (non-error) console logs

Browser automation can normally also capture screenshots across different browsers (Chrome, Firefox, etc.). This has uses in, for example, assisting UI reviews, such as checking responsive design.
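Two of those checks sketched, assuming the same scan infrastructure as above. The `page` argument would be a Playwright page object, and the selector and noise threshold are assumptions:

```javascript
// Count <img> elements that have no alt attribute at all.
// `page` is assumed to be a Playwright page; selector is illustrative.
async function imagesMissingAlt(page) {
  return page.$$eval('img:not([alt])', (els) => els.length);
}

// Flag pages whose sketches are unusually chatty with non-error console
// output. Assumes scan results that kept every message with its type;
// the threshold of 20 is arbitrary.
function noisyPages(results, threshold = 20) {
  return results
    .filter((r) => r.messages.filter((m) => m.type !== 'error').length > threshold)
    .map((r) => r.url);
}
```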

Putting some first browser-based automation in place and growing some community skills around it brings these and other possibilities closer.
