Best practices for Scout UI tests

Best practices specific to Scout UI tests.

Tip: For guidance that applies to both UI and API tests, see the general Scout best practices. Scout is built on Playwright, so the official Playwright Best Practices also apply.

Default to parallel UI suites when possible. Parallel workers share the same Kibana/ES deployment, but run in isolated Spaces.

| Mode | When to use |
| --- | --- |
| Parallel | UI tests (most suites), suites that share pre-ingested data (often using the global setup hook) |
| Sequential | API tests, suites that require a "clean" Elasticsearch state |

UI tests should answer “does this feature work for the user?” Verify that components render, respond to interaction, and navigate correctly. Leave exact data validation (computed values, aggregation results, edge cases) to API or unit tests, which are faster and less brittle.

| What you're testing | Recommended layer |
| --- | --- |
| User flows, navigation, rendering | Scout UI test |
| Data correctness, API contracts, edge cases | Scout API test |
| Isolated component logic (loading/error states, tooltips, field validation) | RTL/Jest unit test |

When a test asserts a user flow (not just "land on a page"), prefer to navigate the way a user would: follow links and buttons, and use browser history (page.goBack()) instead of direct URL jumps where it matters for the scenario. Reserve page.goto / deep links for cheap setup when the test is not about navigation.
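For example (the test subject IDs below are illustrative, not taken from Kibana source):

```ts
// Reach the dashboard the way a user would: through the listing, not a deep link.
await page.testSubj.click('dashboardListingTitleLink-my-dashboard');
await expect(page.testSubj.locator('dashboardViewport')).toBeVisible();

// Return via browser history, as a user pressing "Back" would.
await page.goBack();
await expect(page.testSubj.locator('dashboardListingTable')).toBeVisible();
```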

Setup and teardown through the UI are slow and brittle. Prefer Kibana APIs and fixtures.
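A sketch of the pattern; the fixture and method names here are illustrative of Scout-style API fixtures, not exact APIs:

```ts
// Seed data once per worker through the API, not by clicking through the UI.
test.beforeAll(async ({ scoutSpace }) => {
  await scoutSpace.savedObjects.load('path/to/archive.json'); // hypothetical fixture and archive
});

test.afterAll(async ({ scoutSpace }) => {
  await scoutSpace.savedObjects.clean(); // hypothetical cleanup method
});
```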

Playwright actions and web-first assertions already wait/retry. Don’t add redundant waits, and never use page.waitForTimeout() as it’s a hard sleep with no readiness signal and a common source of flakiness.
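For instance (the test subject is illustrative):

```ts
// Bad: a hard sleep that proves nothing about readiness.
await page.waitForTimeout(5000);

// Good: a web-first assertion that retries until the element appears or times out.
await expect(page.testSubj.locator('savedObjectSaveModal')).toBeVisible();
```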

When an action triggers async UI work (navigation, saving, loading data), wait for the resulting state before your next step. This ensures the UI is ready and prevents flaky interactions with elements that haven’t rendered yet.

If an action fails, don't wrap it in a retry loop. Playwright already waits for actionability; repeated failures usually point to an app issue (unstable DOM, non-unique selectors, re-render bugs). Fix the component or make your waiting/locators explicit and stable.

Prefer stable data-test-subj attributes accessed using page.testSubj. If data-test-subj is missing, prefer adding one to source code. If that’s not possible, use getByRole inside a scoped container.
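A short sketch of both options (test subjects are illustrative):

```ts
// Preferred: a stable data-test-subj via Scout's page.testSubj helper.
await page.testSubj.click('querySubmitButton');

// Fallback when no test subject can be added: a role-based locator
// scoped to a container, so the match stays unique.
const flyout = page.testSubj.locator('editPanelFlyout'); // hypothetical subject
await flyout.getByRole('button', { name: 'Save' }).click();
```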

Scout configures Playwright timeouts (source). Prefer defaults.

  • Don’t override suite-level timeouts/retries with test.describe.configure() unless you have a strong reason.
  • If you increase a timeout for one operation, keep it well below the test timeout and add a short code comment explaining why (slow first load, CI variance, known heavy view, etc.).
  • After raising timeouts for flakiness, re-run the flaky test runner (or many local repeats) to confirm the new value is necessary.
  • Keep in mind that an assertion timeout longer than the test timeout has no effect: the test times out first.
  • Time spent in hooks (beforeEach, afterEach) counts toward the test timeout. If setup is slow, the test itself may time out even though its assertions are fast.
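When a longer timeout really is needed, keep it local and documented; for example (the subject name is illustrative):

```ts
// First Lens render on a cold CI worker can exceed the default timeout.
// 15s is still well below the overall test timeout.
await expect(page.testSubj.locator('lnsVisualizationContainer')).toBeVisible({
  timeout: 15_000,
});
```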

Tables/maps/visualizations can appear before data is rendered. Prefer waiting on a component-specific “loaded” signal rather than global indicators like the Kibana chrome spinner (our data shows they are unreliable for confirming that a particular component has finished rendering).

Do not rely on helpers that only wait for a global "loading" indicator to disappear; give each view an explicit readiness wait, and replace such helpers with those waits.
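For example (test subjects are illustrative):

```ts
// Weak: the global spinner can disappear before this table has any data.
await expect(page.testSubj.locator('globalLoadingIndicator')).toBeHidden();

// Better: wait on signals specific to the component under test.
await expect(page.testSubj.locator('alertsTableLoaded')).toBeVisible();
await expect(page.testSubj.locator('alertsTable').locator('tbody tr')).not.toHaveCount(0);
```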

Prefer existing page objects (and their methods) over rebuilding EUI interactions in test files.

  • Prefer readonly locator fields assigned in the constructor for stable selectors, and methods for parameterized locators, multi-step actions, or flows. Thin getter-only methods for every field add noise; match the patterns used by built-in page objects (for example DashboardApp).
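A sketch of that shape (the type names and test subjects are illustrative, not the exact Scout API):

```ts
export class DiscoverApp {
  // Stable selector: a readonly locator field assigned in the constructor.
  readonly histogram: Locator;

  constructor(private readonly page: ScoutPage) {
    this.histogram = page.testSubj.locator('discoverChart');
  }

  // Parameterized locator: a method, not a field.
  fieldToggle(fieldName: string): Locator {
    return this.page.testSubj.locator(`field-${fieldName}-showDetails`);
  }

  // Multi-step flow that ends with a readiness wait.
  async selectDataView(name: string) {
    await this.page.testSubj.click('dataViewPicker');
    await this.page.testSubj.click(`dataViewPickerOption-${name}`);
    await expect(this.histogram).toBeVisible();
  }
}
```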

Page objects should focus on real UI interaction. Put HTTP mocks, interceptors, and similar setup in dedicated fixtures (for example fixtures/mocks.ts) so tests and reviewers can find them in one place.
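A dedicated mocks fixture might look like this (the fixture name and route pattern are illustrative):

```ts
// fixtures/mocks.ts — interceptors live here, not in page objects.
import { test as base } from '@kbn/scout';

export const test = base.extend<{ mockSearchFailure: void }>({
  mockSearchFailure: async ({ page }, use) => {
    // Force the search endpoint to fail for the duration of the test.
    await page.route('**/internal/search/**', (route) =>
      route.fulfill({ status: 500, body: '{}' })
    );
    await use();
    await page.unroute('**/internal/search/**');
  },
});
```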

Create methods for repeated flows (and make them wait for readiness).

Playwright creates a fresh browser context for each test, so there is no cached state to work around. Both page object methods and test code should be explicit about the action they perform, not defensive about the current state. Conditional flows (like "if modal is open, close it first") hide bugs, waste time, and make failures harder to understand.
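For example (test subjects are illustrative):

```ts
// Avoid: defensive branching that papers over unknown state.
if (await page.testSubj.locator('confirmModal').isVisible()) {
  await page.testSubj.click('confirmModalCancelButton');
}

// Prefer: state the expected flow directly — the context is fresh every test.
await page.testSubj.click('openSettingsButton');
await expect(page.testSubj.locator('settingsFlyout')).toBeVisible();
```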

Prefer explicit expect() in the test file so reviewers can see intent and failure modes. Also prefer expect() over manual boolean checks, as Playwright’s error output includes the locator, call log, and a clear message, which if/throw patterns lose.

In page objects, avoid expect(). Use waitForSelector / visibility waits to synchronize after navigation or actions (for example wait for a header to be visible). Assertions belong in specs.
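Side by side, as a sketch (the navigation helper and test subjects are illustrative):

```ts
// Page object: synchronize only.
async goto() {
  await this.page.gotoApp('discover');
  await this.page.testSubj.waitForSelector('discoverDocTable');
}

// Spec: the assertion — and its failure mode — is visible to reviewers.
test('renders documents for the default data view', async ({ page, pageObjects }) => {
  await pageObjects.discover.goto();
  await expect(page.testSubj.locator('discoverDocTable')).toBeVisible();
});
```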

If you must interact with EUI internals, use wrappers from Scout to keep that complexity out of tests.

Scout supports automated accessibility (a11y) scanning via page.checkA11y. Add checks at high-value points in your UI tests (landing pages, modals, flyouts, wizard steps) rather than on every interaction.
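For example (the subject names are illustrative):

```ts
// Scan at a high-value point: a freshly opened flyout, once it is ready.
await page.testSubj.click('openRuleFlyoutButton');
await expect(page.testSubj.locator('ruleFlyout')).toBeVisible();
await page.checkA11y(); // scoping and exclusions are covered in the a11y guide
```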

For the full guide (scoping, exclusions, handling pre-existing violations), see Accessibility testing.

If a page has onboarding/getting-started state, set localStorage before navigation.
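For example (the storage key is hypothetical — check the component for the real one):

```ts
// Seed localStorage before any page script runs, so the callout never renders.
await page.addInitScript(() => {
  window.localStorage.setItem('myApp.onboardingDismissed', 'true');
});
await page.gotoApp('myApp'); // navigate only after seeding
```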

If the same custom role appears in many specs, extract it into a browserAuth fixture extension instead of repeating the role descriptor everywhere. Tests then read like intent.
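A sketch of such an extension (the role descriptor and fixture shape are illustrative, not the exact Scout API):

```ts
import { test as base } from '@kbn/scout';

export const test = base.extend<{ loginAsDashboardReader: () => Promise<void> }>({
  loginAsDashboardReader: async ({ browserAuth }, use) => {
    await use(() =>
      browserAuth.loginWithCustomRole({
        elasticsearch: { indices: [{ names: ['logs-*'], privileges: ['read'] }] },
        kibana: [{ feature: { dashboard: ['read'] }, spaces: ['*'] }],
      })
    );
  },
});
```

Specs then call the helper by name and read like intent rather than repeating the role descriptor.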