
Multi-device test flows (chat, calls, pairing): how are you handling them, and what else is blocking you from automating? #137

@arnoldlaishram


Discussed in #136

Originally posted by arnoldlaishram April 28, 2026
Hey folks 👋

I'm one of the maintainers. Opening this thread to learn about an area FinalRun doesn't support today but that I keep hearing about: tests that need two or more devices acting at the same time. And more broadly, what other scenarios are blocking you from automating end-to-end?

The multi-device case

Specifically: cases where a single test needs two (or more) devices on different accounts acting in coordination, not multi-app flows on one device. For example:

  • Chat / messaging: User A sends from Device 1; User B receives, sees typing, marks read, replies on Device 2.
  • Voice / video calls: initiate from Device 1, accept on Device 2, assert state on both ends.
  • Multiplayer / live collab: games, whiteboards, shared docs, presence.
  • Pairing & handoff: QR on Device 1 scanned by Device 2; Bluetooth pairing; session continuation.
  • Push-driven flows: action on Device 1 triggers a notification that lands on Device 2.
  • Multi-account roles: admin vs. member, host vs. attendee, sender vs. receiver in the same app.

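To pin down what "coordinated" means here, a sketch of the chat flow as a single test driving two logical devices. Everything below is hypothetical: the `Device` wrapper and in-memory `FakeChatBackend` are stand-ins I made up so the sketch is self-contained, not a real driver or FinalRun API.

```python
from dataclasses import dataclass, field

@dataclass
class FakeChatBackend:
    """Stand-in for the real messaging backend so the sketch runs anywhere."""
    inboxes: dict = field(default_factory=dict)

    def send(self, sender, recipient, text):
        self.inboxes.setdefault(recipient, []).append(
            {"from": sender, "text": text, "read": False}
        )

    def inbox(self, user):
        return self.inboxes.get(user, [])

@dataclass
class Device:
    """One simulated device logged in as one account (hypothetical API)."""
    user: str
    backend: FakeChatBackend

    def send_message(self, to, text):
        self.backend.send(self.user, to, text)

    def unread(self):
        return [m for m in self.backend.inbox(self.user) if not m["read"]]

    def mark_read(self):
        for m in self.backend.inbox(self.user):
            m["read"] = True

def test_two_device_chat():
    backend = FakeChatBackend()
    device1 = Device("userA", backend)  # Device 1, logged in as User A
    device2 = Device("userB", backend)  # Device 2, logged in as User B

    device1.send_message("userB", "hello")      # act on Device 1
    # assert on Device 2 that the message arrived unread
    assert [m["text"] for m in device2.unread()] == ["hello"]
    device2.mark_read()
    device2.send_message("userA", "hi back")    # reply from Device 2
    # assert back on Device 1
    assert [m["text"] for m in device1.unread()] == ["hi back"]
```

The point is the interleaving: every step acts on one device and asserts on the other, which is exactly the sequencing a real two-device runner would have to guarantee.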
To be upfront: FinalRun doesn't support coordinated multi-device runs today. Single-device coverage is in a decent place; this is the next frontier I'm trying to understand before we build anything.

Questions I'd love your input on

  1. What's your setup today? Two emulators on one machine, physical devices, cloud device farm (BrowserStack, Sauce, AWS Device Farm), or a mix?
  2. How do you coordinate timing? Polling, shared state file, message queue, sleeps? How do you keep Device B from asserting "received" before Device A has sent?
  3. Test users / accounts β€” pre-seeded, generated per run, shared pool? How do you avoid parallel runs colliding?
  4. Where does it break down most: setup, flakiness, debugging, CI cost, something else?
  5. What have you given up testing E2E because multi-device wasn't worth it, and what do you do instead (backend tests, manual QA, mocked second user)?
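To make questions 2 and 3 concrete, here is a minimal sketch of the two patterns I hear about most: polling with a deadline (instead of fixed sleeps) for cross-device timing, and per-run account generation to keep parallel runs from colliding. The helper names are made up for illustration, not part of any framework.

```python
import time
import uuid

def wait_until(predicate, timeout=10.0, interval=0.25):
    """Poll `predicate` until it returns something truthy or `timeout` elapses.

    Device B's "received" assertion only fires once Device A's side effect
    is actually observable, instead of hoping a fixed sleep was long enough.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

def make_test_user(run_id=None):
    """Generate a per-run account so parallel CI runs don't collide
    on a shared user pool (one answer to question 3)."""
    run_id = run_id or uuid.uuid4().hex[:8]
    return {
        "email": f"e2e+{run_id}@example.test",
        "password": uuid.uuid4().hex,
    }
```

In a real run, the predicate would query Device B's UI or the backend (e.g. "is there an unread message from User A?"), and the timeout becomes the flakiness/CI-cost trade-off question 4 is asking about.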

And more broadly, what else is blocking you?

If you've hit other walls trying to automate mobile testing, please drop them in this thread or open a new discussion. A few examples to spark ideas:

  • Biometrics (Face ID, Touch ID, fingerprint)
  • Camera / QR / barcode flows
  • Deep links from outside the app (email, browser, other apps)
  • OTP / email codes / SMS verification
  • Payment sheets, in-app purchases, subscription flows
  • Permissions dialogs, system prompts
  • Background / foreground transitions, lock screen, interruptions (call comes in mid-test)
  • Network conditions (offline, flaky 3G)
  • Localization / RTL / accessibility settings
  • Anything else that makes you fall back to manual QA

I'd rather hear about the messy real-world blockers than the clean cases. War stories, "we tried X and it sucked because Y," or even a one-line "this is impossible for us right now" are all useful.

Thanks. I'll summarize what comes out of this back into the repo so it's useful to anyone who finds the thread later.

Labels: enhancement (New feature or request)
