
cloud phone migration playbook: from one provider to another

May 07, 2026


a cloud phone migration playbook in 2026 is the document that keeps a vendor switch from becoming a six-month catastrophe. teams that move providers without one typically end up running both subscriptions for a quarter, breaking CI for two weeks, and losing test data they did not realize was vendor-locked. teams that follow a real playbook cut over in 30-45 days with zero production impact.

this guide gives you that playbook. five phases, named owners, gating criteria, and the specific risks at each step. if you have not yet decided to migrate, the vendor red flags and TCO worksheet help you confirm the move is worth it.

phase 0: pre-flight (week minus 2 to week 0)

everything in this phase happens before you announce the migration internally.

deliverable at the end of phase 0: a one-page migration brief with the old vendor inventory, the new vendor contract, a named owner, and a target cutover date.

phase 1: provision new vendor (week 1)

set up the new platform end to end.

at the end of this phase the new vendor is fully operational from an admin perspective. no test traffic yet.
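"RBAC mirrored" is one of the phase-1 exit criteria worth automating. a minimal sketch, assuming you can export a role-to-permissions map from each vendor's admin API; the role names and permission strings below are invented for illustration:

```python
# diff two role -> permission-set maps exported from the old and new vendors.
# the sample roles and permission strings are hypothetical.

def rbac_diff(old_roles: dict, new_roles: dict) -> dict:
    """return per-role problems found in the new vendor's RBAC setup."""
    report = {}
    for role, perms in old_roles.items():
        if role not in new_roles:
            report[role] = "role missing on new vendor"
            continue
        missing = perms - new_roles[role]
        if missing:
            report[role] = f"missing permissions: {sorted(missing)}"
    return report

old = {"admin": {"device.create", "device.delete", "billing.view"},
       "tester": {"device.use", "logs.view"}}
new = {"admin": {"device.create", "device.delete"},
       "qa": {"device.use"}}

problems = rbac_diff(old, new)
for role, issue in problems.items():
    print(f"{role}: {issue}")
```

run it once at the phase-1 gate and again before cutover; an empty report is the pass condition.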

phase 2: parallel-run setup (week 2)

this is the critical phase. you need both vendors running simultaneously, with traffic mirrored where possible.

CI integration

duplicate your CI pipeline: one set of jobs hits the old vendor, one hits the new. compare results:

# example GitLab CI snippet
android-tests-old:
  variables:
    CLOUDFONE_API: https://api.oldvendor.com/v1
    CLOUDFONE_TOKEN: $OLDVENDOR_TOKEN
  # ... existing job ...

android-tests-new:
  variables:
    CLOUDFONE_API: https://api.newvendor.com/v1
    CLOUDFONE_TOKEN: $NEWVENDOR_TOKEN
  allow_failure: true   # new vendor failures do not break the build during migration
  # ... same job ...

allow_failure: true on the new vendor jobs is non-negotiable in week 2. you are gathering signal, not gating.
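the side-by-side comparison can be scripted instead of eyeballed. a sketch, assuming both job sets emit JUnit XML (real reports usually wrap `<testsuite>` elements in a `<testsuites>` root; the inline samples here are simplified):

```python
# compute the pass-rate delta between the old-vendor and new-vendor job runs
# from their JUnit XML reports. sample reports below are invented.
import xml.etree.ElementTree as ET

def pass_rate(junit_xml: str) -> float:
    """fraction of passing tests in a JUnit XML report."""
    root = ET.fromstring(junit_xml)
    total = failed = 0
    for suite in root.iter("testsuite"):  # matches the root suite or nested suites
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return (total - failed) / total if total else 0.0

old_report = '<testsuite tests="200" failures="4" errors="0"/>'
new_report = '<testsuite tests="200" failures="7" errors="1"/>'

delta = pass_rate(old_report) - pass_rate(new_report)
print(f"old {pass_rate(old_report):.1%}, new {pass_rate(new_report):.1%}, delta {delta:+.1%}")
```

the delta this produces feeds the phase 2 to 3 gate directly.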

multi-account workflows

if you use cloud phones for multi-account farms, do not migrate accounts to the new vendor wholesale. pick 5-10% of accounts, move them, and watch for fingerprint resets, login challenges, and platform anomalies.
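one way to pick that 5-10% slice deterministically: hash the account id, so the same accounts land in the canary on every run and the slice never drifts between scripts. the account-id format is hypothetical:

```python
# stable canary assignment: ~percent of accounts, chosen by hashing the id.
import hashlib

def in_canary(account_id: str, percent: float = 5.0) -> bool:
    """deterministically assign roughly `percent`% of accounts to the canary."""
    digest = hashlib.sha256(account_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 1000   # bucket in 0..999
    return bucket < percent * 10

accounts = [f"acct-{i:04d}" for i in range(1000)]
canary = [a for a in accounts if in_canary(a)]
print(f"{len(canary)}/{len(accounts)} accounts in the canary slice")
```

because membership depends only on the id, the tooling that moves accounts and the tooling that monitors them agree on the slice without sharing state.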

automation and webhooks

duplicate webhook URLs. send events to both your old receiver and a new one. compare event shapes, missing fields, retry behavior.
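comparing event shapes is easy to automate: flatten each payload into dotted key paths and diff the sets. the sample payloads below are invented; substitute events captured from your two receivers:

```python
# diff the field shapes of the same logical event from the old and new vendors.

def shape(event: dict, prefix: str = "") -> set:
    """collect every dotted key path in a (possibly nested) JSON payload."""
    paths = set()
    for key, value in event.items():
        path = prefix + key
        paths.add(path)
        if isinstance(value, dict):
            paths |= shape(value, path + ".")
    return paths

old_event = {"device": {"id": "d1", "region": "eu"}, "status": "done", "duration_ms": 812}
new_event = {"device": {"id": "d1"}, "status": "done", "ts": "2026-05-07T10:00:00Z"}

missing_fields = shape(old_event) - shape(new_event)
new_fields = shape(new_event) - shape(old_event)
print("missing from new vendor:", sorted(missing_fields))
print("only on new vendor:", sorted(new_fields))
```

anything in the "missing" set is a downstream consumer you have to fix before cutover.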

end of phase 2: you have at least 5 days of side-by-side data showing the new vendor performs at least as well as the old.

phase 3: cutover (week 3)

flip the default from the old vendor to the new. keep the old vendor as a fallback for two weeks.

a useful trick: run a “new vendor only” canary day at the start of phase 3. one full business day where old vendor is suspended. if anything is missing, you find it now while you can roll back.
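the canary day (and any later rollback) is easiest when the active vendor is a single variable. a minimal sketch reusing the endpoints from the CI snippet; the `CLOUDFONE_VENDOR` variable name is an assumption:

```python
# route all tooling through one env var so the canary day and any rollback
# are a single-variable change. CLOUDFONE_VENDOR is a hypothetical name.
import os

VENDOR_ENDPOINTS = {
    "old": "https://api.oldvendor.com/v1",
    "new": "https://api.newvendor.com/v1",
}

def active_endpoint() -> str:
    """resolve which vendor the tooling targets right now."""
    vendor = os.environ.get("CLOUDFONE_VENDOR", "new")
    return VENDOR_ENDPOINTS[vendor]

os.environ["CLOUDFONE_VENDOR"] = "old"   # rolling back is exactly this one change
print(active_endpoint())
```

if flipping back to the old vendor takes more than one change like this, your rollback plan is not real yet.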

phase 4: deprecation (week 4-5)

the old vendor is no longer in production but still holds data you need.

be specific about data deletion. a written confirmation from the vendor that all your data is purged, with a date, satisfies most compliance regimes.

phase 5: post-migration (week 6+)

stabilize on the new vendor.

risk register

eight risks worth tracking through the migration.

| risk | likelihood | impact | mitigation |
|---|---|---|---|
| new vendor SLA worse than expected | medium | high | parallel-run before cutover |
| audit log gap during migration | medium | medium | export old logs, ship to SIEM directly |
| broken CI for >24h | low | high | allow_failure on new vendor in week 2 |
| account fingerprint reset on multi-account workflows | medium | high | migrate 5% first, watch carefully |
| vendor refuses smooth off-boarding | low | high | written notice per MSA, escalate publicly if needed |
| dual subscription cost overrun | medium | low | negotiate parallel-run discount in advance |
| user training gap | medium | low | record short videos, share in Slack |
| unknown integration on old vendor | medium | medium | inventory in phase 0 |

review weekly with the migration owner. drop risks that proved harmless, add new ones discovered.

go/no-go gates

each phase has an explicit gate before proceeding.

| gate | criterion |
|---|---|
| phase 1 to 2 | new vendor admin operational, RBAC mirrored, monitoring live |
| phase 2 to 3 | at least 5 days of parallel-run data, pass rate within 2% |
| phase 3 to 4 | 72 hours stable on new vendor, zero P1 incidents |
| phase 4 to 5 | data exported, audit log archived, written deletion confirmation pending |
| phase 5 close | retrospective complete, TCO baselined, lessons documented |

if any gate fails, do not proceed. fix the issue first.
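the phase 2 to 3 gate is the one worth checking mechanically, since both thresholds are numeric. a sketch using the criteria straight from the table:

```python
# go/no-go for phase 2 -> 3: at least 5 days of side-by-side data, and the
# new vendor's pass rate within 2 percentage points of the old vendor's.

def phase2_to_3_gate(days_of_data: int, old_pass_rate: float, new_pass_rate: float) -> bool:
    """True means proceed to cutover; False means keep parallel-running."""
    return days_of_data >= 5 and (old_pass_rate - new_pass_rate) <= 0.02

print(phase2_to_3_gate(6, 0.98, 0.97))   # go
print(phase2_to_3_gate(3, 0.98, 0.97))   # no-go: not enough parallel-run days
print(phase2_to_3_gate(6, 0.98, 0.95))   # no-go: new vendor 3 points behind
```

wire it into the parallel-run pipeline and the weekly risk review reads the same number everyone else does.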

rollback plan

a real rollback plan exists for phases 3 and 4 only. earlier than that, just keep the old vendor as primary.

most rollbacks happen in phase 3. plan for it.

frequently asked questions

how long does a typical cloud phone migration take?

30-45 days for a team with 50-200 phones and a clear playbook. teams without a playbook or with heavy custom integrations average 60-90 days.

can I migrate without parallel-running?

possible but risky. parallel-running is what catches the regressions before they become incidents. skipping it saves cost but adds incident risk roughly proportional to your fleet size.

what if the new vendor cannot match the old vendor’s device coverage?

mid-migration is the wrong time to discover this. it should have been caught in the POC. if it was not, pause the migration and either expand the new vendor coverage or keep the old vendor for the gap regions.

should I tell my team about the migration on day 1?

tell engineering and CI owners on day 1. tell the broader team in week 2 with clear messaging about the why and the timeline. tell external customers only if there is a user-facing change (typically there is not).

how do I handle long-running session recordings during migration?

set a hard cutoff date. recordings older than X days move to your own storage. recordings between cutoff and migration date stay accessible until the contract end. nothing new gets recorded on old vendor after cutover.
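that policy collapses into a tiny decision function. the cutoff and cutover dates below are placeholders for whatever you set:

```python
# classify where a session recording lives after the migration, per the
# cutoff policy above. the dates used here are placeholders.
from datetime import date

def recording_disposition(recorded_on: date, cutoff: date, cutover: date) -> str:
    """cutoff = archive boundary, cutover = migration date."""
    if recorded_on < cutoff:
        return "export to own storage"
    if recorded_on < cutover:
        return "stays on old vendor until contract end"
    return "recorded on new vendor only"

cutoff, cutover = date(2026, 3, 1), date(2026, 5, 7)
print(recording_disposition(date(2026, 1, 15), cutoff, cutover))
print(recording_disposition(date(2026, 4, 1), cutoff, cutover))
```

run it over the old vendor's recording inventory during phase 4 and the export list falls out of the first branch.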

ready to plan a clean migration? start a cloudf.one trial, use it as the parallel-run target, and follow the playbook above to swap providers without breaking production.