cloud phone migration playbook: from one provider to another
a cloud phone migration playbook in 2026 is the document that keeps a vendor switch from becoming a six-month catastrophe. teams that move providers without one typically end up running both subscriptions for a quarter, breaking CI for two weeks, and losing test data they did not realize was vendor-locked. teams that follow a real playbook cut over in 30-45 days with zero production impact.
this guide gives you that playbook. five phases, named owners, gating criteria, and the specific risks at each step. if you have not yet decided to migrate, the vendor red flags and TCO worksheet help you confirm the move is worth it.
phase 0: pre-flight (week minus 2 to week 0)
before announcing the migration internally.
- negotiate the new contract. include a 30-90 day parallel-run discount with the new vendor.
- read the old MSA exit clause. confirm the data export window, any exit fees, and the off-boarding notification process.
- inventory current usage. how many devices, how many seats, how many integrations, what API endpoints are called.
- identify the migration owner. one person, named, with 50% of their time blocked for 6 weeks.
- brief the security team. they will need to re-run their review on the new vendor.
deliverable end of phase 0: a one-page migration brief with old vendor inventory, new vendor contract, named owner, and target cutover date.
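the usage inventory is easier to keep honest as structured data than as a wiki page, and it becomes the baseline the phase 2 comparison is checked against. a minimal sketch in python — the fields and categories here are illustrative, not a required schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """one row of the old-vendor inventory: a device, seat, or integration."""
    kind: str      # "device" | "seat" | "integration"
    name: str
    api_endpoints: tuple = ()  # endpoints this item calls, if any

def summarize(inventory):
    """roll the inventory up into the counts the migration brief needs."""
    counts = Counter(r.kind for r in inventory)
    endpoints = sorted({ep for r in inventory for ep in r.api_endpoints})
    return {
        "devices": counts["device"],
        "seats": counts["seat"],
        "integrations": counts["integration"],
        "api_endpoints": endpoints,
    }

inventory = [
    UsageRecord("device", "pixel-7-pool-01"),
    UsageRecord("device", "galaxy-s23-pool-02"),
    UsageRecord("seat", "qa-team"),
    UsageRecord("integration", "gitlab-ci", api_endpoints=("/v1/sessions", "/v1/devices")),
]
print(summarize(inventory))
```

the same summary doubles as the device and seat count you provision in phase 1.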
phase 1: provision new vendor (week 1)
set up the new platform end to end.
- create the production account and admin users on the new vendor
- mirror your RBAC roles from old to new
- set up SSO and SCIM (if applicable)
- provision the same device count and tags as your current setup
- wire monitoring, audit log shipping, and billing alerts
at the end of this phase the new vendor is fully operational from an admin perspective. no test traffic yet.
phase 2: parallel-run setup (week 2)
this is the critical phase. you need both vendors running simultaneously, with traffic mirrored where possible.
CI integration
duplicate your CI pipeline. one set of jobs hits old vendor, one set hits new. compare:
- pass rate (should match within 2%)
- wall-clock time
- flake distribution by test
- cost per run
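the comparison itself can be scripted rather than eyeballed. a sketch of the pass-rate gate and the flake check, assuming you can dump per-test outcomes from both pipelines — the result format here is invented for illustration:

```python
def pass_rate(results):
    """fraction of passing runs across all tests; results maps test name -> list of run outcomes."""
    runs = [outcome for outcomes in results.values() for outcome in outcomes]
    return sum(runs) / len(runs)

def flaky_tests(results):
    """tests that both passed and failed across runs."""
    return {name for name, outcomes in results.items() if len(set(outcomes)) > 1}

old = {"test_login": [True, True, True], "test_checkout": [True, False, True]}
new = {"test_login": [True, True, True], "test_checkout": [True, True, True]}

# gate: new vendor may not be more than 2 percentage points worse than old
shortfall = pass_rate(old) - pass_rate(new)
print(f"pass-rate shortfall: {shortfall:+.1%}, gate: {'PASS' if shortfall <= 0.02 else 'FAIL'}")
print(f"flaky on old: {flaky_tests(old)}, flaky on new: {flaky_tests(new)}")
```

run it daily against the parallel-run results and you have the phase 2 to 3 gate as a script instead of a judgment call.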
```yaml
# example GitLab CI snippet
android-tests-old:
  variables:
    CLOUDFONE_API: https://api.oldvendor.com/v1
    CLOUDFONE_TOKEN: $OLDVENDOR_TOKEN
  # ... existing job ...

android-tests-new:
  variables:
    CLOUDFONE_API: https://api.newvendor.com/v1
    CLOUDFONE_TOKEN: $NEWVENDOR_TOKEN
  allow_failure: true  # new vendor failures do not break the build during migration
  # ... same job ...
```
`allow_failure: true` on the new-vendor jobs is non-negotiable in week 2. you are gathering signal, not gating.
multi-account workflows
if you use cloud phones for multi-account farms, do not migrate accounts to new vendor wholesale. pick 5-10% of accounts, move them, watch for fingerprint resets, login challenges, and platform anomalies.
automation and webhooks
duplicate webhook URLs. send events to both your old receiver and a new one. compare event shapes, missing fields, retry behavior.
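comparing event shapes is mostly set arithmetic on field paths. a sketch, assuming you have captured one JSON payload per event type from each receiver — the payloads below are invented:

```python
def field_paths(event, prefix=""):
    """flatten a JSON-ish dict into dotted field paths, e.g. {'device.id', 'status'}."""
    paths = set()
    for key, value in event.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            paths |= field_paths(value, f"{path}.")
        else:
            paths.add(path)
    return paths

old_event = {"id": "evt_1", "device": {"id": "d1", "region": "eu"}, "status": "done"}
new_event = {"id": "e-9", "device": {"id": "d1"}, "state": "finished"}

missing = field_paths(old_event) - field_paths(new_event)
added = field_paths(new_event) - field_paths(old_event)
print(f"missing on new vendor: {sorted(missing)}")
print(f"new fields: {sorted(added)}")
```

renamed fields (like `status` becoming `state` here) show up as one missing plus one added path, which is exactly the list your webhook receiver code needs before cutover.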
end of phase 2: you have at least 5 days of side-by-side data showing new vendor performs at least as well as old.
phase 3: cutover (week 3)
flip the default from old to new. keep old as fallback for two weeks.
- change CI default to new vendor
- swap webhook receivers to new vendor as primary
- migrate remaining multi-account workflows in waves of 25%
- keep old vendor admin access, read-only, for retrieving audit logs
- monitor closely for the first 72 hours
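the 25% waves are worth generating deterministically, so you can say exactly which accounts moved in which wave if something regresses. a sketch with placeholder account IDs:

```python
def waves(items, n_waves=4):
    """split items into n_waves near-equal, ordered batches (~25% each for n_waves=4)."""
    size, extra = divmod(len(items), n_waves)
    out, start = [], 0
    for i in range(n_waves):
        end = start + size + (1 if i < extra else 0)
        out.append(items[start:end])
        start = end
    return out

accounts = [f"acct-{i:03d}" for i in range(10)]
for i, wave in enumerate(waves(accounts), 1):
    print(f"wave {i}: {wave}")
```

migrate one wave, hold for a day of monitoring, then move the next; the canary day below is effectively wave zero.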
a useful trick: run a “new vendor only” canary day at the start of phase 3. one full business day where old vendor is suspended. if anything is missing, you find it now while you can roll back.
phase 4: deprecation (week 4-5)
old vendor is no longer in production but still has data you need.
- export everything: audit logs, session recordings, screenshots, user list, billing history
- store exports in your own S3 / GCS / Azure Blob with appropriate retention
- remove old vendor admin from your IdP
- schedule the contract end date with old vendor’s account manager in writing
- request final invoice and data deletion confirmation
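the export step is worth scripting so nothing is silently lost to pagination limits. the sketch below shows only the paging loop; the fetch function stands in for whatever export endpoint the old vendor exposes, and its name and page shape are assumptions:

```python
from typing import Callable, Iterator, Optional

def export_all(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """drain a cursor-paginated export; fetch_page returns {'items': [...], 'next_cursor': ...}."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            return

# stand-in for the vendor API: three pages of fake audit-log entries
_pages = {
    None: {"items": [{"id": 1}, {"id": 2}], "next_cursor": "c1"},
    "c1": {"items": [{"id": 3}], "next_cursor": "c2"},
    "c2": {"items": [{"id": 4}], "next_cursor": None},
}
records = list(export_all(lambda cursor: _pages[cursor]))
print(f"exported {len(records)} records")
```

the same loop works for audit logs, recordings, and screenshots; only the fetch function and the destination bucket change.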
be specific about data deletion. a written confirmation from the vendor that all your data is purged, with a date, satisfies most compliance regimes.
phase 5: post-migration (week 6+)
stabilize on the new vendor.
- run a retrospective with the migration team
- update internal docs and runbooks
- re-train any team members who only ever knew the old vendor’s UI
- baseline TCO numbers against the original TCO worksheet projection
- file lessons learned into the next vendor evaluation playbook
risk register
eight risks worth tracking through the migration.
| risk | likelihood | impact | mitigation |
|---|---|---|---|
| new vendor SLA worse than expected | medium | high | parallel-run before cutover |
| audit log gap during migration | medium | medium | export old logs, ship to SIEM directly |
| broken CI for >24h | low | high | allow_failure on new vendor in week 2 |
| account fingerprint reset on multi-account workflows | medium | high | migrate 5% first, watch carefully |
| vendor refuses smooth off-boarding | low | high | written notice per MSA, escalate publicly if needed |
| dual subscription cost overrun | medium | low | negotiate parallel-run discount in advance |
| user training gap | medium | low | record short videos, share in Slack |
| unknown integration on old vendor | medium | medium | inventory in phase 0 |
review weekly with the migration owner. drop risks that proved harmless, add new ones discovered.
go/no-go gates
each phase has an explicit gate before proceeding.
| gate | criterion |
|---|---|
| phase 1 to 2 | new vendor admin operational, RBAC mirrored, monitoring live |
| phase 2 to 3 | at least 5 days of parallel-run data, pass rate within 2% |
| phase 3 to 4 | 72 hours stable on new vendor, zero P1 incidents |
| phase 4 to 5 | data exported, audit log archived, written deletion confirmation pending |
| phase 5 close | retrospective complete, TCO baselined, lessons documented |
if any gate fails, do not proceed. fix the issue first.
rollback plan
a real rollback plan exists for phases 3 and 4 only. earlier than that, just keep the old vendor as primary.
- phase 3 rollback: flip CI default back to old vendor, restore old webhook receivers, document what failed
- phase 4 rollback: not possible if you have already deleted old admin access. restoring requires opening a fresh account with the old vendor and importing your data back. avoid this by keeping old admin access read-only through phase 5.
most rollbacks happen in phase 3. plan for it.
frequently asked questions
how long does a typical cloud phone migration take?
30-45 days for a team with 50-200 phones and a clear playbook. teams without a playbook or with heavy custom integrations average 60-90 days.
can I migrate without parallel-running?
possible but risky. parallel-running is what catches the regressions before they become incidents. skipping it saves cost but adds incident risk roughly proportional to your fleet size.
what if the new vendor cannot match the old vendor’s device coverage?
mid-migration is the wrong time to discover this. it should have been caught in the POC. if it was not, pause the migration and either expand the new vendor coverage or keep the old vendor for the gap regions.
should I tell my team about the migration on day 1?
tell engineering and CI owners on day 1. tell broader team in week 2 with clear messaging about why and timeline. tell external customers only if there is a user-facing change (typically there is not).
how do I handle long-running session recordings during migration?
set a hard cutoff date. recordings older than X days move to your own storage. recordings between cutoff and migration date stay accessible until the contract end. nothing new gets recorded on old vendor after cutover.
ready to plan a clean migration? start a cloudf.one trial, use it as the parallel-run target, and follow the playbook above to swap providers without breaking production.