Moving your tagging from a client-side Google Tag Manager container to a server-side setup is one of those projects that promises cleaner data and better performance — and also has the potential to quietly break attribution across paid channels, analytics and CRO tools if you don’t plan carefully. I’ve run this migration a few times with clients and internal projects, and the single biggest lesson I keep coming back to is: treat the migration like an analytics product launch, not a simple config change.
Why go server-side — and what you risk
Server-side tagging can reduce signal loss from ad blockers and tracking prevention, speed up pages, centralise privacy controls and give you more control over the payloads you send to vendors. Platforms like Google Cloud Run, AWS Lambda or managed solutions such as Google Tag Manager Server-Side (GTM SS) or Tealium EventStream are viable options. But the trade-off is that you move responsibility for accurately reproducing client behaviour into a server environment where a missed header, cookie, or parameter can silently break conversions, attribution and audiences.
Before you touch any code, accept this: a broken attribution chain is not obvious until invoice day or a campaign manager flags an anomaly. So we build safety nets and thorough testing into the migration plan.
High-level migration checklist (overview)
Below is the checklist I use as a running guide during migrations. Each point maps to concrete steps later in the article.

- Inventory every client-side tag, trigger and variable
- Design a canonical server-side event schema
- Preserve attribution signals (click IDs, cookies)
- Build server endpoints and vendor forwarding
- Test and QA against the existing client-side container
- Roll out in stages, keeping client-side GTM as a fallback
- Monitor, alert and compare after cutover
Inventory: what to capture
You can’t migrate what you don’t understand. Start by exporting a full list of tags, triggers and variables from GTM (use the container export or a tag audit tool). For each item, capture:

- The vendor and destination endpoint it fires to
- Its triggers and firing conditions
- Every variable and dataLayer field it consumes
- Any cookies or URL parameters it reads or writes
- Whether it has a server-side equivalent, and who owns it
Make a table if you like — I often use a simple two-column table mapping "Client-side element" to "Server-side requirement".
| Client-side element | Server-side requirement |
|---|---|
| dataLayer push (purchase event) | Event schema with order_id, revenue, currency, user_id |
| Google Ads conversion tag | Conversion ID + gclsrc/gclid preservation or conversion API call |
| Facebook Pixel | Server-Side Conversions API with event_source_url, client_user_agent, fbc/fbp |
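To jump-start the inventory, you can script over the container export rather than clicking through the UI. A sketch in Node, assuming the standard GTM container export JSON shape (`containerVersion.tag` with `name`, `type` and `firingTriggerId` fields):

```javascript
// Sketch: summarise a GTM container export into an inventory list.
// Assumes the standard container export JSON shape (containerVersion.tag).
function inventoryFromExport(exported) {
  const cv = exported.containerVersion || {};
  return (cv.tag || []).map((t) => ({
    name: t.name,
    type: t.type,                       // tag template id, e.g. 'html' for custom HTML
    firingTriggerId: t.firingTriggerId || [],
  }));
}

// Usage (Node):
// const fs = require('fs');
// const inventory = inventoryFromExport(JSON.parse(fs.readFileSync('container.json', 'utf8')));
```

From there it is a small step to join trigger IDs back to trigger names and dump the whole thing into your migration spreadsheet.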
Design your server-side event schema
This is where many teams stumble. On the client you may be flexible with optional fields; the server needs consistency. Define a canonical event schema that includes:

- An event name and timestamp
- A unique event_id for deduplication
- Commerce fields where relevant (order_id, revenue, currency)
- User identifiers (user_id plus relevant first-party cookies such as fbc/fbp)
- The context vendors expect: page URL (event_source_url), client user agent, and consent state
Generate the event_id on the client before the dataLayer push. Why? Because if the same event_id travels with both the client-side hit and the server-side event, downstream ad platforms can deduplicate, and you can match conversions back to click-level records.
Preserve attribution signals
Attribution breaks when you lose or change the values platforms use to connect a click to a conversion. The most common offenders:

- Click IDs (gclid, fbclid) that never make it past the landing page
- Browser cookies (fbc/fbp) missing from server payloads
- The client's IP address and user agent being replaced by your server's own
- Referrer and campaign parameters dropped during redirects or consent flows
Implement a small client-side script that captures query parameters (gclid, fbclid, etc.), writes them to first-party cookies with long TTLs, and injects them into every event payload sent to the server. If you use GTM, a simple custom HTML tag can do this, but keep it tiny and resilient.
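A sketch of that capture script. The parameter list and the 90-day TTL are assumptions; adjust both to the platforms and conversion windows you actually use.

```javascript
// Sketch: capture click IDs from the landing-page URL so they can be
// attached to every server-bound event. Parameter list is an assumption.
const CLICK_ID_PARAMS = ['gclid', 'fbclid'];

function captureClickIds(search) {
  const params = new URLSearchParams(search);
  const captured = {};
  for (const key of CLICK_ID_PARAMS) {
    const value = params.get(key);
    if (value) captured[key] = value;
  }
  return captured;
}

// In the browser, persist each captured value as a first-party cookie, e.g.:
// document.cookie = `${key}=${value}; max-age=${90 * 24 * 3600}; path=/; SameSite=Lax`;
```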
Server endpoints and vendor forwarding
Decide whether your server will act as a proxy (receive events and forward to vendors) or as a translator (collect, transform, and call vendor APIs). For many teams switching from GTM, a hybrid approach works: use GTM SS as the central collector and write custom adapters to call Google Analytics 4 Measurement Protocol, Google Ads conversions endpoint, Facebook Conversions API, and any DSPs' server APIs.
Keep these rules in mind:

- Forward the original client IP and user agent, not your server's
- Pass the event_id through to every vendor so deduplication works
- Don't rename or transform fields vendors require verbatim (gclid, fbc/fbp, event_source_url)
- Log and retry failed vendor calls; never drop events silently
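As an illustration of the translator pattern, here is a hypothetical adapter that maps one canonical server event onto a Meta Conversions API event object. The CAPI field names (event_name, event_time, event_id, event_source_url, client_user_agent, fbc/fbp) follow Meta's schema; the shape of the incoming canonical event is this article's assumption.

```javascript
// Hypothetical adapter: canonical server event -> Meta CAPI event object.
function toMetaCapiEvent(ev) {
  return {
    event_name: ev.event,                  // e.g. 'Purchase'
    event_time: Math.floor(ev.timestamp / 1000), // CAPI expects unix seconds
    event_id: ev.event_id,                 // lets Meta dedupe against the pixel
    event_source_url: ev.page_url,
    user_data: {
      client_user_agent: ev.user_agent,    // the browser's UA, not the server's
      fbc: ev.cookies && ev.cookies._fbc,
      fbp: ev.cookies && ev.cookies._fbp,
    },
  };
}
```

The same pattern repeats per vendor: one pure transform function per destination, which keeps the adapters easy to unit-test before any real traffic flows.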
Testing & QA checklist
Testing is the heart of a successful migration. Here’s the checklist I run through with engineering and marketing:

- Compare event counts side by side: client container vs server container, per event type
- Fire test conversions end to end and confirm they land in each vendor's debug tooling
- Verify deduplication: one event_id should produce one recorded conversion per platform
- Check that attribution fields (gclid, fbc/fbp, event_source_url, user agent) arrive intact at each vendor
- Test consent-denied and ad-blocked paths so you know exactly what degrades, and how
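One concrete parity check worth automating during QA: diff the event_ids seen client-side against those received server-side over the same window. A sketch:

```javascript
// Sketch: find events that reached only one side of the pipeline, using
// the client-generated event_id as the join key.
function diffEventIds(clientIds, serverIds) {
  const client = new Set(clientIds);
  const server = new Set(serverIds);
  return {
    missingOnServer: [...client].filter((id) => !server.has(id)),
    missingOnClient: [...server].filter((id) => !client.has(id)),
  };
}
```

A non-empty `missingOnServer` list usually points at dropped payloads or schema validation failures; a non-empty `missingOnClient` list often means ad-blocked browsers that only the server path captured.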
Rollout strategy
I never flip the switch for all traffic at once. Use a staged rollout:

1. Internal and test traffic only
2. A small slice of production traffic (1–5%)
3. Ramp to 25–50% while watching parity metrics
4. 100%, with client-side tags still firing for comparison
During each step keep client-side GTM active to serve as a fallback and comparison source. Communicate timelines to stakeholders — paid media teams need awareness so they can pause campaigns if conversion reporting drops unexpectedly.
Monitoring and alerting
After migration you’ll want real-time signals:

- Event volume per endpoint, compared against the client-side baseline
- Error rates on vendor API calls (4xx/5xx responses, rejected payloads)
- Forwarding latency from collection to vendor delivery
- Daily conversion parity: server-reported vs client-reported counts
I use a combination of BigQuery (for GA4 exports), a lightweight ELK/Datadog pipeline for server logs, and custom Slack alerts for failures. If you’re on Google Cloud, Cloud Monitoring plus BigQuery is a neat stack.
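The conversion-parity alert itself can be trivially simple. A sketch; the 5% tolerance is an assumption to tune against your traffic volume, and `postToSlack` is a hypothetical notifier:

```javascript
// Sketch: flag drift between client- and server-reported conversion counts.
// The 5% default tolerance is an assumption; tune for your volumes.
function conversionParityAlert(clientCount, serverCount, tolerancePct = 5) {
  if (clientCount === 0) {
    return { ok: serverCount === 0, driftPct: serverCount === 0 ? 0 : 100 };
  }
  const driftPct = (Math.abs(serverCount - clientCount) / clientCount) * 100;
  return { ok: driftPct <= tolerancePct, driftPct };
}

// Usage in a scheduled job:
// const check = conversionParityAlert(clientPurchases, serverPurchases);
// if (!check.ok) postToSlack(`Conversion drift: ${check.driftPct.toFixed(1)}%`); // postToSlack is hypothetical
```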
Common pitfalls
These are the mistakes I’ve seen cause grief:

- Losing the gclid on redirects or cross-domain journeys, so Google Ads conversions silently drop
- Skipping event_id deduplication and double-counting conversions while both containers run
- Sending the server's IP and user agent instead of the client's, breaking geo and device reporting
- Cookie TTLs shorter than the conversion window, so late conversions lose attribution
- Replicating the tags but not the consent logic, so the server fires events the client would have suppressed
Tools and integrations worth knowing
Some tooling that helps:

- GTM Server-Side's preview mode, for inspecting incoming requests and outgoing vendor calls
- GA4 DebugView, for verifying events sent via the Measurement Protocol
- Meta's Test Events tool, for validating Conversions API payloads before go-live
- BigQuery (via the GA4 export), for the parity queries that feed your migration dashboard
One final practical tip: build a small "migration dashboard" that surfaces a single source-of-truth metric — for example, purchases per day from server vs client. That single number lets non-technical stakeholders see progress without getting buried in logs.