Button Click Tracking With Google Tag Manager


🚨 Note: If you’re new to Google Tag Manager, check out our Google Tag Manager tutorial and master the basics.

Before we get started with the tutorial, we need to cover a little theory: before you start installing event tracking through Google Tag Manager, you should be aware that Google Tag Manager can deploy auto-event triggers for you.

The process consists of Google Tag Manager → Auto Event Trigger → Event Tag → Google Analytics.

Are you new to Google Tag Manager? Learn the basics in our Google Tag Manager Self Study Guide!

The Auto Event Trigger

Google Tag Manager’s Auto Event Trigger has two functionalities: the listener functionality and the filter functionality. Combined, they determine whether a tag (such as an event tag) fires and then sends its information on to Google Analytics.

Listener Functionality

The listener functionality watches the page for a particular type of interaction (such as a click) and registers an event when that interaction happens.

Filter Functionality

The filter functionality then determines whether the registered event is the one you care about (whether its conditions evaluate to true or false) and, if so, fires your tag to send the information to Google Analytics (or another tool such as Facebook or Google Ads).
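To make that concrete, here is a minimal, illustrative sketch (not GTM’s exact internal implementation) of the kind of object the built-in click listener pushes into the data layer when an element is clicked; the trigger’s filter conditions are evaluated against values like these:

```js
// Illustrative only: roughly what GTM's click listener records for a click.
// The keys map to GTM's built-in Click variables ({{Click Classes}}, {{Click ID}}, ...).
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'gtm.click',                                 // the auto-event name
  'gtm.element': document.activeElement,              // the clicked element
  'gtm.elementClasses': 'single_add_to_cart_button',  // -> {{Click Classes}}
  'gtm.elementId': '',                                // -> {{Click ID}}
  'gtm.elementUrl': '',                               // -> {{Click URL}}
  'gtm.elementTarget': ''                             // -> {{Click Target}}
});
```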

To begin with an example, here is our Demoshop website where Google Tag Manager is installed.

If you want to make sure that Google Tag Manager is actually installed, you can always check with the Google Tag Assistant extension for Google Chrome.

Reload the page and wait for a small Google Tag Manager console to pop up at the bottom of the screen.

Choosing A Trigger Type

Open another page on your website by applying the same technique as above to open it in a new tab.

Trigger the Event You Want to Track

Refine the Filter

There are several matching options on the second dropdown menu such as RegEx, CSS Selector, and so on. But to make things easy, simply choose the contains option. 

To complete the Trigger Configuration, add the value of the variable you want to track. In this case, I will add the single_add_to_cart_button.

Copy the value of the variable and paste it into the last text box. Then, hit save.
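If you are unsure of the exact class name, a quick check in the browser console helps avoid typos in the filter value (the selector below uses the demo shop’s single_add_to_cart_button class; your button’s class will likely differ):

```js
// Run in the browser console on the product page to confirm the class name
// your trigger filter should match.
var btn = document.querySelector('.single_add_to_cart_button');
console.log(btn ? btn.className : 'No element with that class found');
```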

Connect Trigger to a Tag 

Then, rename the tag.

Select Event for the Track Type and type Add to Cart in the Action text box. 

Now that you’ve configured the different aspects of your tag, you have to define where to send the data. If you already have a Google Analytics settings variable set up in Google Tag Manager, you can simply choose that option. If not, select the option to override settings in this tag and manually enter your tracking ID.

Copy your Tracking ID and paste it into the Tracking ID text box in Google Tag Manager. 
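For reference, the hit this tag generates is roughly equivalent to the following analytics.js call. This is a sketch only, assuming the analytics.js library is loaded on the page: GTM builds the hit for you from the tag’s fields, and the tracking ID, category, and label below are placeholders.

```js
// Rough analytics.js equivalent of the Universal Analytics event tag above.
// 'UA-XXXXXXX-X', 'Button Click', and the label are placeholder values.
ga('create', 'UA-XXXXXXX-X', 'auto');
ga('send', 'event', 'Button Click', 'Add to Cart', 'single_add_to_cart_button');
```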

To find out what caused the tag not to fire, we need to check the Google Analytics event tag and look for errors in the assigned trigger.

To help correct this mistake, go over to the Triggers tab on Google Tag Manager’s default workspace.

Hit the refresh button.

Using Preview Mode

Using Tag Assistant

Through Google Tag Assistant, you can see that one event tag got fired.

Using Real-Time Report in Google Analytics

On the Events tab, you’ll see a new event entering your Google Analytics account.

Go back to the web page and observe how the Google Analytics tag has been deployed on the Google Tag Manager console.

Facebook Pixel

Now that the trigger is prepared in Google Tag Manager, we can reuse it to deploy other tags as well.

For example, here I have a Facebook event tag that sends an Add to Cart track event over to Facebook. 
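Reused with the same trigger, that tag is just one line of pixel code inside a Custom HTML tag (a sketch, assuming the Facebook base pixel is already loaded on the page; AddToCart is one of Facebook’s standard events):

```html
<script>
  // Standard Facebook event; requires the base pixel (fbq) to be loaded first.
  fbq('track', 'AddToCart');
</script>
```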

Google Ads Tag

Similar to Facebook events, Google Ads conversion events can also be tagged via Google Tag Manager.
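Inside GTM you would typically use the built-in Google Ads Conversion Tracking tag type with the same trigger. For context, the equivalent gtag.js conversion event outside of GTM looks roughly like this, assuming the gtag.js snippet is installed (the conversion ID, label, and value below are placeholders):

```js
// Rough gtag.js equivalent of a Google Ads conversion event.
// 'AW-XXXXXXXXX/CONVERSION_LABEL' comes from your own Google Ads account.
gtag('event', 'conversion', {
  send_to: 'AW-XXXXXXXXX/CONVERSION_LABEL',
  value: 1.0,
  currency: 'USD'
});
```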

Hit refresh.

Go back to the web page and reload.

You can also see the three events here in our Google Tag Assistant. 

We can also check via our Facebook Pixel Helper where we can see that the add to cart event has been received.

FAQ

Summary

If you are new to Google Tag Manager, then we encourage you to sign up for our free GTM course.

Sign up to the FREE GTM for Beginners Course…


How To Delay Facebook Pixel With Google Tag Manager

In this guide, we’ll show you how to build higher-quality audiences in your Facebook Ads by delaying your Facebook Pixel and eliminating bounced users from your audience.

This tutorial will explain:

Sign up to the FREE GTM for Beginners Course…

Why Should You Delay Facebook Pixel?

Delaying the Facebook Pixel from firing immediately when a user enters your website will filter out any user who isn’t really interested in what you have to offer.

This is totally fine—not everyone on the whole internet is part of your intended audience. However, it could be a problem if these users become part of your targeted audience for marketing campaigns.

If you want to optimize your audience for retargeting purposes, you might want to focus on users who have been on your website for more than a few seconds.

We can implement such a delay with the help of Google Tag Manager and create a Facebook audience of people who have stayed at least five seconds on your webpage.

🚨 Note: If you use The Facebook Pixel and haven’t integrated Google Tag Manager yet, be sure to check out our Facebook Pixel Tracking with GTM guide.

Creating Your Base Facebook Tag

Let’s begin by creating our base Facebook Tag. This Tag will load the Facebook Pixel library so your pixel can record other Facebook events.

For this, let’s create a new custom HTML tag in Google Tag Manager. 

Make sure to give your Tag an informative name. I like to use CHTML for “custom HTML” to describe the Tag type, then the tool and the scope or function of the Tag. In this case, I’ve named this Tag CHTML – Facebook – Base Pixel.

Since there isn’t an official integration for Facebook Pixel Tags, we’ll be making a custom one using the Custom HTML tag type.

Then, paste this pixel code into the HTML field of your new Tag. 
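Copy the exact base code from your own Facebook Events Manager; structurally it looks like the standard snippet below (the pixel ID is a placeholder):

```html
<script>
  !function(f,b,e,v,n,t,s){if(f.fbq)return;n=f.fbq=function(){n.callMethod?
  n.callMethod.apply(n,arguments):n.queue.push(arguments)};
  if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
  n.queue=[];t=b.createElement(e);t.async=!0;
  t.src=v;s=b.getElementsByTagName(e)[0];
  s.parentNode.insertBefore(t,s)}(window, document,'script',
  'https://connect.facebook.net/en_US/fbevents.js');
  // Replace with your own Pixel ID from Events Manager.
  fbq('init', 'YOUR_PIXEL_ID');
  fbq('track', 'PageView');
</script>
```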

Since we’ll need this base Tag to fire first to track any other Facebook events, we’ll want it to fire on all pages across our website.

Testing Your Base Pixel Tag

While in preview mode, navigate around your website. If everything is implemented correctly so far, your CHTML – Facebook – Base Pixel Tag should fire on each webpage. 

I also like to use a browser extension called the Facebook Pixel Helper, which shows here that a PageView event has fired. This is the Tag that we just installed, so we know that the Tag is firing properly and will be sent to our Facebook Ads account.

Creating Facebook Event that Fires 5 Seconds After Page Load

Now that we have a base Tag that will load our Facebook Pixel library, we can create more customized event Tags to collect better data.

Next, let’s create a custom event for this Facebook Pixel that fires five seconds after the page load. This accomplishes our goal of tracking only users who are interested in our website and excluding users who bounce.

There are two main steps to this process.

First, we’ll need to create a timer trigger that will wait for five seconds before firing our Tag. 

Create GTM Timer Trigger

There is a built-in trigger inside Google Tag Manager that can help us accomplish this, called the Timer trigger. 

The Interval field determines how long the timer waits after the trigger is activated before firing a Tag. To achieve a five-second delay on our Facebook Pixel Tag, enter 5000 milliseconds in the Interval field.

If no Limit is placed on this trigger, then it will fire a Tag every consecutive interval. In this case, the Tag would fire again every five seconds.

We only want to fire our Tag once per page, so set the Limit to 1.

Finally, we need to set the conditions for this trigger. This will tell the trigger when it should start its timer.

We want this trigger to fire on all pages, so we’ll set the condition to Page Path matches RegEx .*. The dot-star (.*) in regular expression notation matches any value, so the timer will begin on any page of this website.

Finally, don’t forget to give your trigger an informative name—Timer – 5 Seconds is pretty self-explanatory—and Save it.

So with these settings,  we have a timer trigger that fires just once after five seconds on all pages. This is perfect for tracking pageviews from users who don’t bounce.

Build Custom HTML Facebook Tags

Now, let’s create a Tag that uses our new trigger and fires a custom Facebook event. 

Let’s give this Tag a name to distinguish it from our base Tag. I’ll call mine CHTML – Facebook – 5 Seconds. 

In the HTML field, we need to type this exactly: 

fbq('trackCustom', '5 Seconds');

This snippet is a piece of JavaScript. The trackCustom element allows us to create our own Facebook event that we can name whatever we want. I will use the name 5 Seconds so we can identify it in the Facebook Ads interface.
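Inside the Custom HTML field, the snippet needs to be wrapped in script tags; a minimal version of the tag’s contents looks like this:

```html
<script>
  // Custom Facebook event; the base pixel tag must already have loaded fbq.
  fbq('trackCustom', '5 Seconds');
</script>
```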

Next, we’re going to attach our five-second timer trigger to this Tag that we’ve just built. 

So this piece of code fires five seconds after each page load. 

Testing Your Delayed Pixel Tag

We’re almost done with this Tag. Our last step is to test it in the Google Tag Manager preview and debug mode.

If you check the Facebook Pixel Helper, you can see that both the PageView event and our 5 Seconds event have fired.

Finally, we should also make sure that our Facebook Ads account is receiving the correct data from these Tags.

In your Facebook Events Manager, go to Test events. Under the Receiving activity list, you should see events for both your base Tag (Page view) and for your timer Tag (5 Seconds) on each page you opened while in GTM preview mode.

Setting Tag Firing Priority

There’s one more step that we should take to ensure that our implementation is airtight.

We want to make sure that our base Tag always fires before the timer Tag, no matter what else happens on page load. Without the Facebook library being initialized by the base Tag, the timer Tag cannot successfully send information to our Facebook Ads account.

Under Advanced Settings, find the field labeled Tag firing priority. The higher a Tag’s firing priority compared to other Tags, the earlier it will fire (the default is zero). Any value greater than zero will ensure that this Tag fires before our other Tags.

Creating Facebook Audience in Facebook Pixel Interface

Choose the Facebook Ads account where you’d like to create an audience.

Choose the correct pixel that is tracking your new events. 

Then, we can create an audience of people who have stayed for at least five seconds by selecting users tracked by our 5 Seconds event.

We also need to determine how long users stay in this audience after being tracked by this event. Remarketing works well within a short timeframe, so I’ll set this to one week.

With this custom audience, you can target users who were interested in your website with your remarketing campaigns.

🚨 Note: If you’re getting errors in your Meta Pixel, make sure to check out our handy guide on how to fix Meta Pixel errors.

FAQ

How do I set a Tag’s firing priority?

To set a Tag’s firing priority in Google Tag Manager, follow these steps:

How do I create a Facebook audience?

To create a Facebook audience using your delayed Facebook Pixel events, you can follow these steps:

What are the benefits of delaying Facebook Pixel?

Summary

So there you have it. This is how you can delay your Facebook pixel to build a higher quality audience.

Check out our guide on Facebook Pixel Purchase & Conversion Tracking with GTM which also improves your ability to track more qualitative data.

The Best Google Tag Manager Resources (The Definitive List)

Looking for more Google Tag Manager resources to help you improve your GTM knowledge? If you’re ready to expand your GTM skills beyond the basics, check out this overview of our resource guide that will give you the tools you need to succeed!

Google Tag Manager is not a “book skill”—you can’t just read the manual and become an expert.

This post is a summary of our “GTM Resource Guide” eBook. This resource guide is a one-stop shop of Google Tag Manager tools, experts, and documentation—all available for free. To see the full list of resources go get the eBook here!

In this collection of resources, you can find solutions and strategies discovered by others that can help you learn Google Tag Manager for yourself.

You’ll also find communities and GTM leaders who can help you troubleshoot your implementations, plus specialized tools that can take your tracking to the next level.

Ready for a preview of what’s inside? Let’s dive in!

Self-Teaching: Blogs, Videos, and Books

While GTM isn’t a “book skill,” reading (or watching) other GTM users’ experiences can give you perspective and demonstrate skills that you want to build. Sometimes, if you’re really lucky, someone will describe the solution to the exact challenge you’re facing!

In our resource guide, we’ll point you to our favorite bloggers, YouTubers, and authors on GTM problems. These people document their workflows, describe their solutions, and are usually on the cutting edge of changes or updates that affect your tracking.

It’s worth noting that due to the nature of book publication, even the most comprehensive books may become out-of-date after major updates to GTM. But it’s easy to subscribe to blogs and YouTube channels, which will give you bite-sized information as soon as it’s available.

And when you combine all three elements in your research, you’ll be in the best position to learn and excel in GTM.

Getting GTM Help: Communities and Experts

Depending on your implementation, sometimes you’ll run into problems that it seems like no one else is dealing with. You haven’t found a solution on any blogs or YouTube channels, so what’s next?

Other users! No matter how niche your problem seems, someone else has probably seen something similar.

There is a wealth of GTM knowledge in online communities, some of which is centered around a few industry icons. By querying broad communities of experienced GTM users and maybe a few specialists, you’re bound to get a gentle push in the right direction.

Finally, our resource guide is chock-full of tools to elevate your GTM game.

If you’ve ever thought, “I wish I could do [this] with Google Tag Manager…” then someone’s probably built a tool to do just that.

Need a sandbox site to test your GTM skills? A user-friendly way to build custom Data Layers? How about a browser extension that lets you copy and paste entire containers across GTM accounts?

Becoming a Master: More GTM Training

Even with so many resources, it can be tough to master GTM without a little help.

That’s why we’ve built a course specifically for people who’ve learned the basics of GTM but want to start on the path to mastery. Our GTM Beyond the Basics course provides structure and guidance to help you become a GTM expert.

This course includes access to your own demo sandbox website, a series of tracking challenges with solution guides, and access to previously recorded trainings that take a deeper look at GTM.

Or, to get access to this GTM Beyond the Basics course plus additional training in GTM, JavaScript, and Google Data Studio, plus a dedicated community, workshops and events, training challenges, extra lessons, and even more resources, check out our MeasureMasters program!

Summary

So there you have it! This resource guide includes all my favorite resources for diving deeper into Google Tag Manager.

There’s always tons more to learn about Google Tag Manager and about measurement in general. With new updates changing the measurement game all the time, you can put yourself in the best position to succeed by keeping this resource guide handy.

Tracking Google Knowledge Graph Algorithm Updates & Volatility

Like the core algorithm, Google’s Knowledge Graph periodically updates.

But little has been known about how, when, and what it means — until now.

I believe these updates consist of three things:

Algorithm tweaks.

An injection of curated training data.

A refresh of Knowledge Graph’s dataset.

My company, Kalicube, has been tracking Google’s Knowledge Graph both through the API and through knowledge panels for several years.
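For context, the Knowledge Graph Search API returns a resultScore for each entity, which is the figure typically read as Google’s confidence in that entity. A minimal sketch of such a lookup (the API key is a placeholder, and “Kalicube” is just an example query) might look like this:

```js
// Sketch: query the Knowledge Graph Search API and log the top entity's
// resultScore. Replace API_KEY with your own key from Google Cloud.
var url = 'https://kgsearch.googleapis.com/v1/entities:search' +
          '?query=' + encodeURIComponent('Kalicube') +
          '&limit=1&key=API_KEY';
fetch(url)
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var top = data.itemListElement && data.itemListElement[0];
    if (top) {
      console.log(top.result.name, top.result['@id'], top.resultScore);
    }
  });
```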

When I wrote about ‘The Budapest Update’ in 2023, for example, I had seen a massive increase in confidence scores. Nothing that seismic has happened to the scores since.

However, the scores for individual entities fluctuate a great deal and typically over 75% will change during any given month.

The exceptions are December 2023, and over the last four months (I’ll come to that later).

From July 2023 to June 2023, we were tracking monthly (hence the monthly figures).

Since July 2023, we have been tracking daily to see if we can spot more granular patterns. I hadn’t seen any until a conversation with Andrea Volpini from Wordlift sent me down a rabbit hole…

And there, I discovered some truly stunning insights.

Note: This article is specifically about the results returned by the API and the insights they give us into when Google updates its Knowledge Graph – including the size, nature, and day of the update — which is a game-changer if you ask me.

Major Knowledge Graph Updates Over the Last 8 Months

Sunday, July 12, 2023.

Monday, July 13, 2023.

Wednesday, August 12, 2023.

Saturday, August 22, 2023.

Wednesday, September 9, 2023.

Saturday, September 19, 2023.

Sunday, October 11, 2023.

Thursday, February 11, 2023.

Thursday, February 25, 2023.

You can check the updates on Kalicube’s Knowledge Graph Sensor here (updated daily).

For anyone following the core blue link algorithm updates, you might notice that the two are out of sync, up until February 2023 updates.

The exceptions I found are (with my wild theorizing in italics):

Reach, Scope & Scale of These Updates

We can usefully consider three aspects of an update:

The magnitude (reach/breadth), which is the percentage of entities affected (so far, between 60-80%).

The amplitude (scope/height), or the change (up or down) in confidence scores on a micro, per entity level (the average amplitude for the middle fifty has been around 10-15%).

The shift (scale/depth), which is the change in confidence scores on a macro level (the Budapest update aside, this is less than 0.1%).

What We Found by Tracking the Knowledge Graph Daily

The Knowledge Graph has very regular updates.

These updates occur every 2 to 3 weeks but with long pauses at times, as you can see above.

The updates are violent and sudden.

We see that 60-80% of entities are affected, and the changes are probably immediate across the entire dataset.

Updates to individual entities continue in between.

Any individual entity can see its confidence score increase or decrease on any day, whether there is an update or not. It can disappear (in a virtual puff of smoke) and information about that entity can change at any time between these major updates to the Knowledge Graph algorithm and data.

There are extreme outlying cases.

Individual entities react very differently. In every update (and even in between), some changes are extreme. A confidence score can increase multifold in a day. It can drop multi-fold. And an entity can disappear altogether (when it does reappear it has a new id).

There is a ceiling.

The average confidence score for the entire dataset rarely changes by more than one-tenth of one percent per day (the shift), even on days where a major update occurs.

It appears there may be a ceiling to the scores the system can attribute, presumably to stop the more dominant entities from completely crowding out the rest (thanks Jono Alderson for that suggestion).

Following the massive raising of that ceiling during the Budapest update, the ceiling appears to have not moved in any meaningful manner since.

Every update since Budapest affects both reach and scope. None since Budapest has triggered a major shift in scale.

The ceiling may never change again. But then it may. And if it does, that will be big. So stay tuned (and ideally, be prepared).

After a great deal of experimentation, we have isolated and excluded those extreme outliers.

We do track them and continue to try to see any obvious pattern. But that is a story for another day.

How We Are Measuring

We have isolated each of the three aspects of the changes and measure them daily on a dataset of 3000 entities. We measure:

How many entities saw an increase or decrease (reach/breadth/magnitude).

How significant that change was on a micro-level (scope/height/amplitude).

How significant the change was to the overall score (scale/depth/shift).

What Is Happening?

One thing is clear: these updates have been violent, wide-ranging, and sudden.

Someone at Google had (and perhaps still has) “a big red button.”

Bill Slawski pointed me to a Bing patent that describes exactly that process.

The last two updates on Thursdays smack of the developers’ mantra “never change anything on a Friday if you don’t want to work the weekend.”

A Google Knowledge Graph Dance

Slawski suggested a concept to me that I think speaks volumes. Google has been playing “musical chairs” with the data – the core algorithms and the Knowledge Graph algorithm have very different needs.

The core algorithms have a fundamental reliance on popularity (the probability that inbound links lead to your site), whereas the Knowledge Graph necessarily needs to put that popularity/probability to one side and look at reliability/probable truthfulness/authority — in other words, confidence.

The core algorithms focus on strings of characters/words, whereas the Knowledge Graph relies on the understanding of the entities those same words represent.

It is possible that the updates of the core and Knowledge Graph algorithms were necessarily out of sync, since Google was having to “reorganize” the data for each approach every time they wanted to update either, then switch back.

Remember the Google Dance back in the day?

At the time it was simply a batch upload of fresh link data. This could have been something similar.

As of February 2023, Is the Dance Over?

It remains to be seen if that is now a “solved problem.”

I would imagine we’ll see a few more out-of-sync dances and a few more weird bugs due to updates of each that contradict each other.

But I expect that, by the end of 2023, the two will to all intents and purposes be merged, and entity-based search will be a reality that we, as marketers, can productively and measurably leverage.

However the algorithms evolve and progress, the underlying shift is seismic.

Classifying the corpus of data Google possesses into entities and organizing that information according to confidence in its understanding of those entities is a huge change from organizing that same data by pure relevancy (as has been the case up until now).

The convergence of the algorithms?

Opinion: The following things make me think that winter 2023/2024 was the moment Google truly implemented the switch “from strings to things” (after five years’ worth of PR):

The three-month hiatus from October to February when the core algorithm was relatively active, but the Knowledge Graph updates were very clearly paused.

The announcement that the topic layer was active in November.

The introduction of passage-based indexing to the core algorithm in February that appears to focus on extracting entities.

The seeming convergence of the updates (this is fresh; we only have two updates to judge from, and our tracking might later prove me wrong on this one, of course).

The Knowledge Graph Is a Living Thing

The Knowledge Graph appears to be based on a data-lake approach rather than the data-river approach of today’s core algorithm (delayed reaction versus immediate effect).

However, the fact that entities change and move between these major updates and the fact that the updates appear to be converging suggests that we aren’t far from a Knowledge Graph algorithm that not only works on fresh data rivers but is also integrated as part and parcel of the core algorithm.

Here’s a specific example that maps the updates to changes in the confidence score for my name (one of my experiments).

That vertiginous drop doesn’t map to an update.

It was a blunder on my part and shows that the updates to individual entities are ongoing, and can be extreme!

Read about that particular disaster here in my contribution to an article by SE Ranking.

The Future

My take: The “big red button” will be progressively retired and the violent and sudden updates will be replaced by changes and shifts that are smoother and less visible.

The integration of entities into the core blue links algorithms will be increasingly incremental and impossible to track (so let’s make the most of it while we can).

It is clear that Google is moving rapidly toward a quasi-human understanding of the world and all its algorithms will increasingly rely on its understanding of entities and its confidence in its understanding.

The SEO world will need to truly embrace entities and give more and more focus to educating Google via its Knowledge Graph.

Conclusion

In this article I have purposefully stuck to things I am fairly confident will prove to be true.

I have hundreds of ideas, theories, and plans, and my company continues to track 70,000+ entities on a monthly basis — over 3,000 daily.

I am also running over 500 active experiments on the Knowledge Graph and knowledge panels (including on myself, the blue dog, and the yellow koala), so expect more news soon.

In the meantime, I’m just hoping Google won’t cut my access to the Knowledge Graph API!

More Resources:

Image Credits

All screenshots taken by author, March 2023

Google Analytics 4 FAQs: Stay Calm & Keep Tracking

On March 16th, 2022, Google Analytics shocked the marketing industry by announcing that Universal Analytics would stop processing hits in July 2023.

This didn’t go over so well.

Some marketers are unhappy with the user interface; others are frustrated that GA4 does not have key features.

Many are still in the denial phase – besides, isn’t it still in beta?

Let’s take a step back and answer the burning questions here:

Why is this happening?

What do these changes mean?

What do I need to do right now?

Why Universal Analytics Is Updating To Google Analytics 4

Many marketers have built business processes around Universal Analytics and want to know why this change is happening.

So, I asked former Googler Krista Seiden, who helped build GA4 and is also the founder of KS Digital, “Why is this GA4 update happening?”

Seiden explained that GA4 has actually been in development for many years.

Originally, it came out as a public beta called App+Web, and in October 2020, it dropped the beta label and was rebranded as GA4.

“GA4 isn’t so much an update, but an entirely new way of doing analytics – set up to scale for the future, work in a cookieless world, and be a lot more privacy-conscious,” Seiden explained.

Google’s announcement blog post was entitled, “Prepare for the future with Google Analytics 4.”

… for the future.

We keep hearing this; what does “for the future” mean?

When I read Google documentation and chatted with analytics experts, I noticed three main themes or ways that GA4 prepares your business for the future:

updated data model,

works in a cookieless world,

and privacy implications.

Let’s unpack each of these.

Data Model

A data model tells Google Analytics what to do with the site visitor information it collects.

Universal Analytics is built on a session-based data model that is 15 years old.

This was before internet devices like smartphones were widely used.

UA measurement was built for independent sessions (a group of user interactions within a given time frame) on a desktop device, and user activities were tracked with cookies.

Fun fact: I learned from Charles Farina, head of innovation at Adswerve, that you can actually still implement GA JavaScript code from 15 years ago.

Yes, I’m talking about the original tracking code (Urchin).

And it still works today.

In the past few years, this old measurement methodology has become obsolete.

As much as we love Google Analytics, there are many examples of how it just does not work with the way users interact with our websites today.

Farina shared an example with conversions.

In Universal Analytics, goals are session-based. You cannot measure goals by user.

If a user watches four videos in one session, it can only count as one conversion.

In GA4, conversions (or goals) are event-based.

Cookieless World

Google Analytics works by setting cookies on a user’s browser when visiting your website.

Cookies allow a website to “remember” information about a visitor.

That information can be as simple as “this user has visited before” or more detailed, like how a user interacted with the site previously.

Cookies are widely used on the web. And they can be helpful for things like remembering what items you put in a cart.

However, cookies also pose a privacy risk because they share data with third parties.

As the world becomes more aware of privacy issues, users increasingly want to opt out of sharing their data.

And because more people opt out of sharing their data, Google Analytics cannot report on all the people who visit a website.

There is a growing gap in the data collected.

Google Analytics had to adapt to remain useful to website owners.

And they did.

GA4 is designed to fill in the gaps using machine learning and other protocols to create reports.

This is called “blended data.”

In the blog post about this change, Google explains:

“Because the technology landscape continues to evolve, the new Analytics is designed to adapt to a future with or without cookies or identifiers.

It uses a flexible approach to measurement, and in the future, will include modeling to fill in the gaps where the data may be incomplete.

This means that you can rely on Google Analytics to help you measure your marketing results and meet customer needs now as you navigate the recovery and as you face uncertainty in the future.”

Data Privacy

Data privacy is a big topic that deserves its own article in length. To oversimplify it, people want more control over their data and its use.

Laws such as GDPR and the California Consumer Privacy Act are enforcing this desire.

Google Analytics says that GA4 is designed with privacy at its core – but what does that mean?

All UA privacy settings will carry over, and we are getting new features.

For example, Google Analytics 4 does not store IP addresses, and it relies on first-party cookies, which supposedly keeps it compliant with privacy laws.

I encourage you to use this time to consider your data strategy and set the tone for your company’s data privacy policy, assess your digital footprint and consent management, and ensure compliance.

What Do These Changes Mean For My Business?

The second thing marketers want to know is, “How is GA4 different?”

Or really, “How will these changes affect my business?”

Don’t get too caught up in comparing Universal Analytics and GA4.

The numbers won’t match.

It’s a rabbit hole with no actionable or otherwise helpful outcome.

As Seiden pointed out, this is not just a platform upgrade.

It’s a completely new version of Google Analytics.

GA4 is a new data model and a new user interface.

Keep reading for a summary of key differences between UA and GA4 data and how they affect your business.

Changes in Data Modeling

The most important change is the way data is collected.

Universal Analytics uses a session-based data model (collection of user interactions within a given time frame) and collects data as various hit (user interaction) types within these sessions.

This is why watching four videos in one session only counts as one conversion in UA.

Google Analytics 4 is user-based and collects data in the form of events.

Each event has a unique name (event_name parameter) used to identify the event, with additional parameters to describe the event.
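As an illustration of that model (a sketch assuming gtag.js is installed with your GA4 measurement ID; the event name and parameters below are made-up examples):

```js
// GA4 data model: every hit is an event with a name plus optional parameters.
// 'video_watched' and its parameters are hypothetical examples.
gtag('event', 'video_watched', {
  video_title: 'Product demo',
  video_duration_seconds: 120
});
```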

For more on the differences between the two data models, see UA versus GA4 data in the Google help docs.

Spam Detection

Have you ever seen a giant spike in traffic in Universal Analytics or a bunch of random traffic sources that you couldn’t explain?

Spammers could send fake data to people’s Google Analytics accounts by using the Measurement Protocol.

As you can imagine, this created a big problem with inaccurate data.

Google has fixed this problem by only allowing hits with a secret key to send data to a GA4 property. This key is visible in your GA4 data stream settings but is not available publicly.
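For illustration, a GA4 Measurement Protocol hit has to carry both the measurement ID and that secret as query parameters, roughly like this (both values below are placeholders):

```js
// Sketch of a GA4 Measurement Protocol request; without a valid api_secret
// (from your data stream settings) the hit is not accepted into the property.
fetch('https://www.google-analytics.com/mp/collect' +
      '?measurement_id=G-XXXXXXXXXX&api_secret=YOUR_API_SECRET', {
  method: 'POST',
  body: JSON.stringify({
    client_id: 'demo-client-123',          // placeholder client identifier
    events: [{ name: 'tutorial_begin' }]   // example event name
  })
}).then(function (res) { console.log('Measurement Protocol status:', res.status); });
```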

Data Retention

Data retention refers to how long Google Analytics keeps disaggregated data. At the end of the retention period, the data is deleted automatically.

The default setting for data retention in Universal Analytics is 26 months. But you could choose a different time interval, from 14 months to “do not automatically expire.”

In GA4, you can choose to retain data for two months or 14 months.

At the end of the retention period, you keep the aggregated data in standard reports, but the disaggregated data used in Explore reports is no longer available.

What is aggregated versus disaggregated data?

Think of aggregated data as a summary used to look at website visitors as a whole.

And disaggregated data is dissected or broken down into smaller subsections, such as a specific audience or segment.

Shorter retention periods are not really a big deal.

You can still accomplish the same use cases while doing more to respect user data privacy.

You can still run (aggregated) standard reports to show how well you are doing compared to past performance.

And data from the most recent months is the most useful if you want to make predictions and take action.

User Interface: Reporting

GA4 reporting comes with a learning curve.

With Universal Analytics, there was an emphasis on pre-built reports. It was fairly easy and quick to navigate “done-for-you” reports.

Google Analytics 4 is oriented toward taking greater ownership of our data. With that comes the flexibility of custom reporting templates.

Because the data model has changed and the platform is more privacy-conscious, replicating some of the tasks you performed in Universal Analytics may not be possible.

As an agency or freelancer, you have an additional responsibility to communicate wins and opportunities to your accounts.

And they’re going to need time to learn GA4 or, more likely, rely on you to learn GA4.

To visualize the data in a more familiar way to your clients, I highly recommend Data Studio.

What Do I Need To Do Right Now?

There is no need to panic.

You have time to implement GA4 configuration, time to update business processes, and time to learn new reports.

With that said, GA4 needs to take priority on your roadmap.

Audit your existing analytics setup and create a GA4 configuration plan.

Setting up GA4 before July 2023 is mission-critical.

Start building historical data so that you can do a year-over-year analysis next year.

Once GA4 events are collected, get your team up to speed and update your processes.

A year from now, they will need to be comfortable using Google Analytics 4 to make marketing decisions.

Start planning team training sessions. SEJ rounded up the top educational guides and GA4 resources here.

Last but not least, make plans to extract your historical data from Universal Analytics before July 2023, for example by exporting it to BigQuery, which doesn’t cost anything aside from low storage fees.

Final Thoughts

You’re not just getting an upgrade when you switch to Google Analytics 4. You’re getting an entirely new way of doing analytics.

This solution is necessary to respect user data privacy and get actionable insights in a cookie-less world.

At the heart of this change is a new data model that makes GA4 different from what we have used in the past decade.

Right now, it’s important to configure GA4 and conversion events for year-over-year data when UA is sunset in July 2023.

After embracing the change, you might enjoy the flexibility and user insights with GA4.

Happy tracking!

More resources:

Featured Image: Paulo Bobita/Search Engine Journal

How To Take A Screenshot On iPhone With A Home Button

If you want to take a screenshot of an iPhone that has a Home button, then you’ll find the process to be super simple. In fact, taking a screenshot using an iPhone, iPod touch, or iPad is really easy, and the process is the same on all devices regardless of which model it is as long as they have a physical Home button to press.

Let’s jump right in and learn how to capture a picture of the device’s screen:

How to Capture Screen Shots with iPhone 8, iPhone 8 Plus, iPhone 7 Plus, iPhone 7, iPhone 6s, iPhone 6s Plus, iPhone SE, iPhone 6, iPhone 6 Plus, iPhone 5s, and earlier

To capture the screen shot of any iOS device with a Home button, just do the following:

Press the Power button and Home button simultaneously

When the screen flashes, a screenshot has been taken of whatever is on the screen in iOS

You just need to give a quick simultaneous press to both the Power and Home buttons. The screenshot is obvious because the iPhone or iPad screen flashes and a sound effect plays when it is successfully captured.

Your screenshots will automatically be stored in your iPhone Photos app. To view them, simply tap on Photos and you’ll find the screenshot at the very end of your Camera Roll or Albums view in Photos.

If you’re confused, just refer to the images in this post, which show the Power and Home buttons highlighted. The Power button is on the side of new iPhone models and the top of older iPhone models, whereas the Home button is located on the bottom middle of all devices.

This is where the Power button and Home button are on newer iPhone models (iPhone 6 and iPhone 6 Plus and later), including iPhone 8 Plus, iPhone 8, iPhone 7 Plus, iPhone 7, iPhone 6s, and iPhone 6s Plus:

Here’s where the Power button and Home button are on the iPhone 5S, iPhone 5, iPhone 4S, 4, 3GS, 3G, and 2G:

The important thing to remember is that taking screenshots works the same way on any iPhone with a Home button, meaning the iPhone 8, iPhone 8 Plus, iPhone 7 Plus, iPhone 7, iPhone 6s, iPhone 6s Plus, iPhone SE, iPhone 6, iPhone 6 Plus, iPhone 5s, iPhone 5, iPhone 4s, iPhone 4, iPhone 3GS, iPhone 3G, and the original iPhone all take screenshots the same way. This process is different on the newest iPhone models without Home buttons, which rely on different screenshot methods. If you have a newer device, you can learn how to screenshot iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max, and how to take screenshots on iPhone X, XR, XS, and XS Max, since those devices use the volume button rather than the Home button in the button combination for snapping screenshots.

I know this may seem like a beginner tip, but I was just asked how to do this the other day by someone I would consider extremely technically savvy, so perhaps it’s not as widely known as it should be, especially for recent iPhone converts. I run into the question of how to Print Screen on a Mac rather frequently too, but while the Mac may require a key combo to remember, iOS is even easier. This is in contrast to older versions of Android, which apparently involved installing the SDK first… hmm, that was changed in newer versions, however, so no matter what type of device you have, it’s pretty easy to take pictures of the screen nowadays.

Updated: 12/19/2024

