Tracking Google Knowledge Graph Algorithm Updates & Volatility
Like the core algorithm, Google’s Knowledge Graph periodically updates.
But little has been known about how, when, and what it means — until now.
I believe these updates consist of three things:
Algorithm tweaks.
An injection of curated training data.
A refresh of Knowledge Graph’s dataset.
My company, Kalicube, has been tracking Google’s Knowledge Graph both through the API and through knowledge panels for several years.
When I wrote about "The Budapest Update" in 2019, for example, I had seen a massive increase in confidence scores. Nothing that seismic on the scores has happened since.
However, the scores for individual entities fluctuate a great deal and typically over 75% will change during any given month.
The exceptions are December 2019, and the last four months (I’ll come to that later).
From July 2019 to June 2020, we were tracking monthly (hence the monthly figures).
Since July 2020, we have been tracking daily to see if we can spot more granular patterns. I hadn’t seen any until a conversation with Andrea Volpini from WordLift sent me down a rabbit hole…
And there, I discovered some truly stunning insights.
Note: This article is specifically about the results returned by the API and the insights they give us into when Google updates its Knowledge Graph – including the size, nature, and day of the update — which is a game-changer if you ask me.
Major Knowledge Graph Updates Over the Last 8 Months
Sunday, July 12, 2020.
Monday, July 13, 2020.
Wednesday, August 12, 2020.
Saturday, August 22, 2020.
Wednesday, September 9, 2020.
Saturday, September 19, 2020.
Sunday, October 11, 2020.
Thursday, February 11, 2021.
Thursday, February 25, 2021.
You can check the updates on Kalicube’s Knowledge Graph Sensor here (updated daily).
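For readers who want to run this kind of check themselves, here is a minimal Python sketch (my own illustration, not Kalicube's tooling) that builds a query URL for the public Knowledge Graph Search API and extracts each entity's `resultScore` — the confidence score discussed throughout this article. The API key is a placeholder you would replace with your own.

```python
import urllib.parse

def build_kg_query_url(query: str, api_key: str, limit: int = 5) -> str:
    """Build a request URL for the Knowledge Graph Search API."""
    params = urllib.parse.urlencode(
        {"query": query, "key": api_key, "limit": limit}
    )
    return f"https://kgsearch.googleapis.com/v1/entities:search?{params}"

def parse_scores(payload: dict) -> list:
    """Extract (entity id, name, resultScore) triples from an API response."""
    return [
        (
            item["result"]["@id"],           # e.g. "kg:/m/0abc"
            item["result"].get("name", ""),
            item.get("resultScore", 0.0),    # the "confidence score"
        )
        for item in payload.get("itemListElement", [])
    ]
```

Fetch the URL with any HTTP client, decode the JSON, and feed it to `parse_scores`; comparing the same entity's `resultScore` from one day to the next is what surfaces the updates described below.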
For anyone following the core “blue link” algorithm updates, you might notice that the two were out of sync up until the February 2021 updates.
The exceptions I found are (with my wild theorizing in italics):
Reach, Scope & Scale of These Updates
We can usefully consider three aspects of an update:
The magnitude (reach/breadth), which is the percentage of entities affected (so far, between 60-80%).
The amplitude (scope/height), or the change (up or down) in confidence scores on a micro, per entity level (the average amplitude for the middle fifty has been around 10-15%).
The shift (scale/depth), which is the change in confidence scores on a macro level (the Budapest update aside, this is less than 0.1%).
What We Found by Tracking the Knowledge Graph Daily
The Knowledge Graph has very regular updates.
These updates occur every 2 to 3 weeks but with long pauses at times, as you can see above.
The updates are violent and sudden.
We see that 60-80% of entities are affected, and the changes are probably immediate across the entire dataset.
Updates to individual entities continue in between.
Any individual entity can see its confidence score increase or decrease on any day, whether there is an update or not. It can disappear (in a virtual puff of smoke) and information about that entity can change at any time between these major updates to the Knowledge Graph algorithm and data.
There are extreme outlying cases.
Individual entities react very differently. In every update (and even in between), some changes are extreme. A confidence score can increase multifold in a day, or drop just as sharply. And an entity can disappear altogether (when it does reappear, it has a new ID).
There is a ceiling.
The average confidence score for the entire dataset rarely changes by more than one-tenth of one percent per day (the shift), even on days where a major update occurs.
It appears there may be a ceiling to the scores the system can attribute, presumably to stop the more dominant entities from completely crowding out the rest (thanks Jono Alderson for that suggestion).
Following the massive raising of that ceiling during the Budapest update, the ceiling has not moved in any meaningful manner since.
Every update since Budapest affects both reach and scope. None since Budapest has triggered a major shift in scale.
The ceiling may never change again. But then it may. And if it does, that will be big. So stay tuned (and ideally, be prepared).
After a great deal of experimentation, we have isolated and excluded those extreme outliers.
We do track them and continue to try to see any obvious pattern. But that is a story for another day.
How We Are Measuring
We have isolated each of the three aspects of the changes and measure them daily on a dataset of 3,000 entities. We measure:
How many entities saw an increase or decrease (reach/breadth/magnitude).
How significant that change was on a micro-level (scope/height/amplitude).
How significant the change was to the overall score (scale/depth/shift).
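As a rough illustration of how these three measures can be computed — this is my own sketch, not Kalicube's actual code — given two daily snapshots mapping entity IDs to confidence scores:

```python
# "before" = yesterday's snapshot, "after" = today's; both map
# entity id -> confidence score.

def reach(before: dict, after: dict) -> float:
    """Magnitude: fraction of entities whose score changed at all."""
    common = before.keys() & after.keys()
    changed = sum(1 for e in common if before[e] != after[e])
    return changed / len(common)

def amplitude(before: dict, after: dict) -> float:
    """Scope: average per-entity relative change, as a percentage."""
    common = before.keys() & after.keys()
    rel = [abs(after[e] - before[e]) / before[e] for e in common]
    return sum(rel) / len(rel) * 100

def shift(before: dict, after: dict) -> float:
    """Scale: relative change in the dataset-wide average score, as a percentage."""
    common = before.keys() & after.keys()
    avg_before = sum(before[e] for e in common) / len(common)
    avg_after = sum(after[e] for e in common) / len(common)
    return (avg_after - avg_before) / avg_before * 100
```

On an update day you would typically see a large reach (60-80%), a noticeable amplitude (10-15% for the middle fifty), and yet a shift of less than 0.1%.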
What Is Happening?
One thing is clear: these updates have been violent, wide-ranging, and sudden.
Someone at Google had (and perhaps still has) “a big red button.”
Bill Slawski pointed me to a Bing patent that describes exactly that process.
The last two updates on Thursdays smack of the developers’ mantra “never change anything on a Friday if you don’t want to work the weekend.”
A Google Knowledge Graph Dance
Slawski suggested a concept to me that I think speaks volumes. Google has been playing “musical chairs” with the data – the core algorithms and the Knowledge Graph algorithm have very different needs.
The core algorithms have a fundamental reliance on popularity (the probability that inbound links lead to your site), whereas the Knowledge Graph necessarily needs to put that popularity/probability to one side and look at reliability/probable truthfulness/authority — in other words, confidence.
The core algorithms focus on strings of characters/words, whereas the Knowledge Graph relies on the understanding of the entities those same words represent.
It is possible that the updates of the core and Knowledge Graph algorithms were necessarily out of sync, since Google was having to “reorganize” the data for each approach every time they wanted to update either, then switch back.
Remember the Google Dance back in the day?
At the time it was simply a batch upload of fresh link data. This could have been something similar.
As of February 2021, Is the Dance Over?
It remains to be seen if that is now a “solved problem.”
I would imagine we’ll see a few more out-of-sync dances and a few more weird bugs due to updates of each that contradict each other.
But I expect that by the end of 2021, the two will be merged to all intents and purposes, and entity-based search will be a reality that we, as marketers, can productively and measurably leverage.
However the algorithms evolve and progress, the underlying shift is seismic.
Classifying the corpus of data Google possesses into entities and organizing that information according to confidence in its understanding of those entities is a huge change from organizing that same data by pure relevancy (as has been the case up until now).
The Convergence of the Algorithms?
Opinion: The following things make me think that winter 2020/2021 was the moment Google truly implemented the switch “from strings to things” (after five years’ worth of PR):
The three-month hiatus from October to February when the core algorithm was relatively active, but the Knowledge Graph updates were very clearly paused.
The announcement that the topic layer was active in November.
The introduction of passage-based indexing to the core algorithm in February that appears to focus on extracting entities.
The seeming convergence of the updates (this is fresh; we only have two updates to judge from, and our tracking might later prove me wrong on this one, of course).
The Knowledge Graph Is a Living Thing
The Knowledge Graph appears to be based on a data-lake approach rather than the data-river approach of today’s core algorithm (delayed reaction versus immediate effect).
However, the fact that entities change and move between these major updates and the fact that the updates appear to be converging suggests that we aren’t far from a Knowledge Graph algorithm that not only works on fresh data rivers but is also integrated as part and parcel of the core algorithm.
Here’s a specific example that maps the updates to changes in the confidence score for my name (one of my experiments).
That vertiginous drop doesn’t map to an update.
It was a blunder on my part and shows that the updates to individual entities are ongoing, and can be extreme!
Read about that particular disaster here in my contribution to an article by SE Ranking.
The Future
My take: The “big red button” will be progressively retired and the violent and sudden updates will be replaced by changes and shifts that are smoother and less visible.
The integration of entities into the core blue links algorithms will be increasingly incremental and impossible to track (so let’s make the most of it while we can).
It is clear that Google is moving rapidly toward a quasi-human understanding of the world and all its algorithms will increasingly rely on its understanding of entities and its confidence in its understanding.
The SEO world will need to truly embrace entities and give more and more focus to educating Google via its Knowledge Graph.
Conclusion
In this article, I have purposefully stuck to things I am fairly confident will prove to be true.
I have hundreds of ideas, theories, and plans, and my company continues to track 70,000+ entities on a monthly basis — over 3,000 daily.
I am also running over 500 active experiments on the Knowledge Graph and knowledge panels (including on myself, the blue dog, and the yellow koala), so expect more news soon.
In the meantime, I’m just hoping Google won’t cut my access to the Knowledge Graph API!
More Resources:
Image Credits
All screenshots taken by author, March 2021
Lawsuit Asks Google To Reveal Algorithm
KinderStart.com, a “search engine” for parents of young children, has sued Google on the basis that its organic search ranking apparently declined in March 2005, allegedly because it was unfairly “penalized” by Google. According to this Reuters article, the site lost “70 percent” of its organic traffic when it was “downgraded.” The lawsuit seeks financial damages and asks that Google reveal the methodology behind its PageRank algorithm.
Having read only the coverage and not the actual complaint, I’m struck by a couple of things that are amazing about this lawsuit. First, I think it’s dead on arrival. I think the company probably knows this and either hopes for a nuisance settlement or is looking for publicity.
The suit contends that its rankings did not change on Yahoo!, MSN or other engines. But it seems to be saying that these other rankings don’t matter, because Google is responsible for the lion’s share of its traffic. There’s a sense of both dependence and entitlement here — and how search engine rankings can be “life or death” for an online business.
If Google were truly the only search engine and there were some unjustly punitive action that was directed toward KinderStart (and provable), perhaps this claim might have a chance. (Even then the damages would be highly speculative.) But given that there is a competitive marketplace and other engines can equally be used to find KinderStart, it’s unlikely that any court would rule in KinderStart’s favor. Think about the precedent: It would be the end of organic search — every other site would litigate when its ranking fell. Finally, given that it’s a trade secret, KinderStart is also audacious in asking for PageRank to be revealed. There’s zero chance that this aspect of the suit will succeed either.
Greg Sterling, Local Search and Convergence Columnist – Greg Sterling is managing editor of The Kelsey Group who also writes the Local Media Journal Blog.
Button Click Tracking With Google Tag Manager
🚨 Note: If you’re new to Google Tag Manager, check out our Google Tag Manager tutorial and master the basics.
Before we get started with the tutorial, we need a little theory on auto-event triggers in Google Tag Manager: before you start installing event trackers, you should know that Google Tag Manager can deploy auto-event triggers for you.
The process consists of Google Tag Manager → Auto Event Trigger → Event Tag → Google Analytics.
Are you new to Google Tag Manager? Learn the basics in our Google Tag Manager Self Study Guide!
The Auto Event Trigger
Google Tag Manager’s Auto Event Trigger has two functionalities: the listener functionality and the filter functionality. When these functionalities are combined, they are able to determine whether a tag (such as an event tag) is deployed and later transfer that information to Google Analytics.
Listener Functionality
Filter Functionality
The filter functionality will then determine whether this event is the right event, determine whether it’s true or false, and eventually trigger your tag to transfer the information to Google Analytics (which could also be Facebook Analytics or AdWords).
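To make the listener/filter split concrete, here is a conceptual Python model — GTM itself is configured in its web UI, not in code, and the class name `single_add_to_cart_button` follows the example used later in this tutorial:

```python
# The listener turns a raw click into a GTM-style auto event with built-in
# variables; the filter decides whether the trigger (and so the tag) fires.

def click_listener(element_classes: str) -> dict:
    """Listener: capture a click as a GTM-style auto event."""
    return {"event": "gtm.click", "gtm.elementClasses": element_classes}

def filter_matches(event: dict, variable: str, op: str, value: str) -> bool:
    """Filter: evaluate a trigger condition such as 'Click Classes contains ...'."""
    actual = event.get(variable, "")
    if op == "contains":
        return value in actual
    if op == "equals":
        return actual == value
    return False

# A click on the add-to-cart button matches; a click elsewhere would not.
fires = filter_matches(
    click_listener("button single_add_to_cart_button"),
    "gtm.elementClasses", "contains", "single_add_to_cart_button",
)
```

Only when the filter evaluates to true does the tag fire and pass the event on to Google Analytics.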
To begin with an example, here is our Demoshop website where Google Tag Manager is installed.
If you want to make sure that Google Tag Manager is actually installed, we can always look in our Google Tag Assistant for Google Chrome.
Reload the page and wait for a small Google Tag Manager console to pop up at the bottom of the screen.
Choosing A Trigger Type
Open another page on your website by applying the same technique as above to open it in a new tab.
Trigger the Event You Want to Track
Refine the Filter
There are several matching options on the second dropdown menu, such as RegEx, CSS Selector, and so on. But to make things easy, simply choose the contains option.
To complete the Trigger Configuration, add the value of the variable you want to track. In this case, I will add the single_add_to_cart_button.
Copy the value of the variable and paste it into the last text box. Then, hit save.
Connect Trigger to a Tag
Then, rename the tag.
Select Event for the Track Type and type Add to Cart on the Action text box.
Now that you’ve set your tag’s different aspects, you have to define where to send all of this. If you already have a Google Analytics account set on Google Tag Manager, you can simply choose that option. But if not, override settings in this tag and manually input your tracking ID.
Copy your Tracking ID and paste it on the Tracking ID text box in Google Tag Manager.
To find out what had caused the missing tags, we need to inspect the Google Analytics event tag for errors in the assigned trigger.
To help correct this mistake, go over to the Triggers tab on Google Tag Manager’s default workspace.
Hit the refresh button.
Using Preview Mode
Using Tag Assistant
Through Google Tag Assistant, you can see that one event tag got fired.
Using Real-Time Report in Google Analytics
On the Events tab, you’ll see a new event entering your Google Analytics account.
Go back to the web page and observe how the Google Analytics tag has been deployed on the Google Tag Manager console.
Facebook Pixel
Now we can also deploy other tags: because that trigger is already prepared in Google Tag Manager, we can reuse it.
So, for example here, I have a Facebook event that sends over a track event, add to cart, to Facebook.
Google Ads TagSimilar to Facebook Events, Google Ads can have tagged events via Google Tag Manager as well.
Hit refresh.
Go back to the web page and reload.
You can also see the three events here in our Google Tag Assistant.
We can also check via our Facebook Pixel Helper where we can see that the add to cart event has been received.
FAQ Summary
If you are new to Google Tag Manager, then we encourage you to sign up for our free GTM course.
Sign up to the FREE GTM for Beginners Course…
Google Travel Updates How It Recommends Flights And Hotels
Google has updated its travel portal to include pandemic-relevant information. Google’s travel portal now shows Travel Trends that reveal additional information about flights and hotels, changing how decisions are made.
Travel Trends
Travel Trends shows consumers multiple kinds of information that take the COVID-19 pandemic into account.
The changes improve on pandemic related features already in search and Google Travel.
Google already provides Covid-19 related warnings about destinations in the regular search:
The new changes affect the Google Travel portal and give additional information on a more granular level.
For example, Google Travel will show useful information like hotel and flight availability for each destination.
Hotel and Flight Availability Information
According to Google:
“In the next week, you’ll see the percentage of open hotels with availability and flights operating at the city or county level based on Google Flights and Hotels data from the previous week.
When you visit google.com/travel and tap on a trip you’re planning, or search for hotels and things to do, you’ll now see trendlines for hotel and flight availability. Links to additional local resources, including the number of COVID-19 cases, are provided as well.”
The context of the user interface is hotels. So the interface starts out showing hotel related information.
Screenshot Of Hotel and Flight Availability
Travel Advisory Warning
As you can see in the screenshot below, Google says that Las Vegas is trending upward.
Screenshot of Google Warning Shows Las Vegas Is Trending for COVID-19
Free Cancellation Filter
The Google travel portal has also added a filter that, when toggled, shows only hotels that offer free cancellation.
Google added this filter to help travelers who might change their mind because of Covid-19 considerations.
According to Google:
“Due to the uncertainty around COVID-19, people often want flexibility when making travel plans. Many hotels and vacation rentals now offer free cancellation to give travelers more confidence when planning trips. Search for a hotel, and later this month a vacation rental, on google.com/travel and filter to see only rooms or properties with free cancellation policies.”
A Lesson in Being Useful
Google Travel and COVID-19 Relevance
Google’s travel portal is an example of how to be useful and relevant to travelers.
Whatever their reasoning, competing travel sites do not provide this information themselves.
Google takes a different approach by providing information that is useful to users. By doing that, Google keeps users on its site and also makes them happy, thereby building loyalty.
Google has failed at a lot of things, like building a social network.
Those failures demonstrate that it’s not enough for Google to “favor” its own properties in order to beat its competitors. It’s not that simple; it takes more than that.
If other companies focused more on keeping their users informed and happy, the loyalty that creates would keep them from running to Google.
Being relevant and keeping users happy is what keeps users returning to Google; it’s what Google does best.
Google’s new travel portal is Google doing what they do best, organizing the world’s information and making it useful.
Citations
Official Google Announcement
Make Travel Decisions with Confidence
Google Analytics 4 Faqs: Stay Calm & Keep Tracking
On March 16th, 2022, Google Analytics shocked the marketing industry by announcing that Universal Analytics would stop processing hits in July 2023.
This didn’t go over so well.
Some marketers are unhappy with the user interface; others are frustrated that GA4 does not have key features.
Many are still in the denial phase – besides, isn’t it still in beta?
Let’s take a step back and answer the burning questions here:
Why is this happening?
What do these changes mean?
What do I need to do right now?
Why Universal Analytics Is Updating To Google Analytics 4
Many marketers have built business processes around Universal Analytics and want to know why this change is happening.
So, I asked former Googler Krista Seiden, who helped build GA4 and is also the founder of KS Digital, “Why is this GA4 update happening?”
Seiden explained that GA4 has actually been in development for many years.
Originally, it came out as a public beta called App+Web, and in October 2020, it dropped the beta label and was rebranded as GA4.
“GA4 isn’t so much an update, but an entirely new way of doing analytics – set up to scale for the future, work in a cookieless world, and be a lot more privacy-conscious,” Seiden explained.
Google’s announcement blog was entitled, “Prepare for the future with Google Analytics 4.”
… for the future.
We keep hearing this; what does “for the future” mean?
When I read Google documentation and chatted with analytics experts, I noticed three main themes or ways that GA4 prepares your business for the future:
updated data model,
works in a cookieless world,
and privacy implications.
Let’s unpack each of these.
Data Model
A data model tells Google Analytics what to do with the site visitor information it collects.
Universal Analytics is built on a session-based data model that is 15 years old.
This was before internet devices like smartphones were widely used.
UA measurement was built for independent sessions (a group of user interactions within a given time frame) on a desktop device, and user activities were tracked with cookies.
Fun fact: I learned from Charles Farina, head of innovation at Adswerve, that you can actually still implement GA JavaScript code from 15 years ago.
Yes, I’m talking about the original tracking code (Urchin).
And it still works today.
In the past few years, this old measurement methodology has become obsolete.
As much as we love Google Analytics, there are many examples of how it just does not work with the way users interact with our websites today.
Farina shared an example with conversions.
In Universal Analytics, goals are session-based. You cannot measure goals by user.
If a user watches four videos in one session, it can only count as one conversion.
In GA4, conversions (or goals) are event-based.
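The difference is easy to illustrate. In this sketch (illustrative only), a user completes the same video four times in one session; the UA-style count credits one conversion, while the GA4-style count credits four:

```python
# One session in which the user completes the same video four times.
sessions = [{"session_id": "s1", "events": ["video_complete"] * 4}]

def ua_goal_count(sessions: list, goal_event: str) -> int:
    """UA-style: a goal converts at most once per session."""
    return sum(1 for s in sessions if goal_event in s["events"])

def ga4_conversion_count(sessions: list, conv_event: str) -> int:
    """GA4-style: every matching event counts as a conversion."""
    return sum(s["events"].count(conv_event) for s in sessions)
```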
Cookieless World
Google Analytics works by setting cookies on a user’s browser when visiting your website.
Cookies allow a website to “remember” information about a visitor.
That information can be as simple as “this user has visited before” or more detailed, like how a user interacted with the site previously.
Cookies are widely used on the web. And they can be helpful for things like remembering what items you put in a cart.
However, cookies also pose a privacy risk because they share data with third parties.
As the world becomes more aware of privacy issues, users increasingly want to opt out of sharing their data.
And because more people opt out of sharing their data, Google Analytics cannot report on all the people who visit a website.
There is a growing gap in the data collected.
Google Analytics had to adapt to remain useful to website owners.
And they did.
GA4 is designed to fill in the gaps using machine learning and other protocols to create reports.
This is called “blended data.”
In the blog post about this change, Google explains:
“Because the technology landscape continues to evolve, the new Analytics is designed to adapt to a future with or without cookies or identifiers.
It uses a flexible approach to measurement, and in the future, will include modeling to fill in the gaps where the data may be incomplete.
This means that you can rely on Google Analytics to help you measure your marketing results and meet customer needs now as you navigate the recovery and as you face uncertainty in the future.”
Data Privacy
Data privacy is a big topic that deserves its own full-length article. To oversimplify it, people want more control over their data and its use.
Laws such as GDPR and the California Consumer Privacy Act are enforcing this desire.
Google Analytics says that GA4 is designed with privacy at its core – but what does that mean?
All UA privacy settings will carry over, and we are getting new features.
For example, Google Analytics 4 does not store IP addresses, and GA4 relies on first-party cookies, which supposedly keeps it compliant with privacy laws.
I encourage you to use this time to consider your data strategy and set the tone for your company’s data privacy policy, assess your digital footprint and consent management, and ensure compliance.
What Do These Changes Mean For My Business?
The second thing marketers want to know is, “How is GA4 different?”
Or really, “How will these changes affect my business?”
Don’t get too caught up in comparing Universal Analytics and GA4.
The numbers won’t match.
It’s a rabbit hole with no actionable or otherwise helpful outcome.
As Seiden pointed out, this is not just a platform upgrade.
It’s a completely new version of Google Analytics.
GA4 is a new data model and a new user interface.
Keep reading for a summary of key differences between UA and GA4 data and how they affect your business.
Changes in Data Modeling
The most important change is the way data is collected.
Universal Analytics uses a session-based data model (collection of user interactions within a given time frame) and collects data as various hit (user interaction) types within these sessions.
This is why watching four videos in one session only counts as one conversion in UA.
Google Analytics 4 is user-based and collects data in the form of events.
Each event has a unique name (event_name parameter) used to identify the event, with additional parameters to describe the event.
For more on the differences between the two data models, see UA versus GA4 data in the Google help docs.
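Conceptually, a GA4 event is just a name plus parameters that describe it. A sketch (the parameter names follow Google's recommended ecommerce events, but the values are invented):

```python
# Illustrative GA4 event: the event_name identifies the event, and the
# parameters describe it.
event = {
    "name": "add_to_cart",
    "params": {
        "currency": "USD",
        "value": 29.90,
        "items": [{"item_id": "SKU_123", "item_name": "T-shirt"}],
    },
}
```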
Spam Detection
Have you ever seen a giant spike in traffic in Universal Analytics, or a bunch of random traffic sources that you couldn’t explain?
Spammers could send fake data to people’s Google Analytics accounts by using the Measurement Protocol.
As you can imagine, this created a big problem with inaccurate data.
Google has fixed this problem by only allowing hits with a secret key to send data to a GA4 property. This key is visible in your GA4 data stream settings but is not available publicly.
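The Measurement Protocol for GA4 illustrates this: a hit must carry both a `measurement_id` and the property's `api_secret`, or it is discarded. A minimal sketch that builds such a request without sending it (the IDs below are placeholders):

```python
import json
import urllib.parse

def build_mp_request(measurement_id: str, api_secret: str,
                     client_id: str, events: list) -> tuple:
    """Build (url, body) for a GA4 Measurement Protocol hit.

    Hits sent without a valid api_secret are dropped, which is what
    closes the old Measurement Protocol spam loophole.
    """
    query = urllib.parse.urlencode(
        {"measurement_id": measurement_id, "api_secret": api_secret}
    )
    url = f"https://www.google-analytics.com/mp/collect?{query}"
    body = json.dumps({"client_id": client_id, "events": events}).encode()
    return url, body
```

POSTing the body to the URL with any HTTP client sends the event; without the secret from your data stream settings, GA4 simply ignores the hit.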
Data Retention
Data retention refers to how long Google Analytics keeps disaggregated data. At the end of the retention period, the data is deleted automatically.
The default setting for data retention in Universal Analytics is 26 months. But you could choose a different time interval, from 14 months to “do not automatically expire.”
In GA4, you can choose to retain data for two months or 14 months.
At the end of the retention period, you keep the aggregated data in standard reports, but the disaggregated data used in Explore reports is no longer available.
What is aggregated versus disaggregated data?
Think of aggregated data as a summary used to look at website visitors as a whole.
And disaggregated data is dissected or broken down into smaller subsections, such as a specific audience or segment.
Shorter retention periods are not really a big deal.
You can still accomplish the same use cases while doing more to respect user data privacy.
You can still run (aggregated) standard reports to show how well you are doing compared to past performance.
And data from the most recent months is the most useful if you want to make predictions and take action.
User Interface: Reporting
GA4 reporting comes with a learning curve.
With Universal Analytics, there was an emphasis on pre-built reports. It was fairly easy and quick to navigate “done-for-you” reports.
Google Analytics 4 is oriented toward taking greater ownership of our data. With that comes the flexibility of custom reporting templates.
Because the data model has changed and the platform is more privacy-conscious, replicating some of the tasks you performed in Universal Analytics may not be possible.
As an agency or freelancer, you have an additional responsibility to communicate wins and opportunities to your accounts.
And they’re going to need time to learn GA4 or, more likely, rely on you to learn GA4.
To visualize the data in a more familiar way to your clients, I highly recommend Data Studio.
What Do I Need To Do Right Now?
There is no need to panic.
You have time to implement GA4 configuration, time to update business processes, and time to learn new reports.
With that said, GA4 needs to take priority on your roadmap.
Audit your existing analytics setup and create a GA4 configuration plan.
Setting up GA4 before July 2023 is mission-critical.
Start building historical data so that you can do a year-over-year analysis next year.
Once GA4 events are collected, get your team up to speed and update your processes.
A year from now, they will need to be comfortable using Google Analytics 4 to make marketing decisions.
Start planning team training sessions. SEJ rounded up the top educational guides and GA4 resources here.
Last but not least, make plans to extract historical data from Universal Analytics before July 2023. Exporting to BigQuery doesn’t cost anything aside from low storage fees.
Final Thoughts
You’re not just getting an upgrade when you switch to Google Analytics 4. You’re getting an entirely new way of doing analytics.
This solution is necessary to respect user data privacy and get actionable insights in a cookie-less world.
At the heart of this change is a new data model that makes GA4 different from what we have used in the past decade.
Right now, it’s important to configure GA4 and conversion events for year-over-year data when UA is sunset in July 2023.
After embracing the change, you might enjoy the flexibility and user insights with GA4.
Happy tracking!
More resources:
Featured Image: Paulo Bobita/Search Engine Journal
Race To Knowledge: Putting Project
“It builds a better workforce for us,” says Ralph Dobson, senior technical services engineer at the electric company. “What we’re trying to do is get students to understand more about electricity and what it’s like to work as a project team.” The work doesn’t involve “just the fun part” of getting ready for a race, he adds. The students do write-ups of their design, budgets, schedules, and work. “It’s just like a real job. The boss gives you so much to spend, and that’s all you can spend.”
Students from West Hawaii Explorations Academy, Hawaii’s first charter public high school, have competed in the race for four years — first as a school within a larger school in Kona on the Big Island and then as a separate institution headquartered on the grounds of the Natural Energy Laboratory of Hawaii. In fact, the students’ enthusiasm for the interdisciplinary electric car race and a previous solar-car competition gave WHEA founder Bill Woerner the idea of starting a school devoted to project-based learning.
West Hawaii Explorations Academy student Quinn Keogh works under the guidance of volunteer mentor Bill McKown.
Credit: Edutopia
The Team to Beat
Credit: Edutopia
In 2001, WHEA students were returning to the race as the team to beat. The school’s team won the state championship in 2000, even with the handicap of making its own parts because of the limited stock in local hardware stores. Students built the frame out of rejected carbon-fiber sailboard masts, cut and welded pedals and T-joints, designed and built the steering system, and lashed aluminum rods together to build the car’s canopy.
The 2001 car — futuristically sleek, snug, and covered with an ironed-on fabric that took seven coats of bright red paint — was the result of months of work under the guidance of Bill McKown, a retired director of research and development at General Mills and a volunteer mentor at WHEA. Like other teams, WHEA’s received a basic kit from the Hawaii Electric Company that included a motor, a controller, a potentiometer, an emergency disconnect switch, a fuse, a contact, gears, a steering kit, and a brake kit.
The students buy their own batteries — two 12-volt batteries, in WHEA’s case — and put the car together. The work requires a range of academic applications. Students do math equations. They study electricity, aerodynamics, and the effect of weight and strength on car performance. But the student work doesn’t stop there. Extensive documentation is required of the design and building process, the business and community contacts, and money raised and spent. Total spending for the car is limited to $2,500. An oral presentation also is required based on questions picked at random, and students create a Web site.
Champions in 2000, the WHEA team finished fourth in 2001.
Credit: Edutopia
Real-World Lessons
Credit: Edutopia
Throughout, the young builders are doing what people in the real world do — bouncing ideas off one another, researching, trying proposals that sound good, failing occasionally, and then coming up with alternatives. Vehicles are judged on design, construction, safety, appearance, aerodynamic design, and use of recycled materials.
McKown says he believes the main benefit — aside from the fact that students will remember what they’re learning because they’re using that knowledge in a practical way — is that it gives them experience in completing a job on time. “A lot of people emerge into adulthood and have never had to complete a multidisciplinary project on time with all the uncertainties of making a device work,” says McKown. “Putting things together often requires a type of disciplined thinking that gives instant feedback on whether you can follow through to complete a job in a timely way.”
The car that completes the most laps wins. In 2001, that wasn’t to be for WHEA’s team. The car had some brake problems, and the school came in fourth. But the team vowed to return.
Diane Curtis is a veteran education writer and former editor for The George Lucas Educational Foundation.