1) Ratings are higher in the fall than the viewing levels dictate they should be. (Because of something I referred to as "Early Fall Hype.")
2) Viewing levels are higher in the late spring than they "should" be because Nielsen changed its definition of "viewing level" during the middle of the season.
Now, here's how I'm gonna try to adjust for each of these!
Early Fall Hype
Last time I proved (if you have a really loose definition of "proved") that, despite viewing/ratings correlating fairly well throughout most of the rest of the season, there is something causing broadcast TV ratings to be higher early in the fall than their viewing levels suggest they should be. I'm not here to explain what exactly this is or why it happens, because I don't really have any way of doing that. Maybe it's the heavy promotion, maybe it's some sort of collective hunger for new programming, I dunno. I just want to try to figure out how big it is.
Let's bring back in the viewing vs. ratings table from last time (leaving out spring, which we'll get to next time):
| | Fall | F/W | Winter | W/S |
| --- | --- | --- | --- | --- |
| 18-49 PUT | 34.27 | 34.97 | 35.56 | 33.48 |
| 18-49 Rating | 2.67 | 2.57 | 2.65 | 2.46 |
| PUT/Rating Ratio | 12.83 | 13.62 | 13.42 | 13.59 |
As I said last time, there's a pretty strong linear correlation across the last three of these: the ratio between PUT and the ratings of this particular selection of shows stays pretty close to constant. But in the fall, the ratio is significantly lower; in other words, shows get higher ratings in the first six weeks of the season than overall viewing levels suggest they should.
What "should" these shows be rating? To get that, we'll get the average ratio for the other three sections (13.55) and divide the fall PUT by that. That gives us an expected demo rating of 2.53. With an actual average of 2.67 and an expected average of 2.53, that means the fall ratings "should be" about 5.2% lower than they actually are.
So, I don't really have a formula yet, but the first aspect will be that everything that airs in the first six weeks of the season will get a -5.2% multiplier to its "true strength." Factoring in this Early Fall Hype Factor, I should be able to eliminate the inflations of the early fall.
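To make the arithmetic concrete, here's a quick Python sketch running from the table above through the adjustment. The names (`baseline_ratio`, `deflate_early_fall`, etc.) are just labels I'm using for illustration, not pieces of the actual formula:

```python
# Season-section averages from the table above (18-49 PUT and demo ratings).
put = {"Fall": 34.27, "F/W": 34.97, "Winter": 35.56, "W/S": 33.48}
rating = {"Fall": 2.67, "F/W": 2.57, "Winter": 2.65, "W/S": 2.46}

# Average PUT/rating ratio across the three non-fall sections.
baseline_ratio = sum(put[s] / rating[s] for s in ("F/W", "Winter", "W/S")) / 3
print(round(baseline_ratio, 2))  # 13.55

# What fall "should" rate if it followed the same ratio.
expected_fall = put["Fall"] / baseline_ratio
print(round(expected_fall, 2))  # 2.53

# Early Fall Hype Factor: actual fall ratings run ~5.2% above expected.
hype_factor = (rating["Fall"] - expected_fall) / rating["Fall"]
print(round(hype_factor * 100, 1))  # 5.2

def deflate_early_fall(demo_rating):
    """Apply the -5.2% adjustment to a rating from the season's first six weeks."""
    return demo_rating * (1 - hype_factor)

print(round(deflate_early_fall(2.67), 2))  # 2.53
```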
More on Those Damn Methodologies
This *might* be the last time I ever have to talk about Nielsen's two different ways of calculating viewing levels. (But probably not.) This stuff bores even me, and that's really saying something, because I've found most of this material pretty interesting.
Update (8/5/11): In doing some early testing of the "final" True Strength formula, it's becoming clear that a lot of the True Strengths in the last few weeks of the season (the "New Methodology" period) are extremely inflated. I think the biggest reason is that those averages are being dragged way up by the big drops from the fairly abnormal Saturday Fox shows. I could just take those shows out, but it felt kind of slimy to remove stuff until I get what I want, so I decided to approach it from a different, more "theoretical" angle: comparing the half-hour PUT levels from the last two weeks of Old Methodology (the two that take place after Daylight Saving Time) against everything under the New Methodology. Here's what I came up with, by hour:
| | 8:00 | 9:00 | 10:00 | AVG |
| --- | --- | --- | --- | --- |
| Post-Meth PUT | 31.85 | 36.64 | 36.70 | 35.06 |
| Pre-Meth PUT | 30.85 | 34.20 | 33.66 | 32.91 |
| Diff | -3.1% | -6.6% | -8.3% | -6.1% |
Update (9/13/11): I've broken this down further, from hour-by-hour to half-hour-by-half-hour. Here's the new table:
| | 8:00 | 8:30 | 9:00 | 9:30 | 10:00 | 10:30 | AVG |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Post-Meth PUT | 30.14 | 33.55 | 35.86 | 37.42 | 37.31 | 36.10 | 35.06 |
| Pre-Meth PUT | 29.76 | 31.95 | 33.72 | 34.69 | 34.46 | 32.86 | 32.91 |
| Diff | -1.3% | -4.8% | -6.0% | -7.3% | -7.7% | -9.0% | -6.1% |
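For transparency, here's a small Python sketch of that comparison; the lists just hold the Pre-Meth and Post-Meth rows above, and the loop recomputes the Diff row (with tiny rounding drift versus the table, which was presumably built from unrounded PUTs):

```python
slots = ["8:00", "8:30", "9:00", "9:30", "10:00", "10:30"]
post_meth = [30.14, 33.55, 35.86, 37.42, 37.31, 36.10]  # New Methodology PUT averages
pre_meth = [29.76, 31.95, 33.72, 34.69, 34.46, 32.86]   # last two Old Methodology weeks (post-DST)

# Per-half-hour difference of Old vs. New Methodology viewing levels.
for slot, post, pre in zip(slots, post_meth, pre_meth):
    print(f"{slot}: {(pre - post) / post:+.1%}")

# Overall difference from the raw sums.
# Prints -6.2%; the table's -6.1% comes from the rounded column averages.
print(f"AVG: {(sum(pre_meth) - sum(post_meth)) / sum(post_meth):+.1%}")
```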
As expected, this "theoretical" calculation indicates I should be adjusting for the New Methodology much less than I was previously. The downside is that the Old Methodology sample is pretty small: just two weeks. The upside is that it's all after Daylight Saving Time, so it's closer to something I can use across the whole year (since the Post-Meth and Pre-Meth numbers are relatively apples-to-apples).
Maybe the actual adjustment should be a bit larger; this approach assumes everything after the methodology change has the same "true viewing" as those first two weeks after DST, but in reality it probably keeps declining a bit. That may be something I take a look at as more New Methodology viewing levels come in early next season. However, most True Strengths no longer make drastic moves across the methodology change, which is about all I can ask for.
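In practice, the table boils down to a set of per-half-hour conversion factors for turning a New Methodology PUT into an Old Methodology equivalent. A minimal sketch of how that conversion might look (the function and dictionary names are mine, not part of the published formula):

```python
# New -> Old Methodology adjustment by half hour, taken from the Diff row above.
new_to_old_diff = {
    "8:00": -0.013, "8:30": -0.048, "9:00": -0.060,
    "9:30": -0.073, "10:00": -0.077, "10:30": -0.090,
}

def to_old_methodology(put_new, slot):
    """Approximate the Old Methodology PUT for a New Methodology half-hour number."""
    return put_new * (1 + new_to_old_diff[slot])

# e.g. a 36.0 New Methodology PUT at 10:00 reads as roughly 33.2 under the old definition.
print(round(to_old_methodology(36.0, "10:00"), 1))  # 33.2
```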
This change doesn't require me to go back and redo very much, since there are only minor adjustments at 8:00, so most of the Competition stuff isn't greatly changed. I have gone back and changed the "constant PUT" in the formula from 33.75 to 34.12, because viewing levels got raised a bit after 3/27/11. I've also changed "normal competition" from 0.24*PUT to 0.23*PUT on weekends and from 0.31*PUT to 0.30*PUT on weekdays (though I think that adjustment mainly came out of my decision to count sports less, which drove broadcast PUT levels way down). If you don't remember exactly what I'm talking about, just look for those numbers in the next edition of the "Formula So Far."
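For reference, here's roughly where those numbers slot in, as a hedged sketch; the names are just labels I'm using here, and the real definitions live in the "Formula So Far" posts:

```python
CONSTANT_PUT = 34.12  # was 33.75 before viewing levels got bumped up after 3/27/11

def normal_competition(put, weekend=False):
    """'Normal competition' as a fraction of PUT (with sports now counted less)."""
    return (0.23 if weekend else 0.30) * put  # previously 0.24 / 0.31
```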
The only part of the old methodology diatribe I'm saving is the note below, which explains why I'll keep making Old Methodology conversions even next season, when everything's New:
*- As I said in a previous post, the only real reason to convert New Methodology PUT to Old Methodology is so we can put the spring 2011 ratings on a level playing field with the rest of the 2010-11 season. It shouldn't really be an issue in 2011-12, when everything should be New Methodology. But I think I'm going to keep this conversion in the final True Strength formula and try to convert all PUT calculations in 2011-12 to "Old Methodology," because I think the old definition of "viewing levels" correlates more closely with Live + SD ratings, since it's about the tendency of a show in a given timeslot to get viewed.
Though most of this stuff seems to work relatively cleanly, we'll next take a look at another way of tracking viewing vs. ratings: the big "events." We've already looked at how those events affect viewing, but does that match up with ratings?