I’m listening to a fascinating podcast on Mouse Chat where Len talks about many interesting things, including the accuracy of their models. Len said something like: the predictions between 10 am and 5 pm are typically accurate within minutes. I’ve noticed that the predictions are off around opening, and I’m unclear how good these predictions are later in the day (say, after 5). By saying the predictions are solid between 10 and 5, is Len also saying (indirectly) that they are less good outside that window? If so, any guesses why? Curious!
Mouse Chat you say? I will have to get that one and listen. Interesting that the wait times would be most accurate during the busiest time of the day. There’s probably some logical math reason why that is, but it’s hurting my head to even consider it. Thank goodness there are people who LIKE that kind of thing!
Maybe it’s because they get most of their user-submitted wait times during that time frame? Still especially curious about the first hour or so after opening and why, at least in my experience, the TPs are off.
I have noticed that in the last hour or two of late nights, a lot of rides seem to have “?” as the wait time lately.
There have been blog posts about the effects of fpp on standby wait times. Len has said super-headliners are slightly down, others slightly up, overall the effect is small.
However, I would opine that there may be other contributing factors whose time frame coincides with FPP, such as the opening of New Fantasyland, including 7DMT. I wonder about that effect. Though, of course, that only affects MK.
10 am to 5 pm is typically the busiest part of the day, so it’s what we focus on for crowd predictions.
We update models on a regular basis, usually one or two per week, depending on how they’ve performed over the past month. Our Star Tours model was last updated in March, and is consistently within 3 minutes of what actually happens in the parks. We tweaked Toy Story Mania on Tuesday. Most of the models were updated last month.
There are a couple of ways of gauging their performance. The simplest is to ask what the attraction model’s average predicted wait was between 10 and 5, and compare that to what actually happened.
Using that metric, the models are within about 6 minutes over the past week, and somewhere between 5 and 7 minutes over the past month. (We just updated some of the models, so their “previous 30 day performance” doesn’t exist yet, thus the range in estimate.)
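A rough sketch of that simple metric, using made-up numbers (not actual TouringPlans data): take the waits the model predicted between 10 and 5, take what was actually posted, and compare the two averages.

```python
# Hypothetical predicted and actual waits (minutes), sampled 10 am - 5 pm.
predicted = [20, 25, 35, 40, 45, 40, 30, 25]
actual    = [25, 30, 40, 45, 40, 35, 35, 30]

# Simple metric: average predicted wait vs. average actual wait.
avg_predicted = sum(predicted) / len(predicted)
avg_actual = sum(actual) / len(actual)
error = abs(avg_predicted - avg_actual)

print(f"avg predicted: {avg_predicted:.1f} min")  # 32.5 min
print(f"avg actual:    {avg_actual:.1f} min")     # 35.0 min
print(f"difference:    {error:.1f} min")          # 2.5 min
```

With these toy numbers the model would score as "within about 2.5 minutes" for the day, in the same spirit as the 5-7 minute figures quoted above.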
Incidentally, you can see this measurement for yourself by looking at our wait time graph for any attraction for any day in the recent past. Here’s yesterday at Space Mountain.
That’s a simple metric, and there are disadvantages to using it. For example, if the prediction plotted on one line looks like this “/” and the actual waits look like this “\”, so that both lines together form an X shape, then both lines have the same average, even though the predictions were off by quite a bit at the extremes.
So another way of looking at accuracy is to look at the average error for hourly predictions. For example, what did the model predict for Big Thunder at 10 am, 11 am, noon, 1 pm, 2 pm, 3 pm, 4 pm, and 5 pm? We can calculate the hourly error that way, and it avoids the problem mentioned above.
Using that metric, the models are within 9 minutes this week and 7 minutes for the month.
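To see why the hourly metric catches what the simple average misses, here’s a toy example (made-up numbers) of the X-shape case above: predictions rise like “/” while actual waits fall like “\”, so the two averages match exactly, yet every hourly prediction is badly wrong.

```python
hours = ["10am", "11am", "noon", "1pm", "2pm", "3pm", "4pm", "5pm"]
predicted = [10, 20, 30, 40, 50, 60, 70, 80]  # rising "/"
actual    = [80, 70, 60, 50, 40, 30, 20, 10]  # falling "\"

# The simple metric sees no problem: both averages are 45 minutes.
assert sum(predicted) / len(predicted) == sum(actual) / len(actual) == 45.0

# The hourly metric compares prediction vs. actual hour by hour.
hourly_errors = [abs(p - a) for p, a in zip(predicted, actual)]
mae = sum(hourly_errors) / len(hourly_errors)
print(f"mean absolute hourly error: {mae:.1f} min")  # 40.0 min
```

Averaging the per-hour errors instead of the raw waits is what keeps two wildly different curves from cancelling each other out.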
@ejj - let me know which attractions have ?s in Lines. That usually means we got the attraction’s operating hours wrong, or something like that. Easy enough to fix.
@Len - thank you! So if the predicted and actual crowd levels are off by as many as 3 (rare, I know, but it happens, as it did at DHS last week), does that mean the forecasted waits are off and therefore the TPs are off (i.e., a user could wait significantly more or less when actual and predicted levels don’t agree)? And if the models are updated weekly, does that mean user-submitted wait times aren’t used in real time?