# The “how we did” section

The “how we did” page shows how many minutes off the predictions were for the previous day, but I am not sure how to read it. If -3 is listed, does that mean the actual ride wait time was 3 minutes less than the TP prediction, or that the TP prediction was 3 minutes less than the actual wait time?

I have been wondering this for more than three years!

This goes for me as well. Most predictions are obviously very close, but it would be good to understand the +/- nature of the results.

I read it as: they predicted 2 minutes less than actual if it’s a -2, and 2 minutes more than actual if it’s a +2, but I see how it could be read the other way too.

Any input here please @len?

If it’s -3, then our prediction was 3 minutes too low (e.g., the average wait was 25 minutes and we predicted 22).
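Len’s rule above can be sketched in a few lines. This is only an illustration of the sign convention as described (displayed value = predicted minus actual); the function name is made up, not TouringPlans code.

```python
def signed_error(predicted_wait: int, actual_wait: int) -> int:
    """Return the signed prediction error in minutes.

    Negative means the prediction was too low (actual wait was longer);
    positive means the prediction was too high.
    """
    return predicted_wait - actual_wait

# Len's example: the average wait was 25 minutes, the prediction was 22.
print(signed_error(22, 25))  # -3: the prediction was 3 minutes too low
```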

Below are some stats on the Crowd Calendar’s performance for 2017 so far, compared with 2016. We’ve improved in almost every metric, thanks to (1) better modeling and (2) updating the models much more frequently.

All figures are on the 1-to-10 crowd scale (Animal Kingdom excludes Pandora for now):

| Metric | Magic Kingdom | Epcot | Hollywood Studios | Animal Kingdom |
|---|---|---|---|---|
| Average error, 2017 YTD | 1.31 | 1.07 | 1.20 | 1.46 |
| Average error, 2016 YTD | 1.54 | 1.36 | 1.65 | 1.72 |
| Average error, 2016 (all) | 1.29 | 1.39 | 1.76 | 1.69 |
| % days within +/-1, 2017 | 62.7% | 75.1% | 66.9% | 57.4% |
| % days within +/-1, 2016 | 61.4% | 62.5% | 59.5% | 52.1% |
| Standard deviation, 2017 YTD | 0.90 | 0.95 | 1.08 | 1.08 |
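For readers curious how stats like these are derived, here is a hypothetical sketch that computes the same three metrics from daily (predicted, actual) crowd levels. The sample data is made up for illustration, and I am assuming “standard deviation” refers to the spread of the absolute errors; TouringPlans may compute it differently.

```python
from statistics import mean, pstdev

def crowd_cal_metrics(predicted, actual):
    """Summarize prediction accuracy on the 1-to-10 crowd scale."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    return {
        "average_error": mean(errors),
        # Share of days where the prediction was within +/-1 of actual.
        "pct_within_1": 100 * sum(e <= 1 for e in errors) / len(errors),
        "std_dev": pstdev(errors),
    }

# Made-up week of predicted vs. actual crowd levels:
predicted = [4, 6, 7, 3, 8, 5]
actual    = [5, 6, 9, 3, 7, 4]
print(crowd_cal_metrics(predicted, actual))
```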