The Varying Strength of Ridership Recovery (2)

In a recent article, I reviewed route-level ridership data cited in the 2023 Annual Service Plan as well as the 2019-2021 numbers posted on the TTC’s Planning web page.

During debate at the July 14 TTC Board meeting, an issue came up about the unexpectedly poor pandemic-era performance of 25 Don Mills. This got me thinking about how the “results” could be influenced by when counts were done, particularly on routes that have both express and local branches under different route numbers.

To explore this, I recast the 2019-2021 stats in tables with and without the express 9xx routes consolidated into their local equivalents.
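
For anyone who wants to reproduce the roll-up, a minimal sketch is below. It assumes a hypothetical pandas DataFrame called `ridership` with `route`, `year` and `riders` columns; the TTC does not publish the figures in this form, so the layout and the 900-series mapping rule are illustrative only, and the sample rows simply reuse the Victoria Park numbers from the table further down.

```python
import pandas as pd

# Hypothetical input: one row per route per year, seeded with the
# Victoria Park figures as sample data.
ridership = pd.DataFrame(
    {
        "route": [24, 924, 24, 24, 924],
        "year": [2019, 2019, 2020, 2021, 2021],
        "riders": [22_751, 6_472, 12_233, 14_077, 3_663],
    }
)

# Fold each 900-series express route into its local equivalent
# (924 -> 24, 925 -> 25, 929 -> 29, ...).
ridership["corridor"] = ridership["route"].where(
    ridership["route"] < 900, ridership["route"] - 900
)

# Sum local + express per corridor per year, then express each year
# as a share of the 2019 base.
by_year = ridership.groupby(["corridor", "year"])["riders"].sum().unstack("year")
recovery = by_year.div(by_year[2019], axis=0).round(2)
print(recovery)
```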

First, here are the stats with the local and express routes separate. The gallery below contains the first set of routes, but the complete list is in the following pdf. The data here are the same as presented in the previous article, but reformatted for easier browsing.

Here are the stats with the 9xx routes’ data rolled into their local equivalents.

The three express/local routes on these sample pages show the differing effects.

| Route | 2019 | 2020 | 2021 | % Recov 2020 | % Recov 2021 |
|---|---:|---:|---:|---:|---:|
| Victoria Park | | | | | |
| 24 local | 22,751 | 12,233 | 14,077 | 54% | 62% |
| 924 express | 6,472 | | 3,663 | | 57% |
| 24/924 local+express | 29,233 | 12,233 | 17,740 | 42% | 61% |
| Don Mills | | | | | |
| 25 local | 27,988 | 16,481 | 18,719 | 59% | 67% |
| 925 express | 16,624 | | 9,074 | | 55% |
| 25/925 local+express | 44,612 | 16,481 | 27,793 | 37% | 62% |
| Dufferin | | | | | |
| 29 local | 27,487 | 23,021 | 22,087 | 84% | 80% |
| 929 express | 15,722 | | 13,238 | | 84% |
| 29/929 local+express | 43,209 | 23,021 | 35,325 | 53% | 82% |

In all three cases, the express service did not operate in 2020, and so all of the riding, such as it was, occurred under the local route number. This inflated the apparent ridership retention of the local route compared with the actual level on the corridor when the two routes are considered as one operation.
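
To make the denominator shift explicit, here is the Don Mills arithmetic worked as a tiny sketch; the numbers come straight from the table above.

```python
# Don Mills, 2020: the same riding measured against two different 2019 bases.
riding_2020 = 16_481
local_base_2019 = 27_988               # 25 Don Mills alone
combined_base_2019 = 27_988 + 16_624   # 25 Don Mills plus 925 express

print(f"Local-only retention: {riding_2020 / local_base_2019:.0%}")     # about 59%
print(f"Corridor retention:   {riding_2020 / combined_base_2019:.0%}")  # about 37%
```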

The effect was so strong on Dufferin that its local recovery rate went down slightly in 2021 because growing demand on the corridor was not enough to offset the shift of riders back to the express service.

The moral of the story here is that looking at stats in isolation can lead to incorrect conclusions if the underlying network and service plan are not taken into account. This applies to simplistic rankings such as “top 20” and “bottom 20” that can exclude routes with almost identical performance. A better metric would be the collection of all routes above or below a certain recovery rate.
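
As a rough sketch of that kind of metric, the snippet below splits routes around a chosen recovery threshold instead of taking a fixed-length ranking. The 60 per cent cut-off is illustrative, not a TTC standard, and the `recovery_2021` values are just the consolidated rows from the table above.

```python
# Consolidated 2021 recovery rates from the table above (illustrative subset).
recovery_2021 = {"24/924": 0.61, "25/925": 0.62, "29/929": 0.82}

THRESHOLD = 0.60  # illustrative cut-off, not a TTC standard

below = {route: rate for route, rate in recovery_2021.items() if rate < THRESHOLD}
at_or_above = {route: rate for route, rate in recovery_2021.items() if rate >= THRESHOLD}

print("Below threshold:", below)  # every route under the line, however many there are
print("At or above threshold:", at_or_above)
```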

Politicians who fund and, nominally, direct transit systems love easy-to-understand metrics that often hide or even distort what is going on. I will turn to TTC measurement indices and standards in a future article.