How Unreliable Is My Service? (Updated January 24, 2015)

Fourth Quarter 2014 Update: Results for the fourth quarter of 2014 have been consolidated into a new table below.

Route_Performance_Summary_2014Q4

The headway reliability numbers for many routes continue to lie well below the TTC’s targets for bus and streetcar operations.

Routes which have improved by more than 10% since 3Q14 are:

51 Leslie, 60 Steeles West, 125 Drewry, 126 Christie, 172 Cherry Street, 198 UTSC Rocket, 301 Queen Night, 322 Coxwell Night, and 512 St. Clair

Routes which have declined by more than 10% since 3Q14 are:

35 Jane, 36 Finch West, 55 Warren Park, 66 Prince Edward, 87 Cosburn, 88 South Leaside, 97 Yonge, 109 Ranee, 111 East Mall, 122 Graydon Hall, 133 Neilson, 141 Downtown via Mt. Pleasant Express, 160 Bathurst North, 161 Rogers Road, 162 Lawrence-Donway, 195 Jane Rocket, 224 Victoria Park North, 502 Downtowner and 508 Lakeshore

A few items worth noting:

  • Service quality has declined considerably on both of the Jane routes despite a recent reorganization into local and express services and the adjustment of running times to match actual experience.
  • Reliability of the Blue Night services continues to be poor at a time when (a) there is no “congestion” on most routes as an excuse for delays, and (b) reliability is of particular importance to riders.
  • The 501 Queen car at 52% (nothing to crow about) is more reliable than the Downtown Beach Express Bus at 45%.

Looking at the data over a two-year period, a very long list of routes has seen a decline of more than 10% in headway reliability. Only a few routes, mostly night services, have improved by more than 10% since 1Q13:

  • 10 Van Horne, 52 Lawrence West (which has been reorganized since 1Q13), 90 Vaughan, 102 Markham Road, 117 Allness, 171 Mount Dennis, 301 Queen Night, 303 Don Mills Night, 308 Finch East Night, 309 Finch West Night, 311 Islington Night, 322 Coxwell Night, 353 Steeles East Night and 385 Sheppard East Night
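
For anyone who wants to reproduce this kind of screening from the published tables, a rough sketch follows. The route names and scores below are placeholders rather than the TTC’s actual figures, and I am treating “more than 10%” as a change of more than 10 percentage points between reporting periods.

    # Sketch: flag routes whose headway reliability moved by more than
    # 10 percentage points between two reporting periods.
    # Scores below are placeholders, not the TTC's published values.

    reliability_1q13 = {"10 Van Horne": 48, "35 Jane": 66, "90 Vaughan": 52}
    reliability_4q14 = {"10 Van Horne": 63, "35 Jane": 51, "90 Vaughan": 64}

    improved, declined = [], []
    for route, old_score in reliability_1q13.items():
        change = reliability_4q14[route] - old_score
        if change > 10:
            improved.append((route, change))
        elif change < -10:
            declined.append((route, change))

    print("Improved by more than 10 points:", improved)
    print("Declined by more than 10 points:", declined)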

Third Quarter 2014 Update:

The statistics have not changed much from the second quarter. One issue with many routes operating on wide headways (night services and express routes) is that they have consistently low performance values. Such routes should, of course, be measured for on-time performance, not headway adherence, because a missed vehicle has a far graver effect on would-be riders than it would on a route that operates every 5 minutes. Express-to-downtown routes (the 140 series) should be measured for on-time performance in their catchment areas. Their headway once they are on the express leg of their journey is of no consequence to riders.
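
As a rough illustration of why the choice of metric matters on a wide headway, consider a route scheduled every 30 minutes on which every trip runs 12 minutes late. The headways are still perfect, so headway adherence scores 100%, while on-time performance against the timetable is 0%. The times and the on-time band in this sketch are my own assumptions, not TTC definitions.

    # Sketch: a 30-minute route running uniformly 12 minutes late.
    # Headways are preserved, so headway adherence looks perfect,
    # but no trip is on time against the schedule.
    # Times are invented; the on-time band is an assumption, not a TTC rule.

    scheduled = [0, 30, 60, 90, 120]         # scheduled departures (minutes)
    actual = [t + 12 for t in scheduled]     # every trip 12 minutes late

    scheduled_headway = 30
    headways = [b - a for a, b in zip(actual, actual[1:])]
    headway_ok = [abs(h - scheduled_headway) <= 3 for h in headways]

    # Assume an on-time band of 1 minute early to 5 minutes late.
    on_time = [-1 <= (a - s) <= 5 for s, a in zip(scheduled, actual)]

    print("Headway adherence: %.0f%%" % (100 * sum(headway_ok) / len(headway_ok)))  # 100%
    print("On-time performance: %.0f%%" % (100 * sum(on_time) / len(on_time)))      # 0%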

Second Quarter 2014 Update:

There is little change in the route performance statistics for the second quarter despite our having emerged from a bitter winter. The change from Q1 to Q2 is less than 10% for most routes, with some improving and others falling behind. Changes beyond the 10% mark can, in some cases, be explained by route-specific issues such as construction, but not all of them can.

Two new routes appear for the first time, 172 Cherry and 195 Jane Rocket. It is mildly amusing that the Cherry bus, which must fight its way through construction downtown, manages a 69% reliability score while the Jane express service manages only 58%.

In this quarter, the 58 Malton and 52 Lawrence routes were combined. Their former scores in the mid-50% range have astoundingly improved to 81% on the consolidated route. I will follow this up with the TTC to see what magic they have wrought here.

First Quarter 2014 Update:

Route_Performance_Summary_2014Q1

The reported reliability stats continue to be dismal. Although it is tempting to say “ah, yes, but Toronto had an appallingly bad winter”, there is a basic problem here: the statistics reported by the TTC didn’t change very much and many routes actually improved relative to the end of 2013.

I will not rehash my critiques of this method of reporting service quality (see the original article below) beyond noting that the TTC’s targets show irregular service will be the norm: up to 1 in 3 trips can exceed the target while the service still counts as acceptable. This means that on a typical day, a rider can expect to encounter at least one “off target” service in their travels.
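
To put that in rough numbers: if each surface trip independently has a 65% chance of meeting the target (the bus goal), a rider making only a few trips a day is more likely than not to hit at least one miss. The independence assumption in this back-of-the-envelope sketch is mine, not the TTC’s.

    # Sketch: chance of at least one off-target trip in a day, assuming each
    # trip independently meets the 65% bus target. Independence is a
    # simplifying assumption for illustration only.

    on_target = 0.65
    for trips_per_day in (2, 3, 4):
        p_miss = 1 - on_target ** trips_per_day
        print("%d trips: %.0f%% chance of at least one off-target trip"
              % (trips_per_day, 100 * p_miss))
    # 2 trips: ~58%, 3 trips: ~73%, 4 trips: ~82%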

Finally, a long-standing issue has been the inability to maintain reliable service on the Queen car due to its length and the mixture of Humber and Long Branch services. Although April 2014 is not included in these statistics, the CEO’s report for June 2014 notes an improvement in that month’s streetcar average:

The increase in performance was attributable to the turnback of the 501 Queen route at Humber Loop for the Gardiner bridge work. This shortened the route and promoted a more reliable eastbound service. [Page 10]

The original article from October 24, 2013, follows below.

The TTC has just published its headway reliability results for the third quarter of 2013.  These numbers purport to show the percentage of service that operates within 3 minutes, give or take, of the scheduled headway on each route.  The goal is that bus service does this 65% of the time and streetcar service 70% of the time.
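
As a concrete reading of that definition, here is a minimal sketch of how such a figure could be computed from observed arrival times on one route. The arrival times are invented; the ±3 minute band and the 65%/70% goals come from the TTC’s report.

    # Sketch: headway adherence as reported -- the share of observed headways
    # within +/-3 minutes of the scheduled headway. Arrival times are invented.

    scheduled_headway = 10                    # minutes
    arrivals = [0, 9, 21, 29, 44, 52, 61]     # observed arrivals (minutes)

    headways = [b - a for a, b in zip(arrivals, arrivals[1:])]
    within_band = [abs(h - scheduled_headway) <= 3 for h in headways]
    adherence = 100 * sum(within_band) / len(within_band)

    print("Headway adherence: %.0f%%" % adherence)   # ~83% in this example
    print("Meets 65% bus goal:", adherence >= 65)
    print("Meets 70% streetcar goal:", adherence >= 70)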

On a daily basis, these numbers are rolled up to the system level, but this hides wide variations by route and time of day.  Weekends are not reported on at all.

The system barely manages to achieve its goal on good days, and has little headroom to absorb events such as bad weather.

To simplify browsing the route-by-route data, I have consolidated the three quarterly reports into one table.  The information is listed both by route, and ranked by the reliability index.

[The table originally linked here has been replaced with an updated version at the start of the article.]

There are many problems with these numbers:

  • On routes with short headways, it is easy to be within 3 minutes of target.  Indeed, it is difficult to get beyond that target, and even a parade of buses or streetcars may count as one “off target” and several (the parade itself) “on target” (a worked example follows this list).
  • There is no measure of bunching, nor is there any indication of whether all or only part of the scheduled service actually operated over most or all of a route.
  • There is no definition of what part(s) and directions of the route are measured, or how this might skew reported values.  Performance at locations beyond common short-turn points may not be reported, or may be masked by data from central parts of a route.
  • There is no time-of-day reporting.  From service analyses presented on this site, it is clear that across the system, service in the evenings and on weekends is much less well-managed (assuming it is managed at all).
  • On routes with wide headways, on-time operation is more relevant to riders than headway because they must plan journeys based on the schedule.  This is particularly important where connections between infrequent services are part of a trip.
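
To make the first point concrete, here is a worked example with invented numbers: a route scheduled every 2 minutes suffers a 10-minute gap followed by three vehicles running nose to tail. Three of the four observed headways sit within the ±3 minute band, so this stretch scores 75%, comfortably above the 65% bus goal, even though riders saw one long gap and a parade.

    # Sketch: a gap plus a parade can still score well on headway adherence.
    # Scheduled headway 2 minutes; numbers invented for illustration.

    scheduled_headway = 2
    observed_headways = [10, 0, 0, 0]    # one big gap, then three bunched vehicles

    on_target = [abs(h - scheduled_headway) <= 3 for h in observed_headways]
    print(on_target)                                                  # [False, True, True, True]
    print("Score: %.0f%%" % (100 * sum(on_target) / len(on_target)))  # 75%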

The TTC acknowledges that the headway adherence measurements are inadequate, and they are working on “Journey Time Metrics” based on the scheme used in London, UK.  This approach looks at typical trips and the time required including access, waiting, in vehicle and transfer times to better reflect service as seen by a rider.  For example, a frequent service with well-regulated headways is useless if the buses are full.  An advertised headway is meaningless if half of the service is randomly short-turned and wide gaps are a common experience.  The effect of a big delay in someone’s trip is much more severe than a short one because this adds to the unpredictability of journey times.
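
One rider-facing ingredient that can be sketched simply is waiting time. For passengers arriving at random, the expected wait depends not only on the average headway but also on its variability (roughly E[h²] / 2E[h]), so bunching lengthens waits even when the average headway is unchanged. The sketch below is my own illustration of that effect; it is not the TTC’s or TfL’s actual methodology.

    # Sketch: why bunching hurts riders even when the average headway is kept.
    # For randomly arriving passengers, expected wait ~= E[h^2] / (2 * E[h]).
    # Headway patterns are invented; this is not the TfL/TTC methodology.

    def expected_wait(headways):
        mean_h = sum(headways) / len(headways)
        mean_h2 = sum(h * h for h in headways) / len(headways)
        return mean_h2 / (2 * mean_h)

    regular = [10, 10, 10, 10]    # well-regulated 10-minute service
    bunched = [1, 19, 1, 19]      # same average headway, badly bunched

    print("Regular service, expected wait: %.1f min" % expected_wait(regular))  # 5.0
    print("Bunched service, expected wait: %.1f min" % expected_wait(bunched))  # 9.1

The excess of that figure over the wait implied by the schedule is one component a journey time metric can weigh alongside access, in-vehicle and transfer time.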

How, exactly, this will be boiled down into representative journeys while still preserving a granular view into system operations will be interesting to see.  I believe that a combination of metrics will be needed, and the managerial penchant for a single index to report the behaviour of a large and complex system is dangerous because of what it hides.  (I say this also from personal, professional experience in another field.)  Without the details, the organizational goal becomes one of “gaming” the system to ensure a lovely column of green tick marks on a scorecard that masks pervasive problems.

105 thoughts on “How Unreliable Is My Service? (Updated January 24, 2015)”

  1. Steve:

    There was a major redesign of the schedule.

    ATU has said that the main problem with service quality and reliability as well as safety is unrealistic bus scheduling.

    Recognizing the benefits of improved scheduling, it would be great if management and the ATU were working cooperatively on revamping schedules for some of these laggard routes. It would also be nice if the ATU would provide some incentives or gentle pressure to operators to ensure they aren’t deliberately failing to meet current and redesigned schedules.

    Cheers, Moaz


  2. I am not surprised to see 85 Sheppard East has dropped all the way to a 56 on reliability. There is a noticeable decline in service on that route. I try to avoid 85 when I can and take the 199 Finch instead.


  3. Steve said:

    “The TTC acknowledges that the headway adherence measurements are inadequate, and they are working on “Journey Time Metrics” based on the scheme used in London, UK. This approach looks at typical trips and the time required including access, waiting, in vehicle and transfer times to better reflect service as seen by a rider. For example, a frequent service with well-regulated headways is useless if the buses are full. An advertised headway is meaningless if half of the service is randomly short-turned and wide gaps are a common experience. The effect of a big delay in someone’s trip is much more severe than a short one because this adds to the unpredictability of journey times.”

    Yes, while I may care about normal journey time when I am travelling for pleasure and have flex in my schedule, I need to know with a much higher degree of certainty the outer bounds of trip time when I am travelling for work, especially if I am required to be there at a set time or for an appointment. I need to know that a journey may be 45 minutes on average, but that 15% of the time it will take 1:20 and 5% of the time 1:30. If it is a critical journey, I will be leaving myself an hour and a half, or not selecting transit. If I were caught out a couple of times by this, I would be very upset and would not consider transit a reliable mode if a meaningful share of journeys runs beyond the bound. Journey time metrics will need to be time-of-day sensitive and provide a range whose upper bound covers at least 95% of journeys. These journey times will then need to be reviewed frequently for surface transit.
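
    A minimal sketch of what such an upper bound might look like, assuming a sample of recorded journey times for one trip at one time of day (the figures are invented):

        # Sketch: a 95th-percentile journey time bound from a sample of trips.
        # Journey times (minutes) are invented for illustration.
        import statistics

        journey_minutes = [42, 45, 44, 47, 50, 43, 46, 48, 55, 52,
                           44, 45, 49, 61, 46, 45, 47, 80, 44, 46]

        average = statistics.mean(journey_minutes)
        # quantiles(n=20) returns 19 cut points; the last one is the 95th percentile.
        p95 = statistics.quantiles(journey_minutes, n=20)[-1]

        print("Average journey: %.0f minutes" % average)
        print("95th percentile: %.0f minutes" % p95)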


  4. The past couple of weekends I have been caught in large gaps in 501 and 504 service BEFORE sunrise. There is absolutely no traffic on the streets at this time yet vehicles manage to get bunched up into pairs somehow.

    It’s not just the streetcars either. Dufferin, Yonge, Bathurst and the other bus routes have buses appearing in bunches of 2 or 3 with large gaps. It can’t be explained away by traffic because there is none!

    Steve: Yes, some operators have a very flexible interpretation of schedules, especially on weekend mornings (not to mention evenings).


  5. Steve, any update on what’s going on with the disappearing NVAS LED screens that used to be in shelters? I am seeing more of them disappear.

    Steve: I am seeing some disappear, but new ones are appearing, so it’s hard to tell exactly what’s going on. I will inquire.

