Good News Far Too Much of the Time

The TTC has now launched a public-facing version of an internal campaign, under the rubric Modernizing the TTC, pitching its new organization and attitude toward serving riders.  The same information appeared in a poster recently issued throughout the organization.

From a service delivery point of view, the key pages are the 25 Key Performance Indicators and the Daily Customer Service Report.

The “KPIs” are intended to give ongoing information about ridership, service quality and station conditions including availability of escalators and elevators.

Strangely enough, the daily report is not available to the public, only a snapshot from March 19 on which — surprise! — everything is just fine, thank you very much.  This is precisely what has been wrong with the TTC for so many years — they are addicted to hearing good news.

Three changes are badly needed.

1.  Put real-time data online

It’s all very well to know that service ran well two months ago, but I want to know how these indices are tracking today and over recent weeks.  Riders who waited while full buses zoomed past their stops, or who were thrown off short-turning streetcars, need to see what’s happening now and whether the TTC’s stats reflect their actual experience.

2.  Put more detail online

While the system as a whole may meet its targets, that does not reflect actual rider experience at a route-by-route level or at various times of the day.  The public information should be subdivided by route so that riders (and members of Council) can check against their local services rather than system averages, and the stats should be subdivided by time of day to distinguish busy peak periods from quiet evenings or weekends.
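
As a sketch of the kind of breakdown meant here (the route names, periods and figures below are invented purely for illustration, in Python), the same underlying observations can be rolled up per route and per time period instead of being reported as one system-wide average:

    # Sketch: summarize invented service observations by route and
    # time period rather than as a single system-wide average.
    from collections import defaultdict

    # (route, period, did the trip meet the target?) -- invented sample data
    observations = [
        ("501 Queen", "AM peak", True), ("501 Queen", "AM peak", True),
        ("501 Queen", "Evening", True), ("501 Queen", "Evening", True),
        ("29 Dufferin", "AM peak", False), ("29 Dufferin", "AM peak", False),
        ("29 Dufferin", "Evening", True), ("29 Dufferin", "Evening", True),
    ]

    by_group = defaultdict(list)
    for route, period, met in observations:
        by_group[(route, period)].append(met)

    for (route, period), results in sorted(by_group.items()):
        print(f"{route:12s} {period:8s} {sum(results) / len(results):.0%} of trips met the target")

    overall = [met for _, _, met in observations]
    print(f"System-wide: {sum(overall) / len(overall):.0%} of trips met the target")

With these made-up numbers the system-wide figure comes out at a respectable 75% while one route’s peak service fails completely, which is exactly the distinction that averages hide.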

3.  Collect meaningful statistics

The statistics and targets now reported by the TTC have been with us in one form or another in places like the Chief General Manager’s Report (now the CEO’s Report) for some time.  Everything looks rosy until one thinks about what data drives the KPIs and whether it is really meaningful.

Subway and surface operations are measured by the proportion of trips that operate within three minutes of the scheduled headway.  It’s good to see the TTC moving away from “on time” as a measure of service quality because in most cases customers only care that vehicles/trains are regularly and reliably spaced.  They couldn’t care less if they are “on time” except in cases of wide headways.

However, if a service is scheduled to run every 4 minutes, this means that any headway from 1 minute to 7 minutes is acceptable for the statistics.  Even worse, a parade of vehicles each 1 minute apart meets the target except for the first in the queue where, presumably, there is a large gap.  A parade of 10 cars would be 90% “on time” because 9 of the 10 would be within 3 minutes of their scheduled headway.
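
To put numbers to that parade, here is a minimal sketch (Python, with invented data, not the TTC’s actual calculation) of a plus-or-minus-3-minute headway check applied to ten cars on a nominal 4-minute service:

    # Sketch of a +/- 3-minute headway-adherence check on invented data.
    # Scheduled headway is 4 minutes, so any observed headway from
    # 1 to 7 minutes counts as acceptable.
    SCHEDULED = 4.0   # minutes
    TOLERANCE = 3.0   # minutes

    def adherence(headways):
        """Fraction of observed headways within the tolerance band."""
        return sum(abs(h - SCHEDULED) <= TOLERANCE for h in headways) / len(headways)

    # A parade: a 31-minute gap ahead of the leader, then nine cars
    # following 1 minute apart -- ten cars over 40 minutes of service.
    parade = [31.0] + [1.0] * 9

    # Evenly spaced service delivering the same ten cars in 40 minutes.
    even = [4.0] * 10

    print(f"parade: {adherence(parade):.0%} of headways acceptable")  # 90%
    print(f"even:   {adherence(even):.0%} of headways acceptable")    # 100%

Both versions deliver the same number of cars per hour, but one of them leaves riders staring down a half-hour gap.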

With uneven headways, more passengers accumulate in the wider gaps.  What most riders see is the train, bus or streetcar that arrives with a heavy load after a long wait, and they may not even be able to board.

The KPI needs to be revised so that vehicle bunching cannot produce statistics showing an acceptable quality of service.  As things stand, it would be easy to achieve a target of 2/3 of trips within an acceptable headway and still have quite ragged service, especially on “frequent” routes.
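
One way to make the index bunching-proof (a sketch only, not something the TTC has adopted) is to score the average wait of a rider who shows up at a random time; that figure depends on the square of the headways, so big gaps are punished no matter how many vehicles eventually arrive. It is similar in spirit to the “excess wait time” measure some large transit agencies publish for frequent routes. Continuing with the invented data from the sketch above:

    # Sketch: expected wait for a rider arriving at a random time is
    # E[h^2] / (2 * E[h]) (a standard result for random arrivals), so it
    # grows with headway variance. Sample data is invented.
    def average_wait(headways):
        """Expected wait in minutes for a randomly arriving rider."""
        return sum(h * h for h in headways) / (2.0 * sum(headways))

    even = [4.0] * 10             # evenly spaced 4-minute service
    parade = [31.0] + [1.0] * 9   # bunched: one 31-minute gap, then 9 cars 1 minute apart

    print(f"even service:   {average_wait(even):.1f} minute average wait")    # 2.0
    print(f"bunched parade: {average_wait(parade):.1f} minute average wait")  # 12.1

A target of the form “average wait no more than X percent above half the scheduled headway” would show the parade as the failure riders experience it to be, while the plus-or-minus-3-minute test waves it through.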

Where headways are wider (some off-peak services and especially those with branches), on-time performance is much more important.  Riders would like to plan their travel based on when a bus is supposed to appear rather than having to face waits of 20 minutes or more.

At a route level, an index is required to track service quality not just at the route’s peak point, but at termini and common short-turn points.  Some routes have multiple peak points, and reporting only on one of them can misrepresent what many riders actually experience.

A sad commentary on the reliability of the SRT is that its service target is to operate 80% of scheduled trips.  Whether this will happen in a snowy winter remains to be seen.

Elevators and escalators are supposed to be 97% available.  However, I understand that this status is as of about 9 am and does not reflect whatever outages may occur through the day.  Moreover, devices that are out of service for maintenance don’t count against the target.  Unfortunately, a rider who cannot use stairs only cares that they cannot use their station.

As of May 16, 2012, there are seven escalators listed as out of service by the TTC, not including devices at Union Station affected by the second platform project.  From a rider’s point of view, these are just as unavailable as a bus or streetcar that shows up after a long gap or hopelessly late.  They are a service that is expected but not available.

Outages for planned maintenance should be included in the stats, even if as a separate category.  Availability stats should be based on all-day operations, not once-a-day surveys.  (Note that it is not necessary to physically visit every station, but simply to log trouble calls that come in.)
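
As a sketch of what an all-day figure could look like (the log format, service-day length and outage times below are hypothetical, not the TTC’s actual systems), availability per device can be computed from logged outage intervals rather than from a morning snapshot:

    # Sketch: all-day availability of one device from logged outage
    # intervals, rather than a single morning check. All values invented.
    from datetime import datetime

    SERVICE_DAY_HOURS = 19.0  # roughly 6 am to 1 am the next day

    def availability(outages, service_hours=SERVICE_DAY_HOURS):
        """Fraction of the service day a device was actually usable.

        `outages` is a list of (start, end) pairs and includes planned
        maintenance, since a rider cannot use the device either way.
        """
        down = sum((end - start).total_seconds() / 3600.0 for start, end in outages)
        return max(0.0, 1.0 - down / service_hours)

    # An escalator that passes a 9 am snapshot but is down for the pm peak.
    outages = [(datetime(2012, 5, 16, 15, 0), datetime(2012, 5, 16, 19, 30))]
    print(f"all-day availability: {availability(outages):.0%}")  # 76%

Rolled up across every device from trouble-call logs, the same calculation produces a number that matches what riders encounter, and it catches the afternoon failure that a once-a-day survey never sees.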

When I spoke with the TTC about the fundamental problems in their statistics and goals, they freely admitted that these are just not good enough.  However, management and Commissioners are now trumpeting a scorecard of success just at a time when they really need to set the standards higher.  All those green checkmarks will change to at least yellow, if not red, when the bar is raised.  TTC management and staff must be ready to accept the need for improvement against goals and measurements that reflect what passengers actually see day-to-day.

12 thoughts on “Good News Far Too Much of the Time”

  1. Further to that … just put the raw data online in zip files so we can download it and make these KPIs for you … Steve and I (and others) have done similar stuff with data that you give out on a one-off … but make an ftp site with the zipped data for the previous day and we will download it and analyze it … trust me, we will put your analysis department out of business in a few weeks … not only that, we’ll make sure that you know where the trouble locations are, we’ll do all the stuff above that Steve is talking about (headway at endpoints, midpoints, every stop, heat maps), websites with more details than anyone could possibly care to know about … and it will be done for free and fairly quickly … and we’ll make sure all the code is open source so you can go in and tweak it for internal use if you want … most of this has been doable for three years now, almost since we first started getting little bits of data; now we need a way to get it all (which we were promised over a year ago) … I still don’t understand why the public hasn’t been given access to the historical GPS data in a reasonable format (polling the Nextbus feed every 20 seconds is way less accurate than just getting the raw data in zip files).

    (TTC-PLEASE CONTACT ME – GEORGE_VIC_BELL@HOTMAIL.COM)

    Steve: George — please note that the 20 second polling interval is set by the design of the TTC’s system, not by Nextbus. For the level of analysis you are talking about, believe me, every 20 seconds is more than enough.

    The “promise” to which you refer is contained in this motion from the July 14, 2010 Commission Meeting:

    [Item 8.f] Chair Giambrone further moved the following motions:

    That the final policy on open data be brought to the September Commission Meeting.
    That the policy specifically includes real-time vehicle locations and that historical/archival vehicle location information be provided as part of the policy.
    That the TTC provide real-time displays of the vehicle locations on the TTC webpage by the end of the year.

    The motions by Chair Giambrone carried.

    In fact the Open Data policy did not appear on the very busy September agenda, and the matter died with the expiry of the Miller/Giambrone administration.

  2. Thanks for calling attention to these issues, Steve. A scorecard that is all green every time the public sees it is worse than no scorecard at all. TTC staff cannot be called upon to do better and make Toronto proud by Andy Byford if they are presented with a report card that already says A+. Challenges that are honest and *open* are fundamental to improvement.

  3. Schedule adherence is a very important performance metric even on a metro system. Maintaining a consistent headway alone will not provide good service. An hour has 3600 seconds. If there is a 100-second headway, 36 trains should pass through a station in an hour. Whether it is stated or not, there is a schedule. If we use 16:00:00 as a starting point, there should be a train at 16:01:40 (displayed as 16:01), 16:03:20 (displayed as 16:03), and so on.

    There is a joke in Japan about this. In Tokyo, the foreigner will say that the JR Yamanote Line runs every two minutes. The Japanese person will correct the foreigner by saying that the trains actually run every 1 minute 40 seconds. Even a 20-second discrepancy is fatal on such a line.

    With this in mind, let’s take a look at what happens on the TTC. The T35A08 typically travels at about 43 miles per hour on a clear stretch. This means it travels at about 63 feet per second. If a 20-second delay were to occur, that is roughly 1,260 feet of distance that cannot be travelled at regular speed by the train behind. On a fixed-block system, this is the distance of about three blocks. Even on a moving-block system, the train would need to slow down quite a bit before being allowed to creep up. This is why JR East specifies on-time performance as plus or minus 15 seconds. Anything more than that, and even ATO cannot even out the schedule.

    As speed increases, schedule adherence becomes even more important. If the TTC operated trains that travel at 160 miles per hour, a 10-second delay would cause minutes of delay for the trains directly behind.

  4. How difficult would it be for a software developer to create an app that works for both operators and supervisors? One that would allow communication with Transit Control, show in real time where all vehicles are at a given moment, separate them by run numbers, allow operators to sign on and off, and even let operators know if they are following the headway ±1 min? Somehow, I feel the technology to do all of this is possible, just slightly cost-prohibitive (the cost of a couple thousand tablets).

    Doing something like this would actually produce a real result: it would improve service. Sure, the way the TTC seems to be communicating with the public is a nice change (@TTChelps does work), but if cell reception on subway platforms or renovated washrooms are what people are talking about, we’re not going to get anywhere.

    Or we could just do this.

    Steve: A rose by any other name …

  5. Steve, are you more or less likely to get responses from TTC on recommendations like this, compared to previous commissions?

    Steve: There are two parts to a “response”. The first is that someone actually talks to me. This is more a function of individuals within the organization than the Commission. Brad Ross and Chris Upfold are very chatty, but that’s their job. The second part is that something actually happens. I don’t have a sense yet of how receptive the Commission will be to real rather than superficial improvements, or how strong their advocacy will be for restoring some of the cutbacks of the past two Ford budgets.

    As for changing the metrics by which service is rated and the level of detail reported, the TTC sounds interested but is defensive at the same time. They have been sipping on the Koolaid.

  6. I lived in Japan for 6+ years and after moving back to Toronto, I couldn’t (and still can’t) get over the number of delays due to medical emergencies and/or passenger assistance alarms. This very rarely happened in Japan. If vehicles were late, the ticket kiosk would hand out chits so you could have proof that your train/subway/bus was in fact late. Could you imagine the TTC handing these out? We would end up deforesting the entire Boreal forest.

    In Toronto, however, it seems like anyone with a splinter or the hiccups can hold up thousands of commuters. Andy Byford even made a plea to commuters to trigger the alarms for genuine emergencies only (good luck with that). It would be interesting to know:

    a) how the frequency of our passenger assistance alarms / medical emergencies compares globally
    b) why they seem to occur so frequently here
    c) how they can be lowered without compromising safety

    This all comes down to quality data provided in a timely and transparent manner, which would certainly light a fire under the TTC’s &%$ to get their act together.

  7. Agreed the 20 second interval is enough for doing the analysis…the problem is setting up a relatively robust way of getting that data every 20 seconds (i.e. reboot your machine once during the day and the stats are not going to be very accurate, as you’ve missed 5 minutes of data…) – I think I may have been able to get around this by rewriting my tool for the cloud…I think I’ll probably make use of Microsoft’s 90-day free demo when it’s done…after that…maybe the TTC wants to host it?

    Steve: I wish they would as this would simplify ad hoc requests for data. However, the external market for this is small (as a count of first-order users as opposed to those who might view the data filtered through an app). The real market should be TTC planners and their colleagues, but for years they were denied access to the data (the same info I get just by asking) thanks to internal wrangling.

  8. On the subway system, I wouldn’t really say that delays are that much of a problem. There is always a “passenger assistance alarm”, and there tend to be far too many delays when entering the last station of the line (Finch is notorious for this), but overall I find the performance of the subway system satisfactory.

    I am sure that if you tried to create “key performance indicators” for the 400-series highways, they would be much, much worse than the subway system’s. There is always a lane “blocked due to a collision” and congestion is severe and unpredictable, especially on the 401. You haven’t seen unreliability until you drive in Toronto traffic in rush hour.

    Steve: The point here is that if the first version of the KPIs already shows that everything’s just grand, thank you, how do you create an incentive to do better, or even suggest that maybe the problem is with an invalid KPI? If your measure of success is “the sun rose today”, this will get 100%, but won’t account for that lousy weather, or your collection of inside-out umbrellas.

  9. A thought about headway adherence vs. schedule adherence.

    A number of frequent routes actually consist of several overlapping branches that, considered in isolation, are not frequent. A good example of this is 96 Wilson, with four branches running every 20 minutes during the AM peak, overlapping to provide a 5-minute headway on the main section of the route. In this case, headway adherence is important on the main section east of Jane, but schedule adherence is arguably more important on the outer branches.

    In cases like this, can headways be managed on the main portion of the route while managing the schedule on the outer branches? Or are these concepts incompatible?

    (I use the 96 as a particularly good example, but one could point to any number of cases. The 501 service west of Humber could be another, for example.)

    Steve: I’m not sure that would be practical, especially with four branches. The likelihood that these will all run on some sort of manageable schedule is low, and the blending of services is probably routinely bad even if nothing else, like weather or traffic, adds its problems to the mix.

  10. One metric that may be hard to gather but is very meaningful to passengers is whether or not the bus is crush loaded. This, of course, is usually a consequence of failure to maintain headways, but above all else it is a plain and simple indicator of failure to manage the route effectively. In the absence of extraordinary unexpected loads, this should never happen.

    Steve: The major problem with TTC ridership stats is that they are reported on a peak hour and all day basis, but as totals. If you have 10 buses and 3 of them are empty, this looks on average like an appropriate level of service. Another problem is that a vehicle may have a light load because it is short turning and therefore does not really contribute to the line’s capacity, but it gets counted anyhow. Finally, the passenger counts don’t ever “see” the people who could not get on or gave up and walked or took taxis.

  11. I have added features to TransSee so that it now uses the GTFS data to show the scheduled arrival time for each vehicle and how far off schedule it is. This feature has to be enabled in the settings and currently only works on the TTC.

    The results have been remarkable. As ragged as the headways are, the schedule adherence is worse. On Queen about a quarter of the vehicles are out of order. On Dufferin vehicles vary from being 10 minutes ahead of schedule to 20 minutes behind schedule. I do see things like two bunched vehicles caused by one being behind schedule and one being ahead of schedule, but in most cases it is a lot more complicated than that.

    This only strengthens the argument for using headway-based operations rather than schedule-based.
