A New Way To Measure Service Quality?

At its recent Board Meeting, the TTC received a presentation [scroll to p. 3] from Chief Service Officer Richard Leary on plans to update management and measurement of surface route service quality.

The monthly CEO’s report includes a number of “Key Performance Indicators” (KPIs) intended to track various aspects of the transit system. However, the methodology behind some of the KPIs, notably those related to service quality, leaves a lot to be desired. Moreover, information that could track basic issues such as vehicle reliability is not included. This raises the question of whether the indicators exist more as a security blanket (“we have KPIs therefore we are good managers”) than as meaningful management tools, not to mention as reports to the politicians and public.

A telling chart on page 6 of the presentation shows how badly the TTC has drifted from transit industry norms:

[Chart: Service KPIs Assessment]

The TTC aims to have almost enough vehicles available for service relative to actual needs, and operates with a lower spare ratio than the industry overall. This has two effects.

  • When unusual demands for service arise, there is no cushion to roll out extras.
  • Vehicles are not maintained often enough to prevent in-service breakdowns. This shows up in a mean distance between failures (MDBF) that is very much lower than the industry average.

The situation is actually compounded by an internal measure of service delivery: a garage counts a bus as “entering service” if it makes it across the property line onto the street. Whether the bus runs for an entire day or breaks down a block from the garage, it counts toward service provided. This is complete nonsense, but shows how the construction of a metric can induce behaviour that is counterproductive. Actually keeping the bus in the garage could allow it to be repaired and improve reliability, but that’s not what the garage is measured for.

Moving to a higher spare ratio and more frequent routine maintenance on vehicles is expected to yield better service with fewer in-service breakdowns. Late in 2014, the TTC began this shift by slightly increasing spare ratios at each garage, and the MDBF for the bus fleet has risen to 7,000 km. This will have to be tracked over a longer time, however, to ensure that the improvement is permanent and can be linked to further increases in spares and maintenance work.

This has a non-trivial cost for the TTC. With a total scheduled service of about 1,500 buses, a 6% increase in spares represents 90 vehicles, or a substantial portion of a typical yearly bus purchase, not to mention a fair amount of garage space.
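The arithmetic behind that estimate can be sketched in a few lines (the figures are the article’s round numbers, not official TTC fleet data):

```python
# Illustrative arithmetic only: the inputs are the article's round numbers,
# not official TTC fleet data.
scheduled_peak_buses = 1500   # approximate total scheduled service
spare_ratio_increase = 0.06   # moving toward the industry-norm spare ratio

extra_spares = round(scheduled_peak_buses * spare_ratio_increase)
print(extra_spares)  # 90 buses, a large slice of a typical yearly purchase
```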

On the operations side, the TTC aims at roughly 2/3 “punctuality”, defined as a vehicle being within 3 minutes of its scheduled headway. There is some debate about whether the TTC is actually reporting relative to scheduled time, and for less frequent services the scheduled time is what matters more to riders. The problem is compounded by the mechanics of this measurement.

[Chart: Service performance measurement]

The measurement points do not include the termini of routes, and so the scheme rewards short-turning to preserve something vaguely like the scheduled headway in the middle of the route. It can be argued that there are more riders in the central section, but this is not valid for all routes. Moreover, ragged service on the ends of routes is echoed across the entire line thanks to irregular dispatching and bunching.

Missing from this chart is another important factor: the time of day. During periods when service is less frequent, reliability becomes even more important because scheduled headways are wider, and the time spent waiting for vehicles (including transfers enroute) can substantially add to total trip time.

The TTC proposes to change its measurements to look at departures and arrivals at terminals, and to count the number of trips actually operated versus scheduled service. The intention is to measure and thereby focus on a different view, a different philosophy of what “good service” should look like that is, with luck, better attuned to a rider’s view of the system. The four new metrics are:

  • Number of short turns
  • On time departures at terminals
  • On time arrivals at terminals
  • Missed trips (a side effect both of short turns and of missing vehicles)

The premise of “on time” measurements is that if the line starts on time from a terminal and aims to be on time at the other end of the line, there is a good chance it will provide better service over the route. This partly addresses a common behaviour seen in every service analysis I have published on this site (and much more unpublished data) showing that service is irregular at the outset of trips even under ideal conditions. However, without attention to service along routes, especially long routes with branches or scheduled turnbacks, quality can deteriorate for many riders.

The effect will also vary depending on the riding profile of a route. In some cases, one end of a line is a peak point (a subway station, say) with most travel oriented to and from it. Other routes have one or more peaks distributed along them (the downtown streetcar lines are good examples). Locations where service reliability will affect many riders will vary over the course of a day, and weekend patterns will not match weekdays. Managing the ends of the line is a start, but attention to mid points is important too.

One issue with reliability is the question of whether scheduled running times are adequate for typical traffic and passenger demand. In some, but not all, cases there are shortfalls, and service cannot stay on time. The inevitable result is short turns and a focus on keeping operators on time (to avoid overtime costs at the end of their shifts). In theory, keeping service on time with short turns should provide reliability at least on part of a route, but in practice late running, short turns, and bunching (some caused by the short turns) are an unending situation that frustrates riders.

If running times are extended, the problem becomes choosing a reasonable amount of time for typical conditions rather than scheduling for worst-case traffic and weather. A related problem is the issue of “recovery time” at terminals for operators to have a break, particularly after long trips. The TTC does not want to make “recovery” a contracted requirement, and yet it includes this provision in many schedules. In practice, however, that time exists more to pad out timings to make schedules work, and the “recovery” time can vanish when conditions change. A simple example of this occurs on branching routes where the round trip time for each branch must be a multiple of the common headway, or of two headways if vehicles always stay on the same branch. On 501 Queen, recovery times have nothing to do with conditions throughout the day, but only exist to make the blended Humber and Long Branch services work (at least on paper).

Finally, there is a limit to terminal capacity for vehicles that arrive early and/or have scheduled recovery times.

Experiments on 504 King and 512 St. Clair have included various tactics:

  • Extended running times to allow for actual conditions
  • Use of “run as directed” extras to fill gaps as an alternative to short turns
  • Increased supervision to manage on-time operation and minimize short turns

 

[Chart: Service performance, 512 St. Clair]

[Chart: Service performance, 504 King]

The changes in stats at the point where new schedules and supervisory tactics were implemented are evident in the charts above, but King and St. Clair show how two very different routes can have different results. The 512 St. Clair line is relatively short (7 km) and is entirely on a private right-of-way. Opportunities to “reset” service occur frequently with a one-way trip of about 30 minutes. Generous recovery times have been included in the schedules so that there is little excuse for late departures. By contrast, 504 King is a longer route (12.8 km), runs in mixed traffic through areas where congestion is common, and can be affected by spillover effects from events on parallel roads such as Queen Street or the Gardiner/Lakeshore. Moreover, it has multiple peak points and demand is not conveniently concentrated by location or direction. A one-way trip can take 65-75 minutes.

The contrast shows up in the on time arrival stats, where the new schedules have little visible effect on King, although the short turn and missed trip stats both go down, reflecting less need to cut trips short to keep operators on time.

On 29 Dufferin, the TTC took a different approach: dispatching on headways at terminals in place of schedule changes.

[Chart: Service performance, 29 Dufferin]

The big improvement here is in short turn stats. By definition, dispatching on headways will provide regular service, but these vehicles will not necessarily be “on time”. Riders don’t care, but the stats look terrible. Missed trip stats can improve with fewer short turns, but dispatching on headways may well result in fewer trips because the available vehicles are being operated to match actual conditions. (If the dispatch headway is 5′ in place of a scheduled 4′ value, the result could actually be better service if the 4′ schedule was not actually practical for on-the-street conditions.)
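The trade-off in that parenthetical can be sketched with the hypothetical 4′/5′ values above:

```python
# Sketch of the trade-off in the parenthetical above (hypothetical values):
# dispatching on a practical 5-minute headway instead of an unachievable
# 4-minute schedule delivers fewer trips per hour, but regular ones.
scheduled_headway = 4  # minutes, as written in the schedule
dispatch_headway = 5   # minutes, matched to on-the-street conditions

scheduled_trips_per_hour = 60 // scheduled_headway
actual_trips_per_hour = 60 // dispatch_headway

print(scheduled_trips_per_hour, actual_trips_per_hour)  # 15 12
```

The "missed trips" stat would count the three-trip shortfall even though riders on the street may see better service.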

This is an example of the problems of taking stats out of context. It is important to understand what the goals were for a specific route and time rather than simply making route-to-route comparisons where the management strategies could differ.

On March 29, 2015, the schedules for 29 Dufferin were changed to give extra running time with the following projected effect:

[Chart: Projected service performance, 29 Dufferin]

In a few months we will see whether the predicted changes actually materialize.

An important point here is that not all routes require extensive supervision and schedule revisions. The three routes above were chosen because they are busy and they had known problems with service quality. Meanwhile, on a much simpler route, 44 Kipling South, existing service is not too bad.

[Chart: Service performance, 44 Kipling South]

This is a short route (5.2 km) with a one-way trip time of 15-18 minutes. Note that its scheduled speed is somewhat higher than the 512 St. Clair car’s, reflecting the fact that this route has much different loading patterns and is able to run for much of its length in free-flowing traffic with few stops. On time departures are easy to achieve, and the relatively low proportion of on time arrivals indicates that the route has plenty of running time.

There are short turns during fall 2014, although their actual numbers are low. This is an example of the problem of using raw counts rather than percentages: we have no way of knowing what proportion of the service is affected. Intriguingly, the missed trip percentages are relatively high for such a small, simple route.

All of these stats are presented as weekly averages, and the real story from a rider’s perspective requires drilling down to daily or hourly data, possibly even to location-specific data for special cases. A relatively high on time performance can mask wide variations, with very good off-peak and counter-peak service hiding poor peak service. Likewise, relatively good service during weekday daytime operations with a high level of supervision can mask poor service in evenings and on weekends.

At the very least, the weekly stats need to include high, low and standard deviation values to flag situations for detailed review and improvement.
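As a toy illustration of why the averages alone are not enough, two invented weeks with identical mean on-time percentages can hide very different daily experiences:

```python
# Toy illustration of why weekly averages need high/low/std-dev values:
# two weeks with the same mean on-time percentage can hide very different
# rider experiences. All numbers here are invented.
import statistics

steady_week = [78, 80, 79, 81, 80, 79, 81]  # daily on-time %
ragged_week = [95, 96, 55, 97, 52, 96, 67]  # same mean, wild swings

for week in (steady_week, ragged_week):
    print(round(statistics.mean(week)), min(week), max(week),
          round(statistics.stdev(week), 1))
# 80 78 81 1.1
# 80 52 97 20.8
```

Both weeks report an 80% average, but only the min/max and standard deviation reveal that the second one includes days of very poor service.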

In a coming article, I will review the 512 St. Clair route in detail including data from before the right-of-way was built west from Bathurst to Keele and data from the period when the new schedules and supervision came into play.

A big challenge for the TTC is that “good service” should not require continuous, hands-on supervision for the basics such as leaving on time (or on headway) and avoiding bunching. Some of this must happen because “that’s the way we do it” – a phrase rationalizing past practices – becomes a badge of honour for the quality of service every day.

30 thoughts on “A New Way To Measure Service Quality?”

  1. I am assuming that the vehicle availability stat refers to scheduled service and not to all vehicles. The fact that the TTC has 99% availability suggests that they know they normally cannot run the scheduled service. The industry standard of 102% to 103% availability would mean that they have “hot spares” available to perform change-offs.

    A 6% to 8% spare ratio sounds good on paper, but it means that only 4% to 6% are available for maintenance if you need to keep hot spares for change-offs. If the stats are all a percentage of scheduled service, then a 20% spare ratio would mean you need 300 spares for 1,500 scheduled buses, for a total of 1,800, while a spare ratio calculated on total fleet size would give 360 spares for a fleet of 1,800 with only 1,440 operating. It would be nice to know which one it is, but I am going to assume it is measured against scheduled service instead of fleet size.

    Steve: It is against scheduled service.
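The two readings of a 20% spare ratio in the comment above can be sketched as follows (illustrative numbers from the comment, not official data):

```python
# Sketch of the commenter's two readings of a 20% spare ratio
# (illustrative numbers from the comment, not official data).
scheduled = 1500

# Ratio measured against scheduled service: spares sit on top of the schedule.
spares_vs_schedule = int(scheduled * 0.20)          # 300
fleet_vs_schedule = scheduled + spares_vs_schedule  # 1800 total

# Ratio measured against total fleet size: spares come out of the fleet.
fleet = 1800
spares_vs_fleet = int(fleet * 0.20)                 # 360
operating_vs_fleet = fleet - spares_vs_fleet        # 1440 in service

print(fleet_vs_schedule, spares_vs_fleet, operating_vs_fleet)  # 1800 360 1440
```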


  2. Robert Wightman said:

    “A 6% to 8% spare ratio sounds good on paper but it means that only 4% to 6% are available for maintenance if you need to keep hot spares for change offs.”

    To me the question has been there for some time: is one of the reasons the TTC has until now avoided changing service metrics that the change would lay bare the fact that they really need a large increase in their pool of buses to offer highly reliable service, and as such would become a major source of conflict with City Hall?

    Steve: Actually the old metrics predate Rob Ford. They effectively institutionalized the best that the TTC could do (which wasn’t very good) and set the bar low enough that the TTC would usually manage to pass it. I have been working to get this nonsense changed since the days of David Miller’s mayoralty, but there is a mystique about “experts” that pols are loath to challenge.


  3. Steve wrote: (re: 44 Kipling South) “On time departures are easy to achieve, and the relatively low proportion of on time arrivals indicates that the route has plenty of running time.”

    Is this a typo? Or are these buses not ‘on time’ because they arrive early?

    Steve: Yes, that is correct. The route has a low “arrival time” performance metric almost certainly because it consistently runs early. I am not entirely sure that this is a meaningful number, and worry that it requires detailed knowledge of each route to interpret. On that basis, it is not a value that can be used for system-wide comparison.

    On another note: the goal for short turns should be zero, and actual results should almost always be zero. Short turns should be reserved for physical constraints (e.g. road/track blockages), not schedule or labour reasons. Why? Because they are an egregious and infuriating affront to customer service. It would also happily make the choice of absolute or % reporting irrelevant.

    The fact that improvements are planned to be implemented with non-negligible short turns remaining (however many fewer) is a sign of brain-damage.

    Steve: As you say, there are legitimate reasons for some short turns, and keeping the counts will give a sense of how often this happens for whatever reason. Yes, the metric should stay low, but if it isn’t reported, then there is no check on real world operations. For example, the numbers might look good on sunny weeks in May, but not so good in freezing, snowy February. If the schedules cannot handle winter, then there is something wrong with the schedules, not the weather.


  4. Steve – great to see some details on the new approach. It does seem to be a step in the direction of measuring service on the street. I would like to see the on time departure target even higher, however, that would likely require more vehicles on many routes or schedule adjustments we may not like, so that may have to wait.


  5. Hi Steve,

    Thanks for linking to this report.

    My endless frustrations with the Dufferin and St Clair routes eventually ended up in a phone conversation with Rick Leary last fall. This was just before the scheduling pilot projects were put into place on these routes. I did notice a big improvement in reliability on Dufferin while the route was being actively supervised — bunching all but disappeared and service was much more predictable. Since then, things have gone downhill a bit (one can still wait 10-15 mins for a bus during rush-hour and watch in awe as a crowd of 200-250 people gather expectantly outside Dufferin Station) but things are better than they were before the pilot.

    Perhaps the greatest lasting advantage has been tacit “all-door boarding” or POP system at Dufferin Station. A bus arrives, passengers disembark through all doors, then people get on the bus through all doors. I am sure the system is getting scammed, but it speeds things up.

    On a related note, is there any break-down of “% of vehicles breaking down in service”? What do the numbers for subways, RT, streetcars, and buses look like on their own?

    Steve: They have these numbers, and they should be added to the published metrics. I am waiting to see whatever changes arrive in the CEO report. After the debacles of the TYSSE and ATC projects, I think TTC management needs to have its dirty laundry hung out where everyone can see it.


  6. Steve said:” Actually the old metrics predate Rob Ford. ”

    I was surprised, however, that Byford did not push for new metrics to be introduced earlier. It seems to me that in order for him to really make any substantial institutional change, he needs to be able to show that there are real and substantial issues from a customer perspective. Being able to show that we are not operating on time, and that we are the issue (vehicles not dispatched on time), is a real opportunity to have something that can be fixed.

    The other thing, of course, is that I love the fact that they are showing early as an issue as well as late, and both as a negative. (I have no issue with getting to my stop early; however, I do have one with the bus being there before it is supposed to be when I am not yet on it.)

    Steve: I think part of the issue is that the new metrics have not been implemented for all routes yet, and so a system-wide measure cannot be produced. Once those numbers are available, there will also be the question of just how well they do reflect the health of service across all routes. However, basics such as MDBF stats for all modes, together with tracking of in service failures and reasons for major delays are available now and should be published.

    If the TTC is making efforts to improve its system, we should be able to see the numbers. Conversely, if the “improvements” are not working, we need a discussion of why if only to establish whether the presumed reasons behind service unreliability are actually correct.


  7. I’d be interested in the time that buses spend in the “spare pool” … in theory they should come in, get their maintenance done and go out … how many are “multi-day” or “permanent spares” due to reliability issues or age problems … if the majority of buses are in the spare pool for less than a day (i.e. minor maintenance), is there any benefit to just doing that maintenance overnight so that the number of spares required could come down?

    Steve: Actually there are at least three types of “spares” for maintenance. One is for regular planned work that should pick up any routine problems before they cause in service failures. One is for work done under warranty to retrofit fixes to new vehicles. The third is for major overhauls that take a vehicle off of the street for weeks. Not mentioned in the presentation is the question of scheduling routine maintenance so that as many subsystems are reviewed in one pass as possible. There have been problems with scheduling work for the same vehicle at different times.

    Likewise the numbers should show the number of buses per route arriving early and the number arriving late … if only 5 arrive late but are 2 hours late each, and every other bus is early but only by a minute … the stats could wash, and it might not look too bad.

    Anyways, it looks like there is an attempt to improve things now … whether it’s enough will be in the numbers.


  8. Is there still on-street supervision of the St. Clair car? On Saturday just after 12 noon, while I waited for a 71 Runnymede at Gunn’s Loop (a poor, infrequent service at any time), five cars came and went before one bus arrived. One car arrived and then immediately left. A second car was waiting to enter the loop, and a third was held at the traffic signal approaching. All three travelled east in a convoy. No TTC supervisor was anywhere in sight.

    Steve: I think that based on the vehicle tracking data I have looked at, the new supervision is only on weekdays during the daytime. The route reverts to its “normal” state otherwise.


  9. The Mean Distance Between Failures (MDBF) numbers are pretty shocking to me. They seem extremely low compared to private passenger vehicles, which likely run more than 100,000 km between failures with regular maintenance. Granted, TTC buses travel 4x the average distance in a year, and are expected to last 15 years (or 18 with overhaul) against a passenger car’s 15 years. Obviously buses have more features and moving parts, but 2.5 to 6K MDBF? 2K is one week, so that’s a failure every 1 to 3 weeks for each bus. Something is terribly wrong here. I’m not impressed by the industry 10K figure either.

    I am most curious to know what the most common causes of failure are.


  10. There is an easy way to address the “on-time arrival at terminus” issue for routes like Kipling South: change the metric to “arrives late at terminus.” Like Malcolm N said, riders only care if the vehicle is early if they are trying to catch it. At the last stop or two, stops that are close enough to the end that boarding is negligible, being early is a bonus, especially when approaching a terminus at a subway station. Implement the “on-time” metric at intermediate points, including one reasonably close to the station (say, at the last stop with considerable boarding). I would hate for drivers to dawdle through the last few stops before a subway terminal because of pressure to make the “on-time arrival at terminus” metric look better.

    Steve: Mid-route timepoints and stats have to be tempered by type of route and even time of day. During periods of wide headways, on time performance is vital to passengers who use the schedule to plan journeys. However, with “frequent service”, reliability of the headway is more important. Some routes have conditions that vary, and holding to a fixed schedule could be counterproductive. Stats based on “on time” values, as opposed to headway reliability, could encourage line management behaviour that actually interferes with service. My concern is that the TTC not create a single metric attempting to condense everything into a single model. When the targets against which management is measured don’t fit the real world, the inevitable effect is to “game the system” to make numbers look good.


  11. Steve said:

    “If the TTC is making efforts to improve its system, we should be able to see the numbers. Conversely, if the “improvements” are not working, we need a discussion of why if only to establish whether the presumed reasons behind service unreliability are actually correct.”

    Yes, it is critical that the measures actually reflect service on the street, and targets not just be what the TTC can achieve falling out of bed.

    Steve said:

    “My concern is that the TTC not create a single metric attempting to condense everything into a single model. When the targets against which management is measured don’t fit the real world, the inevitable effect is to “game the system” to make numbers look good.”

    Amen – active headway management, for instance, makes sense on the busy routes, and running early does not matter when the bus before you did, as did the one prior to that. Also, if you close up by a minute, so what, as long as the bus behind you is close to the correct headway (i.e. the driver is not creating a gap behind). However, on scheduled service, I would prefer to see them be on time, even if that meant running slow for the last few stops. I have had the experience on routes with scheduled services of seeing a bus pull away before I got to the stop, and I was 5 minutes early. The next bus was not due for another half hour; what I gain from arriving early is trivial compared to the irritation of sitting at a stop because a bus was running “hot”. So yes, I believe having metrics that reflect route type is very important; early can in fact be worse than late, and much easier to fix, as long as there is a measure and incentive to do so. I would argue for a bounded measure where anything more than a minute early or more than 3-5 minutes late on scheduled service counts as a negative event.

    Steve: I know that Rick Leary at the TTC is considering an asymmetrical measure of “on time” that allows more leeway on the “late” side than “early” because being early has a major effect on riders as you describe, and can be “fixed” simply by sitting for a time at a stop.
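A minimal sketch of such an asymmetrical window is below; the one-minute-early / five-minutes-late bounds are my assumption for illustration, not the TTC’s actual thresholds.

```python
# Minimal sketch of an asymmetrical "on time" window. The bounds are
# assumptions for illustration, not the TTC's actual thresholds.
EARLY_LIMIT = -1   # minutes before schedule still counted as on time
LATE_LIMIT = 5     # minutes after schedule still counted as on time

def on_time(deviation_minutes: float) -> bool:
    """deviation_minutes < 0 means the vehicle is early."""
    return EARLY_LIMIT <= deviation_minutes <= LATE_LIMIT

print(on_time(-3), on_time(0), on_time(4), on_time(8))
# False True True False
```

An early departure fails the test much sooner than a late one does, reflecting the greater harm to waiting riders.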


  12. The stats for 44 Kipling South are all nice and heartwarming until you actually try to catch a southbound bus in the AM or a northbound one in the early afternoon. Then you will discover that the buses are packed with high school and college students riding the entire length of the route to/from school. It is a common source of complaints that there are more people waiting at Kipling station than can get on in the morning. In the afternoon, I’ve been bypassed by two northbound buses at Horner and Kipling because they were chock-full. Someone who works at that corner told me that this is the usual situation; we both gave up and took a 110A to Islington station instead.

    There needs to be some kind of measure for “people left behind at stops”. The King and Queen cars are certainly not the only routes that routinely do this. The vehicles can be on time, but that’s no help if you can’t board them.

    Steve: An excellent point, and one that is very hard to measure from afar. Mind you, the onboard passenger counters (once they are all in place and linked to real time route tracking) will give us info on this. I know there are plans to incorporate these data into the master database of service tracking info, but parts of this must await the new CAD/AVL (Computer aided dispatch, automated vehicle location) system to be implemented over the next few years.


  13. Steve,
    When I saw the article heading, I was honestly expecting one of your more typical posts heavily critiquing TTC management practices, but in this case I am honestly not sure of your opinion on this change.

    I feel like this is all heading in the right direction, and so while change isn’t fast, it’s at least happening. Lest we forget the institutional inertia that exists at all multi-thousand-employee institutions.

    Are you taking a wait-and-see approach to the next round of stats and the system-wide measures before a more exhaustive critique?

    Steve: Yes. At this point I am reporting on their intentions, but from talks with TTC management, I know they have far more data available. The question is which data will be incorporated into an updated report on service quality, and what the new targets will be.


  14. Steve said:

    “An excellent point, and one that is very hard to measure from afar. Mind you, the onboard passenger counters (once they are all in place and linked to real time route tracking) will give us info on this. I know there are plans to incorporate these data into the master database of service tracking info, but parts of this must await the new CAD/AVL (Computer aided dispatch, automated vehicle location) system to be implemented over the next few years.”

    Both of these are exciting and, if they are being dynamically tracked, have huge possibilities for service management and improvement. With carefully placed hot spares, you would be able to address an issue before it became a no-board condition. Just having a regular count of vehicle loading by location and time would likely allow you to predict the exact location of many of these high loads in advance, and assign vehicles to run ahead of them.


  15. Steve:

    Actually the old metrics predate Rob Ford. They effectively institutionalized the best that the TTC could do (which wasn’t very good) and set the bar low enough that the TTC would usually manage to pass it. I have been working to get this nonsense changed since the days of David Miller’s mayoralty, but there is a mystique about “experts” that pols are loath to challenge.

    It would be interesting to see how good a reality-check report card we could develop from the publicly available data. Infographics are popular because they work, and pols don’t challenge “experts” because they are normal people and can’t get their heads around the numbers. Anyone want to try?

    Steve:

    My concern is that the TTC not create a single metric attempting to condense everything into a single model

    I would suggest that they have a group of metrics to break down lines by “type” of service. Some data will still be round pegs in square holes, but at least there would be a choice of a better fit.

    Steve:

    Yes. At this point I am reporting on their intentions, but from talks with TTC management, I know they have far more data available. The question is which data will be incorporated into an updated report on service quality, and what the new targets will be.

    Would this data be available through a FOI request? Or is it already in the public domain? I think the level of transparency around the information that they don’t use is very telling of how much the report is a typical happy-happy song-and-dance.

    Steve: An FOI may be required depending on how much is involved in publishing the data. Things like MDBF are tracked already. Detailed stats for every route are not, as the TTC is still in the process of adapting their newly developed software to a variety of routes, and they are concentrating on the “problem children” first. I’m prepared to wait and see what they do publish, and then go after the rest if there are big gaps.


  16. Don’t surface operators have a code that they punch in for transit control when they reach sardine levels and have to pass up passengers on subsequent stops? That could be a crude proxy for the situation that Ed describes.

    I agree entirely about “on headway” vs. “on time”… just trying to keep the discussion simple since that is what is presented in the table. And further to my point, “on headway” is not particularly meaningful at the far end of a route, beyond the last stop that routinely sees boarding passengers.

    Steve: Yes, there is a button to signal overload conditions, but I am not sure if this info is tracked in permanent data that could be reviewed for planning purposes.


  17. I can’t help but feel that the numbers in the first table would be a lot more useful if the streetcar fleet were broken out from the bus fleet. Are the buses really that unreliable (>5% breaking down in service) or are streetcars pushing that number up? By how much? There is a shortage of streetcars; is there also a shortage of buses before the need to make up streetcar service? (i.e. is the 99% service availability systemic, or purely a function of a much larger – percentage wise – shortfall in streetcars?) Similarly with spares and MDBF.

    Steve: The numbers shown are for the bus fleet. The streetcar numbers are worse.

  18. Steve said:

    “The numbers shown are for the bus fleet. The streetcar numbers are worse.”

    Yes, it really does require a fleet large enough to leave significant time for aggressive preventive maintenance when you are running the vehicles that long and that hard every day. There is a need both for more service on the street and for lower loadings, so that there is less urgency about keeping every vehicle in service. The vehicles run much longer and further than cars, yes, but the conditions they operate in are also the worst for service life and durability: constant stop and go, with real loads.

    If we think in terms of daily operations and a vehicle shortfall: if you have a significant spare pool, you are not sending marginal vehicles into service. If your fleet consists only of vehicles that do not work, vehicles you have actively torn apart for repair, plus a handful that need work but will still run, then the pressure will be there to put that handful on the street.

    Steve: For 2013, the fleet of 1,851 buses travelled a total of 129,577,000km or about 70,000km per vehicle on average. The fleet of 247 streetcars travelled 12,451,000km or about 50,400km per vehicle. Some of the difference is due to spare ratios, and some to average scheduled speed for the streetcar network. The average speed of the bus network (in 2012) was 19.7km/hr while for the streetcars it was 14.0.

    The current (March 29, 2015) schedules call for a peak streetcar service of 202 vehicles out of a fleet of 252 (including the five Flexitys now in revenue service), a spare factor of almost 25%.

    If we convert the mileage values to hours, the values are 3,553 hours/vehicle for buses and 3,601 hours/vehicle for streetcars. Even with the higher spare ratio, the streetcars actually provide more hours of service per vehicle than the buses because the ratio of off peak to peak service is higher on the streetcar network than on the bus network. More streetcars, proportionately, are in service for long hours than buses because of the nature of the routes they serve.
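    For anyone who wants to check the arithmetic, here is a quick sketch using the fleet sizes, mileages, and speeds quoted above. This is my own illustration of the calculation, not anything from the TTC:

```python
# Per-vehicle figures from the quoted 2013 mileage and 2012 average speeds.
bus_km, bus_fleet, bus_speed = 129_577_000, 1851, 19.7   # km, vehicles, km/h
car_km, car_fleet, car_speed = 12_451_000, 247, 14.0

bus_km_per_veh = bus_km / bus_fleet      # about 70,000 km per bus
car_km_per_veh = car_km / car_fleet      # about 50,400 km per streetcar

bus_hrs = bus_km_per_veh / bus_speed     # about 3,553 hours per bus
car_hrs = car_km_per_veh / car_speed     # about 3,601 hours per streetcar

# Spare factor for the March 29, 2015 schedules:
# 202 cars in peak service from a fleet of 252.
spare_factor = (252 - 202) / 202         # just under 25%
```

    The hours figures come out almost identical even though the streetcar spare ratio is much higher, which is the point made above about off-peak service levels.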

  19. Steve said:

    If we convert the mileage values to hours, the values are 3,553 hours/vehicle for buses and 3,601 hours/vehicle for streetcars. Even with the higher spare ratio, the streetcars actually provide more hours of service per vehicle than the buses because the ratio of off peak to peak service is higher on the streetcar network than on the bus network. More streetcars, proportionately, are in service for long hours than buses because of the nature of the routes they serve.

    Yes, this is an amazing amount of service on an annual basis: basically 10 hours per day, 7 days a week. Perhaps more to the point, it is nearly the equivalent of two full work-years of hours for each streetcar or bus (someone working 9 to 5 with a half hour for lunch, 50 weeks a year, puts in 1,875 hours; two such years is 3,750). When you think of the amount of servicing required under that kind of heavy use, especially for a vehicle that is already over 10 years old, the engine hours under load (a better metric than kilometres) are absolutely insane for these vehicles.
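    The “two work-years per vehicle per year” comparison can be spelled out like this (my own back-of-envelope check, using the 3,553 hours-per-bus figure quoted above):

```python
# One office work-year: 9-to-5 with a half-hour lunch, 50 weeks.
work_year = 7.5 * 5 * 50          # 1,875 hours

bus_hours = 3553                  # annual in-service hours per bus, as quoted
per_day = bus_hours / 365         # roughly 9.7 hours every single day
work_years = bus_hours / work_year  # roughly 1.9 work-years of running time
```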

  20. @Malcolm:

    The number of engine hours for streetcars is completely unremarkable, chiefly because they have electric motors, which should have a much longer life and greater MTBF than any engine.

    Consider the life of other electrical transport modes for comparison, rather than buses.

    The limiting factor of longevity is probably electronic control systems, not motors.

    Steve: This was particularly an issue for the CLRVs and the early generations of solid state subway controllers because the technology changed so fast. Couple that with specification of “military grade” components (thank you UTDC) and you have a recipe for hardware that goes stale very quickly.

  21. Ross Trusler said:

    “The limiting factor of longevity is probably electronic control systems, not motors.”

    In the case of electric motors there is the simple question of age, and of being relatively exposed to the weather. I agree it is not the same as a diesel engine (note: engine hours, not motor hours). Thirty years of use is not hard on heavy electric motors, but the controls etc. still have issues. I was, however, focused on the buses.

  22. PS: To me the control issue really is a design question. These systems should be designed with very well understood inputs and outputs, so that they can be reasonably replaced. Beyond that, it is another argument for opting, where feasible, for electrically based equipment: it is much easier to maintain the basic drive systems for a very long time.

  23. Malcolm said:

    “PS. to me the control issue really is a design question. These should be designed with very well understood inputs and outputs, so that they can be reasonably replaced. Beyond that it is another argument for opting where feasible for electrically based equipment. It is much easier to maintain the basic drive systems for a very long time.”

    While I wholeheartedly endorse the principle of design by interface contracts, I doubt that maintaining an interface is realistic or at least economically feasible over a 40+ year time span. Even in the electrical sphere, common design voltages and paradigms now change over that long a period, although that is hardly insuperable. Any implementation now will be electronic for economic reasons. Meanwhile, if we are to squeeze more capacity out of our roads and transit, control complexity will continue to rise exponentially, placing electric control in our past. Transit vehicles will increasingly be nodes in a computing mesh.

    This is not to say that it can’t be done, indeed it certainly can, but it is not without other costs.

  24. Ross Trusler said:

    “Any implementation now will be electronic for economic reasons. Meanwhile, if we are to squeeze more capacity out of our roads and transit, control complexity will continue to rise exponentially, placing electric control in our past. Transit vehicles will increasingly be nodes in a computing mesh.

    This is not to say that it can’t be done, indeed it certainly can, but it is not without other costs.”

    I agree with the notion that it is unlikely to be worthwhile to try to design a new control system for the old streetcars. However, as we buy new equipment, it would be nice to be able both to maintain the system and to update it to stay current with new technology. The need for new streetcars is clear regardless, but it is not so clear that we will never want to update the controls of the new cars as time moves forward. While 40 years is a long time, it is quite possible, with steady maintenance, to keep an electrically driven system working reasonably well, and updating the controls will allow it to work with whatever new systems come into place.

    Steve: It is important to distinguish between the power control system and the propulsion gear. The former is digitally controlled solid state while the latter is electromechanical. One can swap out the control package without changing the motors and related mechanical equipment.

  25. Steve – to get myself back on topic, I would really like to say that this particular change in direction by the TTC genuinely excites me. I think they should be targeting an even higher level of on-time departures, somewhere north of 95%, although on many routes this will likely mean allocating more vehicles.

    I also hope that for the really busy routes especially, they create a vehicle loading condition report card. As you have noted, having a vehicle on time or on headway does not represent service if I cannot board it, and does not represent good service if it is packed to the rafters. I would also hope the reports would include service conditions on these routes on a time-of-day basis, and that the TTC would start exception, or out-of-bounds, reporting. The report cards also need to highlight the number of times, and when and where, vehicles were seriously overloaded, or when stops or platforms could not clear because of space limitations in the vehicles serving them.

    The new reporting system needs to be able to highlight where there is already trouble in capacity, and where trouble is brewing.

    Steve: If you’ve been following on Twitter, this morning has been a textbook example of laissez-faire botched up service on St. Clair with packs of cars running back and forth and no apparent supervision to short turn or space them out. Are the bad old days back? TTC isn’t replying.

  26. Steve said:

    “If you’ve been following on Twitter, this morning has been a textbook example of laissez-faire botched up service on St. Clair with packs of cars running back and forth and no apparent supervision to short turn or space them out. Are the bad old days back? TTC isn’t replying.”

    I think the way the service metrics are managed will be a big part of the longer-term answer to this. If the metrics do a good job of highlighting the failures, then the failures will likely be addressed; if they give a solid passing grade when there is bad service, well…

    I think someone has to come up with something that resembles an executive information system for the transit report cards. Years ago I worked on corporate systems like this: they would present rolled-up data but permit burrowing in, and the design could be set up so that programmed logic would throw a flag at the rolled-up level to highlight pertinent variance within the data. (The profit line would look good, but when you burrowed in, it might all come from a single division or a single region.)

    Here, if we were reporting events (as discussed under St. Clair), the system could be designed to throw a flag to encourage digging if there were a reasonable number of total gaps but they were largely concentrated on a handful of routes, or at certain times of day, etc. If we are to provide council, management, and the public with real reporting that encourages good management and genuinely shows issues, it will need to be somehow interactive, with the ability to show this type of concentration. So if the TTC had fixed all the routes except St. Clair, there should still be a naughty little red underline on a green data point, just begging to be clicked on.

    Steve: The situation on St. Clair appears to be a combination of two problems. First, the schedule data loaded into NextBus does not appear to have all trips included, and so NextBus does not track the trips it does not know about. This is playing havoc with displays of vehicle locations and predicted arrival times.

    Second, even without this there were cases where parades of cars (the ones Nextbus was actually tracking) departed en masse as if nobody was managing the service.

    I agree that exception reporting is essential, but it is the detail that throws up those exception flags.
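    The kind of concentration flag being discussed might look something like this in outline. The route names, data shape, and threshold here are invented purely for illustration, not anything the TTC actually runs:

```python
from collections import Counter

def concentration_flag(gap_events, share_threshold=0.5):
    """Flag when headway gaps are concentrated on one route.

    gap_events: a list of route names, one entry per recorded gap.
    Invented rule: flag a route if it accounts for more than
    share_threshold of all gaps recorded system-wide.
    Returns the offending route name, or None.
    """
    if not gap_events:
        return None
    counts = Counter(gap_events)
    route, n = counts.most_common(1)[0]
    return route if n / len(gap_events) > share_threshold else None

# A day where most of the gaps come from a single route trips the flag.
events = ["512 St Clair"] * 8 + ["504 King"] * 2
```

    With the sample data, `concentration_flag(events)` points at 512 St Clair; a day with gaps spread evenly across routes returns nothing, even if the system-wide total is the same.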

  27. Steve said:

    “I agree that exception reporting is essential, but it is the detail that throws up those exception flags.”

    I really do think there would need to be many details, any one of which, being out of reasonable bounds for any real period, would throw up a flag. To be honest, that is one of the beauties of a well designed Executive Information System, and why they are so hard to do well: actually deciding the extent and nature of the conditions that trip an upper-level flag, and for whom. Throw too many at a Councillor or the Mayor and people will start ignoring them. Of course, the flags can be tiered, and even designed to show on one person's screen and not the next: not at the council level below a certain point, but still on Byford's; then not on Byford's, but still on a district manager's screen. All have the choice to dig, but not all need be flagged for each bound. It really can be a very involved process to get right, and it needs to be designed to reflect which conditions really are important for each level of management.

  28. Steve – one of the other flags that should really be thrown, for the planning group, would be wait times for buses at intersections, and how those waits are distributed (with large concentrations at certain intersections highlighted). If the wait is based on holding for schedule or headway, that of course would need to be noted; otherwise, it would be very useful to have an automatic track of these intersection waits and an automatic look at where they are concentrated. If, over a long period, the same intersections represented a serious delay (or were massively over-represented in the overall delay), then relative signal times, signal timing, and signal priority adjustments could be looked at, and where there was room, even queue bypass lanes could be built.

    The vehicle location/tracking system data could well be used to make small, nearly invisible changes that could result in significant improvements in how transit operates, especially in areas where the road allowance is wide enough to permit bypass lanes. Data from the AVL could also be used to trigger a deeper study of certain areas along a line.

    The real question in my mind is how to make some of this a priority at the council level as well, so that quality of service does not always have to become a question of mode when council and the public become involved.
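    The intersection screening step suggested above could be sketched roughly like this. The data shape (intersection name, seconds of signal delay per AVL record) and the 25% threshold are assumptions of mine, not an actual TTC feed:

```python
from collections import defaultdict

def overrepresented_intersections(waits, share_threshold=0.25):
    """From AVL-style records of (intersection, wait_seconds),
    return intersections that account for more than share_threshold
    of total signal delay, sorted by name. A crude first screen;
    holding for schedule/headway would need to be filtered out first."""
    totals = defaultdict(float)
    for intersection, seconds in waits:
        totals[intersection] += seconds
    grand_total = sum(totals.values())
    if grand_total == 0:
        return []
    return sorted(i for i, s in totals.items()
                  if s / grand_total > share_threshold)

# Sample records: one intersection dominates the delay.
waits = [("King & Spadina", 300), ("King & Bathurst", 40),
         ("King & Spadina", 200), ("King & John", 60)]
```

    With the sample data, only King & Spadina crosses the threshold, which is exactly the sort of location where signal timing, priority, or a queue bypass lane would merit study.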

  29. …and what is going on with the streetcar drivers on St. Clair?

    In other cities, I’ve watched bus drivers manage headway on their own. The bus driver knew when he’d pulled up right behind another bus on the same route (it was obvious). He would watch the passengers board the bus ahead. If that bus was full, with additional passengers waiting to board, he’d go in a convoy — but otherwise he’d then look at his watch and wait for the appropriate headway before departing the stop.

    It barely requires management; any idiot can do it. Except, apparently, the guys driving the streetcars on St. Clair… presumably there is some kind of messed-up incentive for them to travel in a convoy rather than managing headways?
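    The driver's rule of thumb described here is simple enough to write down. The inputs (gap to the leader, scheduled headway, whether the leader is full) are hypothetical; a real system would get them from AVL and loading data:

```python
def departure_action(gap_to_leader_min, scheduled_headway_min, leader_full):
    """Decide what a trailing vehicle should do at a stop, following
    the informal rule described above: convoy only when the leader is
    full, otherwise hold until the scheduled headway has opened up."""
    if gap_to_leader_min >= scheduled_headway_min:
        return "depart"          # proper spacing already exists
    if leader_full:
        return "convoy"          # leader can't take more riders; help out
    return "hold"                # wait until the gap opens to the headway
```

    A trailing car right behind a half-empty leader holds; right behind a packed one, it convoys; with a full headway ahead of it, it simply departs.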

Comments are closed.