How Should We Measure Transit Service Quality?

Introduction

The question of service quality has been a central thread on this site more or less since its inception.  It is not enough to have service on a street (or even in a subway or on a private right-of-way) if it shows up unpredictably, or can’t be used because it is overcrowded or short-turning before it gets to many riders’ destinations.

For as long as I can remember, the TTC’s stock excuse for poor service was “traffic congestion” coupled with “it is impossible to provide good service with streetcars running in mixed traffic”.  When detailed information about vehicle movements on the transit system became available, it was quickly evident that congestion was only one problem.  Moreover, some bus routes on wide avenues exhibited service quality almost indistinguishable from streetcars tethered to rails on narrow streets.

After a period when Toronto supported more spending on transit to improve loading standards and hours of service, the city swung to the right, treating transit service as a waste of taxpayer dollars.  Despite cutbacks that could throttle demand, transit riding continues to rise, and with it the problems of service quality.  Much of the service improvement we do see is funded not by subsidies but by fare revenue, not to mention by overcrowding.

The TTC has focused much effort on the “soft” improvements — cleanliness, information systems and customer relations — but for the really important one — the service it actually provides to riders — the jury is still out.  The situation is compounded by the budget constraints of the Ford/Stintz era: just getting by with trims around the edges, with no sense of a plan to make substantial improvements.

The time is overdue for a clear direction on improving transit service.  The answer is not just to run more buses or build more subways, although service improvements are needed.  We must also run the buses and streetcars we have more reliably.

The common thread through measurement schemes is that a transit system must be viewed from the passenger’s point of view.  They are the people actually riding and telling their car-driving friends how good or bad transit is.  In Toronto, at least, the riders are also substantially paying for the service.

How should we measure how the system is performing now and in the future?

For those who do not want to read to the end, no, I do not have a grab bag of solutions, a “right way” to do things.  What we do need is a better understanding of how the system behaves at a detailed level — are there specific problems on individual routes that can be removed or at least lessened, and are there systematic problems with transit operations?

Some issues are external — there really is traffic congestion — but the question to answer is how we will deal with it.  Will transit priority really take precedence at a possible cost to other road users?  Some issues are internal — is there really enough service on the road, and could these vehicles be better managed?  What improvements will riders accept with glee — service reliability — and which will they regard as “nice to haves” that don’t address the underlying problem that “my streetcar never shows up when I need one”.

Detailed reporting together with measurements that riders can understand are essential to maintain the transparency and credibility of a transit agency.  One common element through this review of many systems and papers is that any measurements should be based on what the rider sees, not on management’s view and goals.  The purpose should not be to trumpet how good Toronto’s transit is, but to find how to make it better.

Acknowledgements

Transit service quality has been a topic for others, notably Jarrett Walker and his blog Human Transit.  For starters, his articles on on-time performance are worth a read.

Several transit systems cited in this article have their own approach to measuring and reporting on service quality.  Of these, the most extensive is found in London, UK, in part because monitoring their private sector operators requires detailed metrics people can understand and agree to.

A technical approach can be found in the Transportation Research Board’s Transit Capacity and Quality of Service Manual (2nd edition, 2003).  Although the metrics proposed by TRB are more complex than most systems would be likely to implement, the underlying discussion makes several important points about aspects of service quality.

The American Public Transit Association Service Quality Handbook (revised 2011) builds off of the TRB report.  It goes into great detail about the many factors affecting a rider’s perception of transit service, but sidesteps actually defining metrics for those factors.  Moreover, it spends a disproportionate amount of time on organizational, big-picture issues, and the managerial focus drifts a bit too far from day-to-day reality for my liking.

Additional papers of interest are listed at the end of the article.

What the TTC Does

Every month, the TTC Chief Executive Officer’s Report includes a “scoreboard” showing the behaviour of various transit operating factors relative to their targets.  Among these are reliability measures for the rapid transit lines, streetcar and bus systems.  A subset of this information is published in a daily report on the TTC’s website.

The TTC’s system target is that service should operate within ±3 minutes of the scheduled headway to be counted as “punctual”.  For reasons best known to the TTC, there is an inconsistency between the CEO’s Report and the Daily Report.  The CEO’s report sets the standard relative to scheduled times while the Daily Report claims that it is relative to scheduled headway.

These are not the same values.  Service that is on time will necessarily also meet the scheduled headway target, but not the other way around.  Riders don’t care that buses are on time, only that they arrive at the advertised spacing and, for infrequent routes, at the advertised time.  Every bus on a route may be 20 minutes late, but provide the expected level of service.  For such an important measure, the TTC should at least be consistent, and routes should be managed to the alleged target.

For rapid transit operations, the target is to have 96% of trips within the target headway range.  For streetcars and buses, the targets are 70% and 65% respectively.  These values were not set by some clearly understood formula, but rather they are based on historical values of the indices.  There has never been a discussion of what these targets mean in terms of service from the customers’ viewpoint, nor of the degree or type of change in operations needed to improve the values.

The underlying methodology of calculating these measures has not been published, but it is not hard to see that regardless of the numerical results, the values are a very crude way to measure system performance.

On the Subway

The subway target is 96% of trips ±3 minutes of scheduled headway, but:

  • Values are all-day averages combining observations at multiple locations.  The TTC’s calculation weights peak period service at 2/3 of the consolidated index even though the peak represents a minority of the total service hours and trips provided.
  • Riders do not experience average service, but specific levels of quality (or lack thereof) at specific times and places.  The real question to be asked is “how often is a rider likely to encounter a problem in making their trip”, and this is not answered by a system-wide measure.
    • If a route like the Yonge Subway is measured at several points, there is a good chance it is running “normally” at many of them even though a major delay may foul service in a specific segment.  Good service at Wilson Station is of little use to someone trying to get through Bloor-Yonge, and conversely a delay at Bloor-Yonge may affect riders making a wide variety of origin/destination trips passing through that critical site.
  • Where frequent service is scheduled such as on the subway, a major disruption is required for headways to go beyond the 3-minute margin for an extended period.  Even if there is a delay, only the first train carries a wide headway while those behind it follow as close as the signal system will allow and are counted as “punctual” trips.
  • There is no measure of service quantity.  Trains are scheduled every 140 seconds at peak (25.71 trains/hour), but they are still “punctual” if they are 320 seconds apart (140+180).  A service less than half of what is scheduled (11.25 trains/hour) would get a 100% rating from the TTC’s methodology.  Service that runs at less than scheduled capacity, or which is overcrowded, cannot provide riders with the advertised headway.  The “headway” seen by a rider is related to the length of time needed to get on a train, not for the first train to appear in a station.
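The arithmetic in that last point is worth spelling out.  A short sketch (using the headway and tolerance values from the text above):

```python
# TTC subway punctuality: scheduled peak headway plus the +/- 3 minute
# tolerance (all values in seconds, taken from the discussion above).
scheduled_headway = 140   # trains every 2'20" at peak
tolerance = 180           # the 3-minute margin

# Every train could run at the wide edge of the window and still be "punctual".
worst_punctual_headway = scheduled_headway + tolerance   # 320 s

scheduled_rate = 3600 / scheduled_headway        # trains/hour as scheduled
worst_punctual_rate = 3600 / worst_punctual_headway  # trains/hour at the edge

print(round(scheduled_rate, 2))   # 25.71
print(worst_punctual_rate)        # 11.25
# Less than half the scheduled service, yet a 100% "punctual" rating.
print(worst_punctual_rate / scheduled_rate < 0.5)   # True
```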

On the Surface

Surface routes are quite another matter.  The targets here are 70% for streetcars and 65% for buses.  Headways measured at several points on routes should not be outside of the three-minute margin.

  • Data for all routes, locations and time periods are lumped together in one index.  On a system-wide basis, 65-70% is not very impressive, but even this target can mask far worse service quality at specific locations and times.
  • The absence of forced vehicle spacing (as on the subway) plus wider headways means that surface routes can operate wildly outside of the headway targets.
  • Surface headways, especially outside the peak period, are long enough that ±3 minutes can be a relatively narrow band, but only for infrequent services.  A 20 minute headway on the timetable may range from 17 to 23 minutes and yet be considered “punctual”.  Running early is particularly bad for customers because they face a long wait if they miss a scheduled trip running “hot”.
  • With shorter headways, the six minute band makes bunching acceptable.  For example, on a 5 minute headway, as long as vehicles stay within the range of 2 to 8 minutes they are “punctual”.  (Alternating 9 and 1 minute headways would be outside of the target range.)  If riders arrive at an even rate at a stop, four times as many will accumulate in an 8 minute gap as in a 2 minute gap, and the vehicle they board will be much more crowded.  The service will meet the TTC’s standard, but it is the 8 minute headway most riders will see.
  • As on the subway, there is no measure of service quantity.  Instead of 12 vehicles/hour (a 5 minute headway), the service could be 7.5 vehicles/hour (8 minutes apart) and stay within the target.  Alternately, there may be 12 vehicles in the hour, but half of them may short-turn at a location that is of no benefit to many would-be riders.  This leads to wider headways for many riders and uneven vehicle loads.  Nothing in the service targets makes any allowance for crowding nor for its corollary, pass-ups, where riders cannot board the first vehicle that arrives.
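The claim that most riders see the wide gap can be checked with a small sketch (assuming, as above, that passengers arrive uniformly; the alternating 2/8 pattern is the example from the bullet list):

```python
# Bunched but "punctual" service: alternating 2- and 8-minute headways
# on a nominal 5-minute schedule, with riders arriving at a uniform rate.
headways = [2, 8] * 6           # one hour of service, 12 vehicles

total_riders = sum(headways)    # riders are proportional to each gap

# Riders in a gap of h minutes wait h/2 on average, and the number of
# riders in that gap is also proportional to h, so the passenger-weighted
# average wait is sum(h^2/2) / sum(h).
avg_wait = sum(h * h / 2 for h in headways) / total_riders

share_in_long_gaps = sum(h for h in headways if h == 8) / total_riders

print(avg_wait)            # 3.4 minutes, vs 2.5 for even 5-minute headways
print(share_in_long_gaps)  # 0.8 -- four riders in five wait in the long gaps
```

The service count and the timetable are unchanged, yet the average wait rises by more than a third and 80% of riders experience the 8-minute gap.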

We are all familiar with the problem of a full bus immediately followed by an empty one.  From a rider’s point of view, bunched service consists of pairs of an overcrowded bus followed closely by an almost empty one regardless of what the printed timetable might say.  That empty bus is providing little real service and yet the TTC will count it in the route’s hypothetical capacity.

Very short headways can be little bonuses (a second bus coming just after you think you have missed one), but most riders arrive at stops in the big gaps, not the little ones.  Experienced riders will try to board the first vehicle that arrives even with a second one in sight.  If it is short-turned, at least they can drop back to the following vehicle.  Greater assurance of getting to their destination takes precedence over getting on a less-crowded vehicle.

If the target for “punctual” service is only 65%, this means that over one third of the headways provided to customers can lie outside of the six-minute standard window.  On a system wide basis, riders can expect to encounter one of these events on almost every round trip, especially if they transfer and are exposed to irregular headways more than once.

For the statistically minded, if the “punctual” trips are evenly distributed, and the probability of a trip being punctual is 2/3, then for a trip involving two vehicles, the cumulative probability is only 4/9, under 50%, that both will be “punctual”.  For a round trip involving a transfer each way, the probability is a scant 16/81 or just under 20%.  In other words, there is an 80% chance that a round trip involving a transfer between two bus routes will include at least one off-target gap in service. For a system designed around transfer connections, this is an appalling situation.
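For readers who prefer to see the multiplication, the compounding works like this (using the 2/3 probability from the paragraph above, and assuming independence between vehicles):

```python
# Probability that a single vehicle trip is "punctual" (TTC bus target).
p = 2 / 3

one_way_with_transfer = p ** 2   # two vehicles: 4/9, under 50%
round_trip = p ** 4              # transfer each way, four vehicles: 16/81

print(one_way_with_transfer)     # ~0.444
print(round_trip)                # ~0.198
print(1 - round_trip)            # ~0.80 chance of at least one off-target gap
```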

In practice, the actual distribution of “punctual” trips is not uniform.  A notable issue, visible in many of the route analyses I have published here, is that headway adherence declines during evenings and weekends, and at the outer ends of routes where branching and/or short turns can leave wide gaps.  The proportion of “non-punctual” service may be terrible at some times and locations, but this is masked in the service measures by all-day, all-system averaging.

Measuring for the customer or for the company?

Toronto badly needs a way to report on service that reflects daily rider experience, and customer service should focus on providing reliability and the advertised quality and quantity of service.

Since horse-and-carriage days, the simple measure of “did the buses run on time” provided all the information needed for many transit operations.  This is an observation at a single point, possibly a dispatch site where trip departures are monitored.

Hitting scheduled time points at major stops enroute could depend on an operation’s prevailing culture.  This could be a goal shared by management and operators as an integral part of customer service and reliability, or it may be common practice that mid-route time points are neither monitored nor respected.  Moreover, managing to headways requires that operators know where they are relative to nearby vehicles, but the information they are provided on vehicle consoles is relative to the schedule.

On routes with infrequent service (smaller transit systems and less important parts of the large ones), on time performance is the overwhelming requirement.  Passengers plan their trips around the schedule, and on time performance is vital to system usability.  Running early, a practice allowed by the TTC standard, is fatal because a rider may just miss their trip (particularly one calling for connections) and be faced with a long wait until the next bus.  Think of the difference between a GO bus running every hour and a Finch bus running (nominally) every few minutes.

The ability to accurately track transit vehicle movements by GPS is a comparatively recent phenomenon, but this is only a tool, not a substitute for actually caring that service is reliable from a rider’s viewpoint.  Indeed, the TTC implemented GPS not to manage service, but to drive a stop-announcement system mandated by a legal challenge about accessibility.

Despite a service target nominally based on headways, the TTC remains very much oriented to schedules because of the need to manage operators (and, by extension, their vehicles) through crew changes.  The goal of service management is to keep operators on time and, if possible, to maintain a decent headway over at least part of a route.  This typically brings short turns and ragged service to the outer ends of lines.

This is not the same as a scheduled operation where only half of the service goes to part of a route on a dependable basis.  Moreover, the extremities of a route may have substantial demands in their own right depending on residential, office or academic land uses and travel patterns.  Operators may be more or less on time and even on “punctual” headways at some central point on a route, but the service actually provided further out may be much below what is advertised.

Is there already a “standard” way to do this?

There is no industry standard per se although various attempts, some academic, some professional, have been made to create a framework.

A paper prepared for the TRB’s 2007 annual meeting [International Bus System Benchmarking: Performance Measurement Development, Challenges, and Lessons Learned; Randall, Condry and Trompet] observed that getting consistent data from the transit industry was quite challenging.

Service Quality

Very few common comparators were found across the member organizations in measuring service quality. While it was expected that more subjective indicators, in the areas of information, driver courtesy, comfort and cleanliness, etc, would vary, it was surprising to find little commonality in measurement of time-based performance.

Measurement of time-based performance is heavily influenced by the method of service operation. Many of the larger cities have much of their bus service operated on a frequency or headway rather than timetabled basis. Thus, such standard indicators as percent of trips on time are not recorded. Technology was a second important difference, with three of the benchmarking bus organizations fully equipped with AVL systems, which provide much better data in both quantity and quality.

Other indicators for measuring service to customers included lost kilometers – the most common data element recorded. But again, not all organizations record this data. Another common indicator, missed trips, is measured by only half of the bus organizations. [p7]

Looking through websites of various transit systems, it is not uncommon to find quality measures based on departure times at subway terminals.  The underlying assumption is that if the trains left on time, they will stay more or less on time over their journey.  This is a tenuous link to a rider’s viewpoint especially if there are points of congestion enroute, delays caused by breakdowns, or if crowding prevents riders from boarding the first arriving train.

Some systems measure time relative to the schedule, while others look at actual vs planned train spacing (headway) with an upper bound on acceptable deviations.

A related measure is the percentage of trips operated, although these tend to be reported on an all-day basis that smooths out problems with specific locations and time periods.  If the signal system permits trains to run much closer together than the scheduled headway, then service could be bunched almost like a surface route.

New York MTA

New York reports various performance factors with current and historical data online.  Measures available for New York City transit at the route level include:

  • On time performance at terminals.
  • A measure of wait assessment (headway) defined as the proportion of trips where the actual interval between trains is not greater than scheduled by more than 25%.
  • For bus routes, a measure of trips completed relative to scheduled.

Chicago

The CTA has a page dedicated to Performance Metrics with a lot of historical data.  (Example: September 2012)

Among measures of interest:

  • Number of rail system delays greater than 10 minutes.  This is an absolute count that expresses major outages with a number riders can understand (delays per month) rather than average on time performance.  The important issue is that delays happen, not that, on average, most trips are on time.  Historical data series would be affected by major network changes (a new line, for example), but these are rare events.
  • Percentage of rail system that has slow orders.  This reflects a system with a serious backlog of infrastructure maintenance.  Slow orders delay riders and can cause bunching of trains if the scheduled service is close to the lower bound imposed by the signal system.  (Constant physical spacing of trains at lower speeds means wider headways.)
  • “Big gaps”.  Percentage of bus headways that are double the scheduled interval or over 15 minutes.  This metric is not subdivided by route, location or time of day.
  • “Bunching”.  Percentage of bus headways that are 60 seconds or less.
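The slow-order effect mentioned in the second bullet is simple division: with a fixed minimum train spacing enforced by the signal system, headway scales inversely with speed.  A sketch with invented illustrative values:

```python
# If the signal system enforces a minimum safe spacing between trains,
# the minimum headway is spacing / speed: slower trains, wider headways.
# (Spacing and speeds below are assumed values for illustration only.)
min_spacing_m = 600          # assumed minimum safe train spacing (metres)
normal_speed = 15.0          # m/s, roughly 54 km/h
slow_order_speed = 7.5       # m/s under a slow order

normal_headway = min_spacing_m / normal_speed        # 40 seconds
slow_headway = min_spacing_m / slow_order_speed      # 80 seconds

print(normal_headway, slow_headway)
# Halving the speed doubles the minimum headway; if the schedule sits
# near that lower bound, slow orders force trains to bunch.
```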

Back in 2006, the CTA defined performance measures and included the following important observation:

For service periods with headways 10 minutes or less:
Customers expect to board service shortly after arriving at stop/station.
In these periods, reliability means HEADWAY CONSISTENCY.

For service periods with headways 10 minutes or more:
Customers rely on schedules to time their arrival at the stop or station to avoid long wait times.
In these periods, reliability means SCHEDULE ADHERENCE.

In other words, one metric will not do when measuring service.  Most CTA customers traveled on services that run every 10 minutes or less, and even more when the filter is extended to 12 minutes.  (This may reflect service levels in 2006 without the effects of recent budget-induced cutbacks.)

A graph of running time distributions on page 10 warms my heart because it is exactly the sort of analysis I have been publishing here.  Without question, surface routes have big problems with running time reliability, although these vary by time of day.  One issue for the CTA is on-time departures from terminals, although these improved to over 80% by January 2006.  This is also a problem in Toronto where uneven headways begin with terminal departures that may lie inside the TTC’s target 6-minute band, but which actually result in bunching that can travel the entire length of a route.

Dealing with route-level problems requires a route-specific approach where the sources of delay and uneven running times can be analyzed in detail.

Boston

The MBTA publishes a monthly scorecard with past versions available online.  Detailed breakdowns for rapid transit routes, and for the bus and commuter rail systems can be viewed by scrolling down.

  • Subway on time performance is measured near the terminal stations, and trains must be within 1.5 times the scheduled headway.  This is tighter than the TTC’s standard (it would yield a window 1’10” on either side of a 2’20” schedule at peak, or a 2’00” window either way on a 4’00” off peak service).  However, there is no sense of whether the trains remained acceptably spaced as they travelled along their routes.
  • The number of trips operated acts as a stand-in for vehicle reliability and availability.  As with many other metrics, this is an all-day figure and does not show whether all peak trains actually ran.
  • On the commuter rail system, a train is considered to be “on time” if it is not more than 5 minutes late at its destination.
  • Speed restrictions are measured as minutes of delay with no reference to the proportion of the system under slow orders as in Chicago.

Notable by its absence is any reference to surface operations beyond basic stats for the bus fleet and service.

Like Chicago, the MBTA distinguishes between the behaviour of riders on frequent and infrequent routes with the rapid transit lines measured relative to scheduled headways while commuter rail is measured relative to scheduled times.

San Francisco

San Francisco combines transit and traffic operations under a Municipal Transportation Agency.  However, most of the “service standards” the agency reports relate to transit services.

  • Both schedule and headway adherence are reported with greater weight given to routes with more riders.
  • Like Toronto, San Francisco has target load factors for vehicles, but they also report on the number of peak runs that exceed these targets by 25%.  In Toronto, such problems vanish by averaging all riding over all peak trips whether anyone is on them or not.

The “on time” standard in San Francisco is +1/-4 minutes.  This type of uneven window is common on other systems where some degree of lateness is tolerated, but being early by more than a trivial amount is not acceptable.

On January 3, 2012, the SFMTA Board approved their 2013-18 Strategic Plan.  The City has a Transit First policy and this is reflected in the priorities of the plan.

The most noticeable improvements from this plan will include a faster and more reliable transit system, better bicycle and walking conditions for all age groups, easier access to taxis, more vehicle and ridesharing options, smarter parking solutions and more convenient payment and information options.

This is not just a transit policy, it is a transportation policy and this fundamentally changes the context in which transit quality discussions occur.

Goals of this plan include a reduction of bunching and gapping on the “rapid bus network” (defined as headways less than 2 minutes, or more than 5 minutes over the scheduled value).

Monthly reports on progress toward the goals are available online (November 2012).  On time performance and scheduled departure from terminals are both reported, and these numbers are not very pretty.  San Francisco is now using NextBus data to allow for automated collection of this information replacing on-street supervisor surveys used previously.

SF Muni also provides daily reports of service including vehicle and operator availability and details of major delays.  Given the relative size of Toronto and San Francisco, and the intensity of transit operations here, such a report would be considerably longer for Toronto.  The daily report does not include any line-level review of service quality.

However, detailed studies of some routes have been conducted under the Transit Effectiveness Project.  These are micro-level reviews of problem areas along routes in which community involvement is essential for understanding of local effects and acceptance of changes.  The intent is to fine-tune the operating environment of major routes so that travel times will be reduced and service quality improved.  An overview of the program was presented to the SFMTA Board in November 2012.

Washington DC

The Washington Metropolitan Transit Authority (WMATA) publishes a summary page and monthly reports (November 2012) of various performance indicators.  All operations are measured relative to schedule rather than on a headway basis.  The window for “on time” performance is +2/-7 minutes, but even with a fairly generous definition, WMATA’s bus network barely gets above the 75% mark on an all-day basis.  The rail network does better at around 90%.

I was amused to find an article on an advocacy group’s website (Greater Greater Washington) about the limitations of WMATA’s service quality measures and the fact that the generous window for “on time” could lead to badly bunched and gapped service just as it does in Toronto.  The writer’s preference was for a shift to London style of reporting that looks at headways relative to scheduled values.  This would measure service as riders care about it rather than from the management point of view.

London, UK

London is the granddaddy of transit systems (the Underground just celebrated its 150th birthday).  They have been carrying huge numbers of riders around a large, complex city for a very long time.  In recent years, much of their operations were contracted out to private companies, although famously the attempt to do this with the Tube system was a complete failure.  With many separate companies providing service, the ability to monitor and report on their performance is an essential part of system operations.

Standards developed in London have been extended throughout the UK, where comparable needs exist to monitor private bus operations.  This is essential both for contract management and to establish a history of service provider quality and attention to improvement.  The national target for bus operations is that 95% of trips should depart from time points (locations where service should appear at a specific time) within a band of +1/-5 minutes.  The standards recognize that this may not be possible in all circumstances, but it is the target at which providers should aim.

This is further refined for terminals as:

  • Frequent routes (10 minute headway or better): In 95% of cases there will be 6 or more buses per hour, and no gap of greater than 15 minutes will occur.
  • Less frequent routes: 95% of the trips should depart within the +1/-5 minute window of the advertised time.

At midline timepoints, the rules are different:

  • Frequent routes are measured by the Transport for London yardstick of “excess wait time” (described below).  The degree to which waits (i.e. headways) are longer than planned should not exceed 1¼ minutes.
  • On less frequent routes, 70% of the trips should depart within the +1/-5 minute window of advertised times.

Penalties are visited on companies that fail to meet the standards.

Excess Wait Time is calculated based on the difference between the expected wait (one half of the advertised headway, on average) and the actual wait time.  If the schedule says a bus should appear every 6 minutes, then the expected wait is 3 minutes.  A bus arriving in a 10 minute gap will contribute 4 minutes of excess wait time.  Buses running close together are not counted and this may actually under-report the effect of the lateness.  (That 10 minute gap could be followed by 3 buses, but most passengers will experience the longer-than-expected wait.)

One scheme proposed to “fix” this and penalize service for wide gaps is to use the square of the excess wait time.  In this case, that 4 minutes would become 16.

The underlying math works like this.  If riders arrive at a stop more or less uniformly, then the number of waiting riders is a function of the gap between buses.  The longer the gap, the more the riders.  Their waiting time is itself a function of that gap.  Squaring the wait adjusts the effect to give more weight to long gaps than to short ones.
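That passenger-weighted arithmetic can be sketched as follows.  This is one common formulation of Excess Wait Time; the exact contract definition used by TfL may differ in detail, and the sample headways are invented for illustration:

```python
# Passenger-weighted wait time for a nominal 6-minute service, with
# uniform passenger arrivals (sample actual headways are invented).
scheduled = [6] * 10                       # one hour, buses every 6 minutes
actual = [6, 6, 6, 10, 2, 6, 6, 6, 6, 6]   # one 10-minute gap, one bunched pair

def avg_wait(headways):
    # Riders in a gap of h minutes wait h/2 on average, and the number
    # of riders in that gap is proportional to h, so the weighted
    # average wait is sum(h^2) / (2 * sum(h)).
    return sum(h * h for h in headways) / (2 * sum(headways))

ewt = avg_wait(actual) - avg_wait(scheduled)

print(avg_wait(scheduled))   # 3.0 minutes: half the advertised headway
print(round(ewt, 2))         # 0.27 minutes of excess wait from uneven gaps
```

The same total service is run in both cases; the excess wait comes entirely from the uneven spacing, which is exactly what a headway-blind “trips operated” measure misses.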

TfL reports on its bus operations in some detail with reports subdivided by Borough.  (Scroll down below the list of boroughs for definition of the measures used.)  Historical values for service quality and service operated are available for every route although these are summarized at a 4-week level rather than showing the range of daily fluctuations.  A summary produced each quarter includes measures such as the ratio of average to scheduled wait times and percentage chances of having to wait 10, 20, 30 minutes or more for what should be a “frequent” service.  These data are not subdivided by time or location, although it should be embarrassment enough to have a “frequent” route with less than 90% of the service matching that description.

For less-frequent services, schedule adherence takes priority because people expect service to arrive according to the advertised timetable.  The treatment here is completely different from that for frequent services and measures include percentage chances that a trip is within the on-time window (+2/-5 minutes), the chance that a bus will be missing, the chance that a bus will be early, and the chance that it will be late.  Very late buses (over 15 minutes) are treated as “early” for the next scheduled trip, and in some cases, a “late” trip indicates that a bus is missing.  These measures are much more meaningful for less frequent services than the headway-based measures used on frequent lines.

One obvious, but unanswered, question is what happens when a line is sometimes “frequent” and other times “less frequent”.  If this model were applied in Toronto, then there would have to be distinctions in the measurement and management regimes depending on the level of service.  Moreover, routes with branches could be “frequent” on the common section, but “infrequent” elsewhere.

For the underground, TfL reports a wide variety of measures and breaks these down to individual lines.  (Period 6, 2012)  One important concept used here is the “journey time”, a value calculated by actually traveling on the system and measuring the time required for various standard trips.  This will include station access time (can be affected by construction, congestion, out of service escalators, etc), platform time (can be affected by headways and by train capacity), and travel time (can be affected by slow orders or service problems enroute).

“Reliability” of devices such as escalators includes those out of service for planned maintenance because this is the view riders have of the system.  They really don’t care that an escalator or elevator is under scheduled maintenance, only that the station has become less accessible than expected.  To put it another way, if we have to shut off an elevator for two months a year for regular maintenance, then it is not available anywhere near 100% of the time.

This brings me to another observation about how various systems report problems.  In some cases, they are subdivided between “chargeable” (our fault) and “non-chargeable” (not our fault) events and only the former are reported.  This may give an idea of how often service is interrupted for preventable reasons, but this gets tricky when scheduled maintenance isn’t counted.

Management wants to know what problems they might better control, but riders don’t care when they face a long walk up or down stairs.  When accessibility is considered a right, the management decision to stretch out repairs by scheduling only one crew to work 40 hours a week could be seen as not making a “best effort” to keep the system accessible.

As on the surface network, the underground reports show the effect of monitoring contractor performance in the (now-abandoned) private sector arrangement.  Delays caused by track, switching and signals are reported as these show the degree to which lack of maintenance can affect service quality.

Finally, there is a “Lost Customer Hours” measure which includes all events (except scheduled service outages for repairs) where service is delayed for more than 2 minutes regardless of the cause.  The detailed breakdown of causes by line is interesting because there are wide differences from one underground line to another reflecting fleet and infrastructure conditions.

Fleet and Infrastructure Issues

I have omitted most references to fleet and infrastructure related measures in the survey above because the primary interest here is on service quality.  However, fleet and infrastructure have their effects including:

  • Trains that break down in service cause delays and gaps when they are removed from the line.
  • Trains that are not available for service cause actual capacity to be less than planned or advertised.
  • Track that is in poor condition requires slow orders that annoy passengers, cause backlogs of trains on busy sections of a route, and limit the minimum headway possible due to constraints of a fixed block signal system.
  • Signals and switches that fail frequently can cause significant service disruptions up to complete closing of sections of a route.

Fleet numbers tend to be reported on two common bases:

  • Mean mileage to failure, and
  • Availability for service.

The mean mileage to failure numbers vary somewhat from system to system, and are probably best read as a historical trend within each operation (or type of equipment) rather than as a comparison between systems.  The reason for this is that operators have different rules about what constitutes a “failure” and might, for example, not include minor incidents such as a jammed door provided that the delay was short and the train remained in service.  This comes back to the concept of a “chargeable” incident I mentioned earlier.

Availability for service means just what it says, but this requires more than the scheduled number of buses at each garage.  If there is a probability that, say, five buses will fail in service on a garage’s routes, then there need to be spares available to replace those buses that have gone bad order.  This is challenging on a system such as the Toronto streetcar network where, unless a route is currently shut down for track work, the working fleet is too small to provide for extras.
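The arithmetic behind sizing such a spare pool can be sketched simply.  This is my own back-of-envelope illustration, not a TTC method: if in-service failures at a garage behave roughly like a Poisson process, the number of spares needed to keep the chance of running short below some risk level is:

```python
import math

def spares_needed(lam, risk=0.05):
    """Smallest n such that P(more than n failures) <= risk,
    assuming failures per day follow a Poisson(lam) distribution."""
    n = 0
    cumulative = math.exp(-lam)  # P(0 failures)
    while 1 - cumulative > risk:
        n += 1
        cumulative += math.exp(-lam) * lam ** n / math.factorial(n)
    return n

# With an average of 5 in-service failures a day, about 9 spares
# are needed to stay below a 5% chance of coming up short.
print(spares_needed(5))
```

The point of the sketch is that the spare requirement grows with the failure rate much faster than intuition suggests, which is exactly the problem for a small fleet like the streetcar network.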

Spare vehicle pools need to be subdivided between those vehicles that are available, but not used unless a change-off is needed, and those that are in the shop for minor or major repairs.  A high requirement for maintenance spares could indicate that a class of vehicles is not as reliable as it should be, or that there are “problem children” that rarely get out of the shop.  Either way, the capital investment in equipment is not producing the service it should, and it may require a disproportionate amount of maintenance staff and cost to keep such vehicles on the road.

The Transportation Research Board’s Quality of Service Manual

This discussion refers to Part 3: Quality of Service in the manual.

The TRB observes that there is a lack of standardization within the transit industry, and proposes the adoption of a scheme of “Level of Service” (LOS) comparable to that used in highway planning.  The A-to-F levels of service are well understood by highway engineers (and by some politicians), at least in part because they have a common foundation throughout the industry.  A road is a road more or less anywhere, although one could argue that the standards by which the performance of a road is measured can be quite subjective depending on one’s overall goals.

The TRB distinguishes between “performance” — how well a service attains some goal — and “service quality” — how the service is perceived by a rider. Service measures represent the passenger’s point of view, the actual experience, and they “should be relatively easy to measure and interpret” [definitions, ch. 1].

Having proposed a standard way to express “quality”, the TRB promptly abandons prescription of industry-wide standards.  LOS values depend on local factors — a city must make rational decisions about what each level and factor means.  A headway variation that is acceptable under one city’s standards may be totally off the mark in another.  However, “local options” can lead to problems both in industry comparisons and with localized values that award relatively high grades for performance matching political or budgetary constraints rather than a true goal of improved transit.  City Councillors and transit managers do not like to get a report card full of “D”s, and there can be pressure to tweak any standard to “improve” reported performance.

This is obviously counterproductive.  Better that a city says “yes, we know our system is only running at level C most of the time, but that’s what we chose to implement”.  Such a statement rarely comes out of any politician’s mouth.

There is nothing wrong with local standards as long as they are recognized for what they are.  The TTC has standards for its service design, and these have fluctuated depending on prevailing political winds over the past decades.  It is easy to say “we meet our standards” when those standards can be adjusted to circumstances.

This discussion addresses only fixed route networks as that is most applicable to large urban systems like the TTC. Demand responsive systems have a separate proposed set of metrics, but they are beyond the scope of this article.

Service quality measures are divided into two main groups: (1) availability and (2) convenience and comfort.

The proposed system recognizes that a transit route has different components which need different quality metrics. These are transit stops, route segments/corridors and the network of which a route is part. A full list showing possible ways one might measure a transit system or service appears on page 3-4 of the document. These factors are subdivided into eight groups including availability, service monitoring, travel time, and capacity.  I will not attempt to work through every one of them.

The report notes that those values which are of more interest for internal management of a transit system are more likely to be tracked than those of interest to passengers. This is partly caused by US government reporting requirements that focus on the management side of transit, and partly by the obvious self-interest of agency management groups. Moreover, the ability to track fine-grained service quality automatically is still not widely available in North American systems, and certainly didn’t exist when the data collection procedures of many systems were developed.

Availability comprises four key factors.  Quoting from the TRB document:

  • Spatial availability: Where is service provided, and can one get to it?
  • Temporal availability: When is service provided?
  • Information availability: How does one use the service?
  • Capacity availability: Is passenger space available for the desired trip?

These are amazingly simple questions, but transit services organized around budgets may do poorly on some or all of these factors.

Comfort and convenience factors include:

  • How long is the walk? Can one walk safely along and across the streets leading to and from transit stops? Is there a functional and continuous accessible path to the stop, and is the stop ADA accessible?
  • Is the service reliable?
  • How long is the wait? Is shelter available at the stop while waiting?
  • Are there security concerns—walking, waiting, or riding?
  • How comfortable is the trip? Will one have to stand? Are there an adequate number of securement spaces? Are the vehicles and transit facilities clean?
  • How much will the trip cost?
  • How many transfers are required?
  • How long will the trip take in total? How long relative to other modes?

Service delivery factors include:

  • Reliability: how often service is provided when promised;
  • Customer service: the quality of direct contacts between passengers and agency staff and customers’ overall perception of service quality;
  • Comfort: passengers’ physical comfort as they wait for and use transit service; and
  • Goal accomplishment: how well an agency achieves its promised service improvement goals.

Note that service reliability comes in as its own point, and as part of an agency’s credibility in providing and improving service. Staff may be friendly, stations may be clean and buses may be only comfortably full. If the service is unreliable — even worse if it is demonstrably less reliable than an agency claims — then all the fine words about goals and communications go for nothing and may even be counterproductive.

I will leave to the dedicated reader a detailed review of this document, but will give a few examples of the use of letter grades for Level of Service metrics.

The report notes that consolidation of many measures into a compound index may simplify life for readers (not to mention managers and political overseers with limited attention spans), but in the process vital detail is lost.

“Although indexes are useful for developing an overall measure of service quality, the impact of changes in individual index components are hidden. A significant decline in one aspect of service quality, for example, could be offset by small gains in other aspects of service quality.” [pg 3-23]

I would go even further and stress that even individual metrics, if summed across routes and/or different operating periods, will mask problems. A way is needed to perform analysis at the detailed level, but to report it on a summary basis, possibly by saying “X percent of the detailed metrics fall below target and here are the really poor performers”.

A simple example of LOS metrics applies to headways and their effect on perceived convenience of service.

LOS  Avg. Headway (min)  Veh/h  Comments

A    <10                 >6     Passengers do not need schedules
B    10-14               5-6    Frequent service, passengers consult schedules
C    15-20               3-4    Maximum desirable time to wait if bus/train missed
D    21-30               2      Service unattractive to choice riders
E    31-60               1      Service available during the hour
F    >60                 <1     Service unattractive to all riders
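The table translates directly into a grading function; a minimal sketch:

```python
# Letter grade for an average headway in minutes, per the TRB table above.
def headway_los(avg_headway_min):
    if avg_headway_min < 10:
        return "A"
    if avg_headway_min <= 14:
        return "B"
    if avg_headway_min <= 20:
        return "C"
    if avg_headway_min <= 30:
        return "D"
    if avg_headway_min <= 60:
        return "E"
    return "F"

for h in (5, 12, 18, 25, 45, 75):
    print(f"{h:2d} min headway -> LOS {headway_los(h)}")
```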

Many TTC routes operate at LOS “A” on paper, but the actual service on the street may be quite different. Also, the level even on major routes may fall into the “B” and “C” ranges at off-peak periods especially late evenings and weekends.

The LOS metric for the planned service must be combined with other measures of actual service operated. [See schedule and headway adherence metrics on p 3-47 and 3-48.]

Schedule adherence is expressed as the probability that a rider will encounter an off-schedule vehicle ranging up to level F.  At this level, at least one transit vehicle will be late every day if a round trip involves four segments (one transfer connection each way). These probabilities flow directly from actual measurements of service operations as I discussed in an example on the TTC much earlier.  If less than 75% of trips are on time, then the probability is very low that a rider will make four trips (there and back again with a transfer each way) without hitting a late vehicle.  What riders see all the time is a dysfunctional route or system.
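The arithmetic is easy to check: if each of the four segments is independently on time with probability p, the chance of a completely on-time round trip is p to the fourth power.

```python
# At 75% on-time performance per segment, fewer than a third of
# four-segment round trips avoid a late vehicle entirely.
p = 0.75
print(round(p ** 4, 3))  # prints 0.316
```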

For headway adherence, a different scheme is used, but again it is driven by real data. In this case, the coefficient of variation is calculated and this can be directly related to the probability that a given headway will be at least 1.5 times the scheduled one. Once this probability exceeds 50%, then most service is running in bunches.  The actual headway experienced by riders is much worse than advertised with attendant problems of uneven vehicle loading and the almost inevitable short-turns driven by a desire to get vehicles back “on time”.
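A minimal sketch of that calculation from AVL data (the headways are invented; a real analysis would use a full day’s observations at one point on the route):

```python
import statistics

scheduled = 5.0  # scheduled headway in minutes
observed = [2.0, 8.0, 1.0, 9.0, 5.0, 4.0, 6.0, 5.0]  # invented headways

# Coefficient of variation: standard deviation relative to the mean.
cv = statistics.pstdev(observed) / statistics.mean(observed)

# Share of headways at least 1.5 times the scheduled value,
# i.e. the "bunching" probability discussed above.
bunched = sum(h >= 1.5 * scheduled for h in observed) / len(observed)

print(f"coefficient of variation: {cv:.2f}")
print(f"headways at least 1.5x scheduled: {bunched:.0%}")
```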

The schema proposed by TRB does not appear to have been implemented anywhere given both its complexity and the need for local agreement on definitions of service levels.  However, the underlying discussion contains many useful guides and should prompt questions of any transit agency about just what it should be trying to measure and how it might achieve this task.

Other Papers of Interest

Critical Measures of Transit Service Quality in Various City Types

This paper reports on a 2005 study in Gyeonggi Province, Korea, which includes the capital, Seoul, but also many smaller cities with varying characteristics.  The intent was to discover the types of factors that make transit attractive (or not) to people in these cities and whether there was any major difference by type of city (population, industrialization, etc.).  The sample size is fairly large overall (2,397) although it is smaller for each individual city.

Factors which generally rated highly were those related to service level and reliability, fares and the “friendly” factor (staff interactions, accessibility, courtesy of other riders).  Notably, the factor “Reliable trains/buses that come on schedule” ranked high in importance (9 out of 10), but low in satisfaction (2 out of 10).

There is not much “news” here beyond learning that concerns about transit are similar on the other side of the world, but the methodology of determining what is both important and well or poorly done is useful in focusing improvement (and in keeping what is already good) where it will have the greatest effect.

Valuing Transit Service Quality Improvements

This 2011 paper from the Victoria (B.C.) Transport Policy Institute reviews the factors that affect transit’s attractiveness and how these might be applied.  Of particular interest is the notion that making a trip more comfortable (less crowded, more convenient) can produce comparable improvement to reductions in travel time.

This is not surprising when one considers that transit riders assign a high penalty value to unpredictable events such as waiting for a bus or transferring between routes, and their perception of a journey is affected by how easily they can board and comfortably ride.

This has important implications for planning since time costs are a dominant factor in transport project evaluation. Conventional evaluation practices tend to ignore qualitative factors, assigning the same time value regardless of travel conditions, and so undervalue service improvements that increase comfort and convenience. Yet, a quality improvement that reduces travel time unit costs by 20% provides benefits equivalent to an operational improvement that increases travel speeds by 20%. [p. 2]

The paper uses a metric of “dollars” in the sense that any factor of transit service has a real or perceived cost (the cost of delay, the cost of congestion, etc.) and in some cases, riders may be willing to pay more to improve attributes of the service.  This methodology fails, in my view, to recognize that riders do not directly bear the cost of whatever service they may use, and moreover, the value of an improvement must be considered in the context of the ability to pay.  The actual funder of much of transit (especially capital) is the general public through some form of taxation, not the individual who may or may not benefit.  I, as a heavy user of transit, may benefit from and value better service, but someone who commutes by auto and regards transit as something for “other people” will not place the same value on service-related spending.

That said, there is much in the Victoria paper worth reading and in its encouragement of a wider view of transit’s attractiveness than factors measured only by expenditures.

Understanding Bus Service Reliability: a practical framework using AVL/APC data

This 2006 paper written by Laura Cham as part of her Master’s program at MIT reviews in great detail the analysis of route operations using data from automatic vehicle location and passenger counter systems.  Boston’s “Silver Line”, nominally a BRT implementation, is reviewed in detail.  Cham finds many of the same problems we have seen in analyses I published here including variations in terminal departure punctuality as a major source of unreliable service.

This is a paper for those who want to read about analysis of real-world service data in great detail.

39 thoughts on “How Should We Measure Transit Service Quality?”

  1. Does the TTC have a way to count the number of passengers waiting at a stop or currently on board a vehicle? The driver may be the only one, other than a supervisor on the scene, to know the approximate numbers. However, it looks like the driver may not be able to object if told to short-turn (remember the Ford football team emergency). Too bad the cameras on (or outside) a vehicle couldn’t be used to confirm a situation, but they can’t be viewed or relayed in real time.

    Steve: The TTC is installing passenger counters in some of its buses, but a 100% sample is not needed for planning purposes. Just enough buses to move around between routes to get a snapshot of some routes each day. The video outputs of the cameras are not available in real time. Aside from privacy issues, the basic problem is that bus data systems only communicate with transit control every 20 seconds, and then only with a short burst of data, not a continuous stream.


  2. Do they still have checkers, Steve? I used to see people sitting on buses with the handhelds on every run, all day long, noting people getting on and off and where.

    Woodbine South & O’Connor come to mind. Also they actually put checkers on the temp 13 Neville Park route around ’05 when the Queen tracks were rebuilt. They were seeing if a permanent route here would be viable. I guess it’s expensive to send two or three humans times the number of vehicles on that one route for the entire day. I have not seen these checkers for a number of years but it should have been very accurate.

    Do they still exist?

    Steve: Yes, but their function will gradually be replaced by the automatic counters in many cases.


  3. The TTC prepared a YouTube video to explain short turns. The video says the TTC is trying to reduce short turns.

    The 3 reasons given for short turns in the video were congestion, road construction and emergencies. I suspect there are some other reasons.

    In December, I and other passengers were kicked off a westbound 512 streetcar at Lansdowne. There didn’t seem to be long waiting times between streetcars in either direction. The short-turn streetcar stayed in the loop while I boarded a following streetcar.

    This month, by looking at NextBus, I “saw” another westbound 512 streetcar go into the Lansdowne loop. It seemed to stay there even though there was a gap developing going in the other direction. It waited in the loop until a pair of eastbound cars finally passed and followed them although not too closely. (Using NextBus to track short turns seems tricky as short-turned vehicle seem to suddenly disappear in the loop to reappear later upon leaving the loop.)

    The video did not mention short turns to put drivers back on schedule.

    Steve: Yes, I thought that video was, to be generous, not entirely honest about what was going on. The example you cited shows one particularly galling problem — that cars re-entering service do not necessarily fill gaps, but may come out to join or create a parade.

    Vehicles can disappear from NextBus when the operator flags them as “out of service”. They are actually still in the open data feed, but NextBus does not display them. A related problem exists with short turns that go off route. If the off-route section is not defined as being part of the route (this is a function of how TTC sets it up in their system), then the vehicle disappears from the public display.


  4. “The TTC is installing passenger counters in some of its buses, but a 100% sample is not needed for planning purposes.”

    Do passenger counters take into account that some consecutive buses have very unequal loading (when the empty bus bunched with the full bus in front)?

    Steve: I am not sure whether you are talking about automatic or manual ones. In the case of automatic counting, every bus on the route should have APC equipment to get a 100% sample and deal with just the type of problem you mention. However, it is not necessary to equip every bus in the fleet, only enough to send out on a sample of different routes each day.

    Manual counters who ride the buses are not on every vehicle because there simply are not enough of them. The problem then is whether the subset of traffic they see is a statistically valid representation. This can be corrected, to some extent, by making observations on several days and averaging the results. However, when some service changes have been challenged, one day counts have been presented to defend the TTC’s position, and this implies that budget took priority over accuracy.

    Another type of count involves counters sitting at curbside (this is also done in the subway). In this case every vehicle/train is counted with an estimate of the on-board load. However, this can overstate the ratio of capacity to demand when the counts include vehicles running with lighter loads either because of bunching or because they will short-turn. On average, a route may not be overloaded, but that’s not the “average” experience of riders who are on the full cars.


  5. I just want to clarify a point raised by W. K. Lis. We, as TTC Operators, can object to a Supervisor’s request to short turn the vehicle, as the CIS Supervisor has no way of knowing the load situation. If the vehicle is fully loaded, in bad weather, we do have an obligation to point this out to the Supervisor. We know that we can be overruled in this situation, but at the very least, we have advocated for our passengers! We are, obviously, not aware of the total situation on the line, but the Supervisor is not aware of the load situation on our vehicle.

    We, as Operators, are called upon to provide vehicles for shuttles (due to subway or streetcar delays) or for shelter buses (as in the Rob Ford football team case). These are deemed “emergency situations” and if we are mid-route we have to follow the directive of the Supervisor. We can point out the load situation, but the Supervisors also have to follow the directives from Transit Control.

    One has to observe the tracking system in operation to get a sense of the 20-30 second updating. You can go from -2 to +2 in that time frame! We call it “TRUMP Jump” and it is a regular occurrence for us. This is just one of the challenges that we regularly face each and every day.

    Steve: That sort of jump should not occur if GPS is being used to track and constantly update position and schedule variation. It was the kind of thing the old “signpost” system triggered regularly because CIS could not accurately track vehicles and had to make a correction from time to time. (You can imagine what this did to attempts at data analysis!) Is CIS still using signpost info for schedule monitoring?


  6. At some point in a city’s growth, it gets too large for any solution to work.

    There is a maximum number of passengers per minute the system can move, before traffic failures and people problems interfere with service.

    For simplicity let us consider just one aspect: service along Yonge subway. At rush hour, service is already being provided at a maximum level. The trains are three minutes apart, and run full. A passenger who doesn’t board the first train because of capacity won’t have better luck on the next train, as it will also be full.

    Steve: A quick correction. The scheduled headway is 2’20” and occasionally a “gap train” will be inserted to compensate for delays. If they have not yet been dispatched, they may be inserted at peak times such as hitting Bloor Station southbound at the height of the AM peak. This clears off the platform and reduces delays for following trains. The minimum headway is a function of the signal system, train length and track geometry. YUS will eventually be able to run trains closer together at least over part of its length, but the operational challenges and sensitivity to problems grow the closer one gets to theoretical maximum service.

    This idea is too late, but perhaps Toronto should allow no further permits for high density housing. If Toronto stops the high growth now, we will still continue to see the effect of what has been started for perhaps ten years, when you consider how couples grow into families and their children become passengers, who also grow up and continue this cycle.

    This is just the usage metric for one transit line, albeit the central one in Toronto.

    As regards headway, and buses which get together and stay that way, the problem is multifaceted. As a rider, it seems some drivers know exactly how to prevent their vehicle from being the congested one in the bunch. Many drivers are perfect in all aspects of their job; that suggests that many are also problem drivers, and no written rules about how to do their job better will matter.

    Your essay compared headway to schedule times, and discussed how different large transit cities measure their success (or opportunities to improve). I would like to see a two-way radio service provided to drivers, so they can just communicate messages to fellow drivers. GoTransit has this, and it seems effective.

    Messages such as “this is the 84D trip 3, and a truck is stopped blocking traffic both ways along Sheppard near Oakdale.” Or perhaps, “this is the 98 bus, and I’ve got a blind passenger who needs the last bus going to Arrow Road. Can you hold up please?”

    As I write, I’m thinking one of the best messages would be “this is the 54 trip 3 to Orton Park, and I don’t have room for any more passengers. I’m here at Bayview and Eglinton. I have left 40 people behind at the bus stop, cursing me.”

    I know they can say all of these things to Control, but watching the drivers, they resist ever calling someone in administration; perhaps there are repercussions.

    I will watch the comments here. Perhaps you, and other readers of stevemunro.ca will have useful thoughts.


  7. “Despite a service target nominally based on headways, the TTC remains very much oriented to schedules because of the needs for operators (and by extension their vehicles) and their crew changes.”

    When I was a kid on Montreal’s South Shore, the transit agency there had a fleet of small cars in which the relief driver would drive out to meet the bus and then the driver going off shift would drive the car back to the bus depot.

    I know that, like many things (such as apartment building composting), Toronto has massive scale issues compared to other smaller cities. Still, for “problem” bus and streetcar routes this might be something the TTC might want to pilot.


  8. I really do believe that the service quality depends on two factors: How long does it take for the bus/streetcar/subway to come along and pick me up, and how long it takes to get me to my destination. These are the two factors I am going to use, especially the latter, when deciding on what form of transportation (and maybe even which route) to take.

    I have never understood why the TTC has never used a drop back procedure for drivers, especially on the 501 Queen car. This would put the streetcar (or the bus or subway) back into service faster while providing time for a brief ‘rest’ period for the driver. I have seen streetcars at Long Branch take a long break, or the driver sitting onboard the streetcar reading a paper while the passengers wait in the cold or heat. This all slows down service – if the streetcar is not moving along its route, it is not providing any service. If it arrived, and then departed within five minutes, service would not suffer like it would when it is sitting there for 10 minutes.

    Steve: Yes, those very long layovers at Long Branch are quite annoying and are, in part, a function of operating a line 90 minutes (at least) long where the ops can certainly make a case for needing a break. I plan a series of articles on the Queen car next to see how the various measurement schemes used or proposed elsewhere would look when applied to one of our problem routes.


  9. To answer Steve’s question:

    “Is CIS still using signpost info for schedule monitoring?”

    The short answer is YES. The TRUMP units are still in use on all the surface vehicles (bus and streetcars) and the CIS consoles are still using the same colored “blips” moving on the lines. The CIS Supervisors can use GPS on a vehicle by vehicle basis to see “exact” location by switching into a window, but GPS is not in use to monitor a complete line.

    Steve: This explains a huge amount about line management.

    To answer David Berman’s question about using two-way radios to communicate. The current system uses the TRUMP to communicate with Divisional CIS or Transit Control (CIS Central) depending on the key pushed. There is no inter-bus capability. We are not to use the handsets unless the bus is secured and located in a subway station or off-street loop; otherwise we are required to use the boom mike (located on the dashboard ahead of the steering wheel) and the speaker on the TRUMP unit. We resist calling CIS, but not because of repercussions, rather because of the hassles of the hands-free requirement (per the HTA). We (and I do) notify divisional CIS of unusual occurrences on the line, but you have to keep in mind the number of routes and the number of vehicles being monitored on each console.


  10. I gave up reading this stuff. Too tedious. Far too many reports are written that result in no improvement in whatever it is they are studying. What is needed is SUPERVISION. Perhaps what is needed is a method of paying the CEO and other management people that depends upon meeting certain criteria, in other words getting RESULTS. As far as schedule/headway goes the idea of running ahead of schedule should be done away with. There are many reasons a bus can be late but, there is NO excuse for being ahead of schedule.

    I remember when checkers rode the routes and had a hand written check sheet. Things ran great. Once they finished their work things went back to the usual BS. A number of years ago the TTC handed out little notebooks to regular riders who agreed to make notes on the vehicles they rode daily for a 30 day period. Within a week everything was running great! Because the operators knew they were being watched (thus Supervision) and they ran on time (did not leave terminal early) and did a great job. Once the 30 day period ended it was no time before it was the same old same old.

  11. I agree that the 501 is probably the worst-case scenario – but I have seen this on other routes where, say, a bus is supposed to depart a terminal/endpoint at 6:02 p.m., 6:15 p.m., 6:28 p.m., and 6:41 p.m. and passengers have to wait for one of the last buses because the first two disappeared entirely.

    I recall taking the 123 Shorncliffe (or trying to) once, and I ended up being late for a meeting because of what I stated above. There was, according to a driver on the 110 run, an issue with construction on part of the route – which I was unaware of (living in the area, I have a good idea of what construction is going on), nor was there any sign of it. So, there are cases where the TTC has to look at having some sort of “back-up” in order to keep things moving. The TTC cannot simply let things slip. For example, if there are real diversions (or at least construction that is actually slowing down service) the TTC needs a way of making up for this so as to minimize the issue.

  12. When it comes down to it, each route will have a number of issues with it, and specific routes are more important simply because they carry more people. The TTC should identify priority routes, and then determine the specific issues facing each route. These issues should be listed in the CEO report, and progress on fixing them reported.

    For example, if a route has staffing issues (drivers not leaving when they are supposed to) then the fix is to train and potentially discipline. If the issue is on-street parking, or a long left turn, then maybe changes to the street or signals are required. If the drivers don’t have the information they need, then let’s get it for them; if it’s not frequent enough, let’s send it more often. Can we do priority via GPS instead of on-street loops? Can we get cameras on buses to auto-ticket people illegally parked on transit routes? If not, can we put cameras along the route, and have the TTC notify police if there are any potentially illegally parked cars, or even just the drivers? The TTC should be driving these potential fixes to the system from the top – not waiting for politicians to get around to it.

    Ideally each route would have a detailed history of issues that have occurred and can be fixed; as the city is rebuilt, these items should be part of the planning and EAs for any future projects on the route.

    Each route should be looking at specific issues (including one-offs like accidents, construction etc.) as well as generic issues (which block is slowest to traverse? when? which stops do we spend a lot of time at? when is bunching most common?).

    Some of this would benefit from driver feedback (i.e. if you polled all the drivers on a route as to the slowest section of their route, or ways to improve the route, you’d get useful information).

  13. We have to be careful in giving too much consideration to mere numbers. Statistics can be cooked to achieve whatever ends one desires. It is a common practice at airlines to pad schedules to improve their on-time percentage. For example, Air Canada typically pads its domestic schedules like YVR–YYZ by 20 minutes or so. This boosts the on-time percentage to about 70%. It is used to offset routes like YYZ–LAX, which has 50% on-time performance. People read a headline number like 77% on time, but they do not consider the detailed numbers.

    When the metro switches to ATO, schedule adherence and service quality should improve dramatically. The only variables left are how fast the doors can be closed and how many times the emergency alarms can be triggered. The first can be solved by hiring station assistants to help push people inside the train so that the doors can close. The latter can be solved by giving station masters and metro operators the same authority as the Toronto Police. This way they can remove problematic passengers from the train to the platform.

    The TTC should provide the data in these areas for the non-metro routes: on-time %, cancelled service and short-turned service. With these three data sets, one can calculate the reliability of the route. I would also change the definition of on-time performance: anything beyond plus or minus 15 seconds should be considered not on time.
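    As a sketch of how those three data sets might be combined, here is one possible reliability figure; the weighting (completed-trip share times on-time rate) and the sample numbers are illustrative assumptions, not any official TTC formula:

```python
def route_reliability(scheduled, cancelled, short_turned, on_time_rate):
    """One illustrative way to combine the three data sets: the share of
    scheduled trips that ran their full route, multiplied by the on-time
    rate of those completed trips. The combination is an assumption,
    not an official formula."""
    if scheduled == 0:
        return 0.0
    completed = scheduled - cancelled - short_turned
    return (completed / scheduled) * on_time_rate

# Hypothetical day on one route: 200 scheduled trips, 4 cancelled,
# 16 short-turned, and 80% of the completed trips on time.
print(round(route_reliability(200, 4, 16, 0.80), 2))  # -> 0.72
```

    A single number like this hides detail, of course (which is the commenter’s point about headline numbers), but it at least ties the three data sets together.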

    The TTC is not as bad as everyone makes it out to be. Buses get delayed in Japan too, and people have missed connections because of this. At least in Toronto, the 39 bus comes every 3 minutes or so. In Japan, buses do not run anywhere near that frequently.

  14. How to judge service quality?

    On routes running every 15 minutes or better: headway maintenance.

    On routes running less often than every 15 minutes: schedule adherence.

    Done.
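    That rule of thumb is simple enough to write down directly. A minimal sketch, where the 15-minute threshold comes from the comment and the +/- 50% “acceptable headway” band is my own assumption:

```python
def pick_metric(scheduled_headway_min):
    """The 15-minute rule of thumb: frequent routes are judged on
    headway maintenance, infrequent ones on schedule adherence."""
    if scheduled_headway_min <= 15:
        return "headway maintenance"
    return "schedule adherence"

def headway_score(observed, scheduled, tolerance=0.5):
    """Fraction of observed headways within +/- tolerance * scheduled."""
    band = tolerance * scheduled
    ok = sum(1 for h in observed if abs(h - scheduled) <= band)
    return ok / len(observed)

print(pick_metric(5))                                   # frequent route
print(headway_score([4, 9, 5, 5, 12, 5], scheduled=5))  # 4 of 6 in band
```

    The headway score deliberately ignores the schedule entirely: on a 5-minute route, nobody cares which car shows up, only that one does.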

  15. Well, OK, of course then you have to look at travel time, to make sure that the adherence is not done simply by pure padding.

  16. Raymond: if you’re correct and the drivers behave themselves when and only when supervised, the thing to do is to have a system for reporting exactly which drivers are not behaving themselves. After a while files will be built up and the bad actors will be sacked, assuming there is any management at all.

  17. There are really two types of short turn: a “scheduled short turn” and a “spur-of-the-moment” one. The latter are to deal with problems like accidents, operator issues, traffic or weather and are, presumably, decided on by Transit Control and Route Supervisors (with, as Gord reports, sometimes input from the operator). Then there are scheduled short turns, and these seem to be becoming more common. My main route is 504 King, and in recent months more and more eastbound cars are signed to either Church or Parliament. The ‘Church cars’ then do the loop via Richmond, Victoria and Adelaide and return to westbound service at Church. This has the effect of providing additional capacity to King West at the expense of King East. I raised this with the TTC and was told that they were trying to make the best use of a limited number of streetcars but agreed that they were “robbing Peter to pay Paul”. If they are short of streetcars when the Queen’s Quay and Lower Spadina lines are using buses, it does not augur well for the time (May?) when all lines will be operational and the new streetcars will still not be in service.

  18. Upon a quick read, London and Chicago seem to have especially strong showings when it comes to measuring service quality. I’d be curious to know how TfL measures journey times — whether they’re using Oyster card data (since it has mandatory taps in and out) or actually sending people out on trips.

    And journey times should be getting easier to measure — if Presto can’t do it, you could still get much of the way by running “simulated” passengers through the system based on reported vehicle locations and using fixed transfer times. Ideally, known escalator/elevator outages would inflate those transfer times. Even if the daily simulation was limited to a basket of several dozen representative trips through the system (kept secret to avoid “teaching to the test”), the results might be more representative of real-world service promptness than anything currently in place. (I’d argue promptness and crowding are separate statistics, albeit closely related.)
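    The simulated-passenger idea could be sketched roughly like this; the departure data, the fixed 2-minute transfer penalty, and the two-leg trip are all hypothetical stand-ins for what would come from the real vehicle-location feed:

```python
from bisect import bisect_left

def simulate_trip(legs, start_time, transfer_time=120):
    """Run one simulated passenger through observed vehicle departures.

    legs: list of (departure_times, ride_seconds) tuples per leg, where
          departure_times is a sorted list of observed departures (in
          seconds) at that leg's boarding stop.
    Returns total trip time in seconds, or None if a leg has no
    remaining service (e.g. the last vehicle has already gone).
    """
    t = start_time
    for i, (departures, ride) in enumerate(legs):
        if i > 0:
            t += transfer_time                # fixed transfer-walk penalty
        idx = bisect_left(departures, t)      # next departure at/after t
        if idx == len(departures):
            return None
        t = departures[idx] + ride            # board, ride to the next stop
    return t - start_time

# Hypothetical two-leg trip: a bus every 10 minutes, then a streetcar
# every 15 minutes, with a 2-minute transfer between them.
trip = [([100, 700, 1300], 600),   # bus departures, 10-minute ride
        ([0, 900, 1800], 480)]     # streetcar departures, 8-minute ride
print(simulate_trip(trip, start_time=0))  # -> 1380 seconds (23 minutes)
```

    Running a fixed basket of such trips against each day’s actual vehicle movements would give a daily journey-time distribution, and known escalator/elevator outages could be folded in by inflating the transfer penalty at affected stations.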

    Given Andy Byford’s time in London, I’d expect he’d be familiar with the TfL stats, though perhaps they’ve evolved since he left. That raises the question of why Toronto’s service quality KPIs remain rather inadequate 8 months after being introduced. Do you get the sense he’s busy tackling other challenges, having a hard time getting through established TTC culture on these issues, or largely satisfied with the limited service quality measurement currently in place?

    Steve: I suspect trip times are measured with real people because Oyster times don’t include station access from the street to the point where one encounters a reader.

    As for Andy Byford’s efforts, I understand that he wants to put a more robust set of metrics in place, and the TfL scheme is one he is looking at. Just how much detail we will get, and whether the TTC will truly move to regarding headway and lack of bunching as paramount, remain to be seen.

  19. Echoing DavidC’s comments above it seems that we have a new streetcar route – King via Dufferin or Bathurst in the west and Church or Parliament in the east.

    I’ve also wondered to myself that if the TTC didn’t short turn cars on such a frequent basis if the service might actually be more reliable in the eyes of the rider.

    I personally measure the service on how long it would take to get on a 504 streetcar versus the time it takes to walk the mile or so to and from the office. Sadly I can walk the distance and never see a streetcar, never mind one that I could actually board!

    Steve: Yes, King seems to be being managed as a split route with the short-turn destinations adjusted depending on how slow the running times are. Sometimes half of the service makes it to Broadview/Parliament, other times only to Church. Same with Dufferin vs Ronces in the west end. The problem, as anyone who rides the line knows, is that demand beyond the short-turn points is growing, and north-south service on the outer ends of the line can be quite spotty.

  20. Steve: I suspect trip times are measured with real people because Oyster times don’t include station access from the street to the point where one encounters a reader.

    Hmmm… most London Underground stations have gate lines at ground level only steps from the street. In central London, they typically occupy a storefront-sized space for the ticket office and barriers. With a few exceptions, the famously-long escalators are within fare-paid zones.

    There’d still be other limitations to using smart card data – someone who walks slowly, stops to buy a magazine (inside the fare-paid zone), or takes a less-than-optimal route could look like an underperforming trip. It’s a shame, because the data is being collected on so many trips, every single day.

  21. Steve says:

    “Vehicles can disappear from NextBus when the operator flags them as “out of service”. They are actually still in the open data feed, but NextBus does not display them. A related problem exists with short turns that go off route. If the off-route section is not defined as being part of the route (this is a function of how TTC sets it up in their system), then the vehicle disappears from the public display.”

    I have been watching “NextBus” recently, observing the goings-on of the streetcar service, specifically 503, 504, 505, 508 and 510. On a particularly bad day on King there were probably more cars off the route than on it. I watched one car make the east-to-north turn at Church – another Church short turn, no doubt. But it continued north past Queen; this could get interesting, I thought. Next time I looked it was at Bloor. It continued along Davenport, then disappeared northbound on Avenue Road. A gap bus, I suppose, that did not turn off its transponder. So sometimes the vehicles show when well off route and sometimes they don’t.

    On many days you would think that the TTC still ran the Parliament car. There seem to be more cars on it than on Broadview. I just watched a Dundas car go north on Parliament, west on Carlton, then north on Spadina. Now that is really providing useful service on Dundas. It has now been passed by 4 Spadina cars in the station. I wonder if they wanted to get a bad car out of the way? Perhaps there is still some need for tail or storage tracks in the loop.

    Perhaps it is time that the TTC looked at changing some of its long routes into 2 shorter routes, especially for the streetcar service. Running the 502 and 503 service on their lousy headways is basically useless. Send both cars along the same route so there is almost decent service on one line. I would pick 503 to help with the east end of King. I would short-turn the off-peak service at Queen so there would be a 10-minute service on Kingston Road and free up a car or 2. It would still make a connection with 22 Coxwell.

    I don’t know if it is possible or practical to split Carlton and Dundas in 2. College car, anyone? There are 3 streetcars and 3 buses on Parliament right now.

    In the west end I would use your plan to turn the Queen car at Humber and run CLRVs on 508 at a 5-to-6-minute headway. This would free up some ALRVs for King rush hour service.

    I agree that for service of 10 minutes or better, headway, not schedule, is more important. Another KPI should be what per cent of service actually makes it to its scheduled terminus. Also, a measure of how many times the gap between vehicles is more than 2 or 3 headways would be good to know. If you have 6 timing points on a line and 4 of them are between standard short-turn points, all the service could be turned at both short turns and the line would still, on average, be above the 65% criterion for buses. Perhaps it is time that the TTC asked the passengers what they consider important instead of what makes management look good.
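    Both suggested KPIs are easy to compute once per-trip records exist. A minimal sketch, where the trip records and the `reached_terminus` field name are hypothetical:

```python
def terminus_percentage(trips):
    """Percent of trips that reached their scheduled terminus
    (i.e. were not short-turned). 'reached_terminus' is a
    hypothetical field name, not a real TTC data element."""
    reached = sum(1 for t in trips if t["reached_terminus"])
    return 100.0 * reached / len(trips)

def wide_gap_count(headways, scheduled, factor=2):
    """Count observed headways wider than `factor` scheduled headways."""
    return sum(1 for h in headways if h > factor * scheduled)

trips = [{"reached_terminus": True}, {"reached_terminus": False},
         {"reached_terminus": True}, {"reached_terminus": True}]
print(terminus_percentage(trips))            # -> 75.0
print(wide_gap_count([5, 14, 6, 22, 4], 5))  # -> 2 gaps over 10 minutes
```

    Unlike an averaged on-time percentage, neither figure can be made to look good by short-turning everything between two well-placed timing points.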

    Pardon my ramblings but this topic is very good for venting on. Watching NextBus it is frustrating to see gaps move from one end of the line to the other and come back with no improvement.

    Steve: The problem of gap management shows up in spades when you look at a graphic representation of service. Gaps that travel back and forth across the city with no attempt to fix them stick out like sore thumbs. Another obvious problem is the absence of management of short turns re-entering service to actually fill a gap. Of course, the idea of a graphic timetable has been around since 1885 (Marey’s oft-reproduced Paris-Lyon railway graph), and I have published “as operated” versions of this graph for several routes. I have started again working on several sets of data for the Queen car, and I intend to report on its behaviour based on various metrics discussed in the article and comments.

    As for splitting routes, this does not always work depending on the O-D pattern. For example, how many people going to the Queen’s Park / Hospital / UofT district on College originate east of Yonge? There are no decent places to turn cars around on Dundas or College/Carlton, and operators don’t like places where they can’t take a break (remember the problems with the split Queen operation at Dufferin and Parliament).

    On Kingston Road, the TTC really should schedule a 10 minute off peak service and stop short turning 502s westbound at Church and eastbound at Woodbine Loop. This is a travesty of TTC service, but they have driven away almost all of the riding, so nobody much cares. And, yes, just one route 502 or 503, but not both, so that we might see reasonable headways and a bit more consistency in service. By the way, you assume that the TTC would send all of the 22 Coxwell’s up to Bingham Loop if the streetcar were cut back off-peak. Don’t count on it.

  22. I find that this discussion is starting to turn in an interesting direction!

    Benny Cheung:

    “I would also change the definition of on time performance. Anything beyond plus or minus 15 seconds should be considered not on time.”

    Nathanael:

    “How to judge service quality? On routes running every 15 minutes or better: headway maintenance. On routes running less often than every 15 minutes: schedule adherence.”

    and

    “Well, OK, of course then you have to look at travel time, to make sure that the adherence is not done simply by pure padding.”

    and Raymond:

    “if you’re correct and the drivers behave themselves when and only when supervised, the thing to do is to have a system for reporting exactly which drivers are not behaving themselves. After a while files will be built up and the bad actors will be sacked, assuming there is any management at all.”

    There seems to be a very real emphasis of “blame the Operator” going on here. If any of you have ANY seat time behind the wheel of a TTC surface vehicle, please feel free to assign blame! Schedule adherence of +/- 15 SECONDS – in a dedicated ROW this would be difficult to maintain! In mixed traffic it is impossible! It can take 30 – 60 seconds to exit a bus bay – provided you get a motorist who actually understands the “yield to bus” law!

    How about taking a look at the (lack of a) Service Planning Dept. who seem to believe that schedules from 20 years ago are still relevant? How about looking at the addition of traffic signals along roads that are experiencing additional traffic from massive condo developments? The vast majority of Operators do their job and do it well. I agree that there is a small minority who do take advantage and run hot, “soaking” the bus behind. CIS does do a reasonable job of monitoring the routes, but each console covers a large number of routes and vehicles.

    Steve: Considering that the subway does well to stay on time (or on headway) within a few minutes, keeping within a quarter-minute on the surface would be an astounding achievement.

    There is a problem with “soakers” but they are a minority. What is frustrating is to look at plots of actual operation and see the same pair of vehicles (bus or streetcar) follow each other like a train back and forth for several trips. That’s a case where CIS is not doing its job. You can even tell the off-days of the soaker by noting the days of the week on which this problem doesn’t happen.

    Headway management has been around for a long time, but the TTC seems to have forgotten how to do it. I remember decades ago having route Inspectors simply hold cars until they were a decent distance behind their leaders. I don’t buy the argument that the poor folks in CIS have too much work to handle — if that really is the case, then more staff are needed to adequately monitor and manage the service, and the tools really are needed on the street. It is disheartening to see two route Supervisors standing at King and John keeping track of the service with pen and paper. At least eastbound there is now a NextBus display in the shelter.

    On the labour/management front, yes there are bad apples, but they get away with it because of management’s indifference. Moreover, some routes have appalling reliability evenings and weekends. This implies a complete absence of line management. Possibly it’s a schedule problem, but I don’t think this is the issue.

  23. “Very short headways can be little bonuses (a second bus coming just after you think you have missed one), but most riders arrive at stops in the big gaps, not the little ones. Experienced riders will try to board the first vehicle that arrives even with a second one in sight. If it is short-turned, at least they can drop back to the following vehicle. Greater assurance of getting to their destination takes precedence over getting on a less-crowded vehicle.”

    This adage is true only for streetcars – with buses, there’s always a good chance the second vehicle will leap-frog the first, and that first-arriving bus is the one that ends up short-turned.

    This happened to me last night – I waited almost 20 minutes just after 7 PM for a southbound 7 Bathurst near Sheppard. I boarded the first bus, which was packed. This was a big mistake, because before Wilson the second bus passed it and was within sight – but too far ahead by the time the driver got the instruction to short-turn at the ridiculous location of Roselawn Avenue.

    By the time the bus dumped everyone at Roselawn (actually, not Roselawn, but Ridelle, one block back), the other bus was three blocks ahead, going out of sight. We had to wait eight minutes for the next two-bus convoy to show up, one of which – the second in the batch – was, of course, short-turning (I don’t think those passengers were able to get on our bus either!).

    The Roselawn short-turn location for southbound buses is a really bad location. There’s merely a shelter at Ridelle, surrounded by nothing except low-rise apartments and a church, and just too far short of Eglinton, where some of the passengers on my bus were headed.

    There was no congestion on the route that Sunday evening; the problems were likely the effects of earlier operational delays. Perhaps the TTC should abandon the idea of even trying to run a through Bathurst bus from Bloor to Steeles, as half the buses on the route these days don’t anyway.

    Steve: This is an excellent example of service that is being managed for the internal benefit of the TTC, but not for passengers. Customer service? At least if a short turn were contemplated, a good line manager would arrange a handoff of passengers from one bus to another.

  24. Many times, down at St. Joseph’s Hospital on The Queensway after 10 pm, I have walked up to Dundas West Station without a single streetcar passing me.

  25. Steve wrote:

    “There are no decent places to turn cars around on Dundas or College/Carlton, and operators don’t like places where they can’t take a break (remember the problems with the split Queen operation at Dufferin and Parliament).”

    Somebody remind me who transit is supposed to benefit, the customer or the operator. Yes, the operator deserves a break – but that does not mean that there are not options, or better solutions. And anyway, I do believe the Queen split was set-up to a certain extent. If the TTC really wanted to split up the route, it could have come up with a better choice.

    Steve: I agree that the TTC did a rotten job of implementing the split Queen service, and pissed off a lot of riders by inconsistencies in the degree of route overlap and allowing ops to forbid riders to use the cars to the furthest points. As designed, it should have at least been a Dufferin to Broadview overlap, but was in practice a Shaw to Parliament one. That said, large around-the-block loops are not ideal ways to turn back service because the vehicles spend so much time off route.

    Gord wrote:

    “There seems to be a very real emphasis of “blame the Operator” going on here. If any of you have ANY seat time behind the wheel of a TTC surface vehicle, please feel free to assign blame!”

    When it comes to the bad apples, yes, blame them. In other cases, the “bad operator” may simply be having a bad day, may have had to deal with a lot of difficult passengers, or may be trying to get back ‘on schedule’ without success; these explain a lot of situations where people would ‘blame the operator’.

    I have encountered many good drivers/operators on the TTC. But the drivers are always the first ones to be blamed when the bus/streetcar is late when the problem may very well not be their fault.

    However, I can tell you there are drivers who are also mediocre, or may appear not to care about the passengers who are helping to pay for their salaries. For example, I have seen streetcar drivers pull a streetcar around the loop at Long Branch and then wait several minutes – reading a newspaper, drinking a coffee, etc. – before allowing passengers to board. Yes, the streetcar is not going anywhere for a few minutes, but as a passenger it would be a nice gesture to allow us to board and find a seat while the driver does this.

    Other drivers will allow passengers to board while reading their newspapers, etc. And this makes the drivers who don’t – who in their own way are very understandable in waiting – look bad; it’s more annoying at the time than afterwards, in reality. However, it’s a consistency issue – why should one driver do something one way, and another driver do something different? In this sense, it’s the policy (or the lack of a policy) that needs to be fixed, not the operator’s actions.

    So yes there are times to “blame the operator” and times not to do so. And not all issues are going to be clear cut either.

  26. Summer before last, I needed to get home to Bolton without a car from Yonge-Eglinton. Mid-afternoon from Yorkdale, the GO bus passed Pearson Airport without stopping there, and let me out on Airport Road about one kilometer south of Derry Road and about 2 diagonal kilometers from the Malton GO station. The other passengers told me that this is the transfer stop, and to follow them. Trudging through a wild field and a couple of parking lots and crossing the GO tracks with my baggage, I arrived at the station and had a nice 2 hour wait for a bus to Bolton. When I say “nice wait”, I am being facetious. And, of course, when arriving in Bolton, I still had a 2-kilometer trek on foot to home. Total commute time: 5-1/2 hours. At least, I arrived the same day (see below). I cannot blame the bus drivers, they were faultless. Yes, 65-passenger luxury coaches running with 5 or 6 passengers!! Oh, btw, I need a hip replacement and knee surgery, that makes the hike just a little more sports-like.

    Second horror story is not so bad because I never actually got on the bus! On the GO website, I was checking times and routes for a bus from Bolton to Pearson Airport. Of course, this time I would take a taxi to the appropriate bus stop. Well, the best I could do for my departure on a Monday was 3 days and four GO buses. Umm, I would need more hotels en route to the airport than at the destination!

    This afternoon on Talk Back Toronto on CTV, a lady described trying to commute within Durham Region, that the various municipalities’ transit is unco-ordinated, a 15-minute trip by car takes 90 minutes by bus.

    Steve: There has been far too much emphasis on “cross border” integration with the TTC, and not enough attention to the local 905 systems and GO Transit. Part of this comes from parties who have “solutions” like a standard fare card or a takeover of so-called regional services by Queen’s Park. Actually spending money to provide better, integrated service and to get rid of the complexities and extra cost (to riders) of multiple fare systems never quite seems to be discussed. Attacking the TTC is a handy smokescreen, an alternative to looking at the larger picture and, dare I say it, the real implications of a “Big Move”.

  27. I don’t spend very much time on buses; due to my location, it’s mostly streetcar/subway. However, I had a few points I wanted to mention in terms of (my) rider’s perception. The system has days ranging from lightning-in-a-bottle brilliant to post-apocalyptic horror. Overall, though, my experience is positive, and I tend to travel during both peaks and throughout the day. Here are some route-specific observations:

    502/503 :: Yes, these routes should be combined and full service should be restored to this line. I work on Kingston Road, and most of my clients don’t even realize that there is a streetcar (seriously – despite the trackbed). Perhaps in the future the track could be [re-]installed north from Upper Gerrard into Coxwell Station, eliminating the #22 bus (aside from overnights, perhaps).

    Steve: The layout and structure of Coxwell Station is such that putting a streetcar loop in it is not workable. Moreover, it’s overkill to build track on Coxwell north from Gerrard when the real problem is simply that the TTC runs lousy service on the 502, especially in the off-peak when they have no excuse like “we don’t have enough cars”.

    504 :: I have experienced the Church/Parliament or Dufferin/Roncesvalles short turns more often than I have ever experienced full service between Broadview Station & Dundas West Station on this line. Sadly, I do feel this would be the best candidate (besides the multi-serviced Queen line) for splitting into two routes; ideally Dundas West to King Stn (via Church), and Broadview to St. Andrew (probably Charlotte loop).

    Steve: This is a good example of the need for a proper OD study. King is a route with many overlapping demand patterns and it is important to understand them before arbitrarily splitting the route. Also, the Charlotte loop is a very congested area where King cars could be trapped unable to make a quick turnaround.

    505 :: A minor point of contention; has *ANYONE* ever used this streetcar while packed-to-capacity? I have never experienced it at any level approaching capacity.

    506 :: @Robert, I spend the majority of my time on the 506 and 512. There is no need, IMO to truncate this line into separate sections. When the TTC says “it’s all the fault of big bad traffic, and we can only operate streetcars properly in ROWs”, I say “506.” I have rarely experienced (key word to this discussion) wait times exceeding 15 mins, more often 5-10. I experience this line as predictable, comfortable and reliable — in the extreme East and the extreme West.

    Steve: I have not yet published an analysis of the 506 Carlton route, but preliminary work supports your point that it is comparatively well-behaved. There remains, however, a problem of headway reliability, especially in the off-peak, just as on many other routes.

    511 :: @Sean, Steve, is there any scheme in consideration to alter the terminal arrangement on this road? ie: to extend the 511 through Bathurst Station to St. Clair West Station permanently, once new streetcars become available (I understand there is some issue with wet weather and the Davenport escarpment in terms of doing this now) and consequently, terminating (a majority) of #7 buses at St. Clair West?

    Steve: This topic comes up from time to time, and the real issue here is that the place people on Bathurst want to get to is Bloor Street. Again this is a question of looking at the OD patterns and building route structures around them. When the TTC did operate some of the 7 Bathurst service into St. Clair West Station, it was very lightly used compared to the through service to Bloor.

    Decades ago when the streetcars came up to St. Clair, there were far fewer people living on Bathurst further north, and there was a heavy transfer move from the St. Clair car to the Bathurst car (which ran along Adelaide to Church and returned via King during the weekday) as an alternative to using the Yonge line. (If you go back far enough there was also a service from Lansdowne and St. Clair to downtown via Avenue Road and Bay.) The track layout partly preserves these old demand patterns, but that doesn’t mean we should try to resurrect them.

    512 :: My home stretch. I live in the strategically flawless location between St. Clair West Station and Oakwood Loop. IMO Oakwood Loop should have regular AM short turns for westbound cars; regardless of assertions to the contrary, this does not occur. Service westbound in the morning is extremely low. This scheduled short turn should see cars returning eastbound, in *ADDITION* to regularly scheduled short turns at Lansdowne Loop. By the time (any) streetcar reaches St. Clair West Station in the morning rush, we have some precarious hangers-on, on the roof, out the windows, etc.; this car is crushed in the AM.

    I don’t feel I spend enough time on the 501/508 to comment, though in the highly random situations where I find myself in southern Etobicoke, I tend to find myself there for a long, long, long time … likewise entering/exiting the Beach. Perhaps route branches like Neville Park to Osgoode (via McCaul); Queensway to Queen Station (via Victoria ?); and Long Branch to Roncesvalles … These would be ‘customer-oriented’, as opposed to management-oriented.

    Steve: I have long argued that the merger of the 501 Queen and 507 Long Branch cars was a mistake, but the TTC clings to this arrangement. Simply restoring the break at Humber Loop is not a good idea because it is not a destination in its own right, and a lot of the service would never get that far west leaving a gap. There are various ways one might reorganize the service with a common theme being that the Long Branch cars need to come east at least to Roncesvalles so that there is a scheduled overlap with the Queen cars. Queen itself is far too long a route.

    Like

  28. Not all short turns are created equal.

    During the morning rush, bus 60 (Steeles West) operates quite well. During the evening rush and even after the rush, the route runs with very irregular headways, and it is often a challenge to board a westbound bus at Finch subway. The reason: some eastbound buses do not reach Finch station, where crowds of riders are waiting, but short-turn at Yonge or at Bathurst and travel back west with a light load.

    Like

  29. Steve says:

    “On Kingston Road, the TTC really should schedule a 10 minute off peak service and stop short turning 502s westbound at Church and eastbound at Woodbine Loop. This is a travesty of TTC service, but they have driven away almost all of the riding, so nobody much cares. And, yes, just one route 502 or 503, but not both, so that we might see reasonable headways and a bit more consistency in service. By the way, you assume that the TTC would send all of the 22 Coxwell’s up to Bingham Loop if the streetcar were cut back off-peak. Don’t count on it.”

    No, I was not talking about running the 22A service, but a Kingston Road streetcar running only on Kingston Road in base service. The 22 does a large loop around the old race track grounds, so it would make a transfer connection with the streetcar.

    I agree that there are no useful loop locations to split the route in two east-west, but there should be some thought given to taking cars off one route and sending them back on another, as the TTC does with the 508 in the a.m. rush when the cars return as Carlton cars to serve the U of T.

    On some routes the TTC’s schedule itself causes bunching. A few years ago I was having dinner on Eglinton near the Eglinton GO station and noticed that the Morningside buses ran in pairs, a short turn followed closely by a bus on the longer branch. This went on in both directions all rush hour. When I got home I looked up the schedule and, although there was equal service on both branches, the long branch left one minute after the short-turn branch. Perhaps this was to spread the load so that there would be room on the longer branch’s buses.
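    The scheduling effect described above can be sketched with hypothetical numbers (the actual Morningside headways are not stated in the comment, so a 20-minute headway on each branch is assumed): two equal-frequency branches offset by one minute produce alternating one- and nineteen-minute gaps at shared stops.

```python
# Hypothetical sketch: two branches of one route, each on an assumed
# 20-minute headway, scheduled to leave the common terminal 1 minute apart.
short_turn = list(range(0, 120, 20))              # departs at :00, :20, :40, ...
long_branch = [m + 1 for m in range(0, 120, 20)]  # departs at :01, :21, :41, ...

# Combined departures as seen at a stop served by both branches.
departures = sorted(short_turn + long_branch)
headways = [b - a for a, b in zip(departures, departures[1:])]

print(headways)  # alternating gaps: [1, 19, 1, 19, 1, 19, 1, 19, 1, 19, 1]
```

    Even though each branch is perfectly on schedule here, riders at shared stops see buses arrive in pairs, which matches the observation above.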

    Like

  30. In fairness to the TTC, not all the problems with ‘customer service’ are their fault. The politicians manage to screw things up too. An example: Councillor McConnell (usually a very sensible councillor) is proposing at the Toronto and East York Community Council next week that the parking arrangements on King Street East be changed, with the result that the rush-hour streetcar-only lane between Sherbourne and Parliament will end. The TTC has registered its objections, the traffic folk do not sound too keen, and it will be interesting to see if this goes through. If it does, it will, as the TTC predicts, cause further problems on the 504. As the operation of the 504 is clearly in need of study and improvement, it might be a better use of Council time to order a study of more (not fewer) parking restrictions and left-turn prohibitions, and even to insist that the police enforce the restrictions that already exist.

    Steve: Local Councillors have a bad habit of falling in love with parking for “local businesses” and this takes precedence over transit issues. The proposed change is in the “counterpeak” direction, but this ignores the fact that King is a busy street both ways, especially for transit vehicles. As for the transit lane, it was a joke the day it was introduced. It has never been enforced (that is physically impossible in some locations anyhow thanks to curb lane parking and standing), and it should be removed.

    Like

  31. On using Nextbus to see short turns: my website, TransSee, continues to show the location of streetcars that have gone off route because of a short turn.

    It can also show how late or early vehicles are compared to their schedule, so you can see which ones are likely to short-turn, and who is late and who is early in a bunch. (This feature must be enabled in the settings.)

    Like

  32. RE: 502 and 503

    Could the lousy service I hear about on these routes be because most of their route is shared with other routes? As in, does the TTC feel that the passenger volumes along these routes are being split?

    Steve: They have shared routes for decades, but at better headways, when the separate services each came at a reasonable interval. Today, each route has a peak scheduled frequency of 12 minutes, and for outbound riding there is a good chance that this will not actually operate past the peak point. 502 cars are short-turned eastbound from Victoria, and some 503 cars don’t operate at all, or run wildly off schedule, because they are crewed by spare operators who, if available, may not take the car into service on time. This “quality” of service does not encourage use of the routes, and in the case of short turns manages only to keep cars “on time” while denying service to riders. This is bad management for which there is no excuse.

    Like

  33. Gord, I certainly appreciate what you do every day. In Montreal, if you love your car, you will stay far away from merging buses. Give it time and Torontonians will have the same thinking.

    Changing the definition of on-time performance to +/-15 seconds is not unreasonable. If a vehicle falls 15 seconds further behind at every stop on a 20-stop route, that is a 300-second delay at the end. From a passenger’s perspective, one should not have to play a guessing game as to whether a vehicle has come or not. If I arrive at a stop at 11:28 AM and the schedule says that a bus should come at 11:27 AM, there should be a good chance that I have missed the bus. However, Nextbus may alleviate this problem somewhat.

    Steve: No, the situation you describe is a bus that gets 15 seconds later between every stop, and will be outside of your standard very quickly. When systems talk of being +/- x minutes, this is relative to the scheduled time (or headway) at each stop. Buses don’t add, say, three minutes lateness between each stop along the way.
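    The distinction between cumulative drift and a per-stop on-time window can be sketched numerically (all figures illustrative, not actual TTC data): a bus that loses 15 seconds between every stop is measured against the schedule at each stop, and its lateness grows along the route rather than staying within the window.

```python
# Illustrative sketch: a bus that loses 15 seconds between every stop
# drifts steadily later, so lateness measured against the schedule at
# each stop grows along the route rather than staying within +/-15 s.
STOPS = 20
DELAY_PER_LINK = 15  # seconds lost between consecutive stops (assumed)
WINDOW = 15          # on-time tolerance in seconds

# Lateness relative to schedule at stop i (0-indexed; on time at the start).
lateness = [DELAY_PER_LINK * i for i in range(STOPS)]
on_time = [abs(sec) <= WINDOW for sec in lateness]

print(lateness[-1])         # 285 seconds late by the last stop
print(on_time.count(True))  # only the first 2 stops fall inside the window
```

    (With 20 stops there are 19 links, hence 285 rather than 300 seconds, but the point stands either way: the bus falls outside the ±15-second window after the second stop.)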

    The TTC bus is usually the starting point for most users. If a bus is delayed or does not come, the whole trip is affected. This is why, if there is a spot that consistently delays buses, the Public Works Department or the police should fix it. If congestion is the problem, then changing the KPIs will not alter the fact that transit is not attractive.

    I sincerely hope that once Presto rolls out, transit operators will find it easier to maintain schedules. With no need to check fares or transfers, and no fare disputes, each boarding should be faster.

    Steve: This is a real challenge. Some AFP implementations have actually slowed boarding because of technology problems — the speed of card verification versus the process used today. In Ottawa, Presto ran into big problems with slow and unreliable response times on card readers. We will see in the next few months if this has actually been fixed for more than low volumes of card usage.

    Like

  34. In your second-last reference, “Victoria (Australia) Transport Policy Institute”, the Australia reference is incorrect; the VTPI is based in Victoria, British Columbia.

    Steve: Ooops. Apologies to Todd Litman and the folks at VTPI for transplanting them across the Pacific Ocean. I have corrected the article.

    Like

  35. Steve:

    There has been far too much emphasis on “cross border” integration with the TTC, and not enough attention to the local 905 systems and GO Transit. Part of this comes from parties who have “solutions” like a standard fare card or a takeover of so-called regional services by Queen’s Park. Actually spending money to provide better, integrated service and to get rid of the complexities and extra cost (to riders) of multiple fare systems never quite seems to be discussed. Attacking the TTC is a handy smokescreen, an alternative to looking at the larger picture and, dare I say it, the real implications of a “Big Move”.

    Many of the 905 systems have a long way to go in improving their services, but I do not think that any transit solutions for the GTA are going to work effectively without the TTC and GO Transit improving their services to reach more of the GTA.

    An expansion of GO rail coverage is a great idea but there is a lot that can be done to improve on what the GO bus and train-bus services are offering. I think that a lot can be done for public transit in the GTA just with more frequent & reliable regular and “express” buses.

    I think that integration by itself should not be used as a cudgel to batter the TTC, but there are so many examples where the artificial barriers to operation (municipal boundaries) and barriers to demand (double fares) are really hurting transit use. Speaking as a MiWay rider bound for Islington who sees the bus passing by all the passengers waiting at TTC stops… well, it’s not fair to them.

    cheers, Moaz

    Like

  36. Speaking of integration, back in 2010 MiWay’s 101 Dundas Express started running out to the Oakville North bus terminal off Trafalgar Road. However, demand for the service was low and the trips were affected by traffic volumes on Dundas (which was 2 lanes wide west of the 403).

    So to make a long story short, the 101 bus was cut back from Trafalgar Road to Ridgeway Drive (just west of Winston Churchill Blvd) because of a lack of demand … and yet Metrolinx and Halton Region want to build a BRT on Dundas from Kipling station out to Burlington.

    cheers, Moaz

    Steve: Yes, and the proposed service levels (as shown in the “Benefits Case Analysis”) are worse than on almost every TTC bus route when you get out to the Burlington end of the line. A good exercise in map drawing and keeping the former Mayor of Burlington (and former head of Metrolinx) happy.

    Like

  37. It sounds like TTC management

    (1) Does not know how to maintain headways (as perceived by passengers)
    (2) Does not try to maintain headways
    (3) Does not measure headways
    (4) Does not identify what is causing failure to adhere to headways (e.g. particular “soaker” drivers, traffic interference, long dwell times, whatever)

    This sounds like a management problem, and a very serious one. Also one which *ought* to be easy enough to fix given a little money and a boss who cares: you bring in someone from a transit system which does maintain headways and he or she makes it happen, retraining and replacing management employees as needed until the problem is solved.

    What’s missing here? The boss who cares?

    Steve: Actually they do measure headways, but seem content to hit targets that are set at historically low values. I have heard Andy Byford talk about the next stage in his renewal of the TTC being an attention to service quality (this is something other places he has worked do already), but the real challenge will be aiming high enough that passengers actually see a difference. As for the other issues, yes, these need to be addressed rather than simply chalking problems up to “congestion” even at times and locations when this is patently absurd.

    Like

  38. More of the same old, same old: “blame the operator”. How about updating the run times – what was good 25 years ago doesn’t work today. Service Planning needs to remove their heads from their behinds!! This is why I call it “lack of” service planning! We are faced with increased loading standards, reduced frequency, and increased traffic congestion; and the TTC still expects that schedules from 25 years ago will work! When the schedules don’t work, it is the operator’s fault!

    To all of the critics: spend some time where I sit; see if you can do better!! Put your money where your mouth is!! We, as operators, do the best that we can; we deal with inadequate service standards, traffic congestion, inferior equipment, and the increasing demands of the passengers. Add in mobility devices and strollers on the low floor buses, and you can see what we deal with every day.

    Like

  39. I am happy to report that at the last Toronto and East York Community Council meeting Pam McConnell proposed the ‘indefinite deferral’ of her proposal to allow parking on King Street East. Now to get the TTC to run a decent service on the 504!

    Like

Comments are closed.