The TTC’s Andy Byford recently committed to a major reduction in subway delays by 2019 thanks in large measure to the introduction of a new signal system on Line 1 Yonge-University-Spadina.
Can this level of reduction actually be achieved? To get a sense of this, we must look at the actual delay reports from the TTC to see the location and type of each service interruption. If a problem accounts for only one quarter of all delays, then a three-quarter reduction in delays cannot possibly be achieved simply by eliminating that class of problem. Similarly, if half of the delays are on one line, and a new technology is implemented on another, then nothing has been done to address delays on the untouched part of the system.
The TTC reports overall reliability statistics in the CEO’s report, but these are (a) not subdivided by cause and (b) use a measure of service “reliability” that would give a perfect score even if a large proportion of the service were missing. As long as trains are close enough to each other, even if there is one gap leading the pack or all of the trains crawl along the line, they do not count against the metric the TTC used for years to report service quality.
The counts below were calculated by reviewing all of the TTC eAlerts for service interruptions on the four rapid transit lines for January 2016. Anyone who reads these alerts and rides the system regularly knows that some delays never make it to an alert, but this is the best we have for published information.
Type of Delay                  Line 1     Line 2     Line 3     Line 4
Passenger Alarm                6 (16%)    14 (23%)   2 (18%)
Signals                        11 (30%)   11 (18%)              3 (60%)
Mechanical                     1 (3%)     4 (7%)     8 (73%)    1 (20%)
Track                                     3 (5%)
Switching                                 1 (2%)
Security                       5 (14%)    6 (10%)               1 (20%)
Police Investigation           3 (8%)     4 (7%)
Fire Investigation             3 (8%)     8 (13%)    1 (9%)
Medical                        3 (8%)     4 (7%)
Power Off                      4 (11%)    3 (5%)
Unauthorized at Track Level               1 (2%)
Late Clearing Work Zone        1 (3%)     1 (2%)
Unspecified                    0          1 (2%)
Total                          37         61         11         5
The details of the incidents are in the following table.
This is only a one month sample, and there are bound to be somewhat different numbers for other periods, but this gives the flavour of the situation.
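For anyone wanting to reproduce this kind of tally, the counting above amounts to a simple group-by over the alerts. A minimal sketch in Python — the record list and labels here are invented stand-ins for whatever would be parsed out of the eAlert texts:

```python
from collections import Counter

# Hypothetical records parsed from eAlerts: (line, cause) pairs.
alerts = [
    ("Line 1", "Signals"),
    ("Line 1", "Passenger Alarm"),
    ("Line 2", "Signals"),
    ("Line 2", "Fire Investigation"),
    ("Line 3", "Mechanical"),
]

# Count incidents by line and cause, then express each cause
# as a share of that line's total, as in the table above.
counts = Counter(alerts)
line_totals = Counter(line for line, _ in alerts)

for (line, cause), n in sorted(counts.items()):
    share = 100 * n / line_totals[line]
    print(f"{line}  {cause}: {n} ({share:.0f}%)")
```

The percentages in the table are exactly this per-line share, which is why each line's column sums to 100%.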
Signal problems, annoying though they may be, account for only 30% of the delay incidents on the Yonge line, and 18% on the Bloor line for January 2016. That said, these tend to be the most long-lasting incidents and so their effect is greatly out of proportion to their number. Moreover, the incident count is higher on the Bloor-Danforth line by a wide margin. The new signals on Yonge-University will certainly improve conditions on that route, but delays overall will not be a thing of the past.
If the TTC is going to shoot for improvements, then they must start reporting their delay statistics at a more granular level so that a link will be visible between the type of “fix” undertaken and the location and type of delay whose stats go up or down. This would be a much more useful measure both of service quality and of the effects of management programs to improve the quality of transit service.
I’m assuming the recent raccoon incident and the injured dog incident from a few weeks ago would be in the “Unspecified” row.
Steve: Neither of these showed up because the dog injury was in December, and the raccoon was in February. The one incident in that row was a delay at Castle Frank that was reported as ending, but not starting, and without a cause.
A better metric would be “delay-minutes” – the total minutes trains are delayed by an incident. A signal failure tends to delay lots of trains by a long time, so signal upgrades should (hopefully) reduce this.
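To make the suggested metric concrete, here is a minimal sketch of a “delay-minutes” calculation — the incident figures are made up purely for illustration; real numbers would come from logged start/end times and counts of held trains:

```python
# Each hypothetical incident: (cause, trains_delayed, minutes_each).
# Delay-minutes = trains delayed x minutes of delay per train, summed by cause.
incidents = [
    ("Signals", 12, 15),        # one signal failure holds many trains for a long time
    ("Passenger Alarm", 3, 4),  # a PAA typically holds a few trains briefly
    ("Passenger Alarm", 2, 3),
]

delay_minutes = {}
for cause, trains, minutes in incidents:
    delay_minutes[cause] = delay_minutes.get(cause, 0) + trains * minutes

print(delay_minutes)
```

With these invented numbers, the single signal failure accounts for 180 delay-minutes against 18 for two passenger alarms, which is the commenter’s point: counting incidents alone understates the weight of signal failures.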
I would think the TTC would want passengers to come off the train as quickly as possible, without endangering them, so the trains can be kept moving, but it seems like they take a very long time to get going again. I’m curious what the process is when passenger alarms are activated, and whether it can be improved.
Steve: First off, there are a lot of false alarms by people who misuse them, although that number is declining. There is no point in trying to offload a train when the most likely situation is a medical emergency (reported under a separate description), and the issue is whether the ill passenger can get off of the train, or if this is not practical. For security situations (a fight on board, for example), again the situation is specific to the occurrence. If you look at the table, you will notice that the ones reported simply as “PAA”s are fairly short in duration.
Reducing the number of delays doesn’t directly equal better service. You could shut down Line 4 and eliminate 100% of delays on the line or 4.4% overall in January.
Steve: This is precisely why I would like the TTC to report their stats on a disaggregated basis — if we don’t understand the types of delays that now occur, how can we possibly document that they have been reduced? Alternatively, it is possible that the TTC is promising an “improvement” they cannot possibly achieve.
6 total passenger alarms only on Line 1 for the whole month of January seems very low….
However, the point I want to make is that I suspect there is a domino effect from the delays. A signal-related delay will result in much more crowded trains and longer travel times. As a result there is a higher likelihood of a medical issue on a train or somebody pressing the PAA. I suspect that if there were fewer signal-related delays (or delays of any type really, although the signal ones are usually longer), there would be fewer delays in some of the other categories.
Steve: That is quite likely, but it would be good to see some correlation in the data. If you look through the list, most of the medical emergencies do not have an associated signal outage, or something else that would trigger overcrowding and delayed trains. The problem may lie in bad data, but I am using the TTC’s own alerts. If there is so much missing that these stats are seriously off, then we have quite another problem at work beyond the number and type of incidents.
Interesting that Line #2 has more passenger alarms yet Line #1 is more crowded during rush hour. (Perhaps there is a sample size issue here.) Or maybe there is no correlation between crowding and passenger alarms.
Steve: Or possibly line 1 was better behaved during January. I’m not sure I feel like digging through more data to establish a long-range pattern, but hope that the TTC will make this a standard report on their own. That is the purpose of this article.
They need to avoid making grand claims of what they will improve without underlying data to show which areas can be targeted and how they evolve over time. Remember that this claim was made in the same press announcement where the level of increase in City operating subsidy for 2016 was overstated by more than four times the actual figure.
An even better metric would take into account the number of people affected and for how long.
A signal failure at Union is going to impact more people than one at Kipling. Smoke on track might affect two directions where a signal failure may only affect one direction, etc. Also, time is important. We don’t need to be fixing signal failures at 1 am in 5 minutes, but at 8:30 am it is mission critical.
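A passenger-weighted version of the metric could be sketched like this — the station loads and the rush-hour multiplier are invented numbers, there only to show the shape of the calculation:

```python
# Hypothetical passengers per minute affected at each location. A
# rush-hour multiplier captures the point that the same 10-minute
# failure "costs" far more at Union at 8:30 am than at Kipling at 1 am.
passengers_per_min = {"Union": 400, "Kipling": 60}

def passenger_minutes(station, minutes, peak=False):
    """Passenger-minutes of delay for one incident (assumed weights)."""
    weight = 3.0 if peak else 1.0  # assumed rush-hour multiplier
    return passengers_per_min[station] * minutes * weight

rush_at_union = passenger_minutes("Union", 10, peak=True)  # = 12,000 passenger-minutes
overnight_at_kipling = passenger_minutes("Kipling", 10)    # = 600 passenger-minutes
```

Under these assumed weights, the identical 10-minute incident scores twenty times worse at Union in the peak than at Kipling overnight, which is exactly the prioritization the comment argues for.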
Steve, if you filter the list by time of day, or just downtown, do any different ratios show up? It may be impossible to improve performance during rush hour if passenger alarms happen mostly during that time, etc.
Steve: Read through the list for yourself. The events are all over the place in space and time. Where there is some pattern is that certain “problem” areas for signal failures and fires (smoke or smell of smoke) crop up. This would indicate issues that are not fixed “the first time” or which have an underlying reason for reappearing.
As I said in previous responses, my intent is to get the TTC to undertake this breakdown (which I believe they do already for internal use) as part of public reporting of system performance and quality. That way we can see if specific types of problems are being addressed and whether improvements are lasting.
Wow. The subways have issues every day.
Steve: So do the surface routes, but it’s spread over the entire system.
Are subway train delays on terminal approach captured and reported by the TTC?
Steve: No, because they are not “delays”, simply a side effect of bad scheduling.
I would concur with Tom West above — I think the key metric here should probably be passenger-minutes of delay, not the number of incidents per se. But I readily concede that calculating passenger-minutes would be a tricky process.
On Wednesday January 20, I boarded an eastbound train at Coxwell station around 6:30 if I recall. It took at least 40-45 minutes to get to Kennedy due to signal issues. This is listed in the incidents as starting at 15:00 with no end time. I can tell you that it was an ongoing issue. We were short-turned at Warden; the next train pulled in and stayed in the station with its doors open, after which another train short-turned, and then we finally proceeded fitfully to Kennedy. The delay started as far west as Main.
All this time, the PA was announcing “delays both ways, up to fifteen minutes”. If they had more accurately told us “delays of up to one hour” I would have used an alternate route. The irony is that I could have caught a 70 O’Connor from Toronto East General and arrived at Kennedy much, much quicker.
Here’s a tip: if you plan to board the subway at a crew-change station — which Coxwell is — and you see five or six crews hanging around — which was the case — take another route, the subway is running way, way late. I have now learned this lesson!
Other “delays” on the detailed report are trivial, like the first three in January. Yet they seem to be given the same weight as a delay that added five-ten minutes travel time to Victoria Park, twenty-plus to Warden, and forty to Kennedy — all through the afternoon rush hour.
Steve: The “weight” is entirely my doing. I simply wanted to count how many incidents of each type on each line were reported via eAlerts. The absence of an end time indicates that no “all clear” alert was issued, but in long-running cases there were ongoing reminders which I did not count as new instances.
I agree that the amount of time claimed in announcements can be laughable. A few nights ago, I chose to use the subway to travel a few stops because, it was claimed, delays of up to 7 minutes would occur between the endpoints of the affected area. It took more than that just to go one stop. I bailed and took a surface route, arriving at my destination only slightly later than planned. The TTC does nobody any favours when they understate the severity of a disruption.
Though raccoons (subway, GO train) are using public transit a lot more often than they used to. I don’t see the need for a kerfuffle; they are behaving well and respectfully take only one seat.
Could the security incidents or medical emergencies count as part of the “yellow strip” category? I’ve seen several delays during my trips, such as signal-related problems (Warden), and a man (whom I saw on the 506 car that same day) who assaulted a female passenger before police arrested him (Coxwell).
Steve: It is quite possible that there is overlap in usage, but one incident is one incident whatever it’s called. You can generally tell the serious ones by their length.
What is being done about the frequent signal delays around Warden Station?
With regards to the higher number of passenger alarms on line 2, I wonder if this is related to the design of the TR trains compared to the T1 trains. If you assume that some substantial number are false/accidental/tampering, then the “interface” of the alarm mechanism could have a substantial impact on the rate at which the strip is pushed. If this is indeed the case, then perhaps there are physical modifications that could be made to the T1s to decrease the false alarm rate.