There are signs all over the SkyTrain that urge riders to hold onto the handrails at all times. The other day four cars' worth of people were reminded why when the train screeched to a halt in what still seems like an impossibly short distance, sending standers staggering. Whether there was an obstruction on the tracks up ahead or the next train was just too close for comfort, I never found out. But it did get me thinking.
As I said in my Tunnel Visions review of the system, Vancouver's SkyTrain is run by robots - it's driverless and automated. That incident got me wondering: based on the information available to me, does the SkyTrain, as an automated system, operate in accordance with the Three Laws of Robotics?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
This one is the closest to being realized. SkyTrain platforms are studded with safety warnings against walking on the track, the cars themselves are equipped with further safety notices as well as emergency call buttons, and the doors will not close if they detect someone in the way - something that happens particularly frequently at Commercial/Broadway or Metrotown. Hell, they won't even open if the train is not at precisely the right position on the platform; I once spent ten minutes on a train that had stopped short of its mark well inside Metrotown station, and its doors didn't open until it was cleared to move the final ten feet or so forward.
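To put that interlock another way - and this is purely a toy sketch of my own, with invented names and tolerances, not anything pulled from the actual SkyTrain control software - the door logic amounts to something like this:

```python
# Toy sketch of the door interlock described above. Every name and number
# here is my own invention, not SkyTrain's actual control code.

BERTH_TOLERANCE_M = 0.3  # hypothetical: how close to the platform mark counts as "lined up"

def doors_may_open(stopped: bool, distance_from_mark_m: float) -> bool:
    """Doors stay shut unless the train is stopped at precisely the right spot."""
    return stopped and abs(distance_from_mark_m) <= BERTH_TOLERANCE_M

def doors_may_close(doorway_obstructed: bool) -> bool:
    """Doors refuse to close while anything is detected in the doorway."""
    return not doorway_obstructed

# Roughly my Metrotown experience: stopped about ten feet (~3 m) short of the
# mark, so the doors stay shut until the train creeps forward.
print(doors_may_open(stopped=True, distance_from_mark_m=3.0))  # False
print(doors_may_open(stopped=True, distance_from_mark_m=0.1))  # True
print(doors_may_close(doorway_obstructed=True))                # False
```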
Given the current shape of the system, I'm not sure much more than this can be done; even the quick deceleration I mentioned earlier falls under this law, since hitting something on the rails would be a net negative for everyone aboard. People who are standing are expected to be holding the handrails, and if they're not, they've taken their safety into their own hands.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
From what I understand, the SkyTrain system is monitored by humans from a central control area. I'm not sure whether the trains can be driven remotely from there, but unless the cars all come with artfully concealed video cameras, it probably wouldn't be the best idea. Furthermore, considering liability issues, I doubt the system would accept an instruction to close the doors on an obstruction or blithely drive over something on the tracks.
Nevertheless, the SkyTrain also most likely has no drives of its own - no self-preservation instinct, nothing to weigh against the orders it's given - and I would be highly surprised if it did. There's nothing for obedience to be tested against, so the Second Law does not really work here.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Here we really get into the distinction between Asimovian robots and robots like the SkyTrain. Asimov's robots were mechanical men with the ability to manipulate and affect the world around them. The SkyTrain's effect on the world is limited to its capacity to take people where they need to go when they need to go there, and also whatever wind currents get kicked up by its passage. It's incapable of protecting itself, either - the state of the art is still woefully insufficient for the SkyTrain's digital avatar itself to make the case for the Evergreen Line.
So, conclusion: no. The SkyTrain is not fully Three Laws compliant. Fortunately, though, it's got no will or volition either - thus making its non-compliance far less threatening than if it were an urban rapid transit system that could think for itself.
You have left off a number of safety features: probably the most significant for the First Law is the intruder alert system at stations. If any intrusion on the track is detected, the trains stop. While the obvious concern is people falling (or jumping) onto the tracks, it also guards against people climbing down to retrieve something they've dropped. It is so sensitive it has been known to be set off by an empty Coke can.
Trains can be driven remotely - for example, at the maintenance/storage facility - but out on the line they default to a stop unless driven manually, and even then they can only proceed at reduced speed. The only recorded collisions on the system occurred under manual control.
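In rough code terms - and this is just an illustrative sketch with made-up names and figures, not anything from the real control system - that behaviour boils down to:

```python
# Toy model of the behaviour described above: any track intrusion stops the
# trains, and a manually driven train is held to a reduced speed. All names
# and numbers are invented for illustration.

AUTOMATIC_MAX_SPEED_KMH = 80  # assumed figure, not an official one
MANUAL_MAX_SPEED_KMH = 25     # assumed figure, not an official one

def permitted_speed_kmh(intrusion_detected: bool, manual_control: bool) -> float:
    """Return the speed a train is allowed to run at under these conditions."""
    if intrusion_detected:
        # Anything on the track - a person, or even an empty Coke can -
        # brings the trains to a stop.
        return 0.0
    if manual_control:
        # Out on the line, a manually driven train can only creep along.
        return MANUAL_MAX_SPEED_KMH
    return AUTOMATIC_MAX_SPEED_KMH

print(permitted_speed_kmh(intrusion_detected=True, manual_control=False))   # 0.0
print(permitted_speed_kmh(intrusion_detected=False, manual_control=True))   # 25
print(permitted_speed_kmh(intrusion_detected=False, manual_control=False))  # 80
```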
Most of the on-board safety measures you mention have nothing to do with the behaviour of the "robot" and everything to do with the humans on board the train. The greatest risk is still the unpredictability of our fellow humans.