Brett Anderson

anthramen:

I have felt first-hand the very wrath of the Turkish ice-cream man.

deathbydvd:

Conan… What is best in life?

philosophersdog:

wittgenspeakthesage:

castieltherebel:

conquerorwurm:

computeraidedenrichmentblog:

smokywarfare:

If the multiverse theory is true, then there’s a universe where it isn’t.

Multiverse theory doesn’t cover paradoxical situations

Except in the universe where it does

i’m having an aneurysm

To…

This really isn’t that hard to understand, I’m not sure why people have such a hard time with this.

classichorrormovies:

It Came from Beneath the Sea (1955)

laughingsquid:

Abyss Table, A Table That Resembles a Geological Cross Section of the Ocean

Going out tonight.

martymcflyinthefuture:

Today is the day that Marty McFly goes to the future!

I expect that in a few years autonomous cars will not only be widely used but will be mandatory. The vast majority of road accidents are caused by driver error, and once we see how greatly driverless cars reduce deaths and injuries, we will rapidly decide that humans should no longer be allowed to be left in charge.

This gives rise to an interesting philosophical challenge. Somewhere in Mountain View, programmers are grappling with writing the algorithms that will determine the behaviour of these cars. These algorithms will decide what the car will do when the lives of the passengers in the car, pedestrians and other road users are at risk.

In 1942, the science fiction author Isaac Asimov proposed Three Laws of Robotics. These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the cars obey the Three Laws, then the algorithm cannot by action or inaction put the interests of the car above the interests of a human. But what if there are choices to be made between the interests of different people?

In 1967, the philosopher Philippa Foot posed what became known as “The Trolley Problem”. Suppose you are the driver of a runaway tram (or “trolley car”) and you can only steer from one narrow track onto another; five men are working on the track you are on, and there is one man on the other; anyone on the track that the tram enters is bound to be killed. Should you allow the tram to continue on its current track and plough into the five people, or do you deliberately steer the tram onto the other track, so leading to the certain death of the other man?

Being a utilitarian, I find the trolley problem straightforward. It seems obvious to me that the driver should switch tracks, saving five lives at the cost of one. But many people do not share that intuition: for them, the fact that switching tracks requires an action by the driver makes it more reprehensible than allowing five deaths to happen through inaction.

If it were a robot in the driver’s cab, then Asimov’s Three Laws wouldn’t tell the robot what to do. Either way, humans will be harmed, whether by action (one man) or inaction (five men). So the First Law will inevitably be broken. What should the robot be programmed to do when it can’t obey the First Law?

This is no longer hypothetical: an equivalent situation could easily arise with a driverless car. Suppose a group of five children runs out into the road, and the car calculates that they can be avoided only by mounting the pavement and killing a single pedestrian walking there. How should the car be programmed to respond?
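
A minimal sketch of what a purely utilitarian answer might look like in code, with invented names and hand-waved casualty estimates (nothing here reflects how any real car is actually programmed):

    from dataclasses import dataclass

    @dataclass
    class Manoeuvre:
        name: str
        expected_deaths: float  # hand-waved estimate of casualties if this option is taken

    def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
        # Staying on course is just another option in the list, so "inaction"
        # is weighed on the same footing as any swerve or emergency braking.
        return min(options, key=lambda m: m.expected_deaths)

    if __name__ == "__main__":
        options = [
            Manoeuvre("continue straight", expected_deaths=5.0),   # the five children
            Manoeuvre("mount the pavement", expected_deaths=1.0),  # the single pedestrian
        ]
        print(choose_manoeuvre(options).name)  # prints "mount the pavement"

The hard part, of course, is not the one-line decision rule but everything it assumes: that the car can estimate those numbers at all, and that minimising expected deaths is the right objective in the first place.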

There are many variants on the Trolley Problem (analysed by Judith Jarvis Thomson), most of which will have to be reflected in the cars’ algorithms one way or another. For example, suppose a car finds on rounding a corner that it must either drive into an obstacle, leading to the certain death of its single passenger (the car owner), or it must swerve, leading to the death of an unknown pedestrian. Many human drivers would instinctively plough into the pedestrian to save themselves. Should the car mimic the driver and put the interests of its owner first? Or should it always protect the interests of the stranger? Or should it decide who dies at random? (Would you buy a car programmed to put the interests of strangers ahead of the passenger, other things being equal?)

One option is to let the market decide: I can buy a utilitarian car, while you might prefer the deontological model. Is it a matter of religious freedom to let people drive a car whose algorithm reflects their ethical choices?
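
As a purely illustrative sketch (the policy names and weights below are invented, not anything any manufacturer offers), “letting the market decide” might amount to the same planner parameterised by an ethics setting the buyer selects:

    import random
    from typing import Callable, Dict, List

    # An outcome records expected deaths among the car's passengers and among everyone else.
    Outcome = Dict[str, float]  # e.g. {"passenger_deaths": 1.0, "other_deaths": 0.0}

    def utilitarian(o: Outcome) -> float:
        # Count every life equally.
        return o["passenger_deaths"] + o["other_deaths"]

    def owner_first(o: Outcome) -> float:
        # Weight the passengers' lives far more heavily than strangers' (the 'premium' model).
        return 1000 * o["passenger_deaths"] + o["other_deaths"]

    def coin_flip(o: Outcome) -> float:
        # Ignore the outcomes entirely and decide at random.
        return random.random()

    POLICIES: Dict[str, Callable[[Outcome], float]] = {
        "utilitarian": utilitarian,
        "premium": owner_first,
        "random": coin_flip,
    }

    def choose_outcome(outcomes: List[Outcome], policy: str) -> Outcome:
        # The buyer's chosen policy determines which outcome the car steers towards.
        return min(outcomes, key=POLICIES[policy])

The point of the sketch is only that the ethical choice ends up as a literal parameter: somebody, somewhere, has to set it.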

Perhaps the normal version of the car will be programmed with an algorithm that protects everyone equally and displays advertisements to the passengers, while wealthy people will be able to buy the ‘premium’ version that protects its owner at the expense of other road users. (This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it.)

A related set of problems arises with the possible advent of autonomous drones for use in war, in which weapons are not only pilotless but deploy their munitions using algorithms rather than human intervention. I think it possible that autonomous drones will eventually make better decisions than soldiers – they are less likely to act in anger, for example – but the algorithms they use will also require careful scrutiny.

Asimov later added Law Zero to his Three Laws: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This deals with one variant on the Trolley Problem (“Is it right to kill someone to save the rest of humanity?”).  But it doesn’t answer the basic Trolley Problem, in which humanity is not at stake.  I suggest a more general Law Zero, which is consistent with Asimov’s version but which provides answers to a wider range of problems: “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.  Other versions of Law Zero would produce different results.
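
Read as a specification, that generalised Law Zero is just an explicit objective function. A rough sketch, with invented names and the enormous practical problem of estimating “good” waved away:

    from typing import Dict

    def aggregate_good(effects: Dict[str, float]) -> float:
        # effects maps each affected human (present or future) to the estimated
        # change in their wellbeing; every person carries exactly the same weight.
        return sum(effects.values())

    def law_zero_choice(candidates: Dict[str, Dict[str, float]]) -> str:
        # "Inaction" must appear as one of the candidates, since the law binds
        # the robot by inaction as well as by action.
        return max(candidates, key=lambda action: aggregate_good(candidates[action]))

A different Law Zero would simply swap in a different scoring function, which is exactly why the choice of wording matters.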

Whatever we decide, we will need to decide soon. Driverless cars are already on our streets. The Trolley Problem is no longer purely hypothetical, and we can’t leave it to Google to decide. And perhaps getting our heads around these questions about the algorithms for driverless cars will help establish some principles that will have wider application in public policy.

hoodoothatvoodoo:

Lili St Cyr
1953

laughingsquid:

A 1949 ‘LIFE’ Magazine Chart of Highbrow and Lowbrow Tastes