Digitalization: Once Again “Ethics” and “Autonomous Vehicles”.

Guido Bruch commented on my article
#Digitalization – The “Ethics” of IT and “Artificial Intelligence”
(#Digitalisierung – Die „Ethik“ von IT und „Künstlicher Intelligenz“)
Here it is:


In the book “Silicon Germany”, the ethical dimension is discussed. Here is an example: a child runs onto the street from the right. On the left, an elderly person is walking with a walking frame. A human behind the wheel who cannot stop the car in time would decide either at random or consciously whom to run down. But what do we tell a machine to do? Should we always protect the child and thus select (and that with Germany’s past?), or should a random generator decide? I think this is what it is all about.

But I am sure these are all just theoretical questions, since the number of potential accidents of this kind is probably small. The topic would probably solve itself. Have such accidents ever actually happened?

Another question is what the car manufacturers will do if the requirements change from country to country. For instance, what if some Gulf states say they want to protect their own nationals at all costs and instead sacrifice foreign workers?


Many thanks to Guido. His comment triggered many associations and emotional thoughts in me:

The problem is that a program which is supposed to follow a weighing-up ethics needs an evaluation matrix with computable rules – a well-defined system that can assess the value of a human life in several dimensions and cover all cases.

In other words, you would suddenly have to evaluate and classify not only a person’s age and gender, but also their education, function, social responsibility and much more. From this you would have to derive a “personal value number” that makes it possible to rank any individual within the set of all humans – similar to the mathematical relation “greater than” for whole numbers.

(In theory, there would then be no room for “equal”, because a tie would probably mean you need a random generator after all.)

If you continue along this path, you will also have to treat this “greater than” as a relation over any set of humans at any time, deciding who among them is the most important and who the least important.
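Just to make this concrete: such a relation would amount to a comparator over persons. Here is a deliberately absurd, minimal sketch – all attributes, weights and the formula are hypothetical inventions of mine, chosen only to show what any such “systematics” would boil down to:

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    education_level: int        # hypothetical 0..10 scale
    social_responsibility: int  # hypothetical 0..10 scale

def personal_value(p: Person) -> float:
    # A deliberately absurd "personal value number": any such formula,
    # whatever its weights, amounts to ranking human lives.
    return 0.5 * (100 - p.age) + 2.0 * p.education_level + 3.0 * p.social_responsibility

def more_valuable(a: Person, b: Person) -> Person:
    # The "greater than" relation over humans; an exact tie would still
    # need the random generator mentioned above.
    return a if personal_value(a) >= personal_value(b) else b
```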

Purely mathematical. To me, this entire discussion seems absolutely useless, even though you will find it in the book “Silicon Germany”.

Incidentally, this has long been settled for people living in the FRG. Our constitution clearly states that all people are equal before the law. Consequently, an absurd metric that would algorithmically determine the “value of a person” is not permissible.

So what “ethical” definitions should a machine follow?

Here is an example:
Guido writes – and immediately doubts it himself – that one might come up with the idea of always saving the child. The first question would then, of course, be: what do you do if you have to decide between two children?

Quite apart from the fact that this would define a two-class society – children versus the rest of society – how would we even define “child”? By age, size, weight, maturity? And what about a mother who counts as part of the “rest” while she is nine months pregnant? Perhaps she is two persons – one child and one member of the rest? I also think the “rest” would vehemently oppose this kind of regulation.

Incidentally, the dilemma is very old. What should be done if, shortly before the birth, it is discovered that the mother will die giving birth to a baby that might live – and that the mother can only be saved by killing the baby? This is a thought experiment that actually happens in real life. And, of course, you can extend it by adding, for instance, that the mother has two more small children (and a husband …). Can this kind of thing be forced into a set of rules that a machine could work with? Of course, the answer is: no!

Here are a few examples with which I would like to show the absurdity:

Who is worth more?

  • The Federal Chancellor or the leader of the German Soccer Team?
  • A CSU county representative or an SPD federal representative?
  • An entrepreneur or a politician?
  • A German or a Frenchman (depending on where)?
  • An integrated citizen with white skin or a dark-skinned asylum seeker?
  • A young man or an elderly lady?
  • The person sitting in one robot-controlled car, or the person sitting in the oncoming car controlled by another robot?
  • Or, to be cynical: robot A has been installed by BMW. The potential accident cars are another BMW and a Mercedes. Should it ram the BMW or the Mercedes?

You can produce such examples in huge quantities. But to what end, except to demonstrate that none of it makes sense?
Let me give you a few seemingly harmless examples: cat versus dog – which is worth more? The stray dog or the pedigree animal owned by the opera star? Or, to top all the absurdity: which should the car run over if the choice is between the “common German toad on its migration” and a “runaway Greek tortoise”?
(Please note: when I ride my bike, it always hurts me to see all those run-over toads and, in Greece, all those run-over tortoises.)

Many people propose that one could use a random generator for these decisions. After all, it would not really be activated very often, would it? That sounds rather pragmatic. Why not?
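If one actually wanted that random generator, it would be a one-liner. A minimal sketch – the option names are hypothetical placeholders:

```python
import random

def decide(option_a: str, option_b: str) -> str:
    # No weighing of lives, no "personal value number":
    # just a coin flip between two unavoidable outcomes.
    return random.choice([option_a, option_b])
```

Which only underlines how little “ethics” there is in it.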

Incidentally, the great Isaac Asimov solves the problem quite simply in his science fiction: as soon as a robot is about to harm a human, the Three Laws of Robotics cause the system to block itself and thus be destroyed once and for all. But he, too, soon discovers that his proposal (which, incidentally, dates from the 1940s) has a glitch.
The glitch is:
What are the characteristics by which a computer recognizes a human being? At one point, it seems to be the dialect of the Solarian “Spacers”. The “Spacers” of Aurora seem to have a huge problem with that, and so do the “Settlers”. Consequently, the Solarian robots start killing intruders – even when those intruders are humans.

🙂 Here is what I propose: why don’t we take the Bavarian dialect as the criterion for whether or not someone is human? Even the big Bavarian party might want to make this idea part of its platform…

However, when it comes to autonomous cars, the “laws of robotics” will not help either. They could only work if the car always drives empty :-).

No, the ethics commission for autonomous driving is nonsense – just like most ethics commissions and discussions.

To be sure, I would wish for an ethics commission for drones and war robots. However, its result seems as predictable as the fact that those who hold the power would ignore it anyway.

Another ethics commission I could easily imagine is one that answers the question of whether private institutions – corporations such as Google, Amazon, …, but also lobbyism as practiced in many sectors, up to private armies employed by some enterprises – should be allowed to acquire power on a scale society has never seen before, in extreme cases extending to psychological and even physical violence. In some cases, there should probably also be a discussion about a violation of the state’s monopoly on the use of force.

Except that it is absolutely clear to me that the result of such a commission could only be a clear “NO” – which would likewise be ignored.

Ethics will not help us solve our problems – especially not ethics generated by a commission. What we need is human wisdom. I am always fond of quoting Bertrand Russell:

»All technological growth, provided it causes an increase rather than a decrease of human happiness, will give us growth in wisdom.«

And, unfortunately, ethics will not help us become wiser at all. On the contrary: it tends to distract us from wisdom.

In particular, ethics will not help at all with “autonomous systems”. It is my personal consolation that, as far as I know, there has not been a single incident in track-bound traffic in which “thought experiments” such as the trolley dilemma actually occurred. Consequently, there is no need to worry too much.

Perhaps the following ideas will help:

Railway tracks are made of iron. In the iron age, they were used to transport people and goods from A to B. The autonomous car is a product of IT. Consequently, it is a little more modern, running, as it were, on “tracks made of software and computers”.

And in doing so, it uses the one infrastructure that has established itself on a worldwide scale: flattened ways of concrete, also known as the road network. That is why it can transport people and goods not only on tracks along the line between A and B (A and B being fixed stations), but also between X and Y (where X and Y are variable end points that can be reached by road).

In former times, the railway was doubly redundant. First, there was a recovery mechanism for the “basically impossible” error. And for the case that this mechanism failed as well, a second level existed in order to prevent the worst-case accident even after a doubly “impossible” error.

Consequently, it is our task as engineers to guarantee the highest possible degree of error-free driving. We should create a first “redundancy” that covers the “impossible” error. And then we should add an extra safety level on top of it, as the railway used to have.

This is how you make errors as unlikely as possible. That is the mission!
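Translated into software, such a doubly redundant, fail-safe design could look like the following minimal sketch. The sensor names, thresholds and function names are my own invented placeholders, not any real vehicle API:

```python
def primary_check(sensors: dict) -> bool:
    # First safety level: guards against the "basically impossible" error.
    return sensors.get("camera_obstacle_distance_m", 0.0) > 5.0

def secondary_check(sensors: dict) -> bool:
    # Independent second level, based on different data, for the case
    # that the first level itself fails.
    return sensors.get("radar_path_clear", False)

def drive_decision(sensors: dict) -> str:
    # Proceed only if BOTH independent levels agree; otherwise fail safe,
    # as the railway used to.
    if primary_check(sensors) and secondary_check(sensors):
        return "proceed"
    return "emergency_stop"
```

The point is not the numbers but the structure: two independent levels, and a safe default whenever either of them fails.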

The ethics discussion is intellectual onanism. It seems to me that politics is trying to deflect our attention from the relevant and very uncomfortable questions about digitalization (questions which, incidentally, I raise in my presentations). In this way the discussion is abused as an election campaign instrument: politicians want to make citizens believe in their great competence and responsibility, with the goal of winning as many points as possible in the campaign.
RMD
(Translated by EG)

P.S.
There is only one instance in which I remember an ethics commission coming up with a reasonable result. A few decades ago, there was a lot of discussion about §218 (abortion). At the time, the commission had the idea that abortion should remain illegal, but go unpunished. As I see it, this was not a bad idea. After all, it also became the basis of the new abortion law.

But do you really need an ethics commission for this kind of thing? In his “Dreigroschenoper” (The Threepenny Opera), Bert Brecht says:

You must not punish too severely those who acted illegally!

This helps the persons concerned, because they go unpunished. But it does not help with the decision-making process, because that decision will always be made in the hearts and heads of those concerned.
