Well, can’t argue that it’s not practical
It would technically be the fifth law.
Zeroth Law - A robot may not injure humanity or, through inaction, allow humanity to come to harm.
May not injure you say. Can’t be injured if you’re dead. (P.S. I’m not a robot)
The concept of death may be hard to explain because robots don’t need to run 24/7 in order to keep functioning. Until instructed otherwise, a machine might think a person in cardiac arrest is simply safe to boot later.
Who can say that death is the injury? It could be that continued suffering would be an injury worse than death. Life is suffering. Death ends life. Therefore, death ends suffering and stops injury.
The sentence says “…or, through inaction, allow humanity to come to harm.” If they are dead due to the robot’s action, it is technically within the rules.
Actually no! Lower-numbered laws have priority over higher-numbered ones, meaning that if they come into conflict, the higher-numbered law can be broken. While the First Law says robots can’t allow humans to come to harm, the Zeroth Law basically says that if it’s for the good of the species, they absolutely can kill or otherwise hurt individual humans.
Lower-numbered laws have priority over higher-numbered ones
That means this is the negative first law
It’s even better because
A robot created the zeroth law to allow the killing of people to save humanity
This just reminds me I’m mildly irritated that robots in fiction have glowing eyes so often. Light is supposed to go into eyes, not come out of them!
Robots, or any part of an automated production line with a camera, typically have a light as well, either to see in low-light conditions or to ensure a consistent amount of light hits the lens.
They addressed this on the Orville. The glowing dots were not eyes. The droid had sensors that did all the work. The “eyes” were an aesthetic addition.
Could we do that for people too, please?
Ooh, imagine the chaos at some executive meetings where everyone’s evil eyes are blinding each other.