Autonomous vehicles constantly interact with other road users, including pedestrians, bicyclists, and human or automated drivers. These interactions are governed by the vehicle's software and by the priorities its programmers assign. Like any human driver, the vehicle encounters a wide range of scenarios. From routine tasks such as judging the distance to the vehicle ahead to rare, unavoidable events such as planning a trajectory when an accident is inevitable, the vehicle must be able to make the right decision at the right time. In such circumstances, the behavior and control algorithms of the vehicle are judged against the norms of the society in which it operates rather than by statistics or test-based features.
Whether automated vehicles, lacking free will, can be counted on to display moral behavior is a question yet to be answered. Nevertheless, society will view the vehicle's actions through an ethical lens. In the unlikely event that the vehicle causes damage or injury, its control algorithms will come under critical examination in a court of law. Even the everyday tasks involving social interaction that the vehicle undertakes will determine its degree of societal acceptance. Thus, the control algorithms framed by the programmers must follow legal and ethical rules and regulations.
Daily driving poses no major challenge, as the autonomous vehicle can drive smoothly while obeying all traffic laws. Dilemmas in which it is impossible to satisfy every constraint, though rare, cannot be excluded. A case where the car must cross a double yellow line to avoid a collision with another vehicle is an example where the vehicle cannot meet all the constraints yet must still arrive at the best plan of action. In such cases a mathematical decision satisfying all the constraints is infeasible. One solution is to create a hierarchy of constraints, assigning far higher weight to the constraints that must never be violated than to the others. The vehicle is then governed by deontological constraints, and when a dilemma arises, the best course of action is chosen by a consequentialist approach. The three laws of robotics proposed by Isaac Asimov state:
- A robot may not injure a human being.
- A robot must obey the orders given to it by humans, except where such orders would conflict with the first law.
- A robot must protect its own existence, as long as doing so does not conflict with the first two laws.
One of the key motivations for deploying automated vehicles is to reduce the number of accidents and fatalities. Because of their emphasis on protecting human life, Asimov's laws can be adapted into three laws apt for autonomous vehicles:
- The vehicle must avoid collision with people on the road.
- The vehicle must avoid collision with another vehicle.
- The vehicle must avoid collision with any other object in the environment.
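The hierarchy described above can be sketched in code: each candidate trajectory is scored against a prioritized set of deontological constraints, and ties among equally acceptable plans are broken by a consequentialist cost. All names, weights, and the planner interface below are illustrative assumptions, not a real vehicle API.

```python
from dataclasses import dataclass, field

# Constraints ordered from most to least important; violating a higher-priority
# constraint always outweighs any number of lower-priority ones (hypothetical weights).
PRIORITIES = {
    "hit_person": 1_000_000,   # first law: must never be traded away
    "hit_vehicle": 10_000,     # second law
    "hit_object": 100,         # third law
    "cross_double_yellow": 1,  # traffic rule that may be broken in a dilemma
}

@dataclass
class Trajectory:
    name: str
    violations: set = field(default_factory=set)  # names of violated constraints
    cost: float = 0.0                             # e.g. expected harm or discomfort

def penalty(traj: Trajectory) -> float:
    """Deontological score: sum of the weights of violated constraints."""
    return sum(PRIORITIES[v] for v in traj.violations)

def choose(candidates: list[Trajectory]) -> Trajectory:
    """Prefer the plan that violates only the least important constraints;
    break ties (the consequentialist step) by minimising expected cost."""
    return min(candidates, key=lambda t: (penalty(t), t.cost))

# The dilemma from the text: staying in lane hits the vehicle ahead,
# while swerving crosses the double yellow line but avoids all collisions.
stay = Trajectory("stay_in_lane", {"hit_vehicle"}, cost=5.0)
swerve = Trajectory("cross_double_yellow", {"cross_double_yellow"}, cost=1.0)

best = choose([stay, swerve])
print(best.name)  # → cross_double_yellow
```

Breaking the traffic rule carries a far smaller penalty than the collision, so the vehicle lawfully-constrained in normal driving will, in the dilemma, select the least important constraint to violate.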