The role of boredom in accidents
From driving to routine work, boredom contributes to accidents of all sorts. Anytime someone is supposed to perform a task that requires focus and instead acts in a somnolent manner, the outcome is seldom good. The problem is serious enough that a wealth of articles addresses it, such as “Modelling human boredom at work: mathematical formulations and a probabilistic framework.” Whether a lapse becomes an actual accident (or merely a close call) depends partly on random chance. Imagine developing algorithms that estimate the probability of accidents happening due to boredom under certain conditions.

AI in avoiding safety issues
No AI can prevent accidents that arise from human causes, such as boredom. In the best case, when humans decide to actually follow the rules that AI helps create, the AI can only help avoid potential problems. Unlike with Asimov’s robots, there are no three-laws protections in place in any environment; humans must choose to remain safe. With this in mind, an AI could help in these ways:

- Suggest job rotations (whether in the workplace, in a car, or even at home) to keep tasks interesting
- Monitor human performance to suggest downtime when fatigue or other factors degrade it
- Assist humans in performing tasks to combine the intelligence that humans provide with the quick reaction time of the AI
- Augment human detection capabilities so that potential safety issues become more obvious
- Take over repetitive tasks so that humans are less likely to become fatigued and can focus on the interesting aspects of any job
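As a toy illustration of the monitoring idea in the list above, a program might track the error rate over a rolling window of recent tasks and suggest a break once it crosses a threshold. This is a minimal sketch, not a real safety system; the window size, the 15 percent threshold, and the function names are all invented assumptions:

```python
from collections import deque

def make_fatigue_monitor(window=20, error_threshold=0.15):
    """Return a function that records task outcomes and flags fatigue.

    window: how many recent tasks to consider (assumed value).
    error_threshold: error rate above which a break is suggested (assumed).
    """
    recent = deque(maxlen=window)

    def record(success: bool) -> bool:
        """Record one task outcome; return True if a break is suggested."""
        recent.append(0 if success else 1)
        # Only judge performance once a full window of observations exists.
        if len(recent) < window:
            return False
        error_rate = sum(recent) / len(recent)
        return error_rate > error_threshold

    return record

monitor = make_fatigue_monitor()
# Simulate 15 successful tasks followed by increasingly frequent mistakes.
outcomes = [True] * 15 + [False, True, False, False, False]
flags = [monitor(ok) for ok in outcomes]
```

In this run, the monitor stays quiet until it has 20 observations; by then 4 of 20 tasks have failed (a 20 percent error rate), so the final call suggests a break.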
AI can’t eliminate safety issues
Ensuring complete safety implies an ability to see the future. Because the future is unknown, the potential risks to humans at any given time are also unknown: unexpected situations can always occur. An unexpected situation is one that the original developers of a particular safety strategy didn’t envision. Humans are adept at finding new ways to get into predicaments, partly because we’re both curious and creative. Finding a method to overcome the safety provided by an AI is in human nature; we want to see what happens if we try something, generally something stupid.

Unpredictable situations aren’t the only problem an AI faces. Even if someone could enumerate every possible way in which a human could become unsafe, the processing power required to detect each event and determine a course of action would be astronomical. The AI would work so slowly that its response would always come too late to make any difference. Consequently, developers of safety equipment that relies on an AI to provide the required level of safety must deal in probabilities and then protect against the situations that are most likely to happen.
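That final idea of dealing in probabilities can be sketched in a few lines: score each hazard by expected harm (probability times severity) and protect against the top of the ranking first. The hazard names and every number below are invented purely for illustration:

```python
# Hypothetical hazards: (name, probability of occurring, severity 0-10).
# All figures are invented for illustration, not real risk data.
hazards = [
    ("operator dozes off", 0.08, 9),
    ("tool left in machine path", 0.02, 7),
    ("guard rail climbed over", 0.001, 10),
    ("spilled coffee on console", 0.15, 2),
]

def expected_harm(hazard):
    """Expected harm = probability of the event times its severity."""
    _, probability, severity = hazard
    return probability * severity

# Protect against the most likely-and-severe situations first.
ranked = sorted(hazards, key=expected_harm, reverse=True)
top_risk = ranked[0][0]
```

Note that the ranking is not the same as sorting by probability alone: the rare but catastrophic guard-rail hazard still lands last here because its expected harm is tiny, which is exactly the trade-off a probability-based safety design makes.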