PhD in particle physics, researcher in artificial intelligence, NTNU. Board member, Norwegian Council for Digital Ethics.
Doctor, researcher on artificial intelligence in medicine, Oslo University Hospital and the University of Oslo. Board member, Norwegian Council for Digital Ethics.
A developer of artificial intelligence may be asked to create systems that spread “fake news”.
This is a debate post. The opinions expressed are the writer’s own.
If a hospital director asks a doctor to perform operations on healthy patients to embellish the statistics, the doctor may refuse and point out that improper treatment violates the Health Personnel Act.
No similar legislation exists for developers of artificial intelligence (AI). They have only ethical guidelines, which are not binding. The time has come to give AI developers the same responsibility and protection as doctors.
No professional ethics
Doctors have such a great impact on our lives that we protect their occupation through licensing and call it a “profession”. Other professions include lawyers and engineers.
Violating professional ethics has consequences: a doctor may lose their license and be prosecuted. But a doctor may not be fired for refusing to violate professional ethics. Moreover, it is difficult for an employer to replace one doctor with another, since all doctors are bound by the same regulations.
In artificial intelligence, the situation is completely different. There are no professional ethics for AI developers.
An AI developer may be asked to create systems that rank people based on ethnicity or appearance, monitor minorities, or spread “fake news”. All of these examples are taken from real life.
If an AI developer refuses to develop unsound systems, they may lose their job or simply be replaced by another developer. AI developers can therefore find themselves caught between commercial interests and the best interests of society.
Society’s interests vs. the employer’s
A recent research article points to the social dilemma in which AI developers must weigh society’s interests against those of their employer.
A social dilemma is characterized by the fact that the best outcome for the community occurs if everyone cooperates, but no one cooperates because the cost to the individual is too high. We all know that we should fly less, but few are willing to sacrifice their own vacation.
The climate crisis is just one example of how social dilemmas cannot be solved at the individual level, but are dependent on coordinating mechanisms.
Professional ethics are one such mechanism, protecting the individual from being caught in this dilemma. The responsibility for ethically sound AI development must be placed at the societal level. With licensing or professional ethics in place, an AI developer who refuses to develop unethical algorithms cannot simply be replaced by another developer.
As long as AI developers lack professional ethics, ethically sound development of artificial intelligence cannot be guaranteed.
Let us recognize that AI is going to have a major impact on society, and act accordingly. We have already seen scandals such as electoral manipulation and discrimination.
Medical history contains atrocities in which humans were used as guinea pigs, and it is in response to these that professional ethics for doctors were established. So far, we have avoided equally grave outcomes related to AI. But establishing professional ethics for AI developers is urgent.
Over the centuries, medicine has developed into a mature field with evidence-based practice and experience with ethical dilemmas. The lightning-fast development of AI systems that affect our lives does not give us time for trial and error on the way to a safe practice.