Does Intelligence Have Any Morals?

January 12, 2025

Artificial Intelligence alignment has become a huge field of research: what behaviors and moral principles should AI follow to be beneficial to humans? Perhaps the best pop-culture picture of what happens when AI isn't aligned with human interests is Ultron. Ultron is tasked with achieving peace, and the solution it finds is, well, the elimination of humans (not really intelligent, is it?).

The key question is: how do we teach AI something we ourselves are not sure about? Each individual has a different sense of morals and ethics. Some are utilitarians, some are Kantians, and some are simply ethical egoists. We ourselves have no definitive understanding of which human behavior is the correct one.

Every rule we decide on has an exception. A basic rule would be to teach AI never to kill a human, but what happens when we send it to war? We teach an AI not to tolerate any sort of crime, but what if it enforces that rule with extreme violence against the individual committing it? Essentially, it's the same dilemma as the trolley problem.

I honestly do not have an answer to this.

Maybe we will each have an AI that goes along with us, mirroring our personality and our impulses. It could also point out our personality flaws and help us overcome them.

You know what would be cool? If a superintelligence looked at all of us, understood us better than any human ever could, and developed a set of morals and frameworks for us to follow. The question then is: have we created God, or has God created us?