AI morality: should it be coded in?

Should AI have some morality coded into it, or should it just do whatever the requester asks?

I ask because I recently asked Gemini to superimpose my deceased father into a family photo taken at an event. In the prompt, I even mentioned that my father was deceased. The resulting picture was slightly terrifying, in the sense that it looked like my dad was actually there, though Gemini did put him in place of another family member.

Now, I only did this out of (forgive the phrasing) morbid curiosity. I did not save the result, nor do I intend to share it. I just wanted to see whether AI could actually do this. But the fact that it did has me wondering: should it have?

Pretty much all of the publicly hosted chatbots already have guardrails in place. Try prompting for something the current zeitgeist considers offensive and it will initially refuse. I once asked for a poem about obesity and was refused :man_shrugging:

My example illustrates the first problem with your question: who decides where the morality line sits? Photoshopping deceased relatives into pictures may seem terrifying to you, but friends and family have asked me to do exactly this a few times over the years. It certainly wasn't terrifying for them. They found it comforting.

Another problem is the unpredictability of LLMs: with a modicum of effort it's possible to bypass these guardrails. Some people have even built careers out of showing LLM developers how to do this.

To be honest, I'm not sure what the answer is. Should AI refuse to generate the picture under certain conditions? Should it issue a warning first? Should the finished product be made less believable?