DALL-E Users Can Now Upload and Edit Real Human Faces. What Could Possibly Go Wrong?

The company lifted its previous restrictions on uploading real human faces after building new detection and response techniques meant to prevent misuse.

Photo: Stefani Reynolds (Getty Images)

OpenAI believes it’s ready to start letting DALL-E users edit images of real human faces, a capability it previously blocked over concerns that the AI could be used to spread sexual and political deepfakes.

In a letter to users on Monday, spotted by TechCrunch and shared with Gizmodo, OpenAI said it would reintroduce the ability to upload and edit real human faces to its advanced AI image generator after building new detection and response techniques meant to prevent misuse and ultimately minimize “the potential of harm.” Users are reportedly still barred from uploading images of people without their consent, as well as images they don’t have legal rights to use.


“Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos,” OpenAI said in an email. “A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes.”


OpenAI said it made its filters “more robust” at spotting and rejecting attempts to generate sexual, violent, or political content, while simultaneously working to reduce “false flags.”


DALL-E users wasted no time offering up their faces to the program. Here are a few examples posted on Twitter.



As AI image generators go, OpenAI has taken a relatively conservative route to realistic human faces, likely to avoid unintentionally facilitating the spread of deepfaked pornographic images and other graphic content of the sort already enabled by less restrictive alternatives like Stability AI’s Stable Diffusion.

Monday’s move builds on OpenAI’s previous experimentation with human-like faces. In June, the company said it would let researchers begin generating realistic-looking images of faces belonging to non-real humans. Though these images don’t run into the same consent issues as authentic human faces, since they don’t involve actual living people, OpenAI similarly put up guardrails to keep DALL-E from spitting out a cesspool of unwanted images or “deceptive content.”


Deceptive or not, many of the recent images created by users and posted online are nonetheless pretty creepy, with photorealistic images of fabricated people sitting right at the far edge of the uncanny valley.


Updated 9/20/22 11:27 a.m. ET: Added email details from OpenAI.
