DALL-E 2, an artificial intelligence system that can create photo-realistic images from just a brief text description, has been used by a photographer to edit his photos and can even make an out-of-focus image sharp.
Nicholas Sherlock sent the images edited with DALL-E 2 to YouTuber Michael Widell, who was rightfully blown away by the technology’s capability. In fact, the video Widell uploaded was titled “Will This New Invention be the Death of Photography?”
The images sent in by Sherlock show an out-of-focus ladybug that’s miraculously sharpened by OpenAI’s software. To fix the image, he erased the blurry area of the ladybug’s body and entered the text prompt “Ladybug on a leaf, focus stacked high-resolution macro photograph.”
This ingenious method uses DALL-E 2 for something it was not designed for, but it could become a powerful and important tool for photographers.
Speaking to PetaPixel, Sherlock gave another example of a picture he edited on DALL-E 2, this time of an egret in a drainage ditch.
“DALL-E’s inpainting allows you to upload an image, erase an area of it using a brush, tell DALL-E what should go in that space, and it’ll paint it in for you,” he says.
“I erased the egret, and erased a space on the right side of the image, and told DALL-E to generate ‘baby elephant bathing, wildlife photography.’
“In this case, the results don’t bear close scrutiny; the elephant is a bit too sketchy. But this isn’t an inherent limitation of the technology, and it will improve over time. They look fine at thumbnail size.”
Sherlock points to the water reflections of the baby elephant generated by DALL-E as a remarkable feat that is done “much better than I could have.”
Inpainting and Outpainting
DALL-E 2 provides the ability to “inpaint,” which lets editors generate new subjects inside an image from just a text prompt, as demonstrated above.
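The same inpainting workflow is also exposed through OpenAI’s Images API, which takes the original photo plus a mask whose transparent pixels mark the region to regenerate. A minimal sketch, assuming the Pillow library is installed (`pip install pillow`); the image here is a synthetic stand-in, and the file names, region, and commented API call are illustrative:

```python
from PIL import Image, ImageDraw

# Stand-in for the photograph; in practice, open your own file instead.
photo = Image.new("RGBA", (1024, 1024), (120, 160, 90, 255))

# The mask is a copy of the photo with the area to regenerate erased:
# transparent pixels (alpha 0) tell DALL-E where to paint.
mask = photo.copy()
draw = ImageDraw.Draw(mask)
draw.rectangle([256, 256, 767, 767], fill=(0, 0, 0, 0))

photo.save("photo.png")
mask.save("mask.png")

# The edit request itself (requires an OpenAI API key; shown here
# commented out so the sketch runs without credentials):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.edit(
#     model="dall-e-2",
#     image=open("photo.png", "rb"),
#     mask=open("mask.png", "rb"),
#     prompt="Ladybug on a leaf, focus stacked high-resolution macro photograph",
#     n=1,
#     size="1024x1024",
# )
```

The key idea matches Sherlock’s description: everything opaque in the mask is preserved, and DALL-E fills only the erased region to match the prompt and surroundings.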
However, it can also “outpaint.” Sherlock gives the example of a tilt-shift photo where he wanted the crop to be a “little looser.”
“I expanded the size of the canvas in Photoshop to give it some transparent borders and uploaded that image to DALL-E. I tell it to fill it in with the prompt ‘A town in autumn, 35mm tilt-shift photography, Velvia.’ This matches the original image.”
Sherlock even sent a comparison picture made with Photoshop’s Content-Aware Fill tool, which was unsurprisingly unable to extend the scene as far as DALL-E did.
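The canvas-expansion step Sherlock performs in Photoshop can be reproduced with any image library before uploading the result for outpainting. A sketch using Pillow, with a synthetic stand-in image and an illustrative border size:

```python
from PIL import Image

# Stand-in for the original tilt-shift photo; open your own file in practice.
original = Image.new("RGBA", (512, 512), (200, 170, 120, 255))

# Enlarge the canvas and center the photo, leaving fully transparent
# borders for DALL-E to fill in (Sherlock's prompt was "A town in
# autumn, 35mm tilt-shift photography, Velvia").
border = 256
canvas = Image.new(
    "RGBA",
    (original.width + 2 * border, original.height + 2 * border),
    (0, 0, 0, 0),
)
canvas.paste(original, (border, border))
canvas.save("expanded.png")
```

Uploading the expanded PNG and prompting DALL-E then treats the transparent frame like any other erased region, which is why the generated surroundings blend with the original crop.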
Whether Adobe can create something like this for the wider photography community remains to be seen.