One thing AI is good at: Reversing someone else’s cropping

If you’re budget-limited, finding good stock photos can be hard. The big agencies are full of overly staged images, too much gray, and too little diversity. Even then, you may finally find the “right” image only to discover it’s cropped too far in, leaving you a choice between serious Photoshop acrobatics and continuing the long, arduous search for an alternative that never quite has the same vibe as the original.

That is starting to change with AI.

While AI image generators have gotten mountains of press and undeserved hype (especially around generating art from scratch), they are starting to find some niche, commercially ready uses.

Here is an example. Let’s say I’ve got an image where the model and lighting are good, but I need the model horizontally centered on a wider canvas. That means extending the canvas out to the left and filling it with something that looks like it belongs there, taking care to match textures, lighting, and style.
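Before any tool can fill anything, the wider canvas has to exist. Here’s a minimal Pillow sketch of that prep step (the filenames and the 50% extension are assumptions for illustration, not anything prescribed):

```python
from PIL import Image

src = Image.open("model.jpg").convert("RGBA")  # hypothetical source photo
w, h = src.size
extend = w // 2  # assumed: widen the canvas by 50% on the left

# New, wider canvas; the original is pinned flush right, leaving a
# transparent strip on the left for the fill tools to work in.
canvas = Image.new("RGBA", (w + extend, h), (0, 0, 0, 0))
canvas.paste(src, (extend, 0))
canvas.save("model_extended.png")
```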

There are a number of tools out there, each with different strengths. Photoshop’s “Content-Aware Fill” outpainting has seen improvements recently, but it is still “dumb”: it doesn’t know what it’s looking at, relying instead on a small sample of pixels near your expansion zone (or ones you select with a brush).

What it is getting good at is extending textures that would otherwise be tricky to make seamless by hand.
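Adobe’s exact algorithm is proprietary, but classical inpainting works on the same “nearby pixels only” principle, which is a decent mental model for why this approach succeeds at textures and fails at structure. A rough stand-in using OpenCV’s Telea inpainting (not Adobe’s method; filenames carried over from the sketch above):

```python
import cv2
import numpy as np

img = cv2.imread("model_extended.png")  # working file from the sketch above
h, w = img.shape[:2]

# Mark the blank left strip as the hole; everything the algorithm
# "knows" comes from the pixels bordering this mask.
mask = np.zeros((h, w), dtype=np.uint8)
mask[:, : w // 3] = 255

# Telea inpainting propagates neighboring pixels inward. Like
# Content-Aware Fill, it can extend a flat wall texture convincingly
# but gets messy wherever real structure (furniture, a table) sits.
filled = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled_classical.png", filled)
```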

Here is the photo expanded by Photoshop. You’ll see the texture of the wall is good, but it completely falls apart trying to figure out what was behind the model, getting increasingly messy and repetitious as it moves from right to left:

DALL-E’s outpainting tool has you select a small area to expand, then asks for a text prompt describing what should go there (a scripted sketch of the same step follows the list below). For the left, I told it the following: “blurred painting on wall with light hardwood floors and modern furniture”. For the bottom corner, I told it to “extend the gray carpet”. While neither prompt was followed to the letter, what it generated fit the space very well:

  • The painting was completed very impressively
  • The blur level and lighting were maintained perfectly
  • DALL-E overrode my assumption that the model was in a room with chairs, deciding instead that the model stood in front of a set table that should hold more bowls and serving containers
  • DALL-E also misinterpreted what I thought was the carpet, but it did fill that little gap in the lower left without the banding that Photoshop would have introduced
  • DALL-E gave me a little ceramic mushroom that I didn’t ask for that I love! 🍄
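I did all of this in DALL-E’s web editor, but for anyone who would rather script the step, OpenAI’s image-edit endpoint works the same way in principle: transparent pixels mark the fill zone, and the prompt steers what goes there. A minimal sketch, assuming the openai Python SDK and a pre-cut square tile:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The edits endpoint wants a square PNG under 4 MB, and transparent
# pixels mark where DALL-E may paint. A square tile cut from the
# working canvas (blank strip included) stands in for the web
# editor's selection box; "left_tile.png" is a hypothetical name.
result = client.images.edit(
    model="dall-e-2",
    image=open("left_tile.png", "rb"),
    prompt="blurred painting on wall with light hardwood floors "
           "and modern furniture",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # download, inspect, and paste back by hand
```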

So that gave me most of what I wanted to extend this photo…

The next step was to bring this back into Photoshop and use Content-Aware Fill to give me the rest of my wall. This was the final map of the replaced components:

…and here is the final re-cropped piece, ready to be plunked into the wide layout container with the model in the center:
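The re-crop itself is mechanical once you know where the model sits. A quick Pillow sketch (the coordinates and container size here are made up for illustration):

```python
from PIL import Image

final = Image.open("outpainted_final.png")  # hypothetical merged result
model_x = 1240                   # assumed: model's horizontal center, px
target_w, target_h = 1600, 640   # assumed layout-container dimensions

# Center the crop window on the model, then trim to the container size.
left = model_x - target_w // 2
final.crop((left, 0, left + target_w, target_h)).save("hero_crop.png")
```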

While there are still plenty of ways to break this workflow, I’ve been encouraged to find it repeatable across a number of photos already. Honestly, it’s the first AI use case that has made the jump from purely experimental to an official tool in my toolbox.
