Photoshop Generative Fill – I've Changed My Opinion





In this video, I take another look at Generative Fill, a new feature found in the latest version of Photoshop Beta.

Please subscribe to my newsletter!
https://anthonymorganti.substack.com/subscribe

Check out one of my newer websites – The Best in Photography:
https://bestinphotography.com/

Please help support my YouTube channel – consider purchasing my Lightroom Presets:

https://www.anthonymorganti.com/

To get more info about Photoshop, go here:
https://prf.hn/l/lGnjDBl

Here is the list of my recommended software, along with any discount codes I might have:
https://wp.me/P9QUvD-ozx

Here is a list of my current cameras, lenses, etc.:
https://wp.me/P9QUvD-ozG

Help me help others learn photography. You can quickly offer your support here, where I receive 100% of your kind gift:

https://ko-fi.com/anthonymorganti

You can change the default amount to the amount you want to donate.



34 responses to “Photoshop Generative Fill – I've Changed My Opinion”


  2. @11:08 "we want to get rid of all the children" 🤣 That sounded very wrong 😂 But seriously… I think the new feature is rather useful and decent for practical purposes, like removing small annoyances in photos, not 63% of the frame. If one needs to remove 2/3 of a photo they took, then there's probably something wrong with their photography skills. I've spent hours in the past cloning things out while trying to maintain some bokeh and lighting cohesion and avoid repetitive patterns.

  3. In the children photo and others, it appears that Photoshop is using the correct blur for wherever you put the object. Would the red car appear sharp where you're putting it? If it were built from the front of the image, it would probably look a lot better.

  4. As far as I'm concerned, your first upload was more real-life intuitive. If Adobe intended best results using the lasso tool, they should've stipulated that. Anybody wanting to remove a subject would naturally use the subject selection tool.

  5. You probably already figured this out, but my guess is that the reason your previous tests weren't as successful as your latest is that by cropping in too tightly, you weren't giving the program enough "hints" about the surrounding areas it needs to match in order to make a more "realistic" (lol) image.

  6. I haven't tried this beta (probably won't), but it looks like it works best when it has enough material left around the selection to work with. Had the subject in your first example not taken up so much of the image, it might have worked better. I would like to have it look at my own images of the same location and use those as source material.

  7. Can this Gen-Fill tool be used on old videos that have a 4:3 aspect ratio? I want to extend the original shots, frame by frame, to a 16:9 format, but the extended areas would also have to match from frame to frame. Is this possible?

  8. I work at a commercial print shop. I have made 50k+ Midjourney images in my spare time, and nobody I work with gives a shit. THIS tool, however, was the hammer strike that cracked their nut. Tools like automatic1111 and ControlNet are applying CONSTANT pressure on Adobe to get something elegant released quickly for modern workflows. Exciting times.

  9. I haven't tried it yet, but after watching this I wonder whether giving more information in the generative fill prompt gives better results. So maybe in the case of the dog, if you put, say, "dog standing still" or something like that, you would get a better result. This new function is good already; imagine what it will be like a few versions down the track. Great work, Anthony, on your willingness to take suggestions from "us" to help get better results. We all learn as we go.

  10. You almost sound sad that it works so well. This beta version has only been out 5 days at this point … this is the worst it will ever be. It's not going to give perfect results every time. FYI … in the beta version, the resolution is limited to 1024px on the long side. Even if it were never to improve, what it does now is impressive. Fixing its mistakes is much easier than creating what it generates from scratch.

  11. Anthony, I give you kudos for revisiting this issue. Everyone who watches your videos will learn even more because of your willingness to reflect and revise. The fact that your viewing community provided hints shows that the "WE" is smarter than the "ME!" Thanks again!!

  12. I was just working on a pic where I wished to place a dog. Used the phrase "dog playing, super photorealistic, 4k." Makes me wonder which prompts I've used in Midjourney will work with this beta. Still exploring it.

  13. With the dogs, you need to merge the previous layers and then ask to generate whatever you want. It doesn't work well when it has to generate on top of another generative layer. That has been my experience.

  14. The image you are working on is 6137×3632. The generative fill has a max resolution of 1024×1024 when generating content. When you work on a high resolution file and view it at 100%, you'll see how blocky the generative fill is.

  15. One thing you'll find is that the more descriptive you are about an animal or object, the better the results. For your dog example on the sidewalk, you could use "A small dog sitting on the sidewalk."
    Also, it has trouble with human limbs, like hands: sometimes it will generate 6 fingers, or 4 fingers. Sometimes 2 thumbs on one hand.

  16. After using Free Transform to change the size of a generated object to a more appropriate size, just click the Generate button again. The descriptor you typed is still there, and it will regenerate a new set of three images, this time at the new size (or location, if you also moved the object to another place).

  17. Every medium has a maximum bandwidth for a given technology.
    Back when cable and DSL internet were developed, internet providers noticed most home users had a lot more downstream traffic than upstream traffic.
    So they said, OK, we'll sacrifice upstream bandwidth for more downstream, and that's how we ended up with all these asymmetric internet connections.

    For fiber internet there is no technical reason to do this anymore, as fiber works over a pair: one strand for sending and one for receiving.
    Yes, technically DSL also works over a pair of wires, but that's to close the electrical circuit.

  18. At first I thought this was a weird use case, since she's in focus and the background isn't, but there are lots of applications for parallax animation of the image that would normally need a clean plate and used to be a PITA.

Leave a Reply