
Experiments in AI

07 Apr 2026

3 favorites

4 comments

49 visits

Colouring picture to Photoshop to AI

To carry this experiment to the next level, I took the coloured image from Photoshop and ran it through a couple of free AI websites using image-to-image techniques. Having never used AI until now, this is all still very new to me. I don't think that I will become a regular user of AI image creation, but you can't deny the results of AI imaging; they are truly amazing. I will upload the final version of this image in a day or two. I believe, though, that the capabilities of AI will cause a lot of skills and jobs to be lost in the coming years. Clothing designers will no longer need models, as they will use AI images instead. Fashion photography could all be done in AI. Image retouching will become unnecessary. No need for wildlife photography. The list is endless. Having said all this, I still believe that AI has its uses. I am a huge fan of Kayleigh's fantasy artwork here on Ipernity. Thank you for looking and for your kind comments.

08 Apr 2026

3 favorites

2 comments

47 visits

Colouring picture to AI final image

This is the final image in this series, AI rendered on ChatGPT, another website with free AI rendering. This one gives you the chance to edit your image after it has been rendered. I have also experimented with rendering the AI image directly from a greyscale colouring picture. The result was quite impressive, and I may upload it to Ipernity at a later date. Thank you for looking and for your kind comments.

11 Apr 2026

6 favorites

6 comments

58 visits

In The Woods

I have been messing around with AI for several days now. This is my attempt at a realistic image. My opinion of AI generators so far is this: as they work from text prompts, the results can be unpredictable even when your prompts are very specific. If you try to make too many changes at once, the AI generates something completely different to what you expected because it suffers from information overload. I have tried to work slowly, building up the image over many edits so that minimal changes are made each time. The problem again lies in the fact that even if you specify locking the current image and only allowing minimal changes, the results can still be unpredictable. Drift is a major problem here: for example, the zoom on the subject creeps in and out with each edit. I have only used free AI generators, so a paid-for version may work better. Bear in mind that this is my personal opinion based on very little experience with AI.

Update: I have just got this from the AI chat at ChatGPT. I hope it may help anyone wanting to produce precise AI images. I asked:

"I have been working on an image based upon 18 phased passes to maintain accuracy. Every time I generate a new image, either the zoom changes or the dimensions of the image change. The reason for working in a phased manner is to lock each generation so that only minimal changes are made. Each image is locked before any edits are made. Why is the AI still failing even working to such a basic change for each phase?"

The reply:

What you're running into isn't you doing something wrong; it's a limitation of how current image generation models actually work under the hood. Even if your workflow feels like careful, layered editing, the AI isn't truly "editing" a fixed image in the way something like Adobe Photoshop would. It's re-generating the image each time based on your prompt plus the previous image as guidance. That distinction is the root of your issue. Here's what's going on:

1. The model doesn't have a stable canvas
When you lock an image and ask for a small change, the model:
• Doesn't preserve exact pixel positions
• Doesn't maintain a fixed camera or zoom unless explicitly controlled
• Reinterprets the whole scene every time
So even tiny edits can cause:
• Slight zoom shifts
• Cropping differences
• Perspective drift
This happens because the model is optimising for visual coherence, not geometric consistency.

2. "Image-to-image" is not deterministic editing
Even if you feed the same image back in:
• There's randomness (noise) in generation
• The model re-decides composition details each pass
Unless you're locking a seed plus exact parameters, you'll get variation. And even then, many tools still allow subtle drift.

3. The model prioritises semantics over precision
If your prompt says "Make a small adjustment to X", the model interprets what "X" means conceptually, then rebuilds the image to match that idea. It does not think "Keep every pixel identical except this region". That kind of strict constraint is still hard for generative models.

4. Resolution and aspect ratio aren't truly "locked"
Even if you specify dimensions:
• Some tools internally resize or reframe
• The model may "recompose" to better fit the prompt
This is why you see:
• Slight framing shifts
• Margins changing
• Subject repositioning

Why your phased approach mostly works, but not fully
Your method is actually smart:
• Breaking work into phases reduces large deviations
• Locking images helps guide the model
But it can't eliminate drift, because each step is still a fresh generation, not a true edit layer. Think of it like asking 18 different artists to copy and slightly tweak the same painting: small differences will accumulate.

How to reduce the problem (practical fixes)

✔️ 1. Use in-painting instead of full regeneration
If your tool supports it:
• Edit only a masked area
• Keep the rest of the image untouched
This is the closest thing to "true editing".

✔️ 2. Lock seed and parameters (if available)
Some tools let you fix:
• Seed
• CFG scale
• Sampler
• Steps
This reduces randomness between phases.

✔️ 3. Be extremely explicit about framing
Add constraints like:
• "Same camera angle, same zoom, same framing"
• "Do not crop or recompose"
• "Maintain identical composition"
It won't be perfect, but it helps.

✔️ 4. Work at higher resolution and crop manually
Generate slightly larger images, then:
• Manually crop to your fixed frame
• Maintain consistency outside the AI

✔️ 5. Hybrid workflow (best results)
Use AI plus manual editing:
• Generate the base image with AI
• Lock composition in a tool like Adobe Photoshop
• Use AI only for localised changes

The honest bottom line
What you're trying to do, strict, phase-based, minimal-delta editing with zero drift, is something current generative models are not designed to guarantee. They are:
• Excellent at creating
• Decent at guided variation
• Weak at precise, repeatable micro-edits

For me, at this time, AI cannot work precisely enough to produce the image I have in my mind that I would like to create. That's not to say that it won't work for other people. I have seen some truly amazing images here on Ipernity and elsewhere. Thank you for looking and for your kind comments.
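The in-painting fix above works because pixels outside the mask are copied straight from the original, so they cannot drift. A minimal NumPy sketch of that masked-compositing idea (the toy arrays stand in for real images; no particular AI tool or API is assumed):

```python
import numpy as np

def inpaint_composite(original: np.ndarray,
                      generated: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Take `generated` pixels only where `mask` is True.

    Every pixel outside the mask is copied from `original` unchanged,
    which is what makes masked in-painting drift-free outside the mask.
    """
    result = original.copy()
    result[mask] = generated[mask]
    return result

# Toy 4x4 greyscale "images"
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(4, 4))
generated = rng.integers(0, 256, size=(4, 4))   # what the model re-rendered

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                           # edit only the centre block

result = inpaint_composite(original, generated, mask)

# Outside the mask: byte-identical to the original.
assert np.array_equal(result[~mask], original[~mask])
# Inside the mask: follows the new generation.
assert np.array_equal(result[mask], generated[mask])
```

A full re-generation, by contrast, rewrites every pixel each pass, which is why drift accumulates outside the region you meant to change.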
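The "lock the seed" advice works because generation starts from pseudo-random noise: reuse the seed and the starting noise (and hence the output of a deterministic sampler) is reproducible; change the seed and you get a different image. A toy NumPy illustration of that principle, standing in for a real sampler rather than any specific tool's API:

```python
import numpy as np

def generate(seed: int, steps: int = 4) -> np.ndarray:
    """Toy 'sampler': start from seeded noise, then apply a few
    deterministic denoising-style update steps."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((8, 8))   # initial noise, fixed by the seed
    for _ in range(steps):
        img = np.tanh(img + 0.1)        # deterministic update step
    return img

a = generate(seed=42)
b = generate(seed=42)   # same seed: identical result, pass after pass
c = generate(seed=43)   # different seed: a different image

assert np.array_equal(a, b)
assert not np.array_equal(a, c)
```

Real services that hide the seed (as many free web generators do) effectively re-roll it every pass, which matches the pass-to-pass variation described above.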

13 Apr 2026

4 favorites

8 comments

55 visits

Timeless elegance with floral crown, studio shot - 2nd generation image

This will be my final AI image; I will stay with my camera for the foreseeable future. I decided to give it one more try. The goal here was to produce a photorealistic, lifelike portrait image. To avoid all the issues I mentioned in the description of my last photo, I decided to do this in a minimum number of generations (passes). The first-generation image was the portrait (head and shoulders) in a 16:10 landscape format with no background. (I format images at this ratio so they match my PC monitor.) For the second-generation image, I prompted the AI to lock the image in the 16:10 landscape format and to lock the subject, as this was correct. Then I prompted the AI to add a colour-complementary backdrop and create studio lighting. Even over only two generations, the AI couldn't manage to follow the simple instructions on frame format and subject placement: the image changed to an 8:10 landscape format, and the model zoomed out to include more of the chest area. Maybe I am being a bit harsh in my opinion. The problem for me is that I can be very OCD when I am trying to achieve a goal. The AI can't get it right in one pass because it can't cope with highly detailed instructions, and it can't get it right with multiple passes, each with simple instructions. It should be called "artificial ignorance". Again, this is only my personal opinion. Well done to those who have produced some amazing images.
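One way to guarantee a frame format the generator won't respect is the manual-crop advice from the previous entry: generate a little larger than needed, then crop to the target ratio outside the AI. A small sketch of a centre-crop to a fixed aspect ratio such as 16:10 (plain NumPy; the array shapes are illustrative, no AI tool is assumed):

```python
import numpy as np

def centre_crop_to_ratio(img: np.ndarray, ratio_w: int, ratio_h: int) -> np.ndarray:
    """Centre-crop a (height, width, ...) image to the largest region
    with exactly a ratio_w:ratio_h aspect ratio."""
    h, w = img.shape[:2]
    if w * ratio_h > h * ratio_w:           # too wide: trim the width
        new_w = h * ratio_w // ratio_h
        x0 = (w - new_w) // 2
        return img[:, x0:x0 + new_w]
    new_h = w * ratio_h // ratio_w          # too tall: trim the height
    y0 = (h - new_h) // 2
    return img[y0:y0 + new_h, :]

# A hypothetical render that drifted off the intended ratio...
render = np.zeros((1000, 1280, 3))
# ...forced back to 16:10, with the subject kept centred.
fixed = centre_crop_to_ratio(render, 16, 10)
assert fixed.shape == (800, 1280, 3)
```

The crop is deterministic, so the frame format stays locked no matter what shape each generation comes back in; only the composition inside the frame remains up to the model.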