
Companies like Adobe can easily develop proprietary datasets consisting solely of decent, well-tagged images, something that has reportedly already been shown to hugely improve results even with current models.
But I don't really see Adobe being at the forefront of AI image generation (and in fact they have a stock image portfolio to protect). I see it more likely to be adopting AI for things like filters and retouching.
For most uses, inpainting AI could already do away with the need for most of the specialized selection tools in Photoshop. Inpainting 'selection' involves painting very roughly over the whole area you want to change, deliberately going over the edges of the area. Enter the right prompt and it will only change the parts you specify, 'different hat', that sort of thing. Again, the current datasets are full of garbage, the implementations are experimental, but it already all works pretty well. Adobe and others can do so much better.
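The rough-selection mechanics can be sketched in a few lines. This is a toy illustration with NumPy, not Adobe's or any real inpainting API; a real model also reconstructs the unchanged content inside the mask rather than wholesale replacing it, but the containment property is the same: pixels outside the mask are never touched.

```python
import numpy as np

# Stand-in for a photo: a 4x4 grayscale image with a "hat" region at 100.
image = np.zeros((4, 4), dtype=np.uint8)
image[1:3, 1:3] = 100                  # the region we actually want changed

# A sloppy, deliberately oversized mask painted over the hat and beyond.
mask = np.zeros((4, 4), dtype=bool)
mask[0:4, 0:3] = True                  # spills well past the hat's edges

# Stand-in for the model's generated replacement content.
generated = np.full((4, 4), 200, dtype=np.uint8)

# Only masked pixels are candidates for change; the rest stay as they were.
result = np.where(mask, generated, image)

print(result)                          # right-hand column keeps its originals
```

The point of the over-painting is that precision comes from the prompt, not the brush: the mask only bounds where the model may act.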
I think ChatGPT's coding (in)ability is a great example of what commercial AI currently can and can't do. I've spent a lot of time on ChatGPT and to me it feels like spending some fun time skipping from one subject to another on Wikipedia, except it's edited by pathological liars. Conversationally, it's amusingly good/bad. In terms of factual accuracy though, it's a joke. So it's not a surprise that when it comes to something as unforgiving as code, it's basically useless most of the time.
The same is true if you look critically at the output of text-to-image applications, especially if you use artist names. It may accurately reproduce some aspects of an artist's style, but it may also produce something historically linked to the artist which has no visible bearing at all on their actual work. But it's a picture, and if it's nice to look at, suddenly the rest isn't very important to most people. It's a hugely forgiving application of AI, which is why I think it's more likely to be immediately useful just as it is.
( ,
Sun 3 Dec 2023, 18:01,
archived)

a lot of code is being written by people who barely understand what they're doing and just glue stuff from GitHub together with stuff found on Stack Overflow... hence all the bloated, insecure crap that passes as apps these days.
Using ChatGPT to produce code isn't much different...
( ,
Sun 3 Dec 2023, 18:18,
archived)

It's puking up garbage it's been fed a lot of the time.
( ,
Sun 3 Dec 2023, 18:34,
archived)

Shit code's being created either way, with the attitude that whatever doesn't get caught at the testing phase can always be fixed in an update...
( ,
Sun 3 Dec 2023, 18:41,
archived)

read through and understand the threads / learn to write a good question.
ChatGPT is googling it for you, with no thought for what happens if the parasite kills the host and no new questions get created on Stack Overflow for it to read on your behalf. And no uber-nerds to correct it on giving a bad answer.
( ,
Sun 3 Dec 2023, 22:47,
archived)