b3ta.com board

# The answer to your first point is that basically all current public diffusion models are built on huge datasets riddled with garbage and noise.
Companies like Adobe could easily develop proprietary datasets consisting solely of decent, well-tagged images, something which has reportedly already been shown to improve results hugely even as things stand.

But I don't really see Adobe being at the forefront of AI image generation (and in fact they have a stock image portfolio to protect). I see them more likely adopting AI for things like filters and retouching.

For most uses, inpainting AI could already do away with the need for most of the specialized selection tools in Photoshop. Inpainting 'selection' involves painting very roughly over the whole area you want to change, deliberately going over the edges. Enter the right prompt and it will only change the parts you specify: 'different hat', that sort of thing. Again, the current datasets are full of garbage and the implementations are experimental, but it all already works pretty well. Adobe and others can do so much better.
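The rough-mask workflow above can be sketched in a few lines. This is a minimal NumPy illustration of the compositing principle inpainting pipelines rely on, with the diffusion model itself stood in by a placeholder function (`fake_generator` is entirely hypothetical): pixels outside the painted mask are guaranteed untouched, which is why sloppy over-painting of the edges is fine.

```python
import numpy as np

def fake_generator(image, prompt):
    # Stand-in for the diffusion model: here it just returns a flat colour.
    # A real model would synthesise 'a different hat' (or whatever the
    # prompt asks for), using the unmasked pixels as context.
    return np.full_like(image, 200)

def inpaint(image, mask, prompt, generate=fake_generator):
    """Composite: keep the original where mask == 0, take generated
    pixels where mask == 1. The mask can be painted very roughly,
    since pixels outside it are never modified."""
    generated = generate(image, prompt)
    mask3 = mask[..., None]  # broadcast the 2-D mask over colour channels
    return np.where(mask3 == 1, generated, image)

# A tiny 4x4 RGB 'photo' and a rough mask over the top-left corner.
img = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1

out = inpaint(img, mask, "different hat")
assert (out[:2, :2] == 200).all()  # masked region replaced
assert (out[2:, :] == 0).all()     # unmasked region untouched
```

Real tools (e.g. Stable Diffusion's inpainting mode) do the generation with the prompt and the surrounding image as context, but the keep-unmasked / replace-masked compositing is the same idea.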

I think ChatGPT's coding (in)ability is a great example of what commercial AI currently can and can't do. I've spent a lot of time on ChatGPT and to me it feels like spending some fun time skipping from one subject to another on Wikipedia, except it's edited by pathological liars. Conversationally, it's amusingly good/bad. In terms of factual accuracy though, it's a joke. So it's not a surprise that when it comes to something as unforgiving as code, it's basically useless most of the time.

The same is true if you look critically at the output of text-to-image applications, especially if you use artist names. It may accurately reproduce some aspects of an artist's style, but it may also produce something historically linked to the artist which has no visible bearing at all on their actual work. But it's a picture, and if it's nice to look at, suddenly the rest isn't very important to most people. It's a hugely forgiving application of AI, which is why I think it's more likely to be immediately useful just as it is.
(, Sun 3 Dec 2023, 18:01, archived)
# Thing is,
a lot of code is being written by people who barely understand what they're doing and just glue stuff from GitHub together with stuff found on Stack Overflow... hence all the bloated, insecure crap that passes for apps these days.

Using ChatGPT to produce code isn't much different...
(, Sun 3 Dec 2023, 18:18, archived)
# ChatGPT is actually partly getting its data from exactly the sort of things you just described
A lot of the time it's just puking up the garbage it's been fed.
(, Sun 3 Dec 2023, 18:34, archived)
# That's exactly my point...
Shit code's being created either way, with the attitude that whatever doesn't get caught at the testing phase can always be fixed in an update...
(, Sun 3 Dec 2023, 18:41, archived)
# Stack Overflow can be great if you take the time to
read through and understand the threads / learn to write a good question.

ChatGPT is googling it for you, with no thought for what happens if the parasite kills the host and no new questions get created on Stack Overflow for it to read on your behalf. And no uber-nerds to correct it when it gives a bad answer.
(, Sun 3 Dec 2023, 22:47, archived)
# I'm not dissing SO...
I've found answers to problems on there (although I've often had to filter out a lot of crap), but some people use it like those homework cheating websites, whilst being paid to supposedly know what they're doing.
(, Sun 3 Dec 2023, 23:05, archived)