b3ta.com links

Economy of scale is going to be the issue here, I suspect.
Yeah, you can run training/inference across a distributed LAN, but then you have to deal with things such as maintenance, increased energy costs, security, redundancy, etc. Then there's the challenge of preventing obsolescence.
Christ, it was only a couple of years ago that these LLMs were struggling to draw hands, and now they're producing videos and images that are all but indistinguishable from reality. A local AI cluster isn't ever going to keep up with that kind of progress.

It's a bit like suggesting 'Why store all your data in a hosted cloud storage solution, when you could build your own, shitter, local cloud?'
(, Fri 9 Jan 2026, 7:07, Reply)
You're probably right when it comes to training, which is likely to remain centralised. Local inference (hopefully on-device) sounds right, though, with less data transfer and lower latency. Smaller models have been shown to be just as good for everyday use.
(, Fri 9 Jan 2026, 9:59, Reply)