this post was submitted on 20 May 2024
Hardware
This is a community dedicated to the hardware aspect of technology, from PC parts, to gadgets, to servers, to industrial control equipment, to semiconductors.
Rules:
- Posts must be relevant to electronic hardware
- No NSFW content
- No hate speech, bigotry, etc
I'm just glad to hear that they're working on a way for us to run these models locally rather than forcing a connection to their servers...
Even though I'd rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we'll actually see plans for consumer-grade GPUs with more than 24GB of VRAM?).
Bet you a tenner that within a couple of years they start using these systems as distributed processing for their in-house AI training to subsidize the cost.
That was my first thought. Server-side LLMs are extraordinarily expensive to run. Offload the costs to users.