theluddite

joined 1 year ago
[–] [email protected] 3 points 2 weeks ago

That would be a really fun project! It almost reads like the setup for a homework problem for a class on chaos and nonlinear dynamics. I bet that as the model increasingly takes into account other people's (supposed?) preferences, you get qualitative breaks in behavior.
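Here's a minimal sketch of the kind of toy model I mean (the update rule and all names are my own invention for illustration, not anything from the literature): each agent nudges its stated preference toward an amplified version of what it believes the average preference is, and a single gain parameter controls how much weight others' (supposed) preferences get. The qualitative break shows up as a bifurcation at gain = 1:

```python
import math

def mean_field_preference(gain, steps=200, x0=0.1):
    """Toy model: x is the population-average stated preference in (-1, 1).
    Each step, everyone moves toward tanh(gain * x), i.e., toward an
    amplified version of what they believe others prefer."""
    x = x0
    for _ in range(steps):
        x = math.tanh(gain * x)
    return x

# For gain < 1, preferences decay to indifference (x -> 0).
# For gain > 1, the same rule locks the population into a
# self-reinforcing extreme -- a qualitative break in behavior.
low = mean_field_preference(0.5)
high = mean_field_preference(2.0)
print(f"gain=0.5 -> {low:.4f}, gain=2.0 -> {high:.4f}")
```

Richer versions (heterogeneous agents, noise, delayed perception) presumably produce messier transitions, but even this mean-field caricature shows the basic mechanism.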

Stuff like this is why I come back to postmodernists like Baudrillard and Debord time and time again. These kinds of second- (or Nth-) order "news" are an artifact of the media's constant and ever-accelerating commodification of reality. They just pile on more and more and more until we struggle to find reality through the sheer weight of its representations.

[–] [email protected] 13 points 2 weeks ago (6 children)

Really liked this articulation that someone shared with me recently:

Here's something you need to know about polls and the media: we pay for polls so we can write stories about polls. We're paying for a drumbeat to dance to. This isn't to say polls are unscientific, or false, or misleading: they're generally accurate, even if the content written around marginal noise tends to misrepresent them. It's to remind you that when you're reading about polls, you're watching us hula-hoop the ouroboros. Keep an eye out for poll guys boasting about their influence as much as their accuracy. That's when you'll know the rot has reached the root, not that there's anything you can do about it.

[–] [email protected] 6 points 2 months ago

This article is a mess. Brief summary of the argument:

  • AI relies on our collective data, therefore it should be collectively owned.
  • AI is going to transform our lives.
  • AI has meant a lot of things over the years; today it mostly means LLMs.
  • The problems with AI are actually problems with capitalism.
  • Socialist AI could be democratically accountable, compensate the people whose data it uses, etc.
  • Socialists have always held that technology should be liberatory, and we should view AI the same way.
  • Some ideas for how to govern AI.

I think that this argument is sloppily made, but I'm going to read it generously for the purposes of this comment and focus on my single biggest disagreement: It misunderstands why LLMs are such a big deal under capitalism, because it misunderstands the interplay between technology and power. There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what is possible in the ongoing and continuous renegotiation of power within them. LLMs appear useful because we live under capitalism, and we think about technology within a capitalist framework. Their primary use case is to allow capitalists to exert more power over labor.

The author compares LLMs to machines in a factory, but machines produce things, and LLMs produce language. Most jobs involve producing language as a necessary byproduct of human collaboration. As a result, LLMs allow capitalists to discipline labor because they can "do" some enormous percentage of most jobs, if you think about human collaboration in the same way that you think about factories. The problem is that human language is not a modular widget that you can make with a machine. You can't automate away the communication within human collaboration.

So I think the author makes a dangerous category error when they compare LLMs to factory machines. That is how capitalists want us to think of LLMs, because it allows them to wield LLMs as a threat to push wages down. That is their primary use case. Once you remove the capitalist/labor power dynamic, LLMs lose much of their appeal and become just another example of for-profit companies mining public goods for private profit. They're not a particularly special case, so I don't think they require the special treatment the author lays out, but I agree that companies shouldn't be allowed to do that.

I have a lot of other problems with this article, which can be found in my previous writing, if that interests you:

[–] [email protected] 118 points 2 months ago* (last edited 2 months ago) (6 children)

Investment giant Goldman Sachs published a research paper

Goldman Sachs researchers also say that

It's not a research paper; it's a report. They're not researchers; they're analysts at a bank. This may seem like a nitpick, but journalists need to (re-)learn to distinguish carefully between the thing that scientists do and corporate R&D, even though we sometimes use the word "research" for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI "research" that's just them poking at their own product, dressed up in a science-lookin' paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I've written about this problem a lot, for example in this post, which is about how Google wrote a so-called paper comparing their LLM to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would've noticed that it's junk science.

[–] [email protected] 3 points 3 months ago (4 children)

I have been predicting for well over a year now that they will both die before the election, but after the primaries, such that we can't change the ballots, and when Americans go to vote, we will vote between two dead guys. Everyone always asks "I wonder what happens then," and while I'm sure that there's a technical legal answer to that question, the real answer is that no one knows.

[–] [email protected] 9 points 3 months ago (1 children)

Very well could be. At this point, I'm so suspicious of all these reports. It feels like trying to figure out what's happening inside a company while relying only on their ads and PR communications: The only thing that I do know for sure is that everyone involved wants more money and is full of shit.

[–] [email protected] 16 points 3 months ago (8 children)

US Leads World in Credulous Reports of ‘Lagging Behind’ Russia. The American military, its allies, and the various think-tanks it funds, either directly or indirectly, generate these reports to justify forever increasing the military budget.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

I know that this kind of actually critical perspective isn't the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre's analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or throughout Agre's work, which really ought to be required reading for anyone in software.

edit to add some recommendations: If you think of yourself as a tech person, and don't necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own "critical awakening."

As an AI practitioner already well immersed in the literature, I had incorporated the field's taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial -- except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own -- that it is characteristic of AI in general (and, no doubt, other technical fields as well).

[–] [email protected] 5 points 3 months ago

Oh damn good to know. I do a lot of work with one of the UCs. We were happy to stop work during the grad student strike a few years ago and we'll be happy to do it again. Thanks for posting!

[–] [email protected] 9 points 4 months ago (1 children)

I've now read several of these from wheresyoured.at, and I find them well-researched, well-written, and very dramatic (if a little ranty), but ultimately they stop short of any structural or theoretical insight. It's right and good to document the shady people inside these shady companies ruining things, but they are symptoms. They are people exploiting structural problems, not the root cause of our problems. The site's perspective feels like that of someone who had a good career in tech that started before, say, 2014, and is angry at the people who are taking it too far, killing the party for everyone. I'm not saying that there's anything inherently wrong with that perspective, but it's certainly a very specific one, and one that I don't particularly care for.

Even "the rot economy," which seems to be their big theoretical underpinning, has this problem. It puts at its center the agency of bad actors in venture capital becoming overly obsessed with growth. I agree with the discussion about the fallout from that, but it's lacking a theory beyond "there are some shitty people being shitty."

[–] [email protected] 12 points 4 months ago (3 children)

I wish we had less selection, in general. My family lives in Spain, and I've also lived in France. This is just my observation, but American grocery stores clearly emphasize always having a consistent variety, whereas my Spanish family expects to eat higher quality produce seasonally. I suspect that this is a symptom of a wider problem, not the cause, but American groceries are just fucking awful by comparison, and so much more expensive too.

 

