I think it's also relevant that when I was growing up, people regularly switched between public and private depending on life circumstances, friend groups, etc. It was billed as a way to control whether people could see your posts or not, NOT as a way to revoke or grant Facebook or any other entity any specific permission. It served a social function, and at a time when AI did not exist. They changed the meaning of that setting on us years after the fact, and I have not seen any article address that. No teenager in 2011 was thinking of the private/public setting as consent for AI use, and none of these articles talk about pictures that were set to private after being public for a while. It's bad faith.
I really disagree with the idea that a person using any software that costs money or is corporate-controlled is in 'opposition' to free software or other movements against capitalism in tech.
Strongly agree. The failure to address these things for what they are just normalizes them.
Hopefully the next places will be more durable. It is still SAD and damaging when vibrant communities get destroyed though. I am more lamenting that.
People haven't adjusted yet to the reality that online social ecosystems matter; they affect so much in the real world. Decimating multiple online spaces in such a short time has consequences, and I hate that a handful of random guys with no stake in any of it except money get to make decisions like that.
This sure is a take.
You have articulated exactly how I feel whenever I see that word in a headline haha.
I feel you're coming at this from an abstract angle rather than how these things actually play out in practice. This isn't reliable software, it isn't proven to work, and the social and economic realities of the students, families, and districts have to be taken into account. The article does a better job explaining that. There are documented harms here. You, an adult, might have a good understanding of how to use a monitored device in a way that keeps you safe from some of the potential harms, but this software is predatory and markets itself deceptively. It's very different from what I think you are describing.
Yeah, I just fundamentally don't think companies or workplaces or schools have the right to so much information about someone. But I can understand that we just see it differently.
An issue here for me is that the kids can't opt out. Their guardians aren't the ones checking up on their digital behavior; it's an AI system owned by a company, on a device they are forced or heavily pressured to use by a school district. That's just too much of a power imbalance for an informed decision, to my mind, even if the user in question were an adult. Kids are even more vulnerable. I do not think it is a binary choice between no supervision and complete surveillance. We have to find ways to address potential issues that uphold the humanity of all the humans involved. This seems to me like a bad and also very ineffective way to meet either goal.
Kids going to school cannot reasonably be expected to have the knowledge, forethought, or ability to protect themselves from privacy violations. They lack the rights, information, and social power to meaningfully do anything about this. That's why it's exploitative and harmful. Edit: that's also to say nothing of the chilling effect this is going to have on kids who DO need to talk about something but now feel they have to hide it, or feel ashamed of it. Shit is bad news all around.
Sorry, I think I misread a part of your post. My mind knee-jerk substituted a similar argument I hear a lot. I will be honest that much of your essay went over my head, but I think that unregulated capitalism is a bigger enemy than other users who are seen as doing something wrong with their personal choices. I think it's good to encourage imperfect or incomplete adoption of positive things over all-or-nothing approaches. But I don't know what XMPP is, so I could be off base here about what you're actually talking about. Apologies.