this post was submitted on 28 Feb 2024
-1 points (0.0% liked)

Privacy

This might also be an automatic response to prevent discussion, although I'm not sure, since it's MS's AI.

top 6 comments
[–] [email protected] 1 points 6 months ago

Every single Capitalist model or corporation will do this deliberately with all their AI integration. ALL corporations will censor their AI integration so it doesn't attack the corporation or any of their strategic 'interests'. The Capitalist elite in the West are already misusing wokeness (I'm woke) to cause global geopolitical splits, and all Western big tech are following the lead (just look at Gemini), so they are all biased towards the fake liberal narrative of super-wokeness, 'democracy'/freedumb, Ukraine good, Taiwan not part of China, Capitalism good, and all the other liberal propaganda and BS. It's like a liberal cancer that infects all AI tools. Nasty.

Agree or disagree with that, but none of us probably want elite psychopaths deciding what we should think/feel about the world, and it's time to ditch ALL corporate AI services and promote private, secure, and open/free AI - not censored or filled with liberal dogmas and artificial ethics/morals, from data to fine-tuning.

[–] [email protected] 1 points 6 months ago

I get Copilot to bail on conversations so often, like in your example, that I'm only using it for help with programming/code snippets at this point. The moment you question its accuracy, bam, the chat's over.

I asked if there was a Copilot extension for VS Code, and it said yup, explained how to install it, and even how to configure it. That was completely fabricated, and as soon as I asked for more detail to prove it was real, the chat was over.

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago) (2 children)

I think the LLM won here. If you're being accusatory and outright saying its previous statement is a lie, you've already made up your mind. The chatbot knows it can't change your mind, so it suggests changing the topic.

It's not a spokesperson/bot for Microsoft, nor a lawyer. So it knows when to shut itself off.

[–] [email protected] 1 points 6 months ago

To add, I have seen this behavior the moment you get too argumentative, so it's not like it's purposely singling out certain topics.

[–] [email protected] 0 points 6 months ago (1 children)

The chatbot doesn't know anything. It has no state like that; your text just gets appended to its text.

It has been prompted to disengage from disagreement or something similar. By a human designer.
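To illustrate the point: a minimal sketch of how that statelessness works in practice. All the names here are made up for illustration (this is not Copilot's actual implementation); the idea is just that every turn, the entire transcript gets re-sent to the model as one growing blob of text, with any "disengage on disagreement" rule living in a human-authored system prompt, not in any memory the model holds:

```python
# Hypothetical illustration: an LLM chat has no server-side "mind".
# Each turn, the full transcript is flattened and re-sent as one prompt.
# The disengagement rule below is a human-written instruction, not model state.

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "If the user becomes accusatory, politely end the conversation."
)

def build_prompt(history):
    """Flatten the whole transcript into the single text the model sees."""
    lines = [f"system: {SYSTEM_PROMPT}"]
    lines += [f"{role}: {text}" for role, text in history]
    lines.append("assistant:")  # the model just continues from here
    return "\n".join(lines)

history = [
    ("user", "Is there a Copilot extension for VS Code?"),
    ("assistant", "Yes, install it from the marketplace."),
    ("user", "That's a lie."),
]

prompt = build_prompt(history)
print(prompt)  # the model only ever sees this rebuilt prompt, nothing more
```

So "the chatbot knows it can't change your mind" is really "the transcript now matches a pattern the system prompt tells it to bail on."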

[–] [email protected] 1 points 6 months ago

I don't know why the discourse about AI has become so philosophical.

When I'm playing a single-player game and I say "the AI opponents know I'm hiding behind cover, so they threw a grenade!", I don't mean that the video game gained sentience and discovered the best thing to do to win against me.

When playing a stealth game, we say "The enemy can't see you if you're behind cover", not "The enemy has been programmed to not take any action against the player character when said player character is identified as being granted the Cover status".