DR_Hero

joined 1 year ago
[–] DR_Hero 8 points 3 weeks ago

There's a much more accurate stat... and it's disgusting

[–] DR_Hero 3 points 4 weeks ago (1 child)

I think he was just concerned for their immediate safety. Like the post suggests, many thought desperate times were coming and any rando in a MAGA hat might retaliate.

[–] DR_Hero 2 points 1 month ago

It's a dream I've considered many times. It can be cheaper* than life on land.

[–] DR_Hero 4 points 4 months ago

Mass arbitration is my favorite counter to this tactic, and it's dramatically more costly for the company than a class-action lawsuit.

https://www.nytimes.com/2020/04/06/business/arbitration-overload.html

A lot of companies got spooked a few years back and walked back their arbitration agreements. I wonder what changed for companies to decide it's worth it again. Maybe the lack of discovery in arbitration is worth it to them, even with the higher costs?

[–] DR_Hero 1 point 8 months ago

The responses aren't exactly deterministic; there are certain attacks that work 70% of the time, and you just keep trying.

When I was doing it a while back, I got past all the levels released at the time, including level 8.

[–] DR_Hero 4 points 8 months ago (1 child)

Excuse me, but the fuck is wrong with you?

[–] DR_Hero 2 points 10 months ago* (last edited 10 months ago)

The reason that makes the most sense, from one of the articles I've read, is that they fired him after he tried to push out one of the board members.

Replacing that board member with an ally would have cemented his control over the board for a time. They might not have felt he was being honest in his motives for the ousting, so it was basically fire now, or lose the option to fire him in the future.

Edit: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html

[–] DR_Hero 3 points 1 year ago (1 child)

Now I'm upset this wasn't the original haha

[–] DR_Hero 8 points 1 year ago

I've definitely experienced this.

I've used ChatGPT to write cover letters based on my resume, among other tasks.

I used to give ChatGPT data and tell it to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.

Weeks later, excited about the tech, I repeated the process. I told it to "do X with this data". It did fine.

In a completely separate chat, I told it to "do Y with this data"... and instead it gave me X. I told it to "do Z with this data", and it once again would really rather just do X.

For a while now, I've had to feed it more context and more tailored prompts than I used to.