FaceDeer

joined 6 months ago
[–] [email protected] 8 points 1 month ago

If the women felt threatened, they could have simply not approached the Ukrainian soldiers.

[–] [email protected] 23 points 1 month ago (2 children)

Eh, there didn't seem to be any sort of implied threat or imbalance of power in the little snippet presented here. The old ladies approached the soldiers and asked for a lift, and the soldiers seemed honestly apologetic that they had no room to provide one.

It's quite interesting seeing the "depoliticization" of the general Russian population having this effect; when the Ukrainians moved in, a surprising number seemed to just shrug and go "new management, I guess." It will be interesting to see how the occupation goes if it's long-term.

[–] [email protected] 10 points 1 month ago

Looking forward to the "Waymo robotaxis become silent killers stalking the night" headlines once the fix is implemented.

[–] [email protected] 1 points 1 month ago

No, a summary is just a condensed version of some larger work. If the larger work contains bullshit then so can the summary; that doesn't stop it from being a summary. As you say, a summary accurately portrays the substance of that content. In this case the content said Alpha Centauri was 13 km from Earth, so the summary said that too.

This is really not complicated.

> Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success.

If you think it has no possibility of success, sit back and relax as AI goes away.

[–] [email protected] 1 points 1 month ago (3 children)

> They absolutely cannot reliably summarize the result of searches, like this post is about.

The problem is that it did summarize the result of this search; the results included one of those "if the Earth was the size of a grain of sand, Alpha Centauri would be X kilometers away" analogies. It did exactly the thing you're saying it can't do.
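
For a sense of how an analogy like that produces a small-sounding number, here's a quick back-of-envelope sketch (the grain size and distances below are my own illustrative assumptions, not anything from the actual search result):

```python
# Back-of-envelope check of a "grain of sand" scale analogy.
# Every figure below is an illustrative assumption, not from the actual search result.
EARTH_DIAMETER_M = 1.2742e7           # Earth's diameter, ~12,742 km
SUN_DIAMETER_M = 1.3927e9             # Sun's diameter, ~1.39 million km
SAND_GRAIN_M = 0.5e-3                 # assume a ~0.5 mm grain of sand
ALPHA_CENTAURI_M = 4.37 * 9.4607e15   # ~4.37 light-years, converted to metres

for name, diameter_m in [("Earth", EARTH_DIAMETER_M), ("Sun", SUN_DIAMETER_M)]:
    scale = SAND_GRAIN_M / diameter_m          # shrink factor for the analogy
    scaled_km = ALPHA_CENTAURI_M * scale / 1000
    print(f"If the {name} were a grain of sand: ~{scaled_km:,.0f} km")
```

Depending on what gets shrunk, those assumptions put the scaled distance anywhere from roughly 15 km (Sun) to 1,600 km (Earth), and a number in that low-kilometres range, quoted without its framing sentence, is exactly how you end up with a "13 km to Alpha Centauri" summary.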

> Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole.

Nothing is perfect. Does that make everything a massive catastrophic threat to humanity? How have we managed to survive for this long?

You're ridiculously overblowing this. It's a "ha ha, looks like AI made a whoopsie because I didn't understand what I actually asked it to do" situation. It's not Skynet coming to convince us to eat cyanide.

> And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

Of course it's ignoring that. It's not real.

You realize that energy costs money? If each web search cost an "obscene" amount, how is Microsoft managing to pay for it all? Why are they paying for it? Do you think they'll continue paying for it indefinitely? It'd be a completely self-solving problem.
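
To put rough numbers on that (both inputs below are assumptions plugged in for illustration, not measured figures), even a generous energy estimate per AI-assisted search works out to a fraction of a cent:

```python
# Rough cost-per-search arithmetic; both inputs are assumptions, not measurements.
ENERGY_PER_QUERY_WH = 3.0        # assume a few watt-hours per AI-assisted search
ELECTRICITY_USD_PER_KWH = 0.10   # assume a ballpark industrial electricity rate

cost_usd = (ENERGY_PER_QUERY_WH / 1000) * ELECTRICITY_USD_PER_KWH
print(f"~${cost_usd:.4f} per query")   # ~$0.0003 with these assumptions
```

If the real figure were wildly higher than that, it would show up as a line item Microsoft couldn't ignore.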

[–] [email protected] 0 points 1 month ago (6 children)

I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, and to write Python scripts for various random tasks. I've talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.

Go ahead and refrain from using them yourself if you really don't want to, for whatever reason. But exclaiming "no it doesn't!" in the face of them actually doing the things you say they don't do is just silly.

[–] [email protected] 1 points 1 month ago (1 children)

Then go ahead and put "science questions" into one of the areas that you don't use LLMs for. That doesn't make them useless in general.

I would say that a more precise and specific restriction would be "they're not good at questions involving numbers." That's narrower than "science questions" in general; they're still pretty good at dealing with the concepts involved. LLMs aren't good at math, so don't use them for math.

[–] [email protected] -1 points 1 month ago (8 children)

Your comment is simply counterfactual. I do indeed find LLMs to be useful. Saying "no you don't!" is frankly ridiculous.

I'm a computer programmer. Not directly experienced with LLMs themselves, but I understand the technology around them and have written programs that make use of them. I know what their capabilities and limitations are.

[–] [email protected] 27 points 1 month ago (2 children)

I'm sure the Ukrainian soldiers are rather busy with important things of their own, but if they've got any spare bandwidth it'd be neat if they were able to help organize the Russian civilians a bit and keep this kind of lawlessness suppressed. Heck, if they're digging in for the long term they may end up needing to provide humanitarian aid for the people who chose to stay behind. That'll be quite the look.

[–] [email protected] 1 points 1 month ago

Our "intelligence" agencies already kill innocent people based entirely on metadata — because they simply live or work around areas that known terrorists occupy — now imagine if an AI was calling the shots.

So by your own scenario, intelligence agencies are already getting stuff wrong and making bad decisions using existing methodologies.

Why do you assume that new methodologies that involve LLMs will be worse at that? Why could they not be better? Presumably they're going to be evaluating their results when deciding whether to make extensive use of them.

"Mathematical magic tricks" can turn out to be extremely useful. That phrase can be used to describe all manner of existing techniques that are undeniably foundational to civilization.

[–] [email protected] 0 points 1 month ago (13 children)

Except it is capable of meaningfully doing so, just not in every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.

There's a nice phrase I commonly use: "don't let the perfect be the enemy of the good." These AIs are good enough at this point that I find them to be very useful. Not perfect, of course, but they don't have to be as long as you're prepared for those occasions, like this one, where they give a wrong result. As with any tool, you have some responsibility to know how to use it and what its capabilities are.

[–] [email protected] 0 points 1 month ago (21 children)

I expect that if you follow the references you'll find one of them to be one of those "if Earth was a grain of sand" analogies.

People like laughing at AI, but usually these silly-sounding answers accurately reflect the information the search returned.
