CanadaPlus

joined 1 year ago
[–] [email protected] 2 points 3 days ago

Bober kurwa! It's kind of impressive to me that this thing can cover many hundreds of kilometers.

[–] [email protected] 4 points 3 days ago* (last edited 1 day ago)

> that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that's not likely to happen.

You overestimate how hard it is to get a conspiracy theorist to click on something. I don't know, it seems promising to me. I'm more worried that it could be used to sell things more nefarious than "climate change is real".

> you need a level of “AI” that isn’t going to start hallucinating and instead enforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

They used a purpose-finetuned GPT-4 model for this study, and it didn't go off script in that way once. I bet you could make it if you really tried, but if you're doing adversarial prompting then you're not the target for this thing anyway.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago)

The interaction between society and technology continues to be borderline impossible to predict. I hope factually less-true beliefs are still harder to defend, at least.

[–] [email protected] 4 points 3 days ago (1 children)

I heard about the HIMARS, but does this extend to all American munitions?

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago)

If it's outside Russia, sure. It's probably going to be something (or some things) in international space if it's a retaliation for something in international space. Or at least, it should be, because I don't really buy the "direct war with Russia would be fine" jerk.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

Yeah, I don't get that. Federation is the option to have a hyper-custom server that does weird things, or to make your own server with blackjack and hookers if you don't like your current one, without losing access to community and content. Most people aren't nerds, though, so if you want plug-and-play, an instance like lemmy.world is great.

If you want a small bubble you actually don't want federation.

[–] [email protected] 1 points 5 days ago

> Mastodon is just an impenetrable mess from a UX perspective.

How does it compare to Lemmy?

[–] [email protected] 2 points 5 days ago

Great, so the perverse incentives aren't beatable then. Time to bug lawmakers, I guess?

On the bright side, Lemmy feels just about like Reddit to use, so that bodes well for us.

[–] [email protected] 1 points 5 days ago* (last edited 5 days ago)

Okay, so I'm going to tell you where the new Twitter is in the blue swirly.

I know, I know, easier said than done to actually guide them through, but if they're at that level it's just a different setting on the magic box.

[–] [email protected] 2 points 5 days ago (2 children)

Yeah, I feel like this should be surmountable. At worst, you skip the whole concept of federation and just tell them exactly where to sign up.

[–] [email protected] 3 points 5 days ago

Lol, I plead not American, I've never had to deal with the IRS.

[–] [email protected] 13 points 5 days ago* (last edited 5 days ago) (2 children)

I regret that the Democrats aren't abusing the shit out of that ruling (to whatever degree doing exactly what they said they would counts as abuse). This is why the other guys win.

Bonus points if it's fucking with the justices personally. Telling the IRS to adjust someone's taxes upwards sounds like an "official action"...

 

A link to the preprint. I'll do the actual math on how many transitions/second it works out to later and edit.

I've had an eye on this for like a decade, so I'm hyped.

Edit:

So, because of the structure of the crystal the atoms are in, it actually has 5 resonances. These were expected, although a couple of other weak ones showed up as well. They give what I understand to be a projected unperturbed value of 2,020,407,384,335.(2) kHz.

Then a possible redefinition of the second could be "The time taken for 2,020,407,384,335,200 peaks of the radiation produced by the first nuclear isomeric transition of an unperturbed ^229^Th nucleus to pass a fixed point in space."
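
Just to make the arithmetic behind that number explicit (a minimal sketch; I'm assuming the ".(2)" above means a value of 2,020,407,384,335.2 kHz with the last digit uncertain):

```python
# Convert the projected unperturbed transition frequency from kHz to cycles per
# second. The exact trailing ".2" is my reading of the ".(2)" notation above.
freq_khz = 2_020_407_384_335.2               # measured frequency, kHz
cycles_per_second = round(freq_khz * 1_000)  # kHz -> Hz

print(cycles_per_second)      # 2020407384335200 periods per second
print(1 / cycles_per_second)  # ~4.95e-16 s per period, i.e. a ~2020 THz transition
```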

 

We have no idea how many there are, and we already know about one, right? It seems like the simplest possibility.

 

This is about exactly how I remember it, although the lanthanides and actinides got shortchanged.

 

Unfortunately not the best headline. No, quantum supremacy has not been proven, exactly. Rather, this is another kind of candidate problem, but one that's universal, in the sense that a classical algorithm for it could be used to solve every other BQP problem (so BQP = P). That would include Shor's algorithm, and would make Q-day figuratively yesterday, so let's hope this is an actual example of quantum advantage.
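
To spell out the reduction I'm gesturing at (standard complexity-class bookkeeping on my part, assuming "universal" here means BQP-complete under classical polynomial-time reductions):

```latex
X \text{ is BQP-complete} \;\wedge\; X \in \mathsf{P}
  \;\Longrightarrow\; \mathsf{BQP} \subseteq \mathsf{P}
  \;\Longrightarrow\; \mathsf{BQP} = \mathsf{P}
  \quad (\text{since } \mathsf{P} \subseteq \mathsf{BQP} \text{ already})
```

Factoring is in BQP via Shor's algorithm, so it would land in P along with everything else; that's why a fast classical algorithm here would mean Q-day had effectively already happened, no quantum computer required.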

Weirdly enough, they kind of skip that detail in the body of the article. Maybe they're planning to do one of their deep dives on it. Still, this is big news.

 

Reposting because it looks like federation failed.

I was just reading about it, it sounds like a pretty cool OS and package manager. Has anyone actually used it?

 

It's not really news after a decade, but I still think it's worth a look. This is something I think about sometimes, and it's better to let the actual scholars speak.

For whatever reason it's not mentioned as a candidate great filter very often, even though nearly all the later steps on the path to complexity have happened more than once, and there are lots of habitable-looking exoplanets.

Edit: To be clear, this says that the fact that life started early on Earth doesn't really provide much evidence that abiogenesis is an easy process, if you allow that it could possibly be very unlikely indeed.
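
A toy version of that Bayesian argument, with numbers I made up purely to illustrate the shape of it (this is not the paper's actual model):

```python
# Toy Bayesian update on "is abiogenesis easy?", to show why early life on Earth
# is only weak evidence once you condition on us being here to ask the question.
# Every number below is invented for illustration.

prior_easy = 0.5                 # prior probability that abiogenesis is "easy"

# P(life appears early on a planet | hypothesis), ignoring observer selection
p_early_if_easy = 0.9
p_early_if_hard = 0.01

# Naive update that ignores the selection effect:
naive = (p_early_if_easy * prior_easy) / (
    p_early_if_easy * prior_easy + p_early_if_hard * (1 - prior_easy)
)
print(f"naive posterior P(easy | early life) = {naive:.2f}")   # ~0.99

# Observer selection: intelligent observers only arise on planets where life
# started early enough for evolution to have time to produce them, so the
# relevant likelihoods are P(early | hypothesis AND observers exist), which
# are close to 1 under either hypothesis.
p_early_if_easy_sel = 1.0
p_early_if_hard_sel = 0.9        # even if it's hard, the rare successes skew early

selected = (p_early_if_easy_sel * prior_easy) / (
    p_early_if_easy_sel * prior_easy + p_early_if_hard_sel * (1 - prior_easy)
)
print(f"posterior with observer selection = {selected:.2f}")   # ~0.53
```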

 

An interesting look at how America thinks about the conflict when cameras aren't pointing at them. TL;DR they see themselves 20 years ago, and are trying to figure out how to convey all the lessons that experience taught them, including "branches" and "sequels", which is jargon I haven't heard mentioned before. Israel is not terribly receptive.

Aaand of course, Tom Cotton is at the end basically describing a genocide, which he would support.

 

cross-posted from: https://lemmy.sdf.org/post/2617125

A written out transcript on Scott Aaronson's blog: https://scottaaronson.blog/?p=7431


My takes:

ELIEZER: What strategy can a like 70 IQ honest person come up with and invent themselves by which they will outwit and defeat a 130 IQ sociopath?

Physically attack them. That might seem like a non-sequitur, but what I'm getting at is that Yudkowsky seems to underestimate how powerful and unpredictable meatspace can be over the short-to-medium term. I really don't think you could conquer the world over wifi either, unless maybe you can break encryption.

SCOTT: Look, I can imagine a world where we only got one try, and if we failed, then it destroys all life on Earth. And so, let me agree to the conditional statement that if we are in that world, then I think that we’re screwed.

Also agreed, with the caveat that there are wide differences between failure scenarios, although we're probably getting a random one at this rate.

ELIEZER: I mean, it’s not presently ruled out that you have some like, relatively smart in some ways, dumb in some other ways, or at least not smarter than human in other ways, AI that makes an early shot at taking over the world, maybe because it expects future AIs to not share its goals and not cooperate with it, and it fails. And the appropriate lesson to learn there is to, like, shut the whole thing down. And, I’d be like, “Yeah, sure, like wouldn’t it be good to live in that world?”

And the way you live in that world is that when you get that warning sign, you shut it all down.

I suspect small but reversible incidents are going to happen more and more, if we keep being careful and talking about risks the way we have been. I honestly have no clue where things go from there, but I imagine the tenor and consistency of the response will be pandemic-ish.

GARY: I’m not real thrilled with that. I mean, I don’t think we want to leave what their objective functions are, what their desires are to them, working them out with no consultation from us, with no human in the loop, right?

Gary has a far better impression of human leadership than me. Like, we're not on track for a benevolent AI if such a thing makes sense (see his next paragraph), but if we had that it would blow human governments out of the water.

ELIEZER: Part of the reason why I’m worried about the focus on short-term problems is that I suspect that the short-term problems might very well be solvable, and we will be left with the long-term problems after that. Like, it wouldn’t surprise me very much if, in 2025, there are large language models that just don’t make stuff up anymore.

GARY: It would surprise me.

Hey, so there's a prediction to watch!

SCOTT: We just need to figure out how to delay the apocalypse by at least one year per year of research invested.

That's a good way of looking at it. Maybe that will be part of whatever the response to smaller incidents is.

GARY: Yeah, I mean, I think we should stop spending all this time on LLMs. I don’t think the answer to alignment is going to come from through LLMs. I really don’t. I think they’re too much of a black box. You can’t put explicit, symbolic constraints in the way that you need to. I think they’re actually, with respect to alignment, a blind alley. I think with respect to writing code, they’re a great tool. But with alignment, I don’t think the answer is there.

Yes, agreed. I don't think we can un-invent them at this point, though.

ELIEZER: I was going to name the smaller problem. The problem was having an agent that could switch between two utility functions depending on a button, or a switch, or a bit of information, or something. Such that it wouldn’t try to make you press the button; it wouldn’t try to make you avoid pressing the button. And if it built a copy of itself, it would want to build a dependency on the switch into the copy.

So, that’s an example of a very basic problem in alignment theory that is still open.

Neat. I suspect it's impossible with a reasonable cost function, if the thing actually sees all the way ahead.

SCOTT: So, before GPT-4 was released, [the Alignment Research Center] did a bunch of evaluations of, you know, could GPT-4 make copies of itself? Could it figure out how to deceive people? Could it figure out how to make money? Open up its own bank account?

ELIEZER: Could it hire a TaskRabbit?

SCOTT: Yes. So, the most notable success that they had was that it could figure out how to hire a TaskRabbit to help it pass a CAPTCHA. And when the person asked, ‘Well, why do you need me to help you with this?’–

ELIEZER: When the person asked, ‘Are you a robot, LOL?’

SCOTT: Well, yes, it said, ‘No, I am visually impaired.’

I wonder who got the next-gen AI cold call, haha!

 

Maybe the classical era too; I don't know where the start year should be. It ends in the early modern period, when bordering agriculturalists like the Russians start expanding.

In other places and times agriculturalists tend to displace nomads on arable land, probably because crop farming can support a lot more people (and therefore fighters) per area.

Any explanation needs to be valid across the whole period and rely on things the nomads had that the farmers didn't. Horse archery was not new by this period.

 

Well, from the perspective of Earth. Obviously there's a big delay on the way here, which is kind of the entire idea of the technique used.

Plus, without reading the paper I'm not sure what "briefly" means. Even if it was a short spurt this is crazy, though.

 

If ever. I don't know, are they part of the blackout? I thought they were.
