remram

joined 3 years ago
[–] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago)

I have never heard anyone refer to "screen off" as "sleep".

https://en.wikipedia.org/wiki/Sleep_mode

The terms everybody else is using are: "sleep" = "suspend to RAM" = "S3", and "hibernation" = "suspend to disk".

[–] [email protected] 3 points 2 weeks ago (2 children)

If you're one of those people who think every product is better if there's "AI" on the box, then sure. What you're describing is static analysis, though, and it is not new.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Ok so what do you call "sleep"? You've now listed suspending, sleeping, and hibernating as 3 different things.

[–] [email protected] 3 points 2 weeks ago (5 children)

Probably not. Obfuscation works, and the malicious behavior might even depend on remote code downloaded at either build time or run time.

There are a lot of heuristics you can use (e.g. disallowing some functions/modules) to check a codebase, but those already exist; no AI required. Unless you call static analysis "AI", that is.

[–] [email protected] 5 points 2 weeks ago (4 children)

Suspending to disk usually requires a password on resume.

[–] [email protected] 5 points 1 month ago

Keep in mind that a part of the filesystem will be reserved on creation. Here if I create a completely empty ext4 filesystem with:

truncate -s 230G /tmp/img
mkfs.ext4 /tmp/img
mount /tmp/img /mnt

Dolphin reports "213.8 GiB free of 225.3 GiB (5% used)"
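The 5% gap is mostly the root-reserved blocks that mkfs.ext4 sets aside by default (its `-m` option); inode tables and the journal account for the rest. A rough sketch of the arithmetic, with the standard e2fsprogs flags for reclaiming the reservation shown in comments:

```shell
#!/bin/sh
# mkfs.ext4 reserves 5% of blocks for the root user by default.
size_gib=230                                   # truncate -s 230G
reserved_gib=$(awk "BEGIN { print $size_gib * 0.05 }")
echo "reserved for root: ${reserved_gib} GiB"  # prints 11.5
# To drop the reservation on a data-only filesystem:
#   mkfs.ext4 -m 0 /tmp/img    # at creation time
#   tune2fs -m 0 <device>      # on an existing filesystem
```

That accounts for most of the difference between 225.3 GiB total and 213.8 GiB free.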

screenshot

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

I feel you, but on the other hand, if every single community member tries to help even when they have no idea or don't understand the question, that is not great.

Anybody can ask Google or an LLM; I am spending more time reading and acknowledging this bot answer than it took you to copy/paste it. This is the inverse of helping.

The problem is not "the loop"(?); your (LLM's) approach is not relevant, and I've explained why.

[–] [email protected] 0 points 1 month ago (2 children)

What was "the point"? From my perspective, I had to correct a fifth post about using a schedule, even though I had already mentioned it in my post as a bad option. And instead of correcting someone, turns out I was replying to a bot answer. That kind of sucks, ngl.

[–] [email protected] 0 points 1 month ago (4 children)

Did it write that playbook? Did you read it?

[–] [email protected] 1 points 1 month ago

Thanks, that sounds like the ideal setup. This solves my problem and I need an APT mirror anyway.

I am probably going to end up with a cronjob similar to yours. Hopefully I can figure out a smart way to share the pool to avoid downloading 3 copies from upstream.
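One common way to share the pool (all paths and dates below are invented for illustration) is to hardlink unchanged files between snapshots, e.g. with `cp -al` or `rsync --link-dest`, so each .deb is stored and downloaded once:

```shell
#!/bin/sh
# Simulate two daily snapshots sharing one pool file via hardlinks.
set -eu
root=$(mktemp -d)
mkdir -p "$root/2024-06-01/pool"
echo "fake package payload" > "$root/2024-06-01/pool/openssh.deb"
# cp -al copies the tree but hardlinks the files instead of duplicating data
cp -al "$root/2024-06-01" "$root/2024-06-02"
stat -c %h "$root/2024-06-02/pool/openssh.deb"   # prints 2 (shared inode)
rm -rf "$root"
```

Disk usage stays roughly 1x however many snapshot trees exist, since only changed packages are stored again.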

[–] [email protected] 0 points 1 month ago (1 children)

Ubuntu only does security updates, no?

No, why do you think that?

run your own package mirror

I think you might be on to something here. I could probably do this with a package mirror, updating it daily and rotating the staging, production, etc URLs to serve content as old as I want. This would require a bit of scripting but seems very configurable.

Thanks for the idea! Can't believe I didn't think of that. It seems so obvious now, I wonder if someone already made it.
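The URL rotation itself can be a few lines of shell. A minimal sketch, with made-up dates, ring names, and paths, simulating three daily snapshots and the three serving URLs:

```shell
#!/bin/sh
# Sketch: each day's mirror lands in snapshots/DATE; three symlinks serve
# progressively older snapshots to the canary/staging/production rings.
set -eu
root=$(mktemp -d)
for d in 2024-06-01 2024-06-02 2024-06-03; do
    mkdir -p "$root/snapshots/$d"
done
mkdir -p "$root/serve"
ln -sfn "$root/snapshots/2024-06-03" "$root/serve/canary"      # newest
ln -sfn "$root/snapshots/2024-06-02" "$root/serve/staging"     # 1 day old
ln -sfn "$root/snapshots/2024-06-01" "$root/serve/production"  # oldest
readlink "$root/serve/production"
rm -rf "$root"
```

Each machine's APT sources then point at the URL for its ring, and "promoting" a snapshot is just moving a symlink, so a bad update never reaches production automatically.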

1
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

I am using unattended-upgrades across multiple servers. I would like package updates to be rolled out gradually, either randomly or to a subset of test/staging machines first. Is there a way to do that for APT on Ubuntu?

An obvious option is to set some machines to update on Monday and the others to update on Wednesday, but that only gives me weekly updates...

The goal of course is to avoid a Crowdstrike-like situation on my Ubuntu machines.

edit: For example: an updated openssh-server comes out. One fifth of the machines update that day, another fifth update the next day, and the rest update 3 days later.
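With a staged mirror (as suggested in the replies), the tiers reduce to which URL each machine's APT sources point at. A hypothetical fragment (hostname and ring names are invented):

```
# /etc/apt/sources.list on a first-ring machine (mirror URL is hypothetical)
deb http://mirror.example.internal/canary/ubuntu noble main universe
# second-ring machines point at .../staging, the rest at .../production
```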
