chrash0

joined 6 months ago
[–] [email protected] 0 points 1 month ago (4 children)

i’ve used Chezmoi for years now pretty successfully. works on my Mac and Linux machines. it probably could be made to work on Windows. i am transitioning to NixOS, but i’ll probably keep using it anyway, since i still have Macs for work (and because they’re great laptops don’t @ me). the only real downside is that it only works for the home folder, so i have to manually control stuff for /etc, but i generally prefer user configuration for most tools anyway.
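the cross-platform bit works because chezmoi stores dotfiles as Go templates and exposes the OS at apply time. a minimal sketch of what a shared git config could look like (the file lives under ~/.local/share/chezmoi and renders to ~/.gitconfig; the user name and credential helpers here are illustrative placeholders, not from the linked repo):

```
# ~/.local/share/chezmoi/dot_gitconfig.tmpl
[user]
    name = example-user
{{- if eq .chezmoi.os "darwin" }}
[credential]
    helper = osxkeychain
{{- else if eq .chezmoi.os "linux" }}
[credential]
    helper = cache
{{- end }}
```

`chezmoi apply` renders the template per machine, which is how one repo covers both Mac and Linux.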

i had messed around with Ansible for this in the past, but i didn’t really like it for this use case. it’s been a while tho so it’s hard to say why.

not to pile on, but you might also look at GNU Stow. i decided against it, but it’s there.

obligatory i s’pose: https://github.com/covercash2/dotfiles

[–] [email protected] 11 points 3 months ago (1 children)

yeah i see that too. it seems like a mostly reactionary viewpoint. the reaction is understandable up to a point, since a lot of the “AI” features are half-baked and forced on the user. to that point, i don’t think GNOME etc. should be scrambling to add copies of these features.

what i would love to see is more engagement around additional pieces of software that are supplemental. for example, i would love if i could install a daemon that indexes my notes and allows me to do semantic search. or something similar with my images.
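to make the notes idea concrete, here’s a toy sketch of that kind of search daemon’s core loop. it uses bag-of-words cosine similarity as a stand-in for a real sentence-embedding model, and the note names and contents are made up for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # toy "embedding": a term-frequency vector over whitespace tokens;
    # a real indexer would use a sentence-embedding model here
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(notes: dict[str, str], query: str, k: int = 3) -> list[str]:
    # rank note names by similarity to the query, best first
    q = embed(query)
    index = {name: embed(body) for name, body in notes.items()}
    ranked = sorted(index, key=lambda n: cosine(index[n], q), reverse=True)
    return ranked[:k]

notes = {
    "gnome.md": "gnome shell extensions and desktop settings",
    "recipes.md": "sourdough starter feeding schedule",
    "backup.md": "rsync backup of the desktop to the nas",
}
print(search(notes, "desktop settings"))  # gnome.md ranks first
```

a daemon version of this would just watch the notes directory, keep the index warm, and answer queries over a local socket.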

the problems with AI features aren’t in the tech itself but in the politics surrounding it. it’s become commonplace for “responsible” AI companies like OpenAI to not even publish papers about their tech (vaguely scientific product announcement blogs don’t count), much less source code, weights, or details on training data. and even when Meta releases its weights, it doesn’t specify its datasets. the rat race to see who can make a decent product out of this amazing tech has turned the whole industry into a bunch of pearl-clutching, FOMO-driven tweakers.

that invites the comparison to blockchain, which is fair from the perspective of someone who hasn’t studied the tech or simply hasn’t seen a product relevant to them. but even those people will treat something fantastical like ChatGPT as pedestrian or unimpressive because when i asked it to write an implementation of the HTTP spec in the style of Fetty Wap, it didn’t run perfectly the first time.

[–] [email protected] 2 points 3 months ago

there are language models that are quite feasible to run locally for easier tasks like this. “local” rules out both ChatGPT and Copilot, since those models are enormous. these days “AI” generally means machine-learned neural networks, even if a pile of if-else statements might have passed for it in the past.

not sure how they’re going to handle low-resource machines, but as far as AI integrations go, this one is rather tame.

[–] [email protected] 11 points 4 months ago

if it’s easier to pay, people spend more

[–] [email protected] 1 points 4 months ago (6 children)

it’s an analogy that applies to me. tl;dr: worrying about having my identity stolen via physical access to my phone isn’t part of my threat model. i live in a safe city, and i don’t have anything the police could find to incriminate me. everyone is going to have a different threat model. some people need to brick up their windows.

[–] [email protected] 2 points 4 months ago (8 children)

it’s not a password; it’s closer to a username.

but realistically it’s not in my personal threat model to be ready to get tied down and forced to unlock my phone. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.

personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because they’re a potential point of ingress.