wols

joined 1 year ago
 

On posts that I access through my home instance, i.e. any post present in the "Your local instance" feed, nothing appears in the comments section apart from the message "There are no comments", despite the UI suggesting that there are several.

When accessing the post's permalink in the web UI of the instance, the comments show up without issue, even when logged in.

To reproduce: log in to Sync with a lemm.ee account, switch to the local feed and tap any post that appears to have comments. For example: https://lemm.ee/post/35476369

For some reason I haven't been able to replicate this on any other instance (tried with .world and .ml, neither of which seems to have this issue).

[–] [email protected] 1 points 3 months ago

Ditto on the no text part. That is an accessibility failure that's way too widespread.
Sometimes I'm afraid to even push a button: does this delete my thing, or does it do some other irreversible change? Will I be able to tell what it did? Maybe it does something completely different, or maybe I'm lucky and it does in fact perform the action I'm looking for and which in my mind is a no-brainer to include?

And it has infected interpersonal communication too - people peppering their messages with emojis, even professional communications. It not only looks goofy, but is either redundant (when people just add the emoji together with the word it's meant to represent - such a bizarre practice) or, worse, ambiguous when the pictogram replaces the word and the recipient(s) can't make out what it depicts.
The most fun is when it's a mix - the message contains some emojis with accompanying translation, some without.

[–] [email protected] 2 points 3 months ago

I don't share the hate for flat design.
It's cleaner than the others, simpler and less distracting. Easier on the eyes, too. It takes itself seriously and does so successfully imo (nice try, Aero). It feels professional in a way all the previous eras don't - they seem almost child-like by comparison.

Modern design cultivates recognizable interactions by following conventions and common design language instead of goofy icons and high contrast colors. To me, modern software interfaces look like tools; the further you go back in time, the more they look like toys.

Old designs can be charming if executed well and in the right context. But I'm glad most things don't look like they did 30 years ago.

I'm guessing many people associate older designs with the era they belonged to and the internet culture at the time. Perhaps rosy memories of younger days. Contrasting that with the overbearing corporate atmosphere of today and a general sense of a lack of authenticity in digital spaces everywhere, it's not unreasonable to see flat design as sterile and soulless. But to me it just looks sleek and efficient.
I used to spend hours trying to customize UIs to my liking, nowadays pretty much everything just looks good out of the box.

The one major gripe I have is with the tendency of modern designs to hide interactions behind deeply nested menu hopping. That one feels like an over-correction from the excessively cluttered menus of the past.
That, and the fact that there are way too many "settings" sections and you can never figure out which one has the thing you're looking for.

P.S. The picture did flat design dirty by putting it on a white background - we're living in the era of dark mode!

[–] [email protected] 5 points 6 months ago (1 children)

I want to preface this with the mention that understanding other people's code and being able to modify it in a way that gets it to do what you want is a big part of real world coding and not a small feat.
The rest of my comment may come across as "you're learning wrong". It is meant to. I don't know how you've been learning and I have no proof that doing it differently will help, but I'm optimistic that it can. The main takeaway is this: be patient with yourself. Solving problems and building things is hard. It's ok to progress slowly. Don't try to skip ahead, especially early on.
(also this comment isn't directed at you specifically, but at anyone who shares your frustration)

I was gonna write an entire rant opposing the meme, but thought better of it as it seems most people here agree with me.
BUT I think that once you've got some basics down, there really is no better way to improve than to do. The key is to start at the appropriate level of complexity for your level of experience.
Obviously I don't know what that is for you specifically, but I think in general it's a good idea to start simple. Don't try to engineer an entire application as your first programming activity.

Find an easy (and simple! as in - a single function with well defined inputs and outputs and no side effects) problem; either think of something yourself, or pick an easy problem from an online platform like leetcode or codechef. And try to solve the problem yourself. There's no need to get stuck for ages, but give it an honest try.
I think a decent heuristic for determining if you have a useful problem is whether you feel like you've made significant progress towards a solution after an hour or two. If not, readjust and pick a different problem. There's no point in spending days on a problem that's not clicking for you.

If you weren't able to solve the problem, look at solutions. Pick the one that seems most straightforward to you and try to understand it. When you think you do, give the original problem a little twist and try to solve that, referencing the solution to the original if you need to.
If you're struggling with this kind of constrained problem, keep doing them. Seriously. Perhaps dial down the difficulty of the problems themselves until you can follow and understand the solutions. But keep struggling with trying to solve little problems from scratch. Because that's the essence of programming: you want the computer to do something and you need to figure out how to achieve that.
It's not automatic, intuitive, inspired creation. It's not magic. It's a difficult and uncertain process of exploration. I'm fairly confident that for most people, coding just isn't how their brain works, initially. And I'm also sure that for some it "clicks" much easier than for others. But fundamentally, the skill to code is like a muscle: it must be trained to be useful. You can listen to a hundred talks on the mechanics of bike riding, and be an expert on the physics. If you don't put in the hours on the pedals, you'll never be biking from A to B.
I think this period at the beginning is the most challenging and frustrating, because you're working so hard and seemingly progress so slowly. But the two are connected. You're not breezing through because it is hard. You're learning a new way of thinking. Everything else builds on this.
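To make "easy and simple" concrete, a starter problem at that level might look something like this (the functions and the twist are hypothetical examples, not from any particular platform):

```python
def count_vowels(text: str) -> int:
    """A single function with well-defined input/output and no side effects."""
    return sum(1 for ch in text.lower() if ch in "aeiou")


# The "little twist" could then be: only count vowels
# in words longer than three characters.
def count_vowels_in_long_words(text: str) -> int:
    return sum(count_vowels(word) for word in text.split() if len(word) > 3)


print(count_vowels("Hello World"))                   # 3
print(count_vowels_in_long_words("a big elephant"))  # 3 (only "elephant" counts)
```

The point isn't the specific problem - it's that the scope is small enough that you can hold the whole thing in your head while you struggle with it.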

Once you're more comfortable with solving isolated problems like that, consider making a simple application. For example: read an input text file, replace all occurrences of one string with another string, write the resulting text to a new text file. Don't focus on perfection or best practices at first. Simply solve the problem the way you know how. Perhaps start with hard-coded values for the replacement, then make them configurable (e.g. by passing them as arguments to your application).
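That first application could be sketched roughly like this (a minimal version; the function name, file names and argument handling are all placeholders):

```python
import sys


def replace_in_file(src_path: str, dst_path: str, old: str, new: str) -> None:
    # Read the whole input file, swap the strings, write the result to a new file.
    with open(src_path, encoding="utf-8") as src:
        text = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text.replace(old, new))


if __name__ == "__main__" and len(sys.argv) == 5:
    # Step two of the exercise: instead of hard-coding the values,
    # take them as command-line arguments, e.g.:
    #   python replace.py input.txt output.txt foo bar
    replace_in_file(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4])
```

Even a tiny program like this raises real questions (what if the file doesn't exist? what about huge files?) - which is exactly the kind of learning the exercise is for.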

When you have a few small applications under your belt you can start to dream big. As in, start solving "real" problems. Like some automation that would help you or someone you know. Or tasks at work for a software company. Or that cool app you've always wanted to build. Working on real applications will give you more confidence and open the door to more learning. You'll run into lots of problems and learn how not to do things. So many ways not to do things.

TLDR: If it's not clicking, you need to, as a general rule, do less learning (in the conventional sense of absorbing and integrating information) and more doing. A lot of doing.

[–] [email protected] 1 points 7 months ago

The point is not the difference between a fake memory and a real one (let's grant for now that they are indistinguishable) but the fact that positive experiences are worth a lot more than just the memories they leave you with.

I may not know the difference between a memory of an event that I experienced and a memory of an event I didn't experience. Looking back on the past, they're the same.
But each moment of pleasure that I only remember, without having experienced it, was essentially stolen from me. Pleasure is a state of consciousness and only exists in the present.

[–] [email protected] 2 points 8 months ago (1 children)

Even better, Obsidian notes are stored directly in folders on your device as plain text (markdown) files.
It's all there, nothing missing, and no annoying proprietary format.

Not only can you keep using them without the Obsidian application, you can even do so using a "dumb" text editor - though something that can handle markdown will give you a better experience.

[–] [email protected] 7 points 10 months ago (1 children)

Honestly, their comment reads like copypasta. That first paragraph is chef's kiss.
I initially thought they weren't being sincere, something something Poe's law...

(' v ')/

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago)

The main difference is that 1Password requires two pieces of information for decrypting your passwords while Bitwarden requires only one.

Requiring an additional secret in the form of a decryption key has both upsides and downsides:

  • if someone somehow gets access to your master password, they won't be able to decrypt your passwords unless they also got access to your secret key (or one of your trusted devices)
  • a weak master password doesn't automatically make you vulnerable
  • if you lose access to your secret key, your passwords are not recoverable
  • additional effort to properly secure your key

So whether you want both or only password protection is a trade-off between the additional protection the key offers and the increased complexity of adequately securing it.

Your proposed scenarios of the master password being brute forced or the servers being hacked and your master password acquired when using Bitwarden are misleading.

Brute forcing the master password is not feasible, unless it is weak (too short, common, or part of a breach). By default, Bitwarden protects against brute force attacks on the password itself using PBKDF2 with 600k iterations. Brute forcing AES-256 (to get into the vault without finding the master password) is not possible according to current knowledge.

Your master password cannot be "acquired" if the Bitwarden servers are hacked.
They store the (encrypted) symmetric key used to decrypt your vault, as well as the vault itself (where all your passwords are stored), AES-256-encrypted with that symmetric key.
The symmetric key is in turn AES-256-encrypted using your master password (this is a simplification) before being sent to their servers.
Neither your master password nor the symmetric key used to decrypt your vault is recoverable from Bitwarden's servers by anyone who doesn't know your master password - and by extension, neither are the passwords stored in your encrypted vault.
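The structure of that scheme can be sketched as follows. This is purely illustrative, not Bitwarden's actual protocol: the Python stdlib has no AES, so a SHAKE-256 keystream stands in for AES-256, and all parameters (salt size, key size) are assumptions for the demo. Only the 600k PBKDF2 iterations match the text above.

```python
import hashlib
import os


def derive_kek(master_password: bytes, salt: bytes) -> bytes:
    # Key-encryption key stretched from the master password via PBKDF2,
    # so each password guess costs ~600k hash iterations.
    return hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000, dklen=32)


def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHAKE-256 keystream) standing in for AES-256.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))


# What the server ends up storing: a random vault key, but only in
# encrypted form -- useless without the master password.
salt = os.urandom(16)
vault_key = os.urandom(32)
kek = derive_kek(b"correct horse battery staple", salt)
protected_key = xor_stream(kek, vault_key)

# The right master password recovers the vault key...
recovered = xor_stream(derive_kek(b"correct horse battery staple", salt), protected_key)
assert recovered == vault_key

# ...a wrong one yields garbage, not an error message -- there's nothing
# on the server to compare against.
wrong = xor_stream(derive_kek(b"wrong password", salt), protected_key)
assert wrong != vault_key
```

The key point the sketch shows: the server never holds anything that decrypts the vault on its own, which is why "hacking the servers" doesn't yield your master password or your passwords.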

See https://bitwarden.com/help/bitwarden-security-white-paper/#overview-of-the-master-password-hashing-key-derivation-and-encryption-process for details.

[–] [email protected] 11 points 11 months ago

That number is like 20 years old.

Today it's around 60 billion.

[–] [email protected] -1 points 11 months ago* (last edited 11 months ago)

There's no need for something that complex.
Someone with access to a chess engine watches the game and inputs the moves into the engine as they're played. If there's a critical move (only one or very few of the options are winning / don't throw the game), they send a simple signal to let him know. That alone can be enough to give you an advantage at that level. If you really want, you could send a number between 1 and 6 to represent which piece the engine prefers to move, but it's likely not necessary.

That said, all the evidence that he actually did anything like that is at best circumstantial (mostly statistical evidence supposedly showing how unlikely his performance was given his past performance and rating at the time, as well as known instances of past cheating by him - though the only confirmed ones were online, several years ago when he was still a kid, not in person).

[–] [email protected] 12 points 11 months ago

Extra steps that guarantee you don't accidentally treat an integer as if it were a string or an array and get a runtime exception.
With generics, the compiler can prove that the thing you're passing to that function is actually something the function can use.

Really, what you're doing, if you're honest, is the compiler's work: hmm, inside this function I access this field on this parameter. Can I pass an argument of such and such type here? Lemme check if it has that field. Forgot to check? Or were mistaken? Runtime error! If you're lucky, you caught it before production.

Not to mention that types communicate intent. It's no fun trying to figure out how to use a library that has bad/missing documentation. But it's a hell of a lot easier if you don't need to guess what type of arguments its functions can handle.
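As an illustration (Python here, since the thread doesn't name a language; a static checker like mypy plays the compiler's role, and all the function names are made up):

```python
from typing import Sequence, TypeVar

T = TypeVar("T")


def first(items: Sequence[T]) -> T:
    # Generic: works for any element type, and the checker proves the
    # caller gets back the same type it put in -- no casts, no guessing.
    return items[0]


n = first([1, 2, 3])    # checker infers int
s = first(["a", "b"])   # checker infers str


# Without annotations, you only find out at runtime:
def shout(text):
    return text.upper()


try:
    shout(42)  # a type checker would reject this line before the program ever ran
except AttributeError as e:
    print("runtime error:", e)
```

The `shout(42)` call is exactly the "treat an integer as if it were a string" case: with type annotations it's a red squiggle at write time, without them it's an exception in production.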

[–] [email protected] 5 points 1 year ago (3 children)

The point is that you're not fixing the problem, you're just masking it (and one could even argue enabling it).

Just as adding another four-lane highway doesn't fix traffic long term (increased highway throughput leads to more people, which leads to more cars, which leads to congestion all over again), simply adding more RAM is only a temporary solution.

Developers use the excuse of people having access to more RAM as justification to produce more and more bloated software. In 5 years you'll likely struggle even with 32GiB, because everything uses more.
That's not sustainable, and it's not necessary.
