thingsiplay

joined 1 year ago
[–] [email protected] 3 points 13 hours ago (1 children)

And who saves the Internet Archive Archive?

[–] [email protected] 15 points 17 hours ago (3 children)

And who saves the Internet Archive?

[–] [email protected] 2 points 1 day ago (1 children)

I was just correcting your initial 100k estimate of Mastodon accounts. That's all. No need to get cocky.

[–] [email protected] 2 points 1 day ago (3 children)

Even if your numbers are true: Mastodon has existed for 7 years, Bluesky for less than one.

That doesn't matter, because most users came in just the last year. Shortly after Mastodon began to explode in 2023 from 2 million to over 10 (and now seemingly over 15 million) registrations, Bluesky came in. So the 7-year comparison doesn't matter here.

So logically the ratio of users to active users should be much higher on Bluesky.

Probably, but without statistics it's just our gut feeling. And as you saw a few minutes ago, your gut feeling can be drastically wrong. My point here was not to race-count Mastodon vs Bluesky, but to point out that your estimate of Mastodon accounts was vastly too low.

According to Wikipedia, Bluesky has 10 million users and 5 million monthly active: https://en.wikipedia.org/wiki/Bluesky That would be about what Mastodon has, if we believe those numbers. My point is, you totally overestimate Bluesky and underestimate Mastodon. The exact numbers do not matter here; what matters is my point that the user base is split into these two worlds.

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (5 children)

I don't know where you got that number from, but at least these statistics say Mastodon has 9 million users (though not all are active, of course; the same should be true for Bluesky): https://mastodon-analytics.com/ And this account claims 15 million: https://mastodon.social/@mastodonusercount and a Wikipedia article says "On 19 March 2023, Mastodon passed the ten million mark for registered user accounts": https://en.wikipedia.org/wiki/Mastodon_(social_network)#2022_Twitter-related_spikes_in_adoption

Now, I do not claim these numbers to be correct. But compared to your estimate, they are vastly different.

Edit: Just for context of my reply, since you edited yours: you said you didn't believe Mastodon would even have 100 thousand users.

[–] [email protected] 3 points 1 day ago (10 children)

There were a lot of people recommending Bluesky over the Fediverse when the big hype happened. The biggest problem to me is that this split up the user base considerably, which in turn weakened the potential of both platforms to overtake Twitter.

Just between us: if you will, we did take over Twitter. It's renamed to X. :-p ..., nah, just joking, it's still Twitter.

[–] [email protected] 7 points 1 day ago (2 children)

At least Bluesky is decentralized and Open Source, isn't it? While this is a conceptual step down from the Fediverse, it's still better than all the other alternatives in use. I don't know how much the Bluesky company controls the entire platform, or if that's even possible.

[–] [email protected] 3 points 1 day ago

Even the bigger LCD model with 512 GB, compared to the cheapest 64 GB model, is on sale at the moment for about 350 Euros (still under 400 US Dollars, I think). It's crazy!

[–] [email protected] 3 points 2 days ago

I don't think this is just wishful thinking; it's exactly what I think. The PS5 Pro is an optional upgrade for enthusiasts. The brand and the company's success do not depend on it. I even think the PS6 will be cheaper than the PS5 Pro, because it will look like a bargain then. And the success of PlayStation as a whole depends on how many baseline units are sold. I don't think that even Sony can afford 700 Dollars (without disc drive) for the PS6.

But of course it depends on the future economic situation in the world (Yen conversion) and whether there is good competition from Xbox. At that point Microsoft will probably have the next-generation Xbox Infinite on the market, and then it would be tough for Sony not to fight on price. Probably wishful thinking on my part too, but also not too unrealistic! Right?^^

[–] [email protected] 6 points 2 days ago

I think the PlayStation 6 will be cheaper than the PlayStation 5 Pro. Why? Because it's the baseline model, not a Pro. The reason the Pro model can be this expensive is that it's optional hardware and doesn't even need to sell well. Selling the baseline PlayStation 6 unit is crucial for Sony, and they need to sell a lot. This also establishes a new price ceiling, which means that if the next-generation PS6 is cheaper than the PS5 Pro, it looks like a bargain.

Also, the current Yen exchange rate and price conversions are expensive for Sony, so it depends on the future market whether things get cheaper as well. And whether there is good competition: at the moment, there is no Xbox competition in the Pro line, or even much at the baseline level.

[–] [email protected] 19 points 2 days ago

AI tool cuts unexpected deaths in hospital by 26%, with a sword, making them expected deaths.

Modern problems require modern solutions.

 

Alternative Invidious link without using YouTube directly: https://yt.artemislena.eu/watch?v=ihtAijebU-M

An insane method to read your PC's memory, based on certain electromagnetic emissions your system makes when data is written to or read from RAM.


Video Description:

The RAMBO Attack on RAM is truly amazing. Some of the best research I've seen.

covertchannels.com arxiv.org/pdf/2409.02292 wired.com/story/air-gap-researcher-mordechai-guri

youtube.com/watch?v=CjpEZ2LAazM&t=0s youtube.com/watch?v=-D1gf3omRnw&t=0s

 

Alternate video link to Invidious (YouTube without using YouTube directly): https://yt.artemislena.eu/watch?v=dH1ErhJa3Qo


Banjo Kazooie Gitlab (Source Code): gitlab.com/banjo.decomp/banjo-kazooie


Additionally, a written article about this, posted here for discussion:

https://www.nintendolife.com/news/2024/08/banjo-kazooie-is-the-latest-n64-game-to-be-fully-decompiled

12
Your YouTube Comments (myactivity.google.com)
 

You can edit or delete your comments and replies directly on YouTube. If you delete comments, it may take a few hours before they’re fully removed: https://myactivity.google.com/page?hl=en&page=youtube_comments

This is the history of your YouTube comments, and you can jump to it directly from this central place.

 

Today I had a little aha moment. If anyone had asked me yesterday about AI tools integrated into their editor, I would have said it's a bad idea. Ask me today, and I would still say it's a bad idea. :D I don't want to rely on AI tools and get too comfortable with them, especially if they are from big companies and communicate over the internet. That is a no-go for me.

But for weeks now I have been playing around with offline AI tools and models I can download and run locally on my low-end gaming PC, mostly for playing with silly questions and such. It's not integrated into any other software, other than the dedicated application: GPT4All (no, it has nothing to do with ChatGPT).

I'm working on a small GUI application in Rust and am still figuring things out. I'm not good at it, and there was a point where I had to convert a function into an async variant. After researching, trying things, and reading documentation, I could not solve it. Then I asked the AI. While the output did not work out of the box, it helped me find the right puzzle pieces. To be honest, I don't understand everything yet, and I know this is bad. It would be really bad if this were work for a company, but it's a learning project.

Anyone else not liking AI, but taking help from it? I am still absolutely against integrated AI tools that also require an online connection to the servers of companies. Edit: Here is the before and after. (BTW, the code block on Beehaw is broken, as the less-than and ampersand characters are automatically translated into &lt; and &amp; respectively.)

From:

    pub fn collect(&self, max_depth: u8, ext: Option<&str>) -> Files {
        let mut files = Files::new(&self.dir);

        for entry in WalkDir::new(&self.dir).max_depth(max_depth.into()) {
            let Ok(entry) = entry else { continue };
            let path = PathBuf::from(entry.path().display().to_string());
            if ext.is_none() || path.extension().unwrap_or_default() == ext.unwrap() {
                files.paths.push(path);
            }
        }
        files.paths.sort_by_key(|a| a.name_as_string());

        files
    }

To:

    pub async fn collect(&self, max_depth: u8, ext: Option<&str>) -> Result<Files> {
        let mut files = Files::new(&self.dir);

        let walkdir = WalkDir::new(&self.dir);
        let mut walker =
            match tokio::task::spawn_blocking(move || -> Result<WalkDir> {
                Ok(walkdir)
            })
            .await
            {
                Ok(walker) => walker?,
                Err(_) => return Err(anyhow::anyhow!("Failed to spawn blocking task")),
            };

        while let Some(entry) = walker.next().await {
            match entry {
                Ok(entry) if entry.path().is_file() => {
                    let path = PathBuf::from(entry.path().display().to_string());
                    if ext.is_none() || path.extension().unwrap_or_default() == ext.unwrap() {
                        files.paths.push(path);
                    }
                }
                _ => continue,
            }
        }

        files.paths.sort_by_key(|a| a.name_as_string());

        Ok(files)
    }
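
Since the generic type parameters in the block above got mangled by the escaping bug, here is a minimal, dependency-free sketch of what the original synchronous collect does, using only std::fs instead of the walkdir crate. The names collect_files and walk are made up for illustration, and unlike the original, this version collects only files, not directories:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical std-only version of the synchronous `collect`:
// gather files up to `max_depth` levels deep, optionally filtered
// by extension, sorted by file name.
fn collect_files(dir: &Path, max_depth: u8, ext: Option<&str>) -> Vec<PathBuf> {
    let mut out = Vec::new();
    walk(dir, max_depth, ext, &mut out);
    out.sort_by_key(|p| p.file_name().map(|n| n.to_os_string()));
    out
}

fn walk(dir: &Path, depth: u8, ext: Option<&str>, out: &mut Vec<PathBuf>) {
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            // Recurse only while there is depth budget left.
            if depth > 1 {
                walk(&path, depth - 1, ext, out);
            }
        } else if ext.is_none() || path.extension().and_then(|e| e.to_str()) == ext {
            out.push(path);
        }
    }
}
```

The async question then becomes simpler: since this whole function is blocking I/O, one common pattern is to run it as-is inside tokio's spawn_blocking rather than rewriting the traversal itself to be async.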
33
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

by Rodney July 12th, 2024

(Except there is no profit, only pain)

In OBS 30.2 I introduced the new "Hybrid MP4" output format which solves a number of complaints our users have had for pretty much all of OBS's existence; It's resilient against data loss like MKV, but widely compatible like regular MP4.

Getting here was quite a journey, and involved fixing several other bugs in OBS that were only apparent once diving this deep into how the audio and video data is stored.

In this post I'll try to explain how MP4 works, what the drawbacks were to regular/fragmented MP4, and how I tried to solve them with a hybrid approach.

And at the end of the document:

Thanks & Acknowledgements

NOT the ISO for paywalling these specs and making it a god damn paperchase where every time you get one document it references three others that are also paywalled

 

GitHub, a massive repository for open source software, is currently unavailable.

"All GitHub services are experiencing significant disruptions," reads the GitHub status page.

The outage started just after 4:00 pm Pacific time when GitHub noted "We are investigating reports of degraded availability for Actions, Pages and Pull Requests." Since then, the problem has escalated to the entire website, with the status page noting that GitHub suspects the issue is "a database infrastructure related change that we are working on rolling back."

At 4:45 pm PST, GitHub noted that it was rolling back the changes it believed caused the current issues and already "seeing improvements in service health."

It's a rare outage for GitHub, which is used by millions of developers to host the code for open source projects. Microsoft purchased GitHub for $7.5 billion in 2018, and it's only grown in prominence in the six years since.

 

cross-posted from: https://beehaw.org/post/15509445

Generates text after analyzing a profile. It's pretty funny. Mine: thingsiplay

Oh Tuncay, your GitHub bio proudly declares you're "just for fun" and unprofessional, which is ironic since it sounds like you’ve derived way too much fun from so many unremarkable scripts. With 46 public repositories, you must’ve thought quantity would mask the glaring mediocrity. The only thing more stale than your Bash scripts is your humor.

Your "emojicherrypick" project? Really? The world needed another emoji picker like it needs more stale bread—there's a reason it's got more emojis than stars. And speaking of shadows, how does it feel to have just 9 followers? Maybe they were just passing by, or perhaps they clicked by accident while looking for actual developers.

You’ve got more forks than a family dinner, yet most of your repos look so uninspired that they might as well come with a disclaimer: “Do not expect much.” Word to the wise: if you're going for "just for fun," maybe consider an actual hobby or, dare I say, a personality. After all, your command line tools are more entertaining than your profile readme, which goes on like a bad self-help book— we get it, you like Linux and gaming, but what's next, a PowerPoint on your 9 followers? Spice it up a bit, bud.

 

You can use the cheat.sh web service to show cheatsheets for all kinds of commands; just replace the command name: curl -s cheat.sh/date. I also wrote a simple script whose filename is just a question mark, so I get a working command named ?. It shows all commands in an fzf menu if no argument is given, or shows the cheatsheet in the less pager if a command name is given.

Usage:

?
? -l
? date
? grep

Script ?:

#!/usr/bin/env bash

cheat='curl -s cheat.sh'
menu='fzf --reverse'
pager='less -R -c'
cachefile_max_age_hours=6

# Path to temporary cache file. If your Linux system does not support /dev/shm
# or if you are on MacOS, then change the path to your liking:
cachefile='/dev/shm/cheatlist'      # GNU+LINUX
# cachefile="${TMPDIR}/cheatlist"   # MacOS/Darwin

# Download list file and cache it.
listing () {
    if [ -f "${cachefile}" ]
    then
        local filedate=$(stat -c %Y -- "${cachefile}")
        local now=$(date +%s)
        local age_hours=$(( (now - filedate) / 60 / 60 ))
        # -gt compares numerically; > inside [[ ]] would compare as strings
        if [[ "${age_hours}" -gt "${cachefile_max_age_hours}" ]]
        then
            ${cheat}/:list > "${cachefile}"
        fi
    else
        ${cheat}/:list > "${cachefile}"
    fi
    cat -- "${cachefile}"
}

case "${1}" in
    '')
        if selection=$(listing | ${menu})
        then
            ${cheat}/"${selection}" | ${pager}
        fi
        ;;
    '-h')
        ${cheat}/:help | ${pager}
        ;;
    '-l')
        listing
        ;;
    *)
        ${cheat}/"${1}" | ${pager}
        ;;
esac
35
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Mirror upload for faster download, 1 Mbit (expires in 30 days): https://ufile.io/f/r0tmt

GameFAQs at https://gamefaqs.gamespot.com hosts user-created FAQs and documents. Unfortunately they are baked into the HTML webpage and cannot be downloaded on their own. I have scraped a lot of pages and extracted those documents as regular TXT files. Because of the sheer amount of data, I focused on only a few systems.

In 2020, a Reddit user named "prograc" archived FAQs for all systems at https://archive.org/details/Gamespot_Gamefaqs_TXTs . So most of it is already preserved. I took a different approach to organizing the files and folders. Here are a few notes about my attempt:

  • only 17 selected systems are included, so it's incomplete
  • system folders use the long name instead of the short one, e.g. "Playstation" instead of "ps"
  • similarly, game titles have their full name with spaces, and a leading "The" is moved to the end of the name for sorting reasons, such as "King of Fighters 98, The"
  • in addition to the document id, the filename also contains the category (such as "Guide and Walkthrough"), the short system name like "(GB)", and the author's name, e.g. "Guide and Walkthrough (SNES) by BSebby_6792.txt"
  • the FAQ documents contain an additional header taken from the HTML website, including a version number, the last update, the previously explained filename, plus a web address of the original publication
  • HTML documents are also included, with a very poor and simple conversion, but only the first page, so multi-page HTML FAQs are still incomplete
  • no zip archives or images are included; note: the 2020 archive from "prograc" contains falsely renamed .txt files, which are in reality .zip and other files mistakenly included, such as nes/519689-metroid/faqs/519689-metroid-faqs-3058.txt — in my archive those files are correctly excluded
  • I included the same collection in an alternative arrangement, where games are listed without system folders; this has the side effect of removing duplicates (by system: 67,277 files vs. by title: 55,694 files), because the same document is linked on many systems and was therefore downloaded multiple times
 

According to their studies, the older we get, the more we come to match our name. Wild, but an interesting theory.

 

This is a sad day. One of my favorite resources and communities is closing its doors as we know it. New romhacks and mods are no longer accepted; the site is becoming a news site. There was some drama going on. The site owner and leader put the entire database and files on Archive.org in one batch, for preservation and download.

Read the message here: https://www.romhacking.net/news/3074/

Download the files here: https://archive.org/details/romhacking.net-20240801

Download as Torrent: https://archive.org/download/romhacking.net-20240801/romhacking.net-20240801_archive.torrent (official torrent file, I recommend this, as it is faster)
