WalnutLum

joined 9 months ago
[–] [email protected] 1 points 1 day ago

You should play Voices of the Void, then. The game is chock-full of random spooks amid lots of very quiet, relaxing downtime, so they hit pretty hard when they happen.

[–] [email protected] 6 points 4 weeks ago

I had to correct someone who got a bunch of upvotes about this the other day.

These increases are permanent. They're not going to average out at +2 °C and then peter back down.

[–] [email protected] 8 points 1 month ago (2 children)

Firefly needs to hurry up and make a human-rated capsule instead of cargo fairings.

I have high hopes for a company that can set up a rocket almost from scratch in 24 hours.

[–] [email protected] 1 points 1 month ago

I hope Shepard gets a mention in this series eventually

[–] [email protected] 1 points 1 month ago

I was never able to get appreciably better results from ElevenLabs than from a (minorly) trained RVC model :/ The long-scripts problem is something pretty much any text-to-something model suffers from: the longer the context, the lower the cohesion ends up.

I do rotoscoping with SDXL i2i and ControlNet posing together. Without it, I found the output tends to smear. Do you just do image2image?

[–] [email protected] 1 points 1 month ago (2 children)

Coqui for TTS, RVC UI for matching the TTS to the actor's intonation, and DWPose → ControlNet applied to SDXL for rotoscoping
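In case it helps, here's the shape of that data flow as a sketch. These are hypothetical stand-in functions, not the real Coqui/RVC/DWPose/SDXL APIs; the point is just the order in which each stage feeds the next:

```python
# Hypothetical stand-ins for each stage -- NOT the projects' real APIs,
# just the shape and order of the pipeline.
def coqui_tts(script: str) -> bytes:
    """Coqui: script text -> synthetic speech audio."""
    return b"tts:" + script.encode()

def rvc_convert(audio: bytes, actor_take: bytes) -> bytes:
    """RVC: re-voice the TTS audio to match the actor's intonation."""
    return b"rvc:" + audio + b"|" + actor_take

def dwpose(frame: bytes) -> bytes:
    """DWPose: source frame -> pose skeleton for conditioning."""
    return b"pose:" + frame

def sdxl_i2i_controlnet(frame: bytes, pose: bytes) -> bytes:
    """SDXL image2image, conditioned on the pose via ControlNet."""
    return b"sdxl:" + frame + b"|" + pose

def rotoscope(frames: list, script: str, actor_take: bytes):
    """Audio path and video path are independent until final muxing."""
    audio = rvc_convert(coqui_tts(script), actor_take)
    video = [sdxl_i2i_controlnet(f, dwpose(f)) for f in frames]
    return video, audio
```

The pose extraction is per-frame, which is what keeps the i2i pass from smearing between frames.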

[–] [email protected] 0 points 1 month ago (4 children)

All the models I've used that do TTS/RVC and rotoscoping have definitely not produced professional results.

[–] [email protected] 5 points 1 month ago (1 children)

Isn't the emperor's tower and all the surface guns oriented toward the second option?

Seems like it's a little of both

[–] [email protected] 11 points 2 months ago

The OSI just published a result of some of the discussions around their upcoming Open Source AI Definition. It seems like a good idea to read it and see some of the issues they're trying to work around...

https://opensource.org/blog/explaining-the-concept-of-data-information

[–] [email protected] 1 points 3 months ago

Yes, of course; there's nothing gestalt about model training: fixed inputs result in fixed outputs.

[–] [email protected] 7 points 3 months ago (1 children)

I suppose the importance of the openness of the training data depends on your view of what a model is doing.

If you feel like a model is more like a media file that the model loaders are playing back, where the prompt is more of a control over how you access that model, then yes, I suppose from a trustworthiness aspect there's not much to be gained from the model's training corpus being open.

I see models more in terms of how any other text encoder or serializer works, as if you were, say, manually encoding text. While there is a very low chance of any "malicious code" being executed, the importance lies in the fact that you can check your expectations about how your inputs are being encoded against what the provider is telling you.

As an example attack vector, much like with any malicious replacement technique: if I were to download a pre-trained model from what I thought was a reputable source but was man-in-the-middled and provided with a maliciously trained model, suddenly the system relying on that model is compromised in terms of its expected text output. Obviously that exact problem could be fixed with some hash checking, but I hope you see that in some cases even that wouldn't be enough (such as a malicious "official" provenance).

As these models become more prevalent, being able to guarantee integrity will become more and more of an issue.
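A minimal sketch of that hash-checking mitigation, assuming you obtain the publisher's SHA-256 digest over a separate trusted channel (function names here are just illustrative):

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files never need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, published_hex: str) -> bool:
    """Compare the local digest to the publisher's advertised one (constant-time compare)."""
    return hmac.compare_digest(sha256_of_file(path), published_hex.lower())
```

As noted, this only helps if the published hash itself is trustworthy; it does nothing against a compromised "official" source publishing the hash of its own malicious weights.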

[–] [email protected] 3 points 3 months ago (2 children)

I've seen this said multiple times, but I'm not sure where the idea that model training is inherently non-deterministic is coming from. I've trained a few very tiny models deterministically before...
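To illustrate with a toy case: plain full-batch gradient descent, pure Python, no framework. With a seeded initializer, a fixed data order, and no parallel reductions, the whole run is a pure function of the seed (everything here is illustrative, not any particular library's API):

```python
import random

def train(seed: int, steps: int = 100, lr: float = 0.1) -> float:
    """Fit y = 2x with a single weight via full-batch gradient descent.

    Seeded init + fixed data order + serial accumulation means two runs
    with the same seed produce bit-identical weights.
    """
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)  # seeded initialization
    data = [(x, 2.0 * x) for x in (0.0, 0.5, 1.0, 1.5)]
    for _ in range(steps):
        # d/dw of mean squared error over the (fixed-order) batch
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Same seed -> bit-identical result, run after run.
assert train(42) == train(42)
```

The non-determinism people run into at scale usually comes from elsewhere: unordered floating-point reductions on GPUs, non-deterministic kernels, data-loader shuffling. None of that is inherent to training itself.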
