person594

joined 1 year ago
[–] [email protected] 4 points 11 months ago

That isn't really the case. While many neural network implementations make nondeterministic optimizations, floating-point arithmetic is in principle entirely deterministic, and it isn't too hard to get a neural network to run deterministically when needed. That makes them perfectly applicable to lossless compression, which is what is done in this article.
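To illustrate the point, here is a minimal sketch (assuming PyTorch, which the article does not necessarily use) of forcing deterministic inference. Bit-identical probability estimates on the encoder and decoder side are exactly what a neural-network-driven lossless compressor needs; the model and layer sizes below are arbitrary placeholders.

```python
import torch

torch.manual_seed(0)                      # fixed weight initialization
torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic kernels

# Hypothetical toy model; any network producing a probability distribution works.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 256),
)
model.eval()

x = torch.zeros(1, 16)
with torch.no_grad():
    probs_a = torch.softmax(model(x), dim=-1)
    probs_b = torch.softmax(model(x), dim=-1)

# Same weights, same input, deterministic kernels -> bit-identical outputs,
# so both the compressor and the decompressor can feed them to an entropy coder.
assert torch.equal(probs_a, probs_b)
```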

[–] [email protected] 0 points 1 year ago (1 children)

Let's just outlaw racism too while we're at it!

[–] [email protected] 2 points 1 year ago (1 children)

Have you tried looking between two Casimir plates?