Replies: 4 comments
-
I played with this a bit; the best I could do is perlin2 going from 13ns to 28ns. Everything else got approximately the same slowdown. I don't think this is worth it.

Edit: Oops, just noticed one of my attempts actually got 23ns. I still don't think it's worth it, though.

Do you know how to compare the quality of the noise? Some time ago I tried using FFTs to generate images like the ones shown in the paper, but all of mine apparently had horrible artifacts, even for stock noise implementations rather than our tweaked ones. That makes me think I wasn't doing something right. If we can get actual objective comparisons of noise quality, we can start experimenting with alternatives to find a good time/quality trade-off.
-
I would try this link for 2D FFT analysis: https://www.cs.unm.edu/~brayer/vision/fourier.html I'd also use the […]

If TEA results in the elimination of directional bias in the generated results, I'd consider that to be worth the ~2x slowdown. As a second option, adjusting the generation modules to take a […]
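The FFT check suggested above can be prototyped without any image tooling. A minimal sketch (hypothetical helper, not from the codebase): a naive O(N^4) 2D DFT power spectrum, assuming a row-major grayscale noise image. Directional bias in the noise shows up as bright spokes or lines in the spectrum; a real FFT crate would be needed for non-trivial image sizes.

```rust
/// Naive 2D DFT power spectrum of an n-by-n row-major image.
/// O(n^4) -- fine for small test images only.
fn power_spectrum(img: &[f64], n: usize) -> Vec<f64> {
    use std::f64::consts::PI;
    let mut spec = vec![0.0; n * n];
    for u in 0..n {
        for v in 0..n {
            let (mut re, mut im) = (0.0, 0.0);
            for y in 0..n {
                for x in 0..n {
                    // Standard DFT basis: exp(-2*pi*i*(u*x + v*y)/n).
                    let phase = -2.0 * PI * ((u * x + v * y) as f64) / n as f64;
                    re += img[y * n + x] * phase.cos();
                    im += img[y * n + x] * phase.sin();
                }
            }
            spec[u * n + v] = re * re + im * im;
        }
    }
    spec
}
```

A constant image should put all energy at the DC bin (spec[0]); an unbiased noise image should produce a roughly isotropic spectrum away from DC.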
-
FYI, permutation tables can themselves be improved by duplicating the gradients within the gradient table with a slight offset. So instead of 32 unique gradients, use 4 copies of 8 gradients, arranged like so
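One way this arrangement could look, as a purely illustrative sketch (the size of the per-copy offset is an assumption, not taken from the comment): 8 evenly spaced base directions, repeated 4 times with a small rotation applied to each copy so the 32 table entries are not exact duplicates.

```rust
use std::f64::consts::PI;

/// Build a 32-entry gradient table: 4 copies of 8 base directions,
/// each copy rotated by a small offset (offset size is illustrative).
fn build_gradients() -> Vec<(f64, f64)> {
    let mut grads = Vec::with_capacity(32);
    for copy in 0..4 {
        // Hypothetical per-copy rotation; tune to taste.
        let offset = copy as f64 * (PI / 64.0);
        for i in 0..8 {
            let theta = i as f64 * (PI / 4.0) + offset;
            grads.push((theta.cos(), theta.sin()));
        }
    }
    grads
}
```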
-
You don't need a cryptographic hashing algorithm; TEA and XTEA are worthless here if you are actually running the full round loop inside them (it costs way too much, and we don't care about cryptography). You don't need an actual PRNG either: you need an avalanche mixer. This is the kind of material to look at when choosing one: https://research.neustar.biz/2012/02/02/choosing-a-good-hash-function-part-3/. They're hard to find, however, since (a) it takes a lot of experimentation to get them working, and (b) they are typically part of another algorithm, so you have to dissect them out of someone else's PRNG. There was a post about this a while back on SO; here is the kind of thing I'm talking about: https://stackoverflow.com/questions/45218741/how-to-create-a-custom-murmur-avalanche-mixer

I've had success with Murmur3-style avalanche mixers for index hashing; they're about as fast as you can get, with some of the best avalanche properties I could find. PRNGs typically don't mix well, since they have bad bit-aliasing issues: given two inputs with very similar bit patterns (e.g. i and i+1, something you'll see a lot when indexing in noise algorithms), they will output highly correlated results. This is the bit-bias effect you see when you use an LCG or xorshift with Perlin noise. Avalanche mixing prevents it by making the outputs uncorrelated regardless of how similar the inputs are. An example of a 2D (or 4D, if supplied with 32-bit numbers) avalanche mixer that produces a 64-bit number is the Hash128to64 function from the SO post.
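A sketch of that mixer, assuming the CityHash-style Hash128to64 construction the SO post is based on (its multiplier matches the prime constant discussed below):

```rust
/// CityHash-style Hash128to64: mixes two 64-bit words into one
/// well-avalanched 64-bit result via xor, multiply, and shift steps.
fn hash128_to_64(lo: u64, hi: u64) -> u64 {
    const K_MUL: u64 = 0x9ddf_ea08_eb38_2d69; // large prime with irregular bits
    let mut a = (lo ^ hi).wrapping_mul(K_MUL);
    a ^= a >> 47;
    let mut b = (hi ^ a).wrapping_mul(K_MUL);
    b ^= b >> 47;
    b.wrapping_mul(K_MUL)
}
```

For noise indexing you'd feed the (possibly sign-extended) lattice coordinates in as `lo` and `hi`. Note the all-zero input maps to zero, so you may want to xor in a seed first.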
Murmur-style hashing typically involves a multiplier that is a large prime (in this case 0x9ddfea08eb382d69; convert it to decimal and put it into Wolfram Alpha if you don't believe me) whose bit values vary randomly. That is, don't use primes like 0xFFFFFFFFFFFFFFC5, which are prime but whose bit patterns are highly regular. You then successively xor, multiply, then xor-and-shift with that same number. The shift amount is important as well; it typically ends up being some function of the total number of input bits, but I'm unsure exactly what, so I would just experiment with it.

To test the "randomness" of the avalanche mixer, you can supersample the gradient grid and see whether visual artifacts appear. A Murmur-style avalanche mixer should appear uncorrelated even up to 2^(number of input bits) total index points generated at once (so a 2^64 by 2^64 grid for the above Hash128to64).

You can then do one of two things with this: use it to index into a fixed set of gradient values, or generate the gradients on the fly. In my testing I found very little difference between on-the-fly gradient generation and a fixed gradient set in the 2D case (though I may have made some mistake), and because on-the-fly generation uses sin() and cos() it ends up being a whole lot slower than just choosing from 4 to 8 gradient values. In the 2D case, instead of generating the x and y values from a single number (a theta value that is then passed to sin and cos to give you your gradient direction), you can have your hash represent either the x or the y value of your gradient, ranging from -1 to 1. Now you need to generate the other value, and on the unit circle, since we know that […]
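The sentence above is cut off, but the likely continuation is the unit-circle identity x^2 + y^2 = 1, which gives y = ±sqrt(1 - x^2) with one hash bit choosing the sign. A hypothetical sketch of that on-the-fly gradient idea (not the library's actual code; note the resulting directions are not uniformly distributed in angle):

```rust
/// Derive a 2D unit-length gradient from a 64-bit hash without sin/cos.
fn gradient_from_hash(h: u64) -> (f64, f64) {
    // Map the low 32 bits of the hash to x in [-1, 1].
    let x = (h as u32) as f64 / (u32::MAX as f64) * 2.0 - 1.0;
    // y follows from x^2 + y^2 = 1; the top hash bit picks the sign.
    // max(0.0) guards against tiny negative values from float rounding.
    let y = (1.0 - x * x).max(0.0).sqrt() * if h >> 63 == 0 { 1.0 } else { -1.0 };
    (x, y)
}
```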
-
https://www.csee.umbc.edu/~olano/papers/GPUTEA.pdf

As covered in the paper linked above, positions could be hashed using TEA or XTEA as a hasher instead of a lookup through PermutationTable.

The pro of this would be improved noise quality, by eliminating the directional bias of the existing PermutationTable hash method.

A downside would be a possible reduction in generation speed, due to the calculations involved compared to a fairly simple multiple-lookup in PermutationTable. This would need benchmarking to see how much of an impact there would be.
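For reference, a sketch of what hashing a 2D lattice position with TEA could look like, assuming the standard TEA round function; the key and round count here are illustrative placeholders (the linked paper's point is that a reduced number of rounds can already suffice for noise), not values taken from the paper.

```rust
/// Hash a 2D lattice position with TEA rounds (illustrative sketch).
/// Each round is invertible, so distinct (x, y) always hash distinctly.
fn tea_hash(x: u32, y: u32, rounds: u32) -> u64 {
    const DELTA: u32 = 0x9E37_79B9;
    // Arbitrary fixed key (assumed); it effectively acts as the seed.
    const K: [u32; 4] = [0xA341_316C, 0xC801_3EA4, 0xAD90_777D, 0x7E95_761E];
    let (mut v0, mut v1, mut sum) = (x, y, 0u32);
    for _ in 0..rounds {
        sum = sum.wrapping_add(DELTA);
        v0 = v0.wrapping_add(
            (v1 << 4).wrapping_add(K[0]) ^ v1.wrapping_add(sum) ^ (v1 >> 5).wrapping_add(K[1]),
        );
        v1 = v1.wrapping_add(
            (v0 << 4).wrapping_add(K[2]) ^ v0.wrapping_add(sum) ^ (v0 >> 5).wrapping_add(K[3]),
        );
    }
    ((v0 as u64) << 32) | v1 as u64
}
```

Benchmarking would then come down to how small `rounds` can go before directional artifacts reappear.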