
Double to float C++


Is this true? …

In this paper, we will be quantizing the kinematic and gravitational time dilation by casting them to lesser-precision floating-point numbers.

The non-exponent bit count $n$ is the number of mantissa bits $m$ plus one sign bit.

We generally used $n = 100$, except for the kinematic and gravitational time dilation, which use a lesser, varying precision (e.g. $n = 24$).

For subnormal numbers such as those used here, the smallest step size that can be represented is $\epsilon = 2 \times 2^{-b}$, where $b$ is the exponent bias (e.g. $2^7 - 1 = 127$ for single-precision floating-point numbers, $2^{10} - 1 = 1023$ for doubles); for $b = 127$ this gives $\epsilon = 2^{-126} \approx 1.18 \times 10^{-38}$.

For instance, we use $b = 1023$ where $n = 100$, and $b = 127$ otherwise (e.g. where $n = 24$).

As for $m$, it governs how many places there are after the radix (binary) point.
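A minimal sketch of this kind of quantization, assuming a plain round trip through a lower-precision type (here binary64 → binary32, i.e. $n = 24$; the value is made up for illustration):

#include <iostream>

int main()
{
	// Quantize a double (n = 53 non-exponent bits) by casting it
	// down to a float (n = 24 non-exponent bits) and back.
	const double dilation = 0.9999999999998765; // hypothetical time dilation factor
	const float quantized = static_cast<float>(dilation);
	std::cout << dilation << " -> " << static_cast<double>(quantized) << "\n";
}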

I stole your bit-twiddling ideas. See Figs. 1 and 2:

https://github.com/sjhalayka/quantum_path_decomposition/blob/main/bezier_escape.pdf


Haha, well… I did some bit twiddling too - the ‘serious’ kind ; )

I have moved all my geometry processing data to 10 or 16 bits to save memory.
The positions in the image are 3 x 10 bits packed into a single int32, vs. the former vec3 needing 4 floats for alignment.
A common trick for making assets smaller.
But it also works for processing the geometry. I do it for everything: positions, normals, cross and wave fields.
It caused me some issues like zero normals, but no real loss of quality. : )
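For illustration, the 3 x 10 bit packing could look like this sketch, assuming positions normalized to [0, 1] within local bounds (names hypothetical):

#include <cstdint>
#include <cmath>

// Pack a position with components in [0, 1] into 3 x 10 bits of an int32.
uint32_t PackPos10 (const float x, const float y, const float z)
{
	const uint32_t xi = uint32_t(std::round(x * 1023.f));
	const uint32_t yi = uint32_t(std::round(y * 1023.f));
	const uint32_t zi = uint32_t(std::round(z * 1023.f));
	return xi | (yi << 10) | (zi << 20);
}

// Unpack back to floats in [0, 1].
void UnpackPos10 (const uint32_t p, float &x, float &y, float &z)
{
	x = float( p        & 1023u) / 1023.f;
	y = float((p >> 10) & 1023u) / 1023.f;
	z = float((p >> 20) & 1023u) / 1023.f;
}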

Looks awesome!

So, I'm baffled by the power of quantization too now.

Wavefield with all quantizations off:

Noisy, but it worked well enough.

All quantizations on:

Why is it better with LESS precision ???

And why is there this weird diagonal? It caused me some artifacts, which is how I noticed this at all.

(The yellow means a lower magnitude btw, where the field is weakly defined.)

Anyway. Now I need to investigate… which sucks. Why can nothing ever just work?

Weird, man. You should try out Boost.Multiprecision!


OK, do I have this right?

numeric_limits<… >::min() gives the minimum positive normalized number (i.e. the first normalized value above zero; the smallest subnormal is given by denorm_min()).

numeric_limits<… >::epsilon() gives the machine epsilon (i.e. the difference between one and the first representable value above one).
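For reference, a quick check, assuming IEEE-754 floats (printed values are approximate):

#include <iostream>
#include <limits>

int main()
{
	// Smallest positive normalized float vs. smallest subnormal float:
	std::cout << std::numeric_limits<float>::min()        << "\n"; // ~1.17549e-38
	std::cout << std::numeric_limits<float>::denorm_min() << "\n"; // ~1.40130e-45
	// Gap between 1.0f and the next representable value above it:
	std::cout << std::numeric_limits<float>::epsilon()    << "\n"; // ~1.19209e-07
}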

P.S. What is the green stuff in your pictures?

taby said:
P.S. What is the green stuff in your pictures?

I want a regular mesh, so each polygon has the same size and is ideally a quad.
To do this, I first calculate a curvature-aligned crossfield:

You can see those crosses are aligned to the curvature of the cube edges, and they also want to align to adjacent crosses, so the field is smooth.
You can see it works even though the initial isosurface geometry is noisy and low quality, because the cube does not align with the global voxel grid.
The crossfield gives the orientation of the edges the final remesh should have, but not yet the size of the polygons.
For that I use the green wavefield. Each of the two lines of a cross is equipped with a sine wave, and the lines are linked with adjacent crosses, picking the collinear line, not the perpendicular one.
If you put a piece of chalk on a wheel and roll it, the chalk draws a sine wave which corresponds to the distance the wheel has traveled. So I have wheels rolling from one vertex to the next, following the lines of the crossfield, drawing a wave field with the chalk. Using the interference of the two wheels crossing any cross, I can then use the local maxima of the sine wave to tell where a remesh vertex should appear.

So what you see is basically an interference pattern of regular, orthogonal waves. I use iterative solvers to match the waves to adjacent vertices, so it tends to generate a regular grid. But due to curvature the space can shrink or expand, which causes the triangulation to insert or remove edges where needed.
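A minimal sketch of the rolling-wheel idea, assuming the wave is stored as a unit complex number (as in the extWavefield code below; RollWave and its parameters are hypothetical):

#include <complex>

// Rolling the wheel a distance d at frequency f just rotates the
// phase of the wave, stored as a unit complex number.
std::complex<float> RollWave (const std::complex<float> wave, const float d, const float f)
{
	const float phase = 2.f * 3.14159265f * f * d; // distance traveled -> phase advance
	return wave * std::polar(1.f, phase);          // rotate by that phase
}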
The classic example is the 8 singular vertices of a tessellated cube deformed into a sphere. At those vertices 3 quads meet, not 4 like everywhere else:

You can also see those spots as yellow regions in my first image above. Because my algorithm minimizes distortion, it automatically generates those 8 spots at the expected positions of the cube corners. This is actually a good test for crossfield solvers, as bad ones would generate the singularities at random positions, causing more distortion.

Those singularities are as odd as black holes. They curve the parametric space. If two motorcycles start beside each other and drive parallel to the edge grid, and one passes the singularity on the left but the other on the right, their paths become perpendicular to each other, no longer parallel. Although both drive straight ahead, they would cross each other's path.
If they hit the singularity exactly, they would get stuck at it forever, because our singularity is a sink.

But there can also be singularities which act like sources: those with 5 or more quads touching them.
If a driver hits one of those, he is cloned, and multiple copies of him drive off in multiple directions, like in parallel universes.
So maybe white holes - if they exist - would need to generate many worlds or parallel universes.

Sometime soon I'll start work on a 2D fluid sim on the surface of my 3D meshes. And fluid sim is all about preventing sinks and sources, so that's an interesting problem. I need to flatten curved space locally, using distortion and scaling to tame the singularity paradox. Maybe it will refine my philosophy about the universe… :D

Did you steal that idea from Perelman? :P

taby said:
Did you steal that idea from Perelman? :P

Would YOU try to rob a guy who rejects a million dollars? :D

Well, it turned out the noise in my images was a visualization bug.

But the diagonals were actually due to a subtle compiler issue.
Here is the fix using a ‘temp’ array:

std::complex<float> wave[2] = {0.f, 0.f};
// Unpack into a real std::complex<float> array first, instead of
// passing a pointer into the quantized storage:
std::complex<float> temp[2] = {aCell->extWavefield[aVI*2+0], aCell->extWavefield[aVI*2+1]};
WaveField::TransportCrossWave (wave, cNorm, dstCFbasisVectors, parentCross,
	temp, //&aCell->extWavefield[aVI*2], // why accepted by MSVC although wrong type?
	dstPos, srcPos, 0); //elementSize);

extWavefield was formerly a std::vector of complex numbers, and the function takes a pointer to complex numbers.
Here is the signature, see srcWF:

void WaveField::TransportCrossWave (std::complex<float> transportedWave[2],
			const vec &dstNormal, const vec dstCFbasisVectors[2], 
			const vec &alignedSrcCF, const std::complex<float> *srcWF,
			const vec &dstPos, const vec &srcPos, const float scale)

With the quantization changes, extWavefield is no longer a std::vector, but a custom struct which overloads the [] operator, doing the unpacking of the quantized data on demand:

	struct CpComplex32
	{
		//...
		// Unpacks the quantized data on demand. Note this returns a
		// temporary by value, not a reference into the storage.
		const std::complex<float> operator [] (const int i) const
		{
			return QuantizedToReal(quantizedComplex[i]);
		}
	};
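For illustration, a guess at what the on-demand unpacking might look like, assuming 16 bits per component mapped linearly to [-1, 1] (the actual QuantizedToReal and storage layout are not shown here):

#include <complex>
#include <cstdint>

// Hypothetical: two 16-bit signed components per 32-bit word.
std::complex<float> QuantizedToReal (const uint32_t q)
{
	const int16_t re = int16_t( q        & 0xffffu);
	const int16_t im = int16_t((q >> 16) & 0xffffu);
	return {float(re) / 32767.f, float(im) / 32767.f};
}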

MSVC accepts taking a pointer to the returned temporary, but then only the first element accessed by the function is right. The second is random bits.
IMO they should not do this. It should generate a compile-time error.

Clang gives an error as expected.

Initially it always feels annoying when compilers are stricter. But in the long run it's always better. I would have found the bug instantly instead of after many hours.
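For reference, a minimal sketch of the issue with hypothetical names. Taking the address of a class rvalue is ill-formed C++, but MSVC accepts it as a nonstandard extension (typically reported as warning C4238), while Clang and GCC reject it:

#include <complex>

struct Packed
{
	// Returns by value: the result is a temporary, not a reference into storage.
	std::complex<float> operator [] (const int) const { return {0.f, 0.f}; }
};

void Consume (const std::complex<float> *srcWF); // reads srcWF[0] and srcWF[1]

void Broken (const Packed &p)
{
	Consume (&p[0]); // address of a temporary: MSVC extension, error elsewhere
}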

