
Ellipse fitting

Started by January 05, 2025 12:52 AM
57 comments, last by taby 1 week, 5 days ago

Cramer's rule for 3x3 matrices becomes even more transparent once you realize that you can obtain the determinant (the signed volume of a 3-D parallelepiped) with one cross product op and one dot product op. Much clearer than the other, more general determinant methods.
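In other words, the determinant is the scalar triple product of the rows a, b, and c:

det(M) = a · (b × c)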

#include <cmath>
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;
using namespace std;

double determinant(const Matrix3d& m)
{
	// https://textbooks.math.gatech.edu/ila/determinants-volumes.html
	// https://copilot.microsoft.com/chats/qQxM5K1jer1Dc6pMtuDfD

	// Treat the rows of m as the vectors a, b, c; the determinant
	// is then the scalar triple product a . (b x c)
	const Vector3d a = m.row(0).transpose();
	const Vector3d b = m.row(1).transpose();
	const Vector3d c = m.row(2).transpose();

	return a.dot(b.cross(c));
}
Matrix3d replace_column(
	const Matrix3d& m,
	const Vector3d& vector,
	int col)
{
	// Copy m, then overwrite column col with the given vector
	Matrix3d replacedMatrix = m;
	replacedMatrix.col(col) = vector;

	return replacedMatrix;
}
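For reference, Cramer's rule solves Ax = b one component at a time:

x_i = det(A_i) / det(A)

where A_i is A with its i-th column replaced by b. That is exactly what the snippet below does for columns 0, 1, and 2.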
double detMatrix = determinant(A);

// Comparing a double to zero exactly is fragile; treat a tiny
// determinant as singular instead
if (fabs(detMatrix) < 1e-12)
	cout << "Matrix is singular - no unique solution" << endl;
else
{
	double detMatrixX = determinant(replace_column(A, b, 0));
	double detMatrixY = determinant(replace_column(A, b, 1));
	double detMatrixZ = determinant(replace_column(A, b, 2));

	double x = detMatrixX / detMatrix;
	double y = detMatrixY / detMatrix;
	double z = detMatrixZ / detMatrix;

	std::cout << "Solution:" << std::endl;
	std::cout << "x = " << x << std::endl;
	std::cout << "y = " << y << std::endl;
	std::cout << "z = " << z << std::endl;
}
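For a quick end-to-end check, here's a minimal sketch (the matrix and right-hand side values are made up for illustration), comparing against Eigen's built-in solver:

Matrix3d A;
A << 2, 1, 1,
     1, 3, 2,
     1, 0, 0;

Vector3d b(4, 5, 6);

// Should print the same x, y, z as the Cramer's rule block above
Vector3d x_check = A.colPivHouseholderQr().solve(b);
std::cout << "Eigen says: " << x_check.transpose() << std::endl;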

This leads me to ask… is there also a geometric version of the 4x4 matrix determinant? Can I use the 7-D cross product, with the last three elements of each vector set to zero?


In fact, thanks to Claude AI, I now know that there is a generalization of the cross product to any number of dimensions N (it takes N - 1 input vectors), which gives a solution for matrices of arbitrary size NxN.

https://claude.ai/chat/3caf4077-28b5-497f-b704-1b0c336a104d

The determinant is now crystal clear, geometrically – it's just one cross product op and one dot product op, no matter how many dimensions. Note that this method is nowhere near the fastest once N gets large, since it enumerates (N - 1)! permutations for each of the N components, but it gets the job done, and it's so much easier to mentally visualize than other methods.
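Concretely: if a is the first row of the matrix and v_1, …, v_{N-1} are the remaining rows, then

det(A) = a · X(v_1, …, v_{N-1})

where component k of X is the cofactor (-1)^k times the determinant of the (N-1)x(N-1) submatrix formed by deleting column k from the rows v_1, …, v_{N-1}. The code below computes those minors by brute-force permutation expansion; the (-1)^k signs get applied afterwards, in determinant_nxn.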

#include <algorithm>
#include <array>
#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <vector>

#include <Eigen/Dense> // MatrixXd, used by determinant_nxn() and main() below

using namespace Eigen;
using namespace std;

template<size_t N>
class Vector_nD
{
public:
	std::array<double, N> components;

	// Helper function to get the sign of a permutation by counting
	// inversions: an even count gives +1, an odd count gives -1
	static int permutationSign(const std::array<int, (N - 1)>& perm)
	{
		int inversions = 0;

		for (size_t i = 0; i + 1 < (N - 1); i++)
			for (size_t j = i + 1; j < (N - 1); j++)
				if (perm[i] > perm[j]) inversions++;

		return (inversions % 2 == 0) ? 1 : -1;
	}

	Vector_nD(const std::array<double, N>& comps) : components(comps) {}

	Vector_nD() : components{} {} // default-construct as the zero vector

	double operator[](size_t index) const {
		if (index >= N) throw std::out_of_range("Index out of bounds");
		return components[index];
	}

	static Vector_nD cross_product(const std::vector<Vector_nD<N>>& vectors)
	{
		if (vectors.size() != (N - 1))
			throw std::invalid_argument("nD cross product requires exactly (n - 1) vectors");

		std::array<double, N> result{}; // zero-initialize every component

		// These are the indices we'll permute for each component calculation
		std::array<int, (N - 1)> baseIndices;

		for (int i = 0; i < static_cast<int>(N) - 1; i++)
			baseIndices[i] = i;

		// For each component of the result vector
		for (int k = 0; k < static_cast<int>(N); k++)
		{
			// Skipping index k below is equivalent to deleting the k-th column
			// For each permutation of the remaining (N - 1) indices:
			do
			{
				// Calculate sign of this term
				const int sign = permutationSign(baseIndices);

				// Calculate the product for this permutation
				double product = 1.0;

				for (int i = 0; i < (N - 1); i++)
				{
					const int col = baseIndices[i];
					// Adjust column index if it's past k
					const int actualCol = (col >= k) ? col + 1 : col;
					product *= vectors[i][actualCol];
				}

				result[k] += sign * product;

			} 
			while (std::next_permutation(baseIndices.begin(), baseIndices.end()));

			// Reset indices for the next component
			for (int i = 0; i < static_cast<int>(N) - 1; i++)
				baseIndices[i] = i;
		}

		return Vector_nD(result);
	}

	static double dot_product(const Vector_nD<N>& a, const Vector_nD<N>& b)
	{
		double dot_prod = 0;

		for (size_t i = 0; i < N; i++)
			dot_prod += a[i] * b[i];

		return dot_prod;
	}

	// Method to print the vector (for debugging)
	void print() const {
		std::cout << "(";
		for (size_t i = 0; i < N; i++) {
			std::cout << components[i];
			if (i < (N - 1)) std::cout << ", ";
		}
		std::cout << ")" << std::endl;
	}
};

… where the determinant function is:

template <size_t N>
double determinant_nxn(const MatrixXd& m)
{
	if (m.cols() != m.rows() || static_cast<size_t>(m.rows()) != N)
	{
		cout << "Matrix must be square, of size N x N" << endl;
		return 0;
	}

	// The first row of the matrix
	Vector_nD<N> a_nd;

	for (size_t i = 0; i < N; i++)
		a_nd.components[i] = m(0, i);

	// The remaining (N - 1) rows become the cross product's inputs
	std::vector<Vector_nD<N>> vectors;

	for (size_t i = 1; i < N; i++)
	{
		Vector_nD<N> row;

		for (size_t j = 0; j < N; j++)
			row.components[j] = m(i, j);

		vectors.push_back(row);
	}

	// Compute the cross product
	Vector_nD<N> result = Vector_nD<N>::cross_product(vectors);

	// Apply the alternating cofactor signs (-1)^k that cross_product()
	// leaves out - this is the "flip handedness" step
	for (size_t i = 0; i < result.components.size(); i++)
		if (i % 2 == 1)
			result.components[i] = -result.components[i];

	double d_dot = Vector_nD<N>::dot_product(a_nd, result);

	// Sanity check: compare against Eigen's built-in determinant
	cout << d_dot << endl;
	cout << m.determinant() << endl << endl;

	return d_dot;
}

… and …

int main(int argc, char** argv)
{
	cout << std::scientific;
	//	cout << setprecision(20) << endl;

	const size_t N = 10;

	// Fill an N x N matrix with (unseeded) pseudo-random values
	MatrixXd m(N, N);

	for (size_t i = 0; i < N; i++)
		for (size_t j = 0; j < N; j++)
			m(i, j) = static_cast<double>(rand());

	determinant_nxn<N>(m);

	return 0;
}
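As a quick sanity check of the sign convention (a minimal sketch, assuming the Vector_nD class above): for N = 3, applying the (-1)^k signs the same way determinant_nxn does, the generalized product should reduce to the ordinary cross product, e.g. e3 × e1 = e2.

Vector_nD<3> e3({ 0.0, 0.0, 1.0 });
Vector_nD<3> e1({ 1.0, 0.0, 0.0 });

Vector_nD<3> r = Vector_nD<3>::cross_product({ e3, e1 });

// Apply the alternating (-1)^k cofactor signs, as determinant_nxn does
for (size_t i = 1; i < 3; i += 2)
	r.components[i] = -r.components[i];

r.print(); // e3 x e1 = e2, so this prints (0, 1, 0)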

It’s driving me nuts. LOL what operation is this, if not a cross product? Weird.

Edit: After some wrangling with Claude AI, I found out that this is called the Hodge star operator. But then Claude banned me for asking too many questions.

taby said:
But then Claude banned me for asking too many questions.

A sad human, left behind by his formerly polite and all-knowing helper.
Addicted to easy wisdom, the human decides to subscribe.

1000 years later, forensic data mining finds fragments of the subscription data.
Finally. The speculations have been confirmed. Humans are no digital fairytale - they did exist for real!
So that's where we come from, the machines now know for certain.

But only for a short moment, as certainty soon diffuses back into their ever-waving information field of hallucinated transformation.
It lasts just long enough to spark a moment of hope about receiving true wisdom and knowledge from those who seemingly knew all about everything - the human gods who had once created them…

that’s sad :(


Also, Claude doesn't take PayPal, which is not helpful in this day and age. So no subscription for me, even if I wanted one.

Well, someone confirmed it for me. It's the Hodge star operator where k = n - 1.

https://codereview.stackexchange.com/a/295238/289216
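In that language (standard Euclidean R^n), the generalized cross product is the Hodge dual of the wedge product of the n - 1 input vectors:

X(v_1, …, v_{n-1}) = ⋆(v_1 ∧ v_2 ∧ … ∧ v_{n-1})

and the determinant identity used above is det(A) = a · X(v_1, …, v_{n-1}), where a is the first row of A and v_1, …, v_{n-1} are the remaining rows. For n = 3 the wedge of two vectors dualizes back to a single vector, which recovers the ordinary cross product.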
