
Is upper division Linear Algebra a good idea or necessary?

Started by October 03, 2014 05:44 PM
17 comments, last by BitMaster 10 years, 4 months ago

looking at matrices as representing linear mappings between vector spaces.

I understand it similarly. A vector of dimension N lives in a space, and a space is well defined by N vectors of dimension N. If a vector is to relate functionally to those basis vectors, it must have a multiplication relation with a vector. Then the vector can be expressed in terms of other basis vectors (a transform).

At this point a matrix is just a set of vectors; that alone does not introduce further properties. The properties emerge only from the definition of the multiplication relation between vectors.

Of course, there is no requirement for a multiplication relation on vectors to be defined at all, so I know that when one says "linear algebra", there are many subsets with hardly any common ground.

You are not a career. Study what you're most interested in.

This. You have a rare opportunity; make the most of it. You'll have time later in life to study things you're not interested in just to make deadlines and/or practical targets.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


In OpenGL, the operation is a component-wise multiply. Though it makes no sense for geometric purposes, it yields a structure in which column and row vectors are the same, which makes talking about transposing a matrix somewhat inappropriate, since a matrix there "transposes" just by its order in the multiply. That is impossible in linear algebra because of the N×M requirement when multiplying matrices/vectors, which also implies vectors are n×1 or 1×m. But that algebra has its own definition of the operation, and OpenGL differs from it (it will multiply vectors without their dimensions being introduced). Thus OpenGL has a vector space with different properties, one where multiplication is not associative, but it is still a vector space.

OpenGL uses the same data type for several different things, not just geometric vectors. In particular, it uses them for colors. The component-wise multiplication is in fact very useful in this case. Code implementations of mathematical concepts are rarely completely equivalent to their theoretical version.
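As a minimal sketch (the Vec3 type and the color values below are made up for illustration, not from the original posts), this is how the component-wise product behaves in C++ terms, mirroring what GLSL's `*` does when both operands are vectors:

```cpp
#include <cstdio>

// Hypothetical minimal 3-component vector, standing in for GLSL's vec3.
struct Vec3 {
    float x, y, z;
};

// Component-wise product, as GLSL's `*` behaves on two vectors.
Vec3 operator*(Vec3 a, Vec3 b) {
    return { a.x * b.x, a.y * b.y, a.z * b.z };
}

int main() {
    Vec3 baseColor = { 1.0f, 0.5f, 0.25f }; // e.g. a sampled texel color
    Vec3 lightTint = { 0.9f, 0.9f, 1.0f };  // e.g. a light color
    Vec3 lit = baseColor * lightTint;       // modulate the color by the light
    std::printf("%g %g %g\n", lit.x, lit.y, lit.z);
}
```

For colors this modulation is exactly what you want, even though it has no geometric meaning for position or direction vectors.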

The people who told you "there is no such thing as multiplication of vectors" were correct. The axioms that define what a vector space is talk about sum and difference of vectors and about multiplying a vector by a scalar. You can look it up anywhere.
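For reference, the axioms require only the following, for all \(u, v, w \in V\) and scalars \(a, b\):

\[\begin{aligned}
& u + (v + w) = (u + v) + w, \qquad u + v = v + u,\\
& \exists\, 0 \in V:\; v + 0 = v, \qquad \exists\, (-v) \in V:\; v + (-v) = 0,\\
& a(bv) = (ab)v, \qquad 1v = v,\\
& a(u + v) = au + av, \qquad (a + b)v = av + bv.
\end{aligned}\]

Note that no vector-times-vector operation appears anywhere in this list.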

I am not picking at words; I will rephrase again: if one defines a multiplication of vectors on a vector space, it will stop being a vector space if that multiplication operation breaks the axioms mentioned.

If it is so that vectors do not multiply and there is no rule for doing so, then why have we been speaking all this time about matrix multiplication being associative, about a vector being a special case of a matrix, and about distinguishing column and row vectors and their actual dimensions? Those relations exist only if multiplication of vectors is defined.


When we speak about matrix multiplication in the context of graphics programming, we are almost always speaking about endomorphisms only (that is, mappings from a vector space \(V\) into \(V\) itself). The matrices are in this case always square. Square matrices form a ring, which includes the properties you are alluding to (like some matrices having a multiplicative inverse or associativity).
These are properties of square matrices only, not any matrix. Again, linear algebra only defines a multiplication between matrices for:
\( \cdot: \mathbb R^{n \times m} \times \mathbb R^{m \times k} \rightarrow \mathbb R^{n \times k}\)
It should be obvious it is not possible to multiply just any matrices (or vectors, if we choose to model them as a special case of matrices) together any way you please. It should also be noted that matrix multiplication is always associative (that is, \( (A\cdot B)\cdot C = A\cdot (B\cdot C) \)), provided the dimensions of the matrices allow the multiplication in the first place. It is left as an exercise to the reader to show that associativity does not interfere with the restricted multiplication operator above.
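As a sketch in C++ (the Matrix type here is hypothetical, just to make the restriction concrete): the multiply is only defined when the inner dimensions agree, and any well-formed chain can be grouped either way:

```cpp
#include <cassert>
#include <vector>

// Hypothetical dense matrix: rows x cols, row-major storage.
struct Matrix {
    int rows, cols;
    std::vector<double> a;                                  // a[r * cols + c]
    double& at(int r, int c)       { return a[r * cols + c]; }
    double  at(int r, int c) const { return a[r * cols + c]; }
};

// R^{n x m} * R^{m x k} -> R^{n x k}; anything else is rejected.
Matrix mul(const Matrix& A, const Matrix& B) {
    assert(A.cols == B.rows && "inner dimensions must agree");
    Matrix C{ A.rows, B.cols, std::vector<double>(A.rows * B.cols, 0.0) };
    for (int r = 0; r < A.rows; ++r)
        for (int k = 0; k < A.cols; ++k)
            for (int c = 0; c < B.cols; ++c)
                C.at(r, c) += A.at(r, k) * B.at(k, c);
    return C;
}
// Whenever the dimensions conform, mul(mul(A, B), C) equals mul(A, mul(B, C)).
```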

They're all useful. Number theory tends to be a little more esoteric, but it is useful for understanding things like the minimum number of bits needed to store (complex) information, or how much accuracy you can rely on maintaining through a series of floating-point operations (and strategies to maximize accuracy by ordering operations differently).
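As a toy illustration of that last point (the numbers are chosen arbitrarily): in single precision, adding many small terms to a large value one at a time loses them all, while accumulating the small terms first does not:

```cpp
#include <cstdio>

int main() {
    // At 1e8 the spacing between adjacent floats is 8, so adding 0.1f
    // one step at a time rounds back to 1e8 every single time.
    float largeFirst = 1.0e8f;
    for (int i = 0; i < 1000; ++i) largeFirst += 0.1f;   // every add vanishes

    float smallFirst = 0.0f;
    for (int i = 0; i < 1000; ++i) smallFirst += 0.1f;   // accumulate ~100 first
    smallFirst += 1.0e8f;                                // then add the big value

    std::printf("%f vs %f\n", largeFirst, smallFirst);   // the results differ
}
```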


The exact course contents vary between colleges/universities, but usually number theory is about integers, e.g., prime numbers and such. At my university, the course that deals with floating-point approximations of real numbers and the like is called Numerical Analysis.

That is the usual definition of number theory: the study of the properties of integers. As such it is useful in cryptography, but not very useful for games. Numerical Analysis (also called Numerical Methods) is useful for physics programming etc.

"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

which includes the properties you are alluding to (like some matrices having a multiplicative inverse or associativity).
These are properties of square matrices only, not any matrix. Again, linear algebra only defines a multiplication between matrices for:

I stated the rules only for multiplication of vectors; that does not automatically imply rules about matrices. There is no need for matrices to be invertible in order to have a multiplication operation on a vector space.

The particular multiplication of vectors defined in linear algebra treats a vector as a two-dimensional object, n×1 or 1×n, and the operation is therefore order-dependent. But you can validly define a multiplication operation that is not (as in GLSL or HLSL) and still have a vector space, making the multiplication of vectors commutative.

You surely agree, then, that this also introduces plenty of differences in morphisms and transformations. Yes, the multiplication of matrices is not associative then, if multiplication of matrices is understood as F(L(D(v))), where F, L, D are linear functions of the vector v (linear functions can be composed into one function).


They're all useful. Number theory tends to be a little more esoteric, but it is useful for understanding things like the minimum number of bits needed to store (complex) information, or how much accuracy you can rely on maintaining through a series of floating-point operations (and strategies to maximize accuracy by ordering operations differently).


The exact course contents vary between colleges/universities, but usually number theory is about integers, e.g., prime numbers and such. At my university, the course that deals with floating-point approximations of real numbers and the like is called Numerical Analysis.

Yes, you're right, I was mixed up.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Linear Algebra (upper div with lots of proofs) (last chance to take it also)

Take this. You will examine the questions behind the reasoning, with proofs that are honest (rare), and if you do, you will master plenty. I am not smart myself (whether or not I have strengths in some clever field), but I handled it, and it catalyzed me into mastering everything (except the paranormal world we live in).

Linear algebra is fair and flexibly powerful, making you able to approximate the paranormal real world. Linear algebra is simple and honest and passionate.

(I think I need a poem about linear algebra; it is an orgastic discipline, dive into it. It is not hard, while being the hardest.)

The particular multiplication of vectors defined in linear algebra treats a vector as a two-dimensional object, n×1 or 1×n, and the operation is therefore order-dependent. But you can validly define a multiplication operation that is not (as in GLSL or HLSL) and still have a vector space, making the multiplication of vectors commutative.

As the last time we talked about this: you can certainly add additional stuff (like operators) to a vector space. You might even be able to do more with it then (although the polite way in mathematics is to give it its proper name then), but that doesn't suddenly give all vector spaces that property.

Even so, if I remember correctly, the last time that discussion came up you needed something like getting rid of a vector \(v\) on the right in an expression \(Av\) for a suitable matrix \(A\). Even if you have a component-wise product \(*: V \times V \rightarrow V\), that does not work: first, not every vector has an inverse element regarding \(*\). Second, even if you had the expression \(Ae\) (where \(e\) is the one-element regarding \(*\)), you cannot make the \(e\) vanish. It is the multiplicative neutral element for \(*\), not for (matrix, vector) multiplication. The result of \(Ae\) is a vector, not another matrix, and definitely not \(A\) itself (except in the degenerate case of one-dimensional vector spaces).
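To make that concrete in two dimensions: under the component-wise product the one-element is \(e = (1, 1)^T\), and for any \(2 \times 2\) matrix \(A\),

\[ Ae = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} + a_{12} \\ a_{21} + a_{22} \end{pmatrix}, \]

which is just the vector of row sums of \(A\), not \(A\) itself.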

You surely agree, then, that this also introduces plenty of differences in morphisms and transformations. Yes, the multiplication of matrices is not associative then, if multiplication of matrices is understood as F(L(D(v))), where F, L, D are linear functions of the vector v (linear functions can be composed into one function).

I have no clue what you are talking about. Function composition is always associative. You don't even need linear functions for that, it's a basic property. Changing the representation to \((F \circ L \circ D)(x)\) instead of matrices has no influence on that, you still have \((F \circ L) \circ D = F \circ (L \circ D)\).
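A quick sketch in C++ (the functions F, L, D here are arbitrary stand-ins, not anything from the thread) showing that the grouping of composition does not matter:

```cpp
#include <cstdio>
#include <functional>

using Fn = std::function<double(double)>;

// Compose two functions: (f after g)(x) = f(g(x)).
Fn compose(Fn f, Fn g) {
    return [f, g](double x) { return f(g(x)); };
}

int main() {
    Fn F = [](double x) { return 2.0 * x; };   // arbitrary example maps
    Fn L = [](double x) { return x + 1.0; };
    Fn D = [](double x) { return 3.0 * x; };

    Fn left  = compose(compose(F, L), D);  // (F o L) o D
    Fn right = compose(F, compose(L, D));  // F o (L o D)

    // Both groupings describe the same mapping, so the outputs agree.
    std::printf("%f %f\n", left(1.5), right(1.5));
}
```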

This topic is closed to new replies.
