
Implicit Euler

Started by November 21, 2002 04:51 AM
11 comments, last by lusinho 22 years, 2 months ago
Hi, I'm trying to learn about Implicit Euler integration so I can implement it in my own physics engine. I have found some documents that mention it but they never go as far as giving details of what the equations are or how to implement it. I was wondering if someone could point me to some docs or books or even give a brief explanation of what it is and how I could go about implementing it. I am familiar with Euler (explicit), mid-point, Runge-Kutta and Verlet integration methods. Thanks for your help, lusinho
I'm not sure what you mean by "implicit Euler" integration. Implicit formulae are those like xy = 1, x^2 + y^2 = 2, etc. In general they're of limited interest to games programmers as they are difficult and expensive to deal with in code.

As for Euler, he was one of the most significant contributors to modern maths, particularly calculus, and his name is associated with many interesting discoveries. A quick search of MathWorld turned up the Euler-Lagrange Differential Equation, the Euler Integral, the Euler Differential Equation and many more.

[edited by - johnb on November 21, 2002 10:30:34 AM]

John Blackburne, Programmer, The Pitbull Syndicate
If you can find the David Baraff tutorials (which I'm sure Google can) I think he had some implicit Euler stuff. And I think they were in the later sections... but I could be wrong on both scores.

Good luck
http://www.physics.udel.edu/faculty/macdonald/Ordinary%20Differential%20Equations/The%20Implicit%20Euler%20Method.htm

Yes, you can find the latest version of Baraff's tutorials at

http://www.pixar.com/companyinfo/research/pbm2001/index.html

and if you look under Implicit Methods you'll find that these come about to solve stiff ODEs.

Another name for Implicit Euler is Backward Euler (I think).

As I understand it, the normal (explicit) Euler integration method can be expressed as

X1 = X0 + h * Y(X0)

where X1 is the new state (position/orientation), X0 the current state, h is the step (delta time) and Y(X0) is the derivative evaluated at X0 (the velocity).

For Implicit Euler the change is simple

X1 = X0 + h * Y(X1)

so the only difference is that the derivative is taken on the new state and not the current one.
But because we don't know the new state yet, you can't use this function directly and have to do it another way instead.
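To convince myself that the difference matters, I put together a tiny C++ test (my own toy example, nothing from Baraff's notes): it integrates the stiff test equation dx/dt = -k*x, for which the implicit update can be solved for X1 in closed form. Explicit Euler blows up once h*k gets large, while the implicit step stays stable.

// Toy comparison of explicit vs. implicit Euler on dx/dt = -k*x.
// Assumed values for k and h; the exact solution simply decays to zero.
#include <cstdio>

int main()
{
    const double k = 100.0;   // stiffness: explicit Euler is only stable for h < 2/k
    const double h = 0.05;    // step size (here h*k = 5, well past the explicit limit)
    double xExplicit = 1.0;
    double xImplicit = 1.0;

    for (int i = 0; i < 20; ++i)
    {
        // explicit (forward) Euler: X1 = X0 + h * Y(X0)
        xExplicit = xExplicit + h * (-k * xExplicit);

        // implicit (backward) Euler: X1 = X0 + h * Y(X1)
        // with Y(X1) = -k*X1 this gives X1*(1 + h*k) = X0, so:
        xImplicit = xImplicit / (1.0 + h * k);
    }

    printf("explicit: %g   implicit: %g\n", xExplicit, xImplicit);
    return 0;
}

The explicit result explodes to around 1e12 while the implicit one decays towards zero, which is why the Baraff notes bring up implicit methods for stiff ODEs in the first place.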

I can't really follow the whole explanation from Baraff, and I have a feeling that even if I did, it still wouldn't help me implement this algorithm.
This is because in some other documents I've read that you need to calculate some big matrices and other complicated things in order to get to the final result.
I can't actually remember which documents these were, but I do remember them only mentioning it and never explaining how to do it.
>so the only difference is that the derivative
>is taken on the new state and not the current one.
>But because we don't know the new state yet, you
>can't use this function directly and have to do it another way instead.

You're correct. Let's write the equation this way:

(I denote X0 with X)

X1 = X + h*Y(X1) // replace X1 with X+DX
X+DX = X + h*Y(X+DX) // subtract X
DX = h*Y(X+DX)

The problem is that we are not able to compute Y(X+DX) directly. Therefore, we use a technique called linearization, that is, we approximate Y(X+DX) with a first-order Taylor expansion around X:

Y(X+DX) ~= Y(X) + (@Y/@X)*DX // ~= is "approximates"

Here @Y/@X is the partial derivative of Y wrt. X. If Y is a vector ("forces of particles") and X is a vector ("positions of particles"), the partial derivative will be a matrix. The most complicated part of the algorithm is computing this matrix.

Finally we can solve for DX:

DX = h*Y(X) + h*(@Y/@X)*DX // solving for DX

[I-h*(@Y/@X)]*DX = h*Y(X) // I denotes the identity matrix

If X is a vector, and Y is a vector, this becomes a linear system of equations. Usually people use the so-called conjugate gradient method for solving the equation. After it's done you have found DX: the problem of backward Euler has been successfully solved!
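To make the recipe concrete, here is a rough C++ sketch of one backward Euler step for a single damped spring (my own toy example: the state is X = (position, velocity), so Y(X) = (v, -(k/m)*x - (c/m)*v) and the system is only 2x2, small enough to solve directly instead of with conjugate gradient):

// One implicit Euler step for a damped spring, solving [I - h*(@Y/@X)]*DX = h*Y(X).
#include <cstdio>

int main()
{
    const double k = 50.0, c = 1.0, m = 1.0;  // spring, damping, mass (made-up values)
    const double h = 0.1;                     // step size
    double x = 1.0, v = 0.0;                  // state X = (x, v)

    for (int step = 0; step < 10; ++step)
    {
        // Y(X) = (v, -(k/m)*x - (c/m)*v)
        const double y0 = v;
        const double y1 = -(k / m) * x - (c / m) * v;

        // Jacobian @Y/@X = [ 0     1    ]
        //                  [ -k/m  -c/m ]
        // A = I - h*(@Y/@X)
        const double a00 = 1.0,         a01 = -h;
        const double a10 = h * (k / m), a11 = 1.0 + h * (c / m);

        // right-hand side b = h*Y(X)
        const double b0 = h * y0;
        const double b1 = h * y1;

        // solve the 2x2 system A*DX = b by Cramer's rule
        const double det = a00 * a11 - a01 * a10;
        const double dx = (b0 * a11 - a01 * b1) / det;
        const double dv = (a00 * b1 - a10 * b0) / det;

        x += dx;                              // X1 = X + DX
        v += dv;
    }

    printf("x = %g, v = %g\n", x, v);
    return 0;
}

For anything bigger than a toy like this you obviously don't solve A by hand; that's where the conjugate gradient method comes in.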

>This is because in some other documents I've read that you need
>to calculate some big matrices and other complicated things in
>order to get to the final result.

True: the matrix (@Y/@X) is a "big" matrix. If the size of X and Y (both assumed to be vectors) is n, the matrix will be of size nxn. The lucky thing is that most entries of the matrix are zero, so you can save a hell of a lot of memory and computational power. The matrix @Y/@X is defined as follows: let

Y = (y1, y2, y3, ..., yn)
X = (x1, x2, x3, ..., xn)

Then the matrix is defined as follows:

@Y/@X = (@Y/@x1, @Y/@x2, @Y/@x3, ..., @Y/@xn)

(where each entry @Y/@xi is a column vector, so @Y/@X really is a matrix)

More explicitly, the (j,i) entry of the matrix is:

(@Y/@X)ji = @yj/@xi
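If you can't (or don't want to) derive the matrix analytically, a quick-and-dirty way to get it is finite differences, one column at a time, straight from the definition above. A rough C++ sketch (my own; the function Y and the state layout are whatever your engine uses, and an analytic Jacobian is both cheaper and more accurate when you can work it out):

// Numerical Jacobian: column i of @Y/@X is approximately (Y(X + eps*e_i) - Y(X)) / eps.
#include <vector>

typedef std::vector<double> Vec;
typedef Vec (*DerivativeFn)(const Vec& X);    // Y(X): maps the state to its derivative

std::vector<Vec> numericalJacobian(DerivativeFn Y, const Vec& X, double eps)
{
    const size_t n = X.size();
    const Vec y0 = Y(X);
    std::vector<Vec> J(n, Vec(n, 0.0));       // J[j][i] = @yj/@xi

    for (size_t i = 0; i < n; ++i)
    {
        Vec Xp = X;
        Xp[i] += eps;                         // perturb the i-th state component
        const Vec yi = Y(Xp);
        for (size_t j = 0; j < n; ++j)
            J[j][i] = (yi[j] - y0[j]) / eps;  // fill column i, i.e. @Y/@xi
    }
    return J;
}

Note that this dense version ignores the sparsity I mentioned: in a real simulator you only evaluate the entries where yj actually depends on xi.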

Hope this rant helps a bit.

- Mikko Kauppila

[edited by - uutee on November 21, 2002 4:27:53 PM]

Thanks Mikko Kauppila, that helps. Now I can start learning about the Conjugate Gradient Method and how to implement it.

Doing a quick search on the web I found that one of Baraff's papers actually talks about this too. It's "Large steps in cloth simulation" at http://www.pixar.com/companyinfo/research/deb/sig98.pdf

One of his references is "An introduction to the Conjugate Gradient Method without the agonizing pain" at
http://www-2.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.ps

I'll need some time to digest both of these documents.

Thanks,
lusinho
Baraff's paper (Large steps) explains the subject nicely for cloth simulation. There's also a more general paper on the subject by Baraff; it's in the Siggraph course notes. Google for "physically based modeling baraff" and you should quickly find an article called "implicit methods for differential equations". "Large steps in cloth simulation" explains implicit integration in a great, great fashion, but to tell the truth it explains the cloth forces quite badly.

Shewchuk's paper on the CG method is truly a great piece of information. It gets quite hard to read after a couple of tens of pages (as it approaches the CG method), but the "Canned algorithms" section of the paper gives nice pseudocode for the CG algorithm if you just need to implement it. Roughly speaking, CG solves the equation A*x=b (where A is a matrix, b is a vector and x is the vector we're trying to solve for) by starting with an initial guess vector x0 (for example, the null vector) and iteratively adjusting it. Each iteration requires one multiplication of a vector by A, which is a quick operation for sparse matrices (such as the ones we tend to have in implicit methods).
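For reference, here is roughly what the unpreconditioned CG loop from the "Canned algorithms" section boils down to in C++ (my own paraphrase, so the names and the function-pointer interface are my choices, not Shewchuk's; it is written matrix-free, so applyA is the only place the matrix is touched and a sparse representation plugs straight in):

// Solves A*x = b for symmetric positive-definite A, starting from x0 = 0.
#include <vector>

typedef std::vector<double> Vec;
typedef Vec (*MatVecFn)(const Vec& p);        // computes A*p

static double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec conjugateGradient(MatVecFn applyA, const Vec& b, int maxIter, double tol)
{
    Vec x(b.size(), 0.0);                     // initial guess: the null vector
    Vec r = b;                                // residual r = b - A*x = b
    Vec p = r;                                // first search direction
    double rr = dot(r, r);

    for (int it = 0; it < maxIter && rr > tol * tol; ++it)
    {
        const Vec Ap = applyA(p);             // the one matrix-vector product per iteration
        const double alpha = rr / dot(p, Ap); // step length along the search direction
        for (size_t i = 0; i < x.size(); ++i)
        {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const double rrNew = dot(r, r);
        const double beta = rrNew / rr;       // how much of the old direction to keep
        for (size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rr = rrNew;
    }
    return x;
}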

- Mikko Kauppila
quote:
Original post by lusinho
Another name for Implicit Euler is Backward Euler (I think).

Yep, you're right about that.

Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.
Graham Rhodes, Moderator, Math & Physics forum @ gamedev.net
After reading those documents I am slowly crawling towards a better understanding, and I have a few more questions and comments on which I would appreciate any views.

From Baraff's papers our state vector is

X(t) = [ x(t) ]    position
       [ q(t) ]    orientation
       [ P(t) ]    linear momentum
       [ L(t) ]    angular momentum


The derivative of the state vector is

d                  [ v(t) ]    linear velocity
-- X(t) = X'(t) =  [ q'(t) ]   angular velocity
dt                 [ F(t) ]    force
                   [ T(t) ]    torque

(using ' to denote the time derivative)


The final Implicit Euler equation is

X(t+h) = X(t) + delta_X

where delta_X is given by

[ (1/h)*I - d( X'(t) )/dX ] * delta_X = X'(t)


and as this equation is of the form

A * x = b

where b is a known vector, A is a known matrix (square, symmetric and positive-definite) and x is our unknown vector, we can solve it using the Conjugate Gradient Method.
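As far as I can tell, A never has to be stored explicitly for the Conjugate Gradient Method: all CG needs is a routine that computes A*p for a given vector p, which for the equation above is just (1/h)*p minus the derivative matrix applied to p. A rough C++ sketch of that idea (my own guess at the structure; dXdotdX_times below is a hypothetical placeholder for whatever the force/torque derivatives turn out to be, and here it just returns zero):

// Matrix-free application of A = (1/h)*I - d(X'(t))/dX, as needed by CG.
#include <vector>

typedef std::vector<double> Vec;

// Hypothetical placeholder: the product of d(X'(t))/dX with a vector p.
// A real engine would fill this in from its force/torque derivatives.
static Vec dXdotdX_times(const Vec& p)
{
    return Vec(p.size(), 0.0);                // assumption: derivatives neglected in this sketch
}

Vec applyA(const Vec& p, double h)
{
    const Vec Jp = dXdotdX_times(p);
    Vec out(p.size());
    for (size_t i = 0; i < p.size(); ++i)
        out[i] = p[i] / h - Jp[i];            // ( (1/h)*I - d(X'(t))/dX ) * p
    return out;
}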

Great, but I don't know what the A matrix is.

Baraff tells us that A is an nxn sparse matrix, where n is the size of the state vector, and also that the (i,j) entry is non-zero iff (if and only if)

f(i) depends on X(j)

After some head-scratching I thought I understood what this meant and made an attempt at building this matrix.

A =
[ 1/h - d(v)/d(x)    d(v)/d(q)           d(v)/d(P)           d(v)/d(L)          ]
[ d(q')/d(x)         1/h - d(q')/d(q)    d(q')/d(P)          d(q')/d(L)         ]
[ d(F)/d(x)          d(F)/d(q)           1/h - d(F)/d(P)     d(F)/d(L)          ]
[ d(T)/d(x)          d(T)/d(q)           d(T)/d(P)           1/h - d(T)/d(L)    ]

(writing d(v)/d(x) for d( v(t) )/d( x(t) ) and so on, with q' the time derivative of q)


Then I tried to identify which elements would be zero according to the above rule.
For each element I thought "does the term on top depend on the term on the bottom?".
So for example for the element (0,0): "does velocity depend on position?"
The answer is no. Position depends on velocity but not the other way around.

So this is the matrix I arrived at

A =
[ 1/h    0      d(v)/d(P)    0           ]
[ 0      1/h    0            d(q')/d(L)  ]
[ 0      0      1/h          0           ]
[ 0      0      0            1/h         ]


This looks a bit too sparse.

Is my initial matrix correct?
Did I make a mistake in identifying the zero components?
How do I calculate the remaining elements of the matrix:

d( v(t) ) / d( P(t) ) = ?
d( q'(t) ) / d( L(t) ) = ?


Thanks for any help.
lusinho

This topic is closed to new replies.
