Just for the record.
All recursion can be implemented with iteration using one of the simplest data structures around, the stack.
It's no harder than that to make a recursive function iterative. The problem is that you won't gain much performance; hardly any at all, since all you save that way is the function call...
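For illustration, here is a minimal sketch of that idea in C++ (the Node type and the processing step are hypothetical, purely for demonstration): a recursive tree walk and the equivalent version driven by an explicit stack.

#include <stack>

struct Node
{
    int value;
    Node* left;
    Node* right;
};

// Recursive version: the call stack remembers which nodes still need visiting.
void VisitRecursive(Node* n)
{
    if (n == 0) return;
    // ... do something with n->value here ...
    VisitRecursive(n->left);
    VisitRecursive(n->right);
}

// Iterative version: an explicit std::stack plays the role of the call stack.
void VisitIterative(Node* root)
{
    std::stack<Node*> pending;
    if (root != 0) pending.push(root);
    while (!pending.empty())
    {
        Node* n = pending.top();
        pending.pop();
        // ... do something with n->value here ...
        // Push right first so the left subtree is processed first,
        // matching the order of the recursive version.
        if (n->right != 0) pending.push(n->right);
        if (n->left != 0) pending.push(n->left);
    }
}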
How much of a performance hit do recursive functions have?
HardDrop - hard link shell extension. "Tread softly because you tread on my dreams" - Yeats
I don't know if this was said already, but when I learned about recursion in my intro programming class, the Fibonacci problem was brought up because:
- it is easy to program a recursive procedure to solve it (to N)
- it is inefficient to do it recursively.
If people take this to mean "all recursive solutions are less efficient than iterative ones", they are dead wrong.
Another problem is finding X raised to the Nth power. Iteratively, this is easy: answer = 1, for i=0 to N multiply answer by X.
However, this is more efficiently solved recursively, using what I believe is called the "Peasant's Algorithm" (named such because it's been around for centuries). Basically, it uses the associative property of multiplication to reduce the number of multiplications involved:
X * X * X * X = (X * X) * (X * X)
Let's say the number is 17, and you're doing it by hand. If you were to do it as shown on the left side of the equality, you'd multiply 17 * 17 (= 289), then 17 * 289 (4913), then 17 * 4913 (83521).
Assuming your hand isn't cramped, now try it the other way:
(17 * 17) * (17 * 17). You do 17 * 17 = 289, and the right half of the equation is ALSO 289, so you don't need to do the other multiplication. Now all you need to do is 289 * 289. You do two multiplications instead of three. It is trivial to write this recursively, the only sticky spot being figuring out what to do if N isn't a power of two.
Now, if anybody out there is mathematically minded, they can see that I've shown one example of a recursive solution that bests its iterative counterpart, and vice versa. Therefore, there should be no more argument about one always performing better than the other; both sides have been disproven.
Good techniques should ALWAYS be used, regardless of whether they're being used in Ivory Tower Theoretical CS Courses or in the latest Quake engine.
I'd also like to thank Bishop; your discussion has been on-target here, thanks for keeping your head.
Edited by - Stoffel on November 13, 2000 12:51:40 PM
Uhm, ok... I think I know what recursion is, but could anyone post a bit of code demonstrating it?
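A minimal sketch, purely for illustration (a hypothetical factorial function, not from the thread):

// Recursive factorial: the function calls itself on a smaller input
// until it hits the base case.
unsigned long Factorial(unsigned long n)
{
    if (n <= 1)
        return 1;                     // base case: stop recursing
    return n * Factorial(n - 1);      // recursive case: shrink the problem
}

// Factorial(5) expands to 5 * 4 * 3 * 2 * 1 = 120.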
=======================
Game project(s):
www.fiend.cjb.net
=======================
Is Recursion unavoidable for some problems?
Yes
Is iteration better for most problems in games?
Yes
I have talked to the test engineers for Mechwarrior and they said that they avoid recursion like the plague.
Take that answer as you will; I personally try to use it sparingly. This is where industry and academics conflict, regardless of what anyone says.
Why is it called a hot water heater? Isn't it cold when it goes in the tank?
-=CF=-
quote: Original post by Magmai Kai Holmlor
CurrentNode = CurrentNode->Child[j];
...
CurrentNode = CurrentNode->Parent->Child[j+1];
is not better than
call RayTrace(CurrentNode->Child[j], ...);
Magmai,
Where are these nodes stored? And what if some rays only have 4 children while others require 64?
Implicit in your solution above is the definition:
Node *Child[64];
If not, then do you assume we have some static pool from which we can allocate these nodes? Are you going to do a function call for that allocation? If you do, you've just defeated the purpose of avoiding the recursive function call!
First off, we have to avoid the array of 64. This is because we just can't have all that memory available for the rare instances where the tree is very dense. That means we need to build a more complex data structure from some static pool, involving linking siblings and so forth.
Now, do all this and avoid the function call you insist is worth saving.
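As a rough sketch of the kind of structure being described (the names, the pool size, and the payload fields here are all hypothetical, assumed for illustration only), a first-child/next-sibling node drawn from a static pool avoids both the fixed Child[64] array and a heap allocation call per node:

// Hypothetical first-child / next-sibling node: each node stores a link to
// its first child and to its next sibling instead of a fixed 64-entry array.
struct RayNode
{
    RayNode* Parent;
    RayNode* FirstChild;     // head of this node's child list
    RayNode* NextSibling;    // next child of the same parent
    // ... ray/intersection data would go here ...
};

// Static pool; the size is arbitrary for this sketch.
static const int POOL_SIZE = 4096;
static RayNode   g_NodePool[POOL_SIZE];
static int       g_NextFree = 0;

// Hands out the next unused node; returns 0 when the pool is exhausted.
// Simple enough that a decent compiler will inline it away.
inline RayNode* AllocNode()
{
    return (g_NextFree < POOL_SIZE) ? &g_NodePool[g_NextFree++] : 0;
}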
_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
Check this out:
http://users.eecs.ukans.edu/~millew/peasant.cpp
I have implemented the "Peasant Algorithm" proposed by Stoffel. The implementations I tested (and their proposed time complexities) are:
The naive iterative algorithm. O(n)
The iterative algorithm. O(log n)
The recursive algorithm. O(log n)
(n is the power of X)
The implementation was done on Windows. If there is a more optimized recursive version, I would be happy to change my code and test that one instead. It is my intent to pit the best against the best, not to weasel the comparison in one direction.
I tried both __fastcall and inline with all the versions; none offered a performance improvement (actually, it got worse).
Please let me know if I have made logical errors in the program.
The results of the tests were shocking.
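For readers following along, a minimal sketch of what the iterative O(log n) version presumably looks like (my own reconstruction, not necessarily the exact code at the URL above):

// Iterative exponentiation by squaring: O(log n) multiplications.
// Same divide-and-conquer idea, with the recursion unrolled into a loop.
int iterativePower(int base, int power)
{
    int result = 1;
    while (power > 0)
    {
        if (power & 1)       // low bit set: fold one factor of base into the result
            result *= base;
        base *= base;        // square the base
        power >>= 1;         // halve the exponent
    }
    return result;
}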
If you're using a stack, it's ad hoc recursion and may very well be slower than making the function call; maybe not, though, if less needs to get pushed/popped...
...
To be honest I don't know much about ray tracing, or the particular type you want to do...
Can it be done using a B-tree? (not a BST)
template <class TElement>
struct CBTreeNode
{
    CBTreeNode* Parent;
    TElement* DataArray;
    CBTreeNode** ChildNodeArray;
    int children;
};
...
All iterative solutions are equal or faster in computational time than recursive counterparts; recursive solutions are easier (hence faster) to code.
...
Peasant Test results:
I changed LOTS to 10,000,000
and used QueryPerformanceCounter (times are in us)
Iterative Time: 1478376
Recursive Time: 2688900
Naive Iterative Time: 2196855
Dummy Loop Exec Time: 347888
this might be a bad example, because in reality you would loop much less and spend more time on node calculations... i.e. if the node calculations take 400M cycles, it doesn't matter much whether the looping takes 3M or 1.5M cycles...
Edited by - Magmai Kai Holmlor on November 13, 2000 9:22:10 PM
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
quote: Please let me know if I have made logical errors in the program.
Just one (but it's a big one):
The Peasant's Algorithm is a divide-and-conquer algorithm. When you tried to write a recursive function to implement it, you did not divide and conquer; you merely duplicated the work. Your code:
int recursivePower(int base, int power)
{
    if (power == 0) return 1;
    if (power == 1) return base;
    int split = power >> 1;
    return recursivePower(base, split) * recursivePower(base, power - split);
}
As you can see, there are two recursive calls per branch. Even though the power is divided by two, two calls are made at each level, so this solution is still O(N).
Here's a working (efficient) implementation:
int recursivePower(int base, int power)
{
    if (power == 0) return 1;
    if (power == 1) return base;
    if (power & 1)
        return recursivePower(base * base, power / 2) * base;
    else
        return recursivePower(base * base, power / 2);
}
It should be easy to see that this is a divide-and-conquer algorithm, since power is divided by two but only one recursive call is made at each step.
The numbers I saw didn't stack up very well, but they were much better than what you were seeing:
--program output--
1073741824
243
8
1073741824
243
8
Iteration Time: 80
Recursive Time: 141
Naive Iter Time: 280
(BTW, this is with BASE = 2, POWER = 30, and I increased the number of times the test was run substantially)
I saw no difference using __fastcall, and inline wouldn't work anyway (you can't recurse inline functions). So I must say I'm surprised, though not as surprised as you are. I think the thing that helps your iterative function is that it's a very easy conversion from the recursive function to a loop solution. I'm not ready to throw out my recursion yet; I think it might be the best solution when an iterative one isn't readily apparent. But it does appear that the function call overhead makes recursion an unappealing solution (at least on MSVC 6.0--any Borland or GCC people able to get it better?)
Also, thanks WMiller for going about this the right way. Everybody should take notice: if there's an argument about efficiency, the way to plead your side is to CODE AN EXAMPLE!!! That he/she got the algorithm wrong on the first shot doesn't really matter--I was able to fix it soon enough, and the results show that recursion isn't all it's cracked up to be. Way to go, WMiller.
compile in release mode & crank up LOTS
and Byondo coded a factorial example.
Edited by - Magmai Kai Holmlor on November 13, 2000 9:26:29 PM
void testFunc(int (*iter)(int, int), const char* text)
{
    //int start = GetTickCount();
    LARGE_INTEGER Start;
    LARGE_INTEGER Finish;
    QueryPerformanceCounter(&Start);
    for (int loop = 0; loop < LOTS; loop++)
    {
        iter(BASE, POWER);
    }
    QueryPerformanceCounter(&Finish);
    cout << text << int(Finish.QuadPart - Start.QuadPart) << endl;
    //cout << text << GetTickCount() - start << endl;
}
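Presumably the harness drives it along these lines (a guessed-at main(); naiveIterativePower is an assumed name, and LOTS, BASE and POWER are constants defined elsewhere in peasant.cpp):

// Hypothetical driver for the timing harness above.
int main()
{
    testFunc(iterativePower,      "Iteration Time: ");
    testFunc(recursivePower,      "Recursive Time: ");
    testFunc(naiveIterativePower, "Naive Iter Time: ");
    return 0;
}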
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara