quote:
I don't see why, but I don't know much Intel assembly. I should think that most of that negative effect on performance is caused by the compiler; compilers are getting better and will continue to improve.
And wrapper classes continue to improve too. You have to trade off some performance for other features. Most programmers consider trading 5% of processor time for a significant saving in programmer time to be a good choice. Strangely enough, so do project leaders and managers...
quote:
Are you also implying that it is faster to learn auto_ptr instead of new() and delete()?
No, but it's quicker and clearer to make certain parts of your program safe with auto_ptr than with new and delete.
quote:
I don't believe so, but then again I don't quite understand what you are saying. My 'stretched' code is merely for evaluation purposes; I never meant it to compile. I was not comparing the 'stretched' code of auto_ptr to the actual code of 'auto_ptr'; I was comparing the 'stretched' code of normal new() and delete() to the 'stretched' code of auto_ptr.
The point is that although the auto_ptr solution may be longer in stretched code, it is shorter and simpler in client code. You lose some performance, and gain some clarity. If you do not want to make that tradeoff, that is your decision, but many (most) do. That is why we use C++ and not assembly for everything.
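To make that concrete, here is a rough sketch of the kind of client code I mean; Widget and process() are made-up names, not anything from the earlier posts. With raw new and delete you need your own try/catch just to avoid a leak on an error path; with auto_ptr the cleanup happens automatically when the pointer goes out of scope.

    #include <memory>

    class Widget {};
    void process(Widget&) {}

    // With raw new and delete: any exception thrown from process()
    // leaks w unless we add our own try/catch around it.
    void withRawPointers()
    {
        Widget* w = new Widget;
        try {
            process(*w);
        }
        catch (...) {
            delete w;    // manual cleanup on every error path
            throw;
        }
        delete w;
    }

    // With auto_ptr: the destructor deletes the Widget for us,
    // whether we leave normally or via an exception.
    void withAutoPtr()
    {
        std::auto_ptr<Widget> w(new Widget);
        process(*w);
    }   // w's destructor calls delete here

Yes, there is extra machinery hidden inside auto_ptr, but the code you actually read and maintain is the shorter version.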
quote:
What are you talking about? hundreds of methods in the base class? I don't think you understand; you don't implement a whole hierarchy of window and window-derived classes unless the derived classes contain specific functionality not found in and not applicable to the root base class.
You were implying that everything would be accessed through virtual functions in the base class rather than by downcasting to the derived class. I think your original post on this was not very clear, so I'm sorry for the misunderstanding.
quote:
That is the whole purpose of using abstract classes -- you do not need to know the type, only whether a given class is derived from the abstract base class. Then why should you need to know the type when loading the classes?
I am probably missing your point, but -something- has to know what type it is. Otherwise you cannot allocate the memory and call the correct constructor.
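For example (the class and function names here are mine, purely for illustration), even if the rest of the program only ever sees the abstract base, the factory that creates the objects still has to name the concrete types somewhere:

    #include <string>

    class Window {                       // abstract base that client code uses
    public:
        virtual ~Window() {}
        virtual void draw() = 0;
    };

    class ButtonWindow : public Window {
    public:
        void draw() { /* ... */ }
    };

    class ListWindow : public Window {
    public:
        void draw() { /* ... */ }
    };

    // Something, somewhere, has to know the concrete type in order to
    // allocate the right amount of memory and run the right constructor.
    Window* createWindow(const std::string& kind)
    {
        if (kind == "button") return new ButtonWindow;
        if (kind == "list")   return new ListWindow;
        return 0;   // unknown kind
    }

Everything outside createWindow() can happily work through Window*, but the loading code itself cannot avoid knowing the derived types.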
quote:
1) MFC comes in two versions, release and debug, and variations of those for different applications...kind of like the run-time libraries and their different libs. Only the debug version has assertions.
There's a distinction between my own project in debug mode (i.e. I want it to give more information about what is wrong with it so I can fix it) and a debug library (it should give more information about what it's doing so that I can fix the way my code uses it). If MFC uses asserts to check that I passed it a valid pointer, I consider that an abuse of assertions. Assertions should be there to check invariants, not to try to catch the user out. If it gets an invalid parameter, it should throw an exception or return an appropriate failure code.
In properly written code, you know exactly where your classes will interface with other 'layers', and assertions shouldn't be used to check for valid input there. However, assertions within your classes checking that -your- logic is correct are a useful tool during development. They also do not come with the overhead of exceptions, and -should- be removed when you distribute your application/library/whatever.
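A rough sketch of the distinction I mean (the Stack class is invented for the example): the throw guards the boundary with other layers, the assert guards my own logic, and defining NDEBUG for the release build compiles the asserts away entirely.

    #include <cassert>
    #include <stdexcept>

    class Stack {
    public:
        Stack() : size_(0) {}

        void push(int value)
        {
            if (size_ >= MAX)                     // caller error: report it properly
                throw std::overflow_error("Stack::push: stack is full");
            data_[size_++] = value;
            assert(size_ > 0 && size_ <= MAX);    // my own invariant: debug-only check
        }

        int pop()
        {
            if (size_ == 0)                       // caller error again
                throw std::underflow_error("Stack::pop: stack is empty");
            return data_[--size_];
        }

    private:
        enum { MAX = 64 };
        int data_[MAX];
        int size_;
    };

    // Building with NDEBUG defined removes every assert(), so the
    // distributed version pays nothing for them.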
quote:
1) It depends on what "the public" in your statement meant. As I said earlier, when you release a library, you typically give out 2 versions: release and debug. The debug version has extra features that are useful to the programmers that build software using the library, but those extra features are useless to the end user. If assertions are used in the debug build of the library, you -will- encounter them if you write buggy software (which you say everyone does at some time in their life).
I would consider that a defect of the library, and not just because it uses assertions. Assertions are there to help development of the library, not to enforce its proper use. Assertions should be gone by the time an end user gets to touch your code.
quote:
2) The question was not whether or not Eiffel has assertions in code that you will never see; the question was about how in the world assertions comply with OOP and encapsulation and what-have-you when dealing with different software layers. No one has yet answered me.
Because you are expecting everyone's use of assertions to match the (apparently) foolish method used in MFC, perhaps? Admittedly, that is partly down to the limitations of C++.
Now, I am no Eiffel expert, but I believe that it uses preconditions and postconditions for function calls, etc. This means it will tell you if you tried to call a function with an invalid parameter. Consider it an extension of type-checking: not only does it check that the types are correct, it checks that the data is valid for the target. This mechanism enforces, in code, whatever the accompanying documentation says about the parameters you must pass. It does not break encapsulation, and you do not get dumped into the library's source code. They enforce the proper interface between two classes.
In fact, failing to satisfy an assertion in Eiffel generates an exception. It is just a cleaner and shorter way of doing such a thing.
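I am still no Eiffel expert, so treat the following as a loose C++ imitation of the idea rather than real Eiffel; require() and ensure() are hypothetical helpers I made up. The point is that the contract checks sit at the routine's boundary and a failed check becomes an exception, rather than dumping you into the library's source code.

    #include <stdexcept>
    #include <cmath>

    // Stand-ins for Eiffel's 'require' (precondition) and 'ensure' (postcondition).
    void require(bool ok, const char* what)
    {
        if (!ok) throw std::invalid_argument(what);   // the caller broke the contract
    }
    void ensure(bool ok, const char* what)
    {
        if (!ok) throw std::logic_error(what);        // the routine broke its own promise
    }

    double squareRoot(double x)
    {
        require(x >= 0.0, "squareRoot: x must be non-negative");
        double result = std::sqrt(x);
        ensure(result >= 0.0, "squareRoot: result must be non-negative");
        return result;
    }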
quote:
3) Enforce correctness in client code? I thought you said server code should not have assertions in it?
Eiffel's use of assertions is better than that of C++, and that was the context in which I used the above sentence.
quote:
1) When you save the variables into the exception object, you copy by value. So it doesn't matter where the variables are as they are saved from the calling code, which is -in- the try block (it has to be in the try block to throw()).
Many bugs are down to obscure low-level errors, such as going over an array's bounds or a pointer to the wrong place. Calling copy constructors on these potentially defective objects (after all, your program has just done something 'wrong') is not always going to work; when it does work, it is not always going to be meaningful, and it is not always going to show you the problem.
You are also requiring a heavy-handed approach to debugging: grab every variable we could possibly need to check and throw it down. How is this much better than the old routine in Basic of polluting the program with a load of 'print' statements? How much work do you have to go to for that? It looks like an extremely unwieldy exception class, too. How would you implement it? A variable argument list? Would you redefine the exception type each time you felt like checking different variables, or every time you added a new variable to the routine that throws the exception?
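To show what I mean about the unwieldiness (this exception class is invented purely for the sake of argument): either every routine gets its own exception type carrying copies of whatever locals seemed interesting, or one bloated type tries to carry everything.

    #include <string>

    // The 'grab everything' exception: by-value copies of whatever locals
    // we thought we might want to inspect at the throw site.
    struct CalculationError {
        std::string where;
        int         index;      // loop counter at the time
        double      lastValue;  // intermediate result at the time
        // ...and another member every time we want to inspect something new.
    };

    double compute(const double* table, int count)
    {
        double total = 0.0;
        for (int i = 0; i < count; ++i) {
            if (table[i] < 0.0) {
                CalculationError e;
                e.where = "compute";
                e.index = i;
                e.lastValue = total;
                throw e;            // versus simply: assert(table[i] >= 0.0);
            }
            total += table[i];
        }
        return total;
    }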
quote:
2) How do you get to the information in calling functions from inside called functions when an assertion is called? You hit the break button on the compiler. Do the same when the message box is up in a catch() statement.
Again, by that point several automatic variables have been deallocated, and merely copying them is not always sufficient. You are also suggesting that every function which can throw has its own catch block, which is not really the case, nor should it need to be. A third, platform-dependent argument is that I run programs outside of the debugger and only start the debugger once the program has crashed or failed an assertion, by clicking the 'debug' button that appears. That does not appear with exceptions.
You control where the message box pops up. You could just as easily call display() before throwing the exception, so that you can break inside the try block.
It's a lot of code and effort compared to assert(a != 0). And doesn't gain you very much. Cleanup, maybe, but I've never found that to be a problem during debugging.
quote:
2) Why aren't exceptions easy to use and concise?
Because they take a lot more explicit code to achieve simple things.
assert(a != 0);
is much more concise than
try { if (a == 0) throw NullA(); }
catch (const NullA& e) { report(e); }
quote:
-------------------
I personally think that platform independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.
------------------
Ah...no, I don't think so. But then if I am ignorant how would I know I am ignorant?
If you do not think so, then you are not following the trends.
quote:
2) Do we both understand correctly what a Virtual Machine (VM) is? I think not. So I will explain what I know (do be prepared, it can be boring) in the hopes that we will clarify any mistaken concepts we may have.
=SNIPPED DETAILS=
The only real problem with VMs is that the CPU/memory model doesn't exist. At best you will be operating at the efficiency of the target CPU. At worst you could be...well...let's just say very, very slow. You see, the VM interpreter (or CPU) does its best to match its virtual bytecodes to the actual CPU's opcodes, but all CPUs are built differently. You will get a definite performance hit for being this generic at such a low level. Not only do the virtual bytecodes not match the actual opcodes, but software must drive and evaluate the CPU and the code, manage the cache, the memory, etc. If the target machine doesn't turn out to be exactly the same as the VM, you get a big performance hit. Not only is the VM CPU model inefficient by definition, but the VM itself requires extra resources to emulate another computer.
The fact is, you don't have to get a big performance hit. A noticeable one, perhaps. But in many cases, an unnoticeable one.
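Roughly where the overhead comes from: an interpreter has to fetch and decode every virtual instruction in software, where native code just executes. A toy dispatch loop (the opcodes are invented) looks something like this; the per-instruction switch is the price of being generic, and it is a constant factor, not a catastrophe.

    #include <vector>

    enum Opcode { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    // A toy stack machine: each virtual instruction costs a fetch, a
    // decode (the switch), and only then the actual work.
    int run(const std::vector<int>& code)
    {
        std::vector<int> stack;
        unsigned pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH: stack.push_back(code[pc++]); break;
            case OP_ADD: { int b = stack.back(); stack.pop_back();
                           stack.back() += b; } break;
            case OP_MUL: { int b = stack.back(); stack.pop_back();
                           stack.back() *= b; } break;
            case OP_HALT: return stack.back();
            }
        }
    }
    // For example, the program PUSH 2, PUSH 3, ADD, HALT returns 5.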
quote:
By sheer obviousness, we can observe that software written in all native code will run faster on the same hardware than software written using a VM.
Yes, and it will often take an order of magnitude less time to write. By the same token, software written in C++ does not run as fast as software written in assembly, for exactly the same reasons: C++ is generic and does not take advantage of very specific asm instructions. You lose performance to gain coding time and structure.
quote:
Why do I think VMs are needless? Well, for portability, C++ classes, virtual functions, and DLLs provide all the flexibility you could ever need.
It is also like giving power tools to a four-year-old. Once past the low-level details, you only need a subset of the functions to actually make the game. The rest can be implemented in something higher-level, where you trade off a little performance for more productivity, fewer bugs, and a smaller, better-defined interface. You also do not have to own different compilers for every platform you are aiming at, since the VM is already there. Nor do you have to worry about functions that are not present on certain platforms, or that require different syntax on different platforms. You can create a slimmed-down interface that increases productivity.
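As a sketch of the kind of slimmed-down interface I mean (the engine functions and the binding scheme are made up for illustration): the engine exposes a handful of named commands, and everything the designer writes goes through that narrow, well-defined layer rather than through the whole C++ API.

    #include <map>
    #include <string>
    #include <iostream>

    // The only engine calls the script layer is allowed to use.
    void playSound(const std::string& name)  { std::cout << "sound: " << name << "\n"; }
    void spawnEnemy(const std::string& type) { std::cout << "enemy: " << type << "\n"; }

    typedef void (*Command)(const std::string&);

    // The whole 'language' the designer sees: a small table of commands.
    std::map<std::string, Command> makeCommandTable()
    {
        std::map<std::string, Command> table;
        table["playSound"]  = &playSound;
        table["spawnEnemy"] = &spawnEnemy;
        return table;
    }

    void execute(const std::map<std::string, Command>& table,
                 const std::string& command, const std::string& argument)
    {
        std::map<std::string, Command>::const_iterator it = table.find(command);
        if (it != table.end())
            it->second(argument);   // dispatch into the engine
    }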
quote:
Why do you need to make a VM for portability? You don't. The virtual CPU, bytecodes, and language are merely replacing what a C++ compiler does; a C++ compiler already turns a portable language into machine-specific code.
C++ is not a perfect language. It is also general purpose. It is not friendly to non-programmers. Given a knowledge of what you need, you can create a VM and language that are nearer to being perfect for your task, more specific, and easier for non-programmers to work with. Arguing that C++ does everything you need is like saying Visual Basic is not needed, since you can do it all in C++. Visual Basic offers several advantages over C++ in certain instances: simplified memory management being the main one. More readable code to a new programmer is another. These are two benefits that a new application-specific language can (and usually does) bring.
quote:
OK, now for the custom game engine reason. *rolls up sleeves* Why do you need a VM? It only makes the programmers who use the engine learn yet another language,
The languages are usually designed to be more intuitive and closer to the game than a generic language can be. Thus, they are quicker to learn. A good programmer will be able to learn a new language quickly anyway. A non-programmer (who these languages are often also aimed at) will learn a simplified language that is directly relevant to the game engine much more quickly than they will learn C++.
quote:
decreases performance over native code,
Using RTTI or exceptions also incurs a performance decrease, but you like those features. You have to pay for what you want. However, the cost is usually so small that it is worth it.
quote:
and provides countless hours of hard work.
Work done once to reduce the amount of work done later, whether in developing add-ons, or debugging.
quote:
it would still be wasted effort. The useless megabytes of source; the countless extra hours spent by both VM writer and game programmer alike;
The countless hours saved by having your designer 'program' the game levels rather than having them come and ask you to code in yet another custom feature...
quote:
-------------------
I personally think that platform independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.
-------------------
It's nice to dream...
No, this is happening. Baldur's Gate featured Lua for part of its AI. The Unreal series runs on UnrealScript, a compiled Java-like bytecode that runs on a VM (and this is not just game logic; it goes as low-level as texture and vertex management too). Quake 3 has its own language too, although apparently it is almost entirely C-like. Jedi Knight had its own language, and you can read about it on Gamasutra somewhere.
If you want to deny what is -actually- happening, that is fine. But the real, observed trend is -away- from native code and towards scripting languages and engine-specific languages.
quote:
If you want to get technical about the whole thing, the point is modularity. Separate the programmer's needs from the computer's optimization from the user's needs, and you've got: 1) a language, 2) a compiler, and 3) an API. Throwing them all together into one disorganized mess called a VM is not to progress but to regress...
Why should an API differ from the language? BASIC has worked just fine without needing to make such a distinction. You call functions to do things; you don't need to know where they came from. And why should we need an explicit compiler? These are just added complications.
quote:
To put it bluntly, you shouldn't have to create a new computer for every piece of software that you make. That's a bit backwards...
The idea is generally that you don't. You create a better system once, which can then be used repeatedly in future applications, and enhanced by programmers and non-programmers alike. As a library writer yourself, you should appreciate that although you are duplicating effort that has gone before, the idea is to do it well enough that it will be useful for many future applications.
(Nested quoting has stopped working. Grr.)