
physx chip

Started by April 20, 2006 07:42 PM
223 comments, last by GameDev.net 18 years, 5 months ago
Quote: Original post by Tunah
Yeah, there's always the cost of the card... Unless of course it picks up and gets really popular. I think you would need to upgrade graphics cards more often than the PPU, but that remains to be seen, I guess.

If it becomes popular then indeed cheaper models will come. Unfortunately it takes years to optimize hardware (in terms of performance/cost) and there are many hurdles. Just to illustrate today's efficiency: they're using a fan to cool the PhysX P1 while my GeForce 6600 is passively cooled. By the time they've made it affordable for the mainstream gamer, NVIDIA/ATI/Intel/AMD will have the G90/R700/P9/K8L ready.
Quote: But they (probably) won't require the card in the first place since Ageia has a software fallback. And as for the risk involved, do you mean they're risking something by supporting Ageia? Ghost Recon: Advanced Warfighter has support for it, and I hadn't heard of Ageia long before I saw GR:AW. I think the developers just took a look at the benefits; by using Ageia's physics, they also support dedicated physics hardware. If the user doesn't have the card, it falls back on their software. Is there really that much risk involved in that?

Good question. What they risk is that people expect the advanced physics to run on their powerful system even if they don't have a PhysX card. Just have a look at this screenshot: Without/with PPU. I don't think it's unreasonable to have those few extra particles in software mode as well. The NovodeX Rocket demo shows several hundred objects at very high framerates in software, without SSE optimizations!
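To give a rough idea of what that software workload looks like, here's a minimal sketch in C++ of the per-frame integration step for a few hundred bodies (not NovodeX's actual code; a real solver also does collision and constraints, this is just the raw integration cost):

```cpp
#include <cstddef>
#include <vector>

// One object reduced to a point mass; a real solver (NovodeX included) also
// does collision detection and constraints, which this sketch omits.
struct Body {
    float px, py, pz; // position
    float vx, vy, vz; // velocity
};

// Semi-implicit Euler step over all bodies. For ~500 bodies this is only a
// few thousand multiply-adds per frame -- cheap even without SSE.
void integrate(std::vector<Body>& bodies, float dt)
{
    const float gravity = -9.81f; // assumed units: meters and seconds
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        Body& b = bodies[i];
        b.vy += gravity * dt; // update velocity first...
        b.px += b.vx * dt;    // ...then position with the new velocity
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}
```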

For GRAW I must add that the true risk is probably low, because they're so early and the difference between software and hardware physics in this game is frankly not that big. It's not 'gameplay' physics. But for, say, Unreal Tournament 2007, I don't think it will be tolerated if they don't offer the best performance and quality combination for every system configuration. If another game shows that it's possible to have advanced physics on a next-generation GPU or CPU, that can be a real problem when it's mentioned on each and every review site.
Quote: I've only seen one Ageia demo movie, the CellFactor one. The action displayed in that movie alone makes me want to shell out the cash for a card. It says it is multiplayer, and it has A LOT of debris and objects flying around that (at least seemingly) affect the other players. How they solved the synchronization issues, I have no idea. But if this trend continues, with lots of physics that matter even in multiplayer, thanks to PhysX, I'm all for it.
If it never gets better than that movie, then I'm not so sure I would buy it.

Fair enough. Here's one interesting quote from VR-Zone about CellFactor though: "However, this realism also means one thing: It is very taxing on your graphics card. Running at 1280x1024 without Anti-Aliasing or Anisotropic Filtering, our 7900GTX setup only managed an average of 37 frames per second, seeing a minimum of 8fps as recorded by FRAPS". That's a very powerful graphics card at a modest resolution and low quality settings, yet the framerate is very disappointing. On the same site this graphics card renders Half-Life 2: Lost Coast at 1600x1200 4xAA 16xAF at 66 FPS! This makes me wonder whether it's actually limited by the PhysX card. Unfortunately I couldn't find any CellFactor review using a less powerful graphics card...
Quote: Original post by C0D1F1ED
...
Without/with PPU


The non-PPU version reminds me of old cartoons, where the backgrounds look cool, but you can tell that the animated objects have been glued onto the background.

The PPU version reminds me of the same thing, except that they glued on some extra mysterious dark patches as well.

They both look equally "realistic" to me. :)
Quote: Original post by Anonymous Poster
I followed this discussion for a few pages and then I lost track of it.
One question to C0D1F1ED though. Rendering HDR in HL2 had quite a hit on my 6800 GT. So if HL2 did its physics calculations on the GPU as well, what would happen?
Do you really expect next-gen games, which need to look next-gen to get sales, to be willing to downgrade how they look in order to process physics on the GPU, so that their next-gen game is actually playable on next-gen graphics cards?


I don't think the options are limited to either doing physics on the video card at the expense of graphics, or buying an adapter that can only do physics for a particular API.

Ageia has not played nice in its approach to the market. They started as a hardware manufacturer, but instead of offering the hardware to established developers (be it for free or under royalty-based licenses), they went out and bought small developers like NovodeX, Meqon, and others that haven't been mentioned yet. They dissolved those companies in order to create an exclusive and intrusive technology, and they are giving it away for free in order to force game developers to terminate their licenses with other middleware vendors (remember the Internet Explorer/Netscape fiasco).

After all, one would think the objective was to sell cards, not to flush out the competitors. As glamorous as it sounds, there is not enough money in middleware development since it is not a mass market, but Ageia just wants to destroy its competitors.

These are very dishonest, monopolistic practices that have been used in the past by other big bully companies like 3dfx, Microsoft, IBM, and Intel, and in each case they had to pay the consequences for such dishonest acts.

With a scenario like that, companies like Havok, which have been in the business much longer, had no choice but to find an alternative.
I believe the statement that Havok, NVIDIA, and ATI are making is:
The PPU is not the only option for accelerating physics with dedicated hardware.

If you go to the store you have the option of acquiring an extra video card that will do the same and perhaps more, and when that video card is not being used for physics calculations you can still use it for other graphics work.

Now the options are:

1) A graphics card plus a PPU that can only do physics.
2) Two graphics cards that can each do both physics and graphics.
3) Two graphics cards and also a PPU (for the extreme hardcore).

It is also worth mentioning that there is a great deal of fuzzy logic, exaggeration, and outright lies when it comes to comparing the PPU and the GPU. Many people speak about bandwidth as if it only affects the graphics card when doing physics; correct me if I am wrong, but doesn't the CPU have to copy the same data to the PPU too?
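To put rough numbers on that copy cost, here's a back-of-the-envelope sketch; the object count, bytes per object, and bus rates are illustrative assumptions, not measurements:

```cpp
#include <cstdio>

// Back-of-the-envelope bus cost of shipping physics results every frame.
// All numbers are illustrative assumptions, not measurements.
int main()
{
    const double objects        = 10000.0; // rigid bodies in the scene
    const double bytesPerObject = 64.0;    // position + orientation + velocities
    const double fps            = 60.0;

    const double perSecondMB = objects * bytesPerObject * fps / (1024.0 * 1024.0);

    // Nominal peak rates for comparison: PCI ~133 MB/s, PCIe x1 ~250 MB/s.
    std::printf("Result traffic: %.1f MB/s\n", perSecondMB); // ~36.6 MB/s here
    return 0;
}
```

Under those assumptions the traffic is a large fraction of a plain PCI slot's peak but a minor load on PCIe, which is why the bus matters for whichever chip does the physics.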

I would argue that for massive effects a dual graphics card combo is in a much better position than a graphics card/PPU combination, since the PPU has to use an external bus to communicate its results to the graphics card, while the graphics card uses its internal buses.

The point is that the underlying hardware technology is more or less the same: the PPU and the GPU are very similar vector processors, give or take a few features.


Another thing is that developing hardware is not really a huge investment when using legacy silicon. Anyone with a few thousand dollars to spare can develop a custom chip for practically anything using Verilog and FPGA technology.
There are programmable chips like the TMS320C6701 DSP that can reach over 1 GFLOPS of sustained floating-point performance and cost only a few thousand dollars to prototype with.
Havok is still the industry-leading middleware physics developer, and I would not be surprised if they decided to spend two or three million dollars on research and development of their own hardware solution.


How does a PPU accelerate physics calculations, other than by making a dedicated PU available?

--www.physicaluncertainty.com
--linkedin
--irc.freenode.net#gdnet

The same way DirectX and OpenGL accelerate graphics for any hardware developer.
The same way any game developer that can license the PS2 can write vector unit code.
The same way any programmer writes SSE code.
The same way anybody can write AltiVec code.
The same way any developer can write code for the PS3 SPEs.
The same way a developer can program the PSP.


Quote: Original post by Anonymous Poster
The same way DirectX and OpenGL accelerate graphics for any hardware developer.
The same way any game developer that can license the PS2 can write vector unit code.
The same way any programmer writes SSE code.
The same way anybody can write AltiVec code.
The same way any developer can write code for the PS3 SPEs.
The same way a developer can program the PSP.

Uhh... what is C and Assembly?

Beginner in Game Development?  Read here. And read here.

 

In other words, by making an instruction set available (even if on a royalty basis), so that other developers can write drivers for existing APIs.
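For example, this is what programming against a published instruction set looks like in practice: a minimal SSE sketch (a hypothetical physics-style workload, not Ageia's API):

```cpp
#include <xmmintrin.h> // SSE intrinsics: x86's published SIMD instruction set

// Apply a gravity impulse to four velocities at a time. The point is only
// that a documented instruction set lets third parties write their own code
// (or drivers) against the hardware. Sketch assumes count is a multiple of
// 4 and the pointer is 16-byte aligned.
void addGravity(float* vy, int count, float dt)
{
    const __m128 g = _mm_set1_ps(-9.81f * dt); // broadcast the impulse
    for (int i = 0; i < count; i += 4) {
        __m128 v = _mm_load_ps(vy + i); // load 4 aligned floats
        v = _mm_add_ps(v, g);           // 4 additions in one instruction
        _mm_store_ps(vy + i, v);        // store them back
    }
}
```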


So what kind of instruction would you use in a PPU that you wouldn't have in a GPU?

--www.physicaluncertainty.com
--linkedin
--irc.freenode.net#gdnet

Well, I’ve read this entire thread now and it seems we have two arguments – one side being bogart’d by Codified.

So we have the side of the masses:
1) Having an extra dedicated PU is going to do nothing but allow devs to create more enriched games by freeing up the CPU for other tasks.

Then we have Codified’s:
2) A PPU is pointless because the second (or third or fourth) core on our CPU is going to be “adequate” (your exact word) to do all that the PPU does.

First of all, I remember when 640K of system memory was “adequate”. Nothing is ever adequate for long. If you don’t realize this... then you, sir, are a moron. I apologize for my childish language.

Secondly, no matter how many cores you have, a dedicated PU will always beat out a general-purpose CPU, sync and bandwidth issues aside. And you still have those issues with multi-threading anyway. (I mean, are you kidding me? Do you not see this?)
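Just to illustrate where those sync issues come from, here's a minimal sketch of physics split across two cores (modern C++ threads for brevity; not any shipping engine's code). Note the mandatory join before collision detection can proceed:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float px, py, vx, vy; };

// Naive "physics on the second core" split: each thread integrates half the
// bodies, then both halves must join before collision detection can run.
// That join is exactly the sync cost mentioned above; a dedicated PPU keeps
// this pipeline on-chip instead of stalling a general-purpose core on it.
void stepParallel(std::vector<Body>& bodies, float dt)
{
    auto integrate = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            bodies[i].vy += -9.81f * dt;       // gravity
            bodies[i].px += bodies[i].vx * dt;
            bodies[i].py += bodies[i].vy * dt;
        }
    };

    const std::size_t mid = bodies.size() / 2;
    std::thread worker(integrate, std::size_t(0), mid); // second core: one half
    integrate(mid, bodies.size());                      // this core: the other
    worker.join();                                      // sync point before collisions
}
```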

Finally, the success of the PPU is merely a function of how it is used by developers, and nothing else. This I can assure you! If devs create “really good” games that show a solid performance benefit when employing the PPU, then the PPU will most certainly sell. I think $250 is a tad expensive, but the price should come down over time. Drop it to $100 and you've got a mass-market seller. I'd argue that at least 3 “really good” games showing a “high degree” of performance gain when using the PPU are required for it to go “mainstream” by gamer standards, particularly if each game is of a different genre: FPS, RTS, RPG (Oblivion).

Anyway, let me close with two last statements: it is up to developers to take advantage of the PPU (PhysX API) in such a way that it causes consumers to buy it, and Codified is an idiot. [Oh wait, my bad, I didn’t mean to say that. My apologies.]

My two cents

