One thing in Oluseyi's last post really caught my eye: the bit about peripheral vision.
The lack of peripheral vision is actually my major gripe with ALL first-person games.
Games (and computers in general) need to switch to a widescreen aspect ratio. I would like to see more games REQUIRE 16:9 (or wider) to play, and anyone stuck on a 4:3 monitor or TV would have to live with black bars. Movies finally got it right with DVDs. How long until widescreen monitors are the norm? I hope it is soon. Once this is done, games can have an FOV of more than 90 degrees without looking distorted. Of course, having a bigger screen helps too.
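For reference, the usual relationship between vertical FOV, aspect ratio, and horizontal FOV can be sketched like this (a minimal C++ illustration; the 60-degree vertical FOV is just an example, not a value from this discussion):

```cpp
#include <cmath>
#include <cstdio>

// Horizontal FOV implied by a fixed vertical FOV and a given aspect ratio.
// Illustrates why a 16:9 display can show a wider slice of the world than
// 4:3 without stretching the image.
double horizontalFov(double verticalFovDeg, double aspect)
{
    const double pi = 3.14159265358979323846;
    double v = verticalFovDeg * pi / 180.0;
    double h = 2.0 * std::atan(std::tan(v / 2.0) * aspect);
    return h * 180.0 / pi;
}

int main()
{
    std::printf("4:3  -> %.1f deg horizontal\n", horizontalFov(60.0, 4.0 / 3.0));
    std::printf("16:9 -> %.1f deg horizontal\n", horizontalFov(60.0, 16.0 / 9.0));
}
```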
I hope console game systems will make the move to widescreen pretty soon. I've noticed that some games already have the option for widescreen, but you rarely gain anything from it (I've seen some just chop the top and bottom off, which causes you to LOSE something).
Computers will probably be last to make the transition to widescreen, and even then it may not happen. Maybe once we are using the same displays for TVs and computers, it will happen.
--TheMuuj
quote: Original post by TheMuuj
I would like to see more games REQUIRE 16:9 (or higher) to play...
No!! Viva la golden rectangle!
Well, 4:3 is by no means the golden ratio (but close enough, I suppose). You can always just buy widescreen monitors and use them with 4:3 resolutions (which will put black bars on the sides). Then you can use the extra space for post-it notes. :-D
--TheMuuj
Widescreen doesn't really represent peripheral vision correctly, and as such would be of little use in improving FPS gameplay. The problem is that the image is being projected onto a fixed rectangle which we then observe from our position. Our real-life peripheral vision still registers movement beyond the extents of the screen, no matter its aspect ratio.
VR goggles/helmets, perhaps...
Just a small point... going back to something said by TheMuuj about building perfect vs dumb AI....
One might consider a perfect AI as being an AI that has access to the entire game state vector at any time. Given this, one can presumably choose an optimal set of NPC actions based on a predefined 'script' of gameplay. This is certainly what happens in many games (sports or otherwise) where NPCs use stock standard moves to effect some result.
Making the AI 'dumb' may be as simple as choosing a sub-optimal action, which again is often what I see displayed in games.
It would seem to me that the best course of action would be to design an AI that computes the optimal action from limited information about the game state. This falls into the AI field of Reasoning Under Uncertainty and is a hot topic in planning for autonomous agents. This has been attempted: the SOAR QuakeBot is a good example. It would seem appropriate to me to extend this notion to cooperative team sports, where each NPC's action is based on its perception of the game state.
The difficulty, of course, comes down to a reasonable internal representation of the game state that allows for uncertainty. Human players have the benefit of exceptional visual skills. We can see a series of snapshots of the game state (displayed graphically) and interpolate a smoothly varying scene (if the frame rate is high enough!). From this we can extrapolate to possible future states. This is harder for a computer but certainly not impossible. Indeed, I can visualise a fairly simple Switching State Space model for extrapolating the motion of NPCs based on a time series of position observations; this is done now in modern radar systems for tracking aircraft. Computationally it's fairly simple stuff and not too CPU intensive!
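As a very rough illustration of the extrapolation idea (far simpler than a real switching state-space model; purely a sketch with made-up numbers):

```cpp
#include <cstdio>

// A much simpler stand-in for the switching state-space idea above:
// estimate an NPC's velocity from its last two observed positions and
// extrapolate where it will be a short time into the future.
struct Vec2 { double x, y; };

Vec2 extrapolate(const Vec2& prev, const Vec2& curr, double dt, double lookahead)
{
    // Finite-difference velocity estimate from two position snapshots.
    Vec2 vel = { (curr.x - prev.x) / dt, (curr.y - prev.y) / dt };
    // Project forward assuming the velocity stays roughly constant.
    return { curr.x + vel.x * lookahead, curr.y + vel.y * lookahead };
}

int main()
{
    Vec2 prev = { 0.0, 0.0 };
    Vec2 curr = { 1.0, 0.5 };                    // observed 0.1 s later
    Vec2 guess = extrapolate(prev, curr, 0.1, 0.5);
    std::printf("predicted position: (%.1f, %.1f)\n", guess.x, guess.y);
}
```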
Anyway, enough of my ideas... please continue... I'm quite enjoying this thread!
Timkin
I have yet to do much game AI coding myself, but I have read my fair share of information on the subject. I have an idea, though, and I wonder if any game uses something like it.
Rather than basing its decisions on the present state, the AI could base its decisions on a state from the past (anywhere from about 1/3 of a second to about 3 seconds, depending on skill). No human has an instantaneous reaction time, so why should a computer?
And if this were to be implemented, would the easiest way be to simply store a circular list (well, a vector would work) of game states (or at least the part of the state that the AI needs to know about)?
And then you could actually make the AI use all of the states before the one it is "seeing" to interpolate what is currently happening and what will happen. So in a game like Quake, the AI would actually be a little lagged, and would have to adjust accordingly.
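A minimal sketch of that circular list of snapshots, assuming C++; GameState here is a placeholder for whatever subset of the state the AI actually needs:

```cpp
#include <vector>
#include <cstddef>

// Sketch of the "lagged AI" idea: keep a ring buffer of recent game-state
// snapshots and let the AI read the one from a few ticks ago instead of
// the current one.
struct GameState { /* positions, velocities, scores, ... */ };

class StateHistory {
public:
    explicit StateHistory(std::size_t capacity) : buffer(capacity), head(0), count(0) {}

    void push(const GameState& s)               // called once per AI tick
    {
        buffer[head] = s;
        head = (head + 1) % buffer.size();
        if (count < buffer.size()) ++count;
    }

    // State as it looked `ticksAgo` updates in the past (clamped to oldest).
    const GameState& lookBack(std::size_t ticksAgo) const
    {
        if (ticksAgo >= count) ticksAgo = count ? count - 1 : 0;
        std::size_t idx = (head + buffer.size() - 1 - ticksAgo) % buffer.size();
        return buffer[idx];
    }

private:
    std::vector<GameState> buffer;
    std::size_t head, count;
};
```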
Or is this too much trouble for such a minor difference?
--TheMuuj
quote: Original post by Timkin
It would seem to me that the best course of action would be to design an AI that computes the optimal action from limited information about the game state. This falls into the AI field of Reasoning Under Uncertainty and is a hot topic in planning for autonomous agents. This has been attempted: the SOAR QuakeBot is a good example. It would seem appropriate to me to extend this notion to cooperative team sports, where each NPC's action is based on its perception of the game state.
I had been toying with this idea for my "upcoming" sports game project (basketball). The logic is that players (in sports games we have to distinguish between "players" - the virtual athletes - and the gamer) only have so much information, and then make decisions based on that information and on their skills/personality/etc. This will make those ambiguous "awareness" ratings seen in most sports games more meaningful; they will represent, in some form, how frequently the player attempts to reacquire information about the general game state. Thus, a more aware player would be less likely to commit silly turnovers by stepping out of bounds due to carelessness, or to lose track of time and miss last-second opportunities. The thing is, I also want these attempts to acquire game-state information to be consistent with the physical representation of the game - if a player's field of view is blocked (and this will be determined by some vector math/heuristics), he may not be able to acquire an estimate of how much time is left on the shot clock, for example, or to update where a teammate is, and as such might hold on to the ball for too long or pass to a previous position. This rewards the gamer by making strategies like "pressure defense" more effective and integral.
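A rough sketch of the kind of vector-math visibility check being described, assuming a simple 2-D view cone (the function name and threshold are illustrative, not from the post):

```cpp
#include <cmath>

// Rough sketch of a "can this player see that object?" heuristic: the
// target must fall inside the player's view cone.
struct Vec2 { double x, y; };

bool inFieldOfView(const Vec2& eyePos, const Vec2& facingDir,
                   const Vec2& targetPos, double halfAngleDeg)
{
    Vec2 toTarget = { targetPos.x - eyePos.x, targetPos.y - eyePos.y };
    double len = std::sqrt(toTarget.x * toTarget.x + toTarget.y * toTarget.y);
    if (len < 1e-6) return true;                 // standing on top of it

    double flen = std::sqrt(facingDir.x * facingDir.x + facingDir.y * facingDir.y);
    // Cosine of the angle between the facing direction and the direction to the target.
    double cosAngle = (toTarget.x * facingDir.x + toTarget.y * facingDir.y)
                      / (len * flen);
    double cosHalf = std::cos(halfAngleDeg * 3.14159265358979323846 / 180.0);
    return cosAngle >= cosHalf;                  // inside the view cone
}
```

An occlusion test (e.g. a line-of-sight ray against other players) would sit alongside this, but the cone check alone already captures the "he didn't see the shot clock" case.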
quote: The difficulty, of course, comes down to a reasonable internal representation of the game state that allows for uncertainty. Human players have the benefit of exceptional visual skills. We can see a series of snapshots of the game state (displayed graphically) and interpolate a smoothly varying scene (if the frame rate is high enough!). From this we can extrapolate to possible future states. This is harder for a computer but certainly not impossible. Indeed, I can visualise a fairly simple Switching State Space model for extrapolating the motion of NPCs based on a time series of position observations; this is done now in modern radar systems for tracking aircraft. Computationally it's fairly simple stuff and not too CPU intensive!
Precisely. While I hadn't reasoned it out as fully as you have (and don't have the experience to intuitively refer to other models, such as radar), I think you've provided an excellent suggestion for implementation. To balance this effect (i.e., to mitigate the effects of the "all-seeingness" of the gamer), the same kinds of losses of acuity would need to be simulated for a gamer-controlled player, most likely in the form of loss of accuracy. For example, if a gamer-controlled athlete has the ball but is trapped in the corner and being triple-teamed, an attempt at a pass out of this situation is more likely to be blocked by one of the pressuring defensive players or intercepted by a player out on the court (thanks to the anticipation model I briefly described earlier).
As I write this, my roommate is playing NBA Live 2000, and the game's deficiencies are fuelling my creativity!
quote: Anyway, enough of my ideas... please continue... I'm quite enjoying this thread!
I've found your (brief) contribution to be very valuable. You've helped me concretize some of my ideas.
quote: Original post by TheMuuj
Rather than basing its decisions on the present state, the AI could base its decisions on a state from the past (anywhere from about 1/3 of a second to about 3 seconds, depending on skill). No human has an instantaneous reaction time, so why should a computer?
Agreed, there is something we call "reaction time", and reaction time should affect AI players just as much as human players. However, reaction time isn't due to a "cognitive delay", but rather to a physical delay - the time necessary to overcome inertia and get the muscles in motion. This reaction time decreases significantly with training, strength and "quickness" - a semi-intangible that the greatest athletes all possess. For all intents and purposes, reaction time can be ignored for professional athletes (and thus for simulations of professional athletes), especially since physical considerations by far outweigh reaction time as a factor. Overcoming your current velocity (inertia) to start moving in a different direction affects results far more than when you perceive and recognize a threat.
That said, I think my next statement/idea will mesh perfectly with what you're trying to achieve. If a player acquires a "snapshot" of the game state (with physical limitations as detailed in my previous post) at time t and has an awareness that won't result in another acquisition until time t + T, then the player's mental image effectively decays over that interval and will naturally result in misplays.
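One way this could look in code, purely as a sketch: a remembered snapshot whose confidence decays with age, plus an awareness rating that sets the refresh interval (the exponential decay and the 0.25-3 second range are assumptions for illustration, not values from the post):

```cpp
#include <cmath>

// Sketch of the decaying-snapshot idea: confidence in a remembered position
// falls off with the time since it was last observed.
struct Memory {
    double lastSeenTime;     // when the snapshot was taken
    double posX, posY;       // remembered position at that time
};

// 1.0 right after observation, approaching 0.0 as the snapshot goes stale.
double confidence(const Memory& m, double now, double halfLifeSeconds)
{
    double age = now - m.lastSeenTime;
    return std::exp(-0.6931 * age / halfLifeSeconds);   // ln 2 ~= 0.6931
}

// A more "aware" player refreshes more often, so snapshots never get stale.
double refreshInterval(double awareness /* 0..1 */)
{
    return 0.25 + (1.0 - awareness) * 2.75;   // 0.25 s to 3 s between looks
}
```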
quote: And if this were to be implemented, would the easiest way be to simply store a circular list (well, a vector would work) of game states (or at least the part of the state that the AI needs to know about)?
I actually think the gamestate would/should be stored as an aggregation of the positions, velocities and "apparent intentions" of all objects (at least for my game), and each AI in the game would have a single gamestate variable. Not all the values in the gamestate will be accurate - or even valid!
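A possible shape for that per-AI game-state variable, sketched in C++ (all names are illustrative; each AI holds its own copy, and entries may be stale or wrong):

```cpp
#include <vector>

// What the AI believes another object is up to, guessed from animation/context.
enum class Intent { Unknown, Cutting, Screening, Shooting, Passing };

struct PerceivedObject {
    int    objectId;
    double posX, posY;       // last position this AI believes it saw
    double velX, velY;       // estimated velocity at that time
    Intent apparentIntent;   // guessed intention
    double lastUpdated;      // game time of the observation
};

struct PerceivedGameState {
    std::vector<PerceivedObject> objects;   // one entry per tracked player/ball
    double shotClockEstimate;               // may be wrong if the clock was occluded
};
```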
quote: And then you could actually make the AI use all of the states before the one it is "seeing" to interpolate what is currently happening and what will happen. So in a game like Quake, the AI would actually be a little lagged, and would have to adjust accordingly.
Interpolation and estimation are fairly intrinsic to generating top-notch sports AI. A player intending a pass to a teammate should look at the teammate's current position, estimate the distance of the pass (and the time necessary to complete it), and estimate the intersection of the pass path and the player's path. If none can be found within a response cycle, a "mature" player would choose an alternate course of action, but an inexperienced player would try to force the pass.
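That intersection estimate reduces to a small quadratic if we assume the teammate keeps a constant velocity and the pass travels at a fixed speed; a sketch (the names and the constant-velocity assumption are illustrative):

```cpp
#include <cmath>

// Earliest time at which a pass launched now can meet a teammate who keeps
// running at constant velocity. Returns a negative value if no meeting
// point exists (a "mature" player would then pick another option).
struct Vec2 { double x, y; };

double passMeetTime(const Vec2& passer, const Vec2& receiver,
                    const Vec2& receiverVel, double passSpeed)
{
    Vec2 d = { receiver.x - passer.x, receiver.y - passer.y };
    double a = receiverVel.x * receiverVel.x + receiverVel.y * receiverVel.y
               - passSpeed * passSpeed;
    double b = 2.0 * (d.x * receiverVel.x + d.y * receiverVel.y);
    double c = d.x * d.x + d.y * d.y;

    double t;
    if (std::fabs(a) < 1e-9) {                   // speeds equal: linear case
        if (std::fabs(b) < 1e-9) return -1.0;
        t = -c / b;
    } else {
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return -1.0;             // pass can never catch up
        double root = std::sqrt(disc);
        double t1 = (-b - root) / (2.0 * a);
        double t2 = (-b + root) / (2.0 * a);
        t = (t1 > 0.0 && (t1 < t2 || t2 <= 0.0)) ? t1 : t2;   // smallest positive root
    }
    return (t > 0.0) ? t : -1.0;
}
```

If the returned time is longer than the player's response cycle (or negative), the AI can fall back to another action, as described above.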
quote: Or is this too much trouble for such a minor difference?
Personally, I think it makes a tremendous difference to the way a game plays and the whole experience - a tremendously positive one. Keep the ideas coming!
quote: Original post by Oluseyi
That said, I think my next statement/idea will mesh perfectly with what you're trying to achieve. If a player acquires a "snapshot" of the game state (with physical limitations as detailed in my previous post) at time t and has an awareness that won't result in another acquisition until time t + T, then the player's mental image effectively decays over that interval and will naturally result in misplays.
You've latched onto a very important phenomenon in AI/information theory, Oluseyi. I published a paper two years ago that showed the relationship between particular probabilistic models of states of a system and diffusion processes. The paper is fairly technical, with a bit of advanced mathematics, but if you're interested, it's called "Efficient Inference in Dynamic Belief Networks with Variable Temporal Resolution", T. A. Wilkin & A. E. Nicholson, Proceedings of the 6th Pacific Rim International Conference on Artificial Intelligence (PRICAI) 2000. (As a shameless plug, the paper was voted Best Paper for the conference!)
The fundamental result is that in the absence of observations of a system, knowledge about the particular state diffuses... in probability terms, the joint density function diffuses... this diffusion is governed by the same mathematical models that govern diffusion of smoke, or heat, for example.
It is fairly easy to incorporate this effect into a dynamic probabilistic model of a system... the only difficulty being that the cost of inference grows exponentially with the complexity of the system being modelled.
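A toy numerical illustration of that diffusion effect (not the model from the paper, just a 1-D sketch with made-up numbers): with no observations, the predicted mean drifts with the last known velocity while the variance of the belief grows each step.

```cpp
#include <cstdio>

// With no new observations, the mean of a 1-D Gaussian belief about a
// position drifts with the last believed velocity, while its variance
// grows each step -- the density spreads out like heat.
int main()
{
    double mean = 10.0;                 // last believed position (metres)
    double velocity = 1.5;              // last believed velocity (m/s)
    double variance = 0.1;              // initial uncertainty
    const double processNoise = 0.5;    // variance added per second unobserved
    const double dt = 1.0;

    for (int step = 1; step <= 5; ++step) {
        mean += velocity * dt;                  // prediction of the mean
        variance += processNoise * dt;          // belief spreads out
        std::printf("t=%d s  mean=%.1f  variance=%.2f\n", step, mean, variance);
    }
}
```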
If you want more details, I might be able to publish a short, non-technical summary of the paper on GD.net sometime in the near future.
Cheers,
Timkin
quote: Original post by Timkin
You've latched onto a very important phenomenon in AI/information theory, Oluseyi. I published a paper two years ago that showed the relationship between particular probabilistic models of states of a system and diffusion processes. The paper is fairly technical, with a bit of advanced mathematics, but if you're interested, it's called "Efficient Inference in Dynamic Belief Networks with Variable Temporal Resolution", T. A. Wilkin & A. E. Nicholson, Proceedings of the 6th Pacific Rim International Conference on Artificial Intelligence (PRICAI) 2000. (As a shameless plug, the paper was voted Best Paper for the conference!)
Fascinating! I must say that I'm quite pleased that my seemingly ad hoc and hodge-podge ruminations on AI principles correlate with mathematically rigorous theories, propositions and definitions.
quote: this diffusion is governed by the same mathematical models that govern diffusion of smoke, or heat, for example.
Hmm... Time to whip out "Ye Olde Physics Textbooke". I'll be trying out some simple experiments this summer (I'm mired in other, mostly user-interface work for now), so I'll be sure to keep this in mind.
quote: If you want more details, I might be able to publish a short, non-technical summary of the paper on GD.net sometime in the near future.
Absolutely. I would love to see it as an article, in part because I think too many of the articles focus on the glamorous and popular areas of 3D graphics and animation, and application frameworks. We need more articles that are designed to be contemplated rather than followed - more discussions and fewer tutorials.