
Random thoughts on cognition

Started by bishop_pass, June 16, 2001 12:09 AM
While driving down the highway last night, I was mulling over the concept of agent architectures, cognition, learning, etc. A lofty goal of many is to produce a programmable architecture for an agent which enables reactive behavior, rational thought, ease of programming, efficient processing, an input and output cycle, the ability to plan, the ability to maintain beliefs about the world, etc. I was just tossing ideas about in my head and arrived at some half-baked conclusions which I thought were interesting. Whether any of these are genuinely new or not, I don't know.

I think an agent should always have an agenda. The agenda is a list of goals to be achieved, and it is the driving force behind the agent's actions. One goal on the agenda may be as simple as logically deducing a conclusion; another may be as broad and sweeping as "live a long and prosperous life." Other goals are naturally put on the agenda as subgoals of the larger goals. Anyway, nothing really new there.

The agent senses the world through its inputs. Incoming information may sometimes solve certain goals on the agenda, or it may cause new goals to be put on the agenda. Again, nothing really new there.

I was thinking that goals on the agenda naturally link to the nodes in memory that those goals are related to, and that these links should be tagged as 'hypersensitive', indicating a heightened sense of awareness with regard to those nodes and links.

Now, any incoming information or internal processing which produces a connection to one of these 'hypersensitive' nodes or links should cause an 'insight' to occur. Such an insight gives the observed input or internal thought significance, which in turn gives that observed fact or thought a much higher chance of being remembered as a long-term memory item, or of being cached as a link to aid recall next time around. I believe this is analogous to deliberately remembering observed data, or gaining a better conceptualization of data already in one's head.

Any thoughts? And please, no comments about when I am going to implement this or what relevance this has to games. This is a forum for discussion of AI, so here I am discussing AI.

Edited by - bishop_pass on June 16, 2001 1:11:54 AM
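To make the agenda / hypersensitive-node idea concrete, here is a minimal Python sketch. Every name in it (Agent, Goal, MemoryNode, hypersensitive, observe) is an illustrative placeholder for the idea described above, not an existing architecture or library.

```python
# Minimal sketch: goals on the agenda tag their related memory nodes as
# 'hypersensitive'; an observation touching such a node fires an 'insight'
# and is promoted toward long-term memory.

class MemoryNode:
    def __init__(self, label):
        self.label = label
        self.hypersensitive = False  # heightened awareness: goal-relevant
        self.long_term = False       # has this been promoted to long-term memory?


class Goal:
    def __init__(self, description, related_nodes=()):
        self.description = description
        self.related_nodes = list(related_nodes)


class Agent:
    def __init__(self):
        self.agenda = []   # list of Goal objects; the driving force behind actions
        self.memory = {}   # label -> MemoryNode

    def add_goal(self, goal):
        """Put a goal on the agenda and tag its related memory nodes as hypersensitive."""
        self.agenda.append(goal)
        for node in goal.related_nodes:
            node.hypersensitive = True

    def observe(self, fact, touched_labels):
        """Process an incoming observation.

        If it connects to any hypersensitive node, an 'insight' fires and the
        fact gets a much higher chance of being retained (here, promoted
        straight to long-term memory for simplicity).
        """
        for label in touched_labels:
            node = self.memory.get(label)
            if node is not None and node.hypersensitive:
                print(f"insight: '{fact}' connects to goal-relevant node '{node.label}'")
                node.long_term = True


# Example: a broad goal makes the 'water source' node hypersensitive, so a later
# observation touching that node is flagged as significant and remembered.
agent = Agent()
water = MemoryNode("water source")
agent.memory[water.label] = water
agent.add_goal(Goal("live a long and prosperous life", [water]))
agent.observe("there is a river beyond the ridge", ["water source"])
```

In this sketch the insight simply sets a flag, but the same hook is where a real system could strengthen links, adjust recall weights, or spawn new subgoals on the agenda.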
_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.

