Graph theory
First, I should state that in university I've never had any contact with this topic; my course doesn't require any knowledge of it.
But talking to some of my friends who are taking programming and engineering courses, they gave me some "pointers" on the topic and, well, it aroused my curiosity, so I've been doing a little research. But I really don't understand what use this topic has. Probably my problem comes from the fact that all I've seen are various theories on how many cycles there might be in some mesh, a small application of the greedy routine, and some other little theories like that, which absolutely didn't bring anything new to the world of non-theoretical mathematicians...
So I'd like to know: what can this topic do for me?
Also, what should I be looking at? I'm not majoring in theoretical mathematics, so some of those less useful theories are not of interest to me. I really want the big ugly dosh and not the famous "what's his name" theory on some minor application.
(Comparatively, if we're talking about books, I'd like an Eichberger and not a Gibbons.)
I suspect that there is a lot in this topic which is extremely useful for many applications, but I don't know what it is, so maybe someone out there can give me (and other people) some pointers...
I'd greatly appreciate internet links, if possible...
Being punctual is only making your mistake on time...
Murphy
1 to 3 chefs make a good restaurant; 100 chefs make a McDonalds
Tim Sweeney
A graph is a natural way of representing many situations.
1. Let's say you are writing the AI for a game. The scene is a castle. If an agent wants to go to some room, it needs some description of the castle that lets it decide how to get there. That description can be a graph in which each node is a room and two nodes are connected if the corresponding rooms are connected. You can then use some path-finding algorithm (Dijkstra, A*, ...) to trace a path from the agent's current position to the destination room (a minimal Dijkstra sketch follows this list).
2. A game like chess can be viewed as a graph. Each position is a node, and two nodes are linked if the transition from one position to the other is legal. This would be a directed graph. Game values are assigned to each node with some simple rules: a checkmated position is a win for the side that just moved, a position is lost if all of its successors are won positions for the opponent... The whole graph is too big to deal with on a computer, so you usually only search a small sub-graph, using heuristics to approximate the game values of the nodes where you stop (the leaves of your search tree). Alpha-beta is the main algorithm for doing this kind of search. Note that if only a few (say, 4) pieces are left on the board, there aren't that many nodes left in the game graph, so you can effectively compute the theoretical game values for all positions with that few pieces. Look for chess endgame databases (a.k.a. tablebases).
3. A computer program is a graph. Each command is a node, and two nodes are connected if the corresponding commands can be executed one after the other. An execution of the program is a path in the graph. This approach is sometimes used when studying an algorithm's performance. Knuth explains this in "The Art of Computer Programming", I think.
4. When a task is really big, you split it into sub-tasks. Some tasks require that other tasks be finished before they can be started. You can represent that with a graph. For instance, when you compile a project but you have only modified a few files since the last compile, you don't need to recompile the whole thing. If the project description has a graph of the dependencies, the problem of finding a suitable order in which files can be compiled is called "topological sorting" (and is quite simple; a sketch of it also follows this list).
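For concreteness, here is a minimal Python sketch of Dijkstra's algorithm on a made-up castle graph (the room names and costs are invented for illustration, not taken from any real game):

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict mapping node -> list of (neighbour, edge_cost) pairs."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    # Walk the predecessor chain backwards to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

castle = {
    "great hall": [("corridor", 1), ("kitchen", 2)],
    "corridor":   [("great hall", 1), ("armoury", 3), ("kitchen", 1)],
    "kitchen":    [("great hall", 2), ("corridor", 1)],
    "armoury":    [("corridor", 3)],
}
print(dijkstra(castle, "great hall", "armoury"))  # ['great hall', 'corridor', 'armoury']
```

And here is a sketch of topological sorting (Kahn's algorithm) for a hypothetical build-dependency graph; the file names and dependencies are made up:

```python
from collections import deque

def topological_sort(deps):
    """deps: dict mapping file -> list of files it depends on."""
    indegree = {f: 0 for f in deps}       # number of unfinished dependencies
    dependents = {f: [] for f in deps}    # reverse edges: who waits on this file
    for f, needed in deps.items():
        for d in needed:
            indegree[f] += 1
            dependents[d].append(f)
    ready = deque(f for f, n in indegree.items() if n == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for g in dependents[f]:
            indegree[g] -= 1
            if indegree[g] == 0:
                ready.append(g)
    if len(order) != len(deps):
        raise ValueError("cyclic dependency")  # a cycle means no valid build order
    return order

build = {
    "math.c":    [],
    "physics.c": ["math.c"],
    "render.c":  ["math.c"],
    "game.c":    ["physics.c", "render.c"],
}
print(topological_sort(build))  # e.g. ['math.c', 'physics.c', 'render.c', 'game.c']
```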
Graphs are also used to represent computer networks, electronic circuits, water-distribution systems, text parsers, error-correcting decoders... And compilers use a graph-colouring algorithm to allocate registers. You are right: graphs are extremely useful. As for graph theory, there are some things that I haven't found a use for yet (e.g., Ramsey theory).
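To give an idea of the register-allocation connection, here is a minimal sketch of greedy colouring on a hypothetical interference graph (variables that are live at the same time conflict and must get different registers); the variable names and conflicts are invented, and real allocators are far more sophisticated:

```python
def greedy_colouring(interference):
    """interference: dict mapping variable -> set of variables it conflicts with."""
    colour = {}
    for var in interference:                  # a fixed ordering; real allocators order more carefully
        used = {colour[n] for n in interference[var] if n in colour}
        c = 0
        while c in used:                      # pick the smallest free colour (register)
            c += 1
        colour[var] = c
    return colour

conflicts = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(greedy_colouring(conflicts))  # {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```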
I feel like there is not much point in learning how to use a tool before you have a problem to apply it to. On the other hand, if you know some graph theory, you will see where to apply it more easily. My advice: read one or two books that talk about graphs from a programming perspective, and only learn the hardcore math when/if you need it. "Algorithms in C, Part 5: Graph Algorithms" by Robert Sedgewick is what I would read if I were you.
Graph theory is closely tied to linear algebra (every graph can be represented by its adjacency matrix), so much of what is useful in linear algebra carries over to graph theory, and vice versa.
But when computer science guys talk about graph theory, most of the time they mean either network flow or pathfinding.
There are other interesting things in combinatorics or theoretical computer science that graph theory is helpful with, though. For instance, Turing machines or finite automata look like graphs, and I believe you can model grammars that way for some interesting results if you're a compiler guy.
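As a tiny illustration of the automaton-as-graph view, here is a made-up example in Python: states are nodes and each labelled transition is an edge. The automaton below (which accepts binary strings with an even number of 1s) is just an invented example:

```python
transitions = {          # (state, symbol) -> next state, i.e. one labelled edge per entry
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}

def accepts(word, start="even", accepting=("even",)):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]   # follow one edge of the graph
    return state in accepting

print(accepts("1010"))  # True: two 1s
print(accepts("111"))   # False: three 1s
```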
alvaro: Here's a neat paper applying Ramsey theory to game theory: http://www.math.tau.ac.il/~eilons/ramsey7.pdf
Euler developed large parts of graph theory, and there is a lot still named after him, for example Eulerian networks (no vertices of odd degree), semi-Eulerian networks (exactly two odd-degree vertices, the rest even), etc.
He came up with his ideas to solve an old problem that had troubled people for a long time, called the Bridges of Königsberg. The problem works from an aerial, simplified view of the city of Königsberg, which is made up of a number of land masses connected by a number of bridges. The problem asked whether it was possible for someone to walk through the city, crossing every bridge exactly once, and return to their starting point.
Euler worked on this and drew up a network that helped him solve it. He showed that because more than two of the nodes (land masses) have odd degree (i.e. an odd number of bridges connecting them to the others), the problem is impossible. It stands to reason if you think about it logically: each time a walk passes through a node it uses up a pair of bridges, one to arrive and one to leave, so the bridges at every node you pass through must come in pairs.
Euler worked out that:
a connected graph has an Eulerian cycle <=> every vertex has even degree
and graph theory was born :-) Just goes to show how useful this is, as up until that point mathematicians had no idea how to prove whether the problem was possible or not.
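Euler's criterion is simple enough to check in a few lines. Here is a minimal Python sketch on an edge list that mimics the Königsberg layout (four land masses, seven bridges); connectivity is assumed rather than checked to keep it short:

```python
from collections import Counter

def has_eulerian_cycle(edges):
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Euler's criterion: every vertex must have even degree
    # (the graph is assumed to be connected in this sketch).
    return all(d % 2 == 0 for d in degree.values())

koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
print(has_eulerian_cycle(koenigsberg))  # False: every land mass has odd degree
```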
quote:
Original post by CheeseGrater
But when computer science guys talk about graph theory, most of the time they mean either network flow or pathfinding.
You've omitted one of the really BIG areas of computer science these days... that of causal graphs and their sibling, belief networks.

As to what aspect of graph theory you will find most useful, that really depends on what field you end up working in (or pursuing your hobbies in). Take a look at why you want to learn graph theory and then grab an appropriate book from that field as a starting point.
Cheers,
Timkin