
The difference between Logic, Reasoning, and Thinking; Data, Information, and Knowledge?

Started by September 12, 2014 01:30 AM
19 comments, last by Nypyren 10 years, 2 months ago
Thanks Algorithmic! Great info. I've been introduced to fuzzy logic already, and I really like it. I will look into the other terms you've mentioned, and check out the book. Thanks again.

They call me the Tutorial Doctor.

Why have you chosen these six terms? For the first group there are other terms like them, such as Deduction, Analysis, and Inference. For the second group the synonyms are Wisdom, Understanding, Experience, and Facts.
I doubt that anyone can define clearly how they differ. So, what do you need the definitions for?

@comfy chair.

In programming, you deal with data all of the time, but I wanted to know the difference between data and knowledge. Soon I found that there is very little I know about how data is processed, stored, recalled, etc.

Humans do this naturally. I found that I hadn't truly understood the complexity of our own intelligence well enough to teach a computer how to think.

I actually have started a document to help myself clarify these definitions, and I ended up with lots more.

Wisdom, understanding, facts, knowledge, data, information, etc., turn out to have quite different meanings.

So, I want to explore the deeper facets of human processing first.

Here is the doc so far:

http://markdownshare.com/view/dc13dfea-1a2d-4831-980d-e515a1603249

They call me the Tutorial Doctor.

Also be sure to consider: http://en.wikipedia.org/wiki/Domain_of_discourse

Pretty much any concept can be evaluated to mean different things (and be true or false) depending on the context you evaluate it in. Here are some example statements you could try evaluating for truth, each obviously affected by the current context (by no means comprehensive):

Time:
"The president is Abe Lincoln." -> If you said this in the past, it would have been true then, but it's not now. But it's nuanced: If you found a document from back then with this written in it, what the document is saying is true in the context of the document itself, even if you're reading that document today where the statement by itself is false. The same thing can be said of fictional stories - they have internal consistencies which may make no sense when compared against our reality, but which make perfect sense in the story.

Place:
"It's raining." -> True where I am right now, but obviously not true in various other places.

Universe (fictional, mental, real, etc):
"I cast magic missile!" -> You're talking about a game (I hope).
Whew Nypyren, good save! Of course, I have to consider the domain!

That slipped right past me. Thanks for the term too.

This is, of course, a conversation about games, so the domain matters.

Your place example makes me wonder more. The statement, "it is raining" sounds very fuzzy.

So using Boolean Logic in this case doesn't seem right. It isn't an accurate statement in the domain of everything.

Perhaps a term like "local truth" is allowable here?

"It is raining" is missing a lot of key information. Perhaps though we make the inference that the person meant it is raining locally. But that isn't a guarantee.

Here is a quote from my doc:

"An Interpretation is the assignment of meanings to various concepts, symbols, or objects under consideration.

A meaning is the intended understanding or purpose of something."

On the mathematical side, perhaps computer-assisted proofs are a good start?

http://en.m.wikipedia.org/wiki/Computer-assisted_proof

Automated reasoning?
http://en.m.wikipedia.org/wiki/Machine_reasoning

They call me the Tutorial Doctor.

"It is raining" is missing a lot of key information.


Exactly. Statements are always incomplete information. So is the information that an agent is receiving as input from the (virtual) world.

Learning processes need to take this into account. In humans, sensory data is fed from receptors to the nervous system, triggering reflexes and/or being processed by the brain. At some point, the brain may encode the data in a way that can be reasoned about.

This kind of data is never something you should unconditionally trust. What people see and hear is not a perfect representation of the world. You might be hallucinating. There might be a lot of noise that your brain is having a hard time filtering out (a torrent of raindrops on your windshield). Or you might have mental health issues which can do all kinds of bad things to your thoughts. Data also might be something intentionally misleading (a lie) that someone else has told you.

In a computer simulation, you might not need to take all these things into account - it's likely overcomplicating things. But if you want fault-tolerant AI, you need to treat data critically at all times.


I think an interesting way of doing that might be to reuse the concept of contexts during learning. Let's say you have your knowledge stored in a general purpose graph data structure. Now, your agent receives some new data that you want to try to store. First off, you can just add that data to the graph without connecting it to any existing nodes. Then you can begin searching for ways to connect it. Each time you attempt a new connection, you can search the graph to find out if contradictions were introduced by the new connection. If contradictions are found, you don't need to discard the data; you remove the connection and try somewhere else.
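As a minimal sketch of that idea (assuming a very crude notion of contradiction: just a user-supplied check between the two facts being linked, rather than a real search over the whole graph):

```python
import itertools

class KnowledgeGraph:
    def __init__(self, contradicts):
        self.facts = {}                  # node id -> fact
        self.edges = set()               # undirected links between node ids
        self.contradicts = contradicts   # callable(fact_a, fact_b) -> bool
        self._ids = itertools.count()

    def add_fact(self, fact):
        """Store new data without connecting it to anything yet."""
        node = next(self._ids)
        self.facts[node] = fact
        return node

    def try_connect(self, a, b):
        """Attempt a connection; keep it only if no contradiction is introduced."""
        if self.contradicts(self.facts[a], self.facts[b]):
            return False                 # reject the link, but keep both facts
        self.edges.add(frozenset((a, b)))
        return True

# Facts as (subject, property, value) triples: two facts contradict if they give
# the same subject/property different values.
def conflicting(f1, f2):
    return f1[0] == f2[0] and f1[1] == f2[1] and f1[2] != f2[2]

kg = KnowledgeGraph(conflicting)
a = kg.add_fact(("door_3", "state", "locked"))
b = kg.add_fact(("door_3", "state", "open"))
print(kg.try_connect(a, b))  # False: contradictory, so both stay as separate islands
```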

I imagine your knowledge graph would start out as several islands of data, and eventually as you start adding enough data, the islands could finally be connected to each other, loosely at first, perhaps forming stronger connections as your reasoning process has time to analyze and re-form those thoughts.

Perhaps you never find a way to connect the data to the rest of your graph. That's fine, you can leave it isolated. It's always possible that new data might arrive LATER which serves as a way to connect everything up. On the other hand, you don't want to collect data that you store forever if you can't process it. Some data IS going to be useless, noise, or nonsense, and you should discard it eventually. As your system refines its existing knowledge, some data will become redundant or get disconnected from the graph. Some data might stay connected, but will never be accessed by any important functions of the agent. You can use various techniques such as mark-and-sweep garbage collection and last-accessed timestamps to eventually eliminate data that you no longer need.
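A sketch of that cleanup step, assuming each node carries a last-accessed timestamp: mark everything reachable from a set of root concepts, then sweep away whatever is disconnected or has sat idle too long.

```python
import time

def sweep(edges, last_accessed, roots, max_idle_seconds):
    """edges: dict node -> set of neighbours; last_accessed: dict node -> timestamp.
    Keep nodes that are reachable from the roots and were used recently; drop the rest."""
    # Mark phase: flood-fill outward from the root concepts.
    reachable, frontier = set(roots), list(roots)
    while frontier:
        for nb in edges.get(frontier.pop(), ()):
            if nb not in reachable:
                reachable.add(nb)
                frontier.append(nb)
    # Sweep phase: discard anything disconnected or idle for too long.
    now = time.time()
    keep = {n for n in reachable if now - last_accessed.get(n, 0.0) <= max_idle_seconds}
    return {n: nbs & keep for n, nbs in edges.items() if n in keep}
```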



For a concrete example, let's think of a bot that's playing a first person shooter. One of the things the bot needs to do is learn how to navigate the level, and where powerups/ammo/health are located. If the level doesn't already have a navmesh precomputed by the level creator, the bot has to make this itself.

The data that it "learns" would be the set of 3D positions which represent places where it can move, and 3D positions and types of objects that are important. These points can be connected together in a graph. The bot might spawn into a room and start sampling its immediate vicinity with line probes (collision detection). If there is an unobstructed path from where it is to the other end of the line probe, it can add that link as a walkable path to its knowledge.

However, perhaps the line probe went through a small window that the bot can't actually navigate through. When the bot attempts to follow that edge in its navigation graph, it will get stuck. At this point, if there is code which notices that the bot is stuck, it can "learn" that the edge is a false positive and mark it as untraversable. It might retain the link in case it has a sniping behavior, since the bot can still shoot through that window if it wanted to.
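A rough sketch of that probing loop, assuming a hypothetical engine.raycast(a, b) collision query (whatever your engine actually exposes): edges start out as walkable hypotheses and get downgraded when the bot discovers it can't actually traverse them.

```python
import math

def probe_links(engine, position, radius, samples=16):
    """Sample the bot's surroundings with line probes; return candidate walkable edges."""
    links = []
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        target = (position[0] + radius * math.cos(angle),
                  position[1],
                  position[2] + radius * math.sin(angle))
        # Hypothetical API: raycast returns True if something blocks the line.
        if not engine.raycast(position, target):
            links.append({"from": position, "to": target, "walkable": True})
    return links

def mark_stuck(link):
    """Called when the bot fails to traverse an edge it believed was walkable."""
    link["walkable"] = False       # false positive, e.g. a window too small to fit through
    link["line_of_sight"] = True   # but keep it around for sniping behaviour
```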

If a player comes in and kills the bot, and the bot respawns somewhere completely different, its new room probably doesn't connect to the one it was just in. But as it moves through the level, it may eventually form a complete connection between the two rooms. So the old knowledge and new knowledge don't need to be discarded just because they can't be connected for a while. At the same time, perhaps the level is divided into two halves with no way to travel between them. The bot shouldn't discard one half of its map data, since when it respawns it has a chance of landing on either half of the map, and it will already have knowledge about wherever it respawns as long as it keeps both.

So, what is Logic, Reasoning, and Thinking, and how are they different?

What is Data, Information and Knowledge, and how are they different?

I am a logician. My answer as a logician is the following.

Logic:

A logic (because there isn't just ONE logic) is a set of rules that lets you infer (deduce) truth from truth. More precisely: a logic gives a set of symbols, rules for combining them into sentences, and rules for deducing true sentences from a set of true sentences.

Examples: propositional calculus, first-order logic, second-order logic, modal logic...

Reasoning:

Reasoning is synonymous with proof in a particular logic. More precisely, a reasoning can be reduced to a deduction of a sentence from a set of other sentences in a given logic. It is what we call a proof, in the sense of proof theory.

Thinking:

This term has no logical or mathematical definition.

Knowledge:

A set of true sentences in a given logic. Not all true sentences in that logic, only a subset that is labeled as "known".

Information:

A sentence in a given logic. Can be true or false.

Data:

An atomic formula in a given logic.
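A toy illustration of how those pieces fit together, in Python rather than formal notation (the atoms and rule are invented): data are the atomic formulas, information is any sentence, knowledge is the set of sentences labeled true, and a reasoning is a chain of deductions such as modus ponens.

```python
# Data: atomic formulas.                 Information: any sentence (true or false).
# Knowledge: the sentences labeled true. Reasoning: a deduction, here modus ponens.
knowledge = {"raining", "raining -> wet_ground"}
rules = [("raining", "raining -> wet_ground", "wet_ground")]  # (premise, implication, conclusion)

def deduce(knowledge, rules):
    """Apply modus ponens until nothing new can be derived (a tiny proof search)."""
    changed = True
    while changed:
        changed = False
        for premise, implication, conclusion in rules:
            if premise in knowledge and implication in knowledge and conclusion not in knowledge:
                knowledge.add(conclusion)
                changed = True
    return knowledge

print(deduce(set(knowledge), rules))  # now also contains 'wet_ground'
```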

My blog about games, AI...

http://totologic.blogspot.com/

Totologic's definitions are valid in the context of mathematical logic, but they don't mean much for AI. I'll give you other definitions.

Reasoning is a computation of probabilities, where the primary tool is Bayes' theorem (given my prior beliefs and my observations, these are my posterior beliefs).

Thinking means running the code that implements the AI.

Knowledge is a mapping from a set of sentences to probabilities of being true (whether it's just a collection of pairs, or a model that will compute a probability for a large class of statements).

Data is a collection of observations.

Information is a measure of the probability of observing some set of data, given our model of the world. (the definition is -log(probability)/log(2), to be precise).
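A small numeric sketch of those two definitions (the prior and likelihoods are made-up numbers): Bayes' theorem turns a prior belief plus an observation into a posterior belief, and the information carried by the observation is -log2 of its probability under the model.

```python
import math

# Made-up model: prior belief that it's raining, plus how likely wet ground is either way.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# Reasoning: Bayes' theorem.  P(rain | wet) = P(wet | rain) * P(rain) / P(wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)  # 0.26
posterior = p_wet_given_rain * p_rain / p_wet                       # ~0.69

# Information: how surprising the observation "wet ground" is, in bits.
information_bits = -math.log(p_wet, 2)                              # ~1.94 bits

print(posterior, information_bits)
```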

I do think those are more useful definitions. However, I still think progress in AI is not made by pondering about definitions, but by figuring out how to write programs that perform well at specific tasks. You can either build submarines or ponder endlessly on the definition of swimming.

Hehe. I have so much new information to work with now, it is getting hard to place it.

Nypyren, you have given me quite a bit to consider. Another thing I find from reading your last post is that precedence also matters. The first definition I found was:

the condition of being considered more important than someone or something else; priority in importance, order, or rank.
"his desire for power soon took precedence over any other consideration"
synonyms:

priority, rank, seniority, superiority, primacy, preeminence, eminence


Data is a collection of observations.

This is good too. I have a way to collect data through sensory mechanisms in the link I posted. But I have no way to process that data. For instance, I have a system for the five senses that detects objects or other systems and then stores the name of the system. But I have no system for describing the objects.

Right now, the game engine I use cannot refer to material names, but the idea I had was to use the actual material name, type, and values as part of the object's properties. I could also use the object's actual textures, and its physics properties too.

So, there I have observations/data, but to give this data meaningful value, I would have to process it in such a way that it becomes useful information.
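One way I could sketch that processing step (the property names and helper are hypothetical): fold the raw observations from each sense into a per-object record, which starts to look like useful information.

```python
from collections import defaultdict

# Raw data: one tuple per sense event, e.g. (sense, object, property, value).
observations = [
    ("sight", "crate_01", "material", "wood"),
    ("sight", "crate_01", "texture", "planks_diffuse"),
    ("touch", "crate_01", "mass", 12.0),
]

def build_descriptions(observations):
    """Fold individual observations into a description per object (data -> information)."""
    objects = defaultdict(dict)
    for sense, obj, prop, value in observations:
        objects[obj][prop] = value
    return dict(objects)

print(build_descriptions(observations))
# {'crate_01': {'material': 'wood', 'texture': 'planks_diffuse', 'mass': 12.0}}
```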


For a concrete example, let's think of a bot that's playing a first person shooter. One of the things the bot needs to do is learn how to navigate the level, and where powerups/ammo/health are located. If the level doesn't already have a navmesh precomputed by the level creator, the bot has to make this itself.

I believe I can make the player in the link I provided automatically navigate a room. I just use collision detection to make the player turn whenever there is a collision with the sight cone. I also get the name of the object and place it in a string: "I see a... (whatever the object name is)."

I was able to make the player jump whenever a specific object collided with the sight cone. So, I can control the player, but whenever the specified object is seen, a force is applied to the player. It actually felt pretty good.
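Roughly what that loop looks like, with a hypothetical engine interface (sight_cone_hits, turn, apply_force) standing in for whatever the engine actually exposes:

```python
JUMP_TRIGGERS = {"crate", "low_wall"}  # object names that should make the player jump

def update(player, engine):
    """Run once per frame: report what is seen, turn away from obstacles, jump on triggers."""
    for obj in engine.sight_cone_hits(player):       # hypothetical sight-cone collision query
        print("I see a... " + obj.name)
        if obj.name in JUMP_TRIGGERS:
            engine.apply_force(player, (0, 8, 0))    # upward impulse -> jump
        else:
            engine.turn(player, degrees=30)          # turn whenever the cone collides
```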

The Wikipedia entry on "Navigation" gives me some good tips. We are not always conscious of global North, but we are always conscious of local North.

http://en.wikipedia.org/wiki/Navigation

They call me the Tutorial Doctor.

You should also dive into Plato, René Descartes, Immanuel Kant, and David Hume. They all directly address these questions to a greater or lesser extent. No meta-exploration of reasoning, knowledge, thought, and general epistemology would be complete without an examination of the body of literature developed on the topic over the last 50 centuries.

Stephen M. Webb
Professional Free Software Developer

