In Defense of Fallacies

I've seen the phrase "correlation is not causality" abused on several occasions in the past, usually like this:

A: A recent study found that X is correlated with Y.

B: You racist/sexist/classist/heteronormative moron! Don't you know that correlation is not causation?

While it's true that a correlation between X and Y does not prove that X causes Y, a highly statistically significant correlation does strongly imply some causal relationship. Maybe X causes Y. Maybe Y causes X. Maybe Z causes both X and Y. Maybe it's something even more elaborate. But highly statistically significant correlations are, by definition, unlikely to be the result of pure chance. If there are other reasons to believe that X causes Y, the knowledge that the two are correlated increases the likelihood of this hypothesis being true.

"Correlation is not causation [so there's nothing to see here]" is not a valid reason to dismiss the observation of a statistically significant correlation, especially when there's a plausible causal mechanism. Of course, this doesn't mean that the combination of a plausible causal mechanism and a correlation proves causation; only that it makes for a credible hypothesis.

While considering this, it occurred to me that there's a more general point to be made here, namely that there are several other fallacies that can actually be valid tools of Bayesian inference if used properly. For example:

Appeal to Authority: If the vast majority of experts in a particular field believe that X causes Y, that doesn't prove that X causes Y, but learning this should increase your estimate of the probability that X causes Y.

Affirming the Consequent: If the sidewalk's wet, that doesn't prove that it rained last night, but it's a good hypothesis to start with, especially if you don't have sprinklers.

Fallacy of Composition: A team of first-rate engineers isn't necessarily a first-rate engineering team, but all else being equal or unknown, it's likely to do better work than a team of mediocre engineers.

Argumentum ad Hominem: If someone makes an improbable claim, the fact that he has a history of making similarly improbable claims later proven to be false doesn't prove that he's lying this time, but it's certainly cause for skepticism.

Denying the Antecedent: If you don't pay your electric bill, the electric company may decide to keep the lights on just because you're such a swell guy*. But the likelihood of uninterrupted service is much greater if you pay the bill.

*Funny story: I haven't received an electric bill since I moved into my apartment nearly six months ago, and the electric company won't tell me anything because the account's apparently not in my name.


The problem is idiots

I don't think that, strictly speaking, anybody really needs to know any of the fallacies by name. People are perfectly capable of poking holes in other people's arguments without having memorized the name, or the idea, of a single fallacy. What the named fallacies are, in my view, is nothing more than a taxonomy of corrections which are continually generated all the time anyway. We no more need this taxonomy in order to point out errors in other people's arguments, than birds need to be classified in order to be birds. It is, at best, a convenience to have the names, since they let us outsource the explanation of the error to other people (e.g., people who write the web pages that come up when you google classical fallacies).

"Maybe it was just a coincidence" is a thought that pretty much anyone can come up with. We don't really need to memorize the phrase "correlation does not equal causation" in order to recognize, and point out, that sometimes some people are too quick to jump to conclusions about causation.

But not only are people already able to poke holes in other people's arguments without knowing the classical taxonomy of errors, they also have a sense of the limits of the counterarguments. Anybody who says "maybe it was just a coincidence" knows enough to back down if enough evidence piles up.

Anybody, that is, except for idiots. Everybody really ought to have a sense of what does or does not constitute an error. If somebody misuses the name of a classical fallacy, what this tells me is that they are regurgitating something they memorized but did not understand, which in turn tells me that, before learning about the classical fallacies, they were genuinely clueless about what does or does not constitute valid reasoning.

That said, I think there is room for disagreement between two non-idiots about whether some bit of reasoning is valid, whether some bit of evidence is sufficiently strong, etc. Different people bring different assumptions to the table, different unspoken knowledge, different assessments of authority, and so on. An inference that taken "bare" is invalid, may in fact be valid within the context of the assumptions of one of the participants. And arguments from the authority of (say) claimed scientific consensus may be accepted by one party but rejected by the other for various reasons. For example, if there is some bit of knowledge which Bob thinks is the sort of thing that scientists would definitely know what's what about, but which Bill thinks is not that sort of thing, then Bob is likely to accept the authority of scientific consensus and Bill is less likely to; furthermore, if Bob and Bill have different ideas about the degree to which there is consensus, then again, they may disagree about whether the authority counts; furthermore, if Bill has great respect for a particular scientific authority who happens to be a dissenter from the consensus, then again, they may have sharply divergent assessments about the weight of scientific consensus; finally, if Bob is himself a scientist who has looked at all the evidence and come to a conclusion, then he may reasonably be unimpressed by the fact that most (but by no means all) scientists disagree with him.

Idiots in defense of syllogisms

Sometimes the taxonomy is important, if only for didactic reasons. For example, I have abstracted the following from Wikipedia's entry on inference:

"The conclusion inferred from multiple observations is made by the process of inductive reasoning. The conclusion may be correct or incorrect, and may be tested by additional observations. In contrast, the conclusion of a valid deductive inference is true if the premises are true. The conclusion is inferred using the process of deductive reasoning. A valid deductive inference is never false. This is because the validity of a deductive inference is formal. The inferred conclusion of a valid deductive inference is necessarily true if the premises it is based on are true."

I hope you have had enough coffee to appreciate that even a weakness of inductive reasoning is not really an error in deduction. The standard is different. With one method we "know" more, but what we know changes often. With the other method, we "know" less, but what we know is incontrovertible.

I've long thought that the

I've long thought that the classical fallacies are disconnected from the reality of truth accumulation. My own take is that all methods of discerning truth work to some extent, but they also have their blind spots. I think the ultimate logical fallacy is forgetting that, and pushing the conclusions of any one particular methodology too hard.

To that end, the fallacies can still be of use, if used wisely. Outside of math, truth is essentially probabilistic (I appreciate your nod to Bayesian methods). In my mind, that means the optimal way to use the fallacies is to discount the probability of a claim being true, rather than invalidating it altogether. In practice, the problem is that most use of the fallacies is in the latter manner: your argument is a form of fallacy X, and therefore it is completely wrong.

I doubt any of this is terribly revelatory. What I think is going on is that people are more inclined to rationalization than truth seeking, and selectively abusing the fallacies can be a powerful tool in that department.

Damn

That's some good shit right there

One of my professors put it

One of my professors put it like this:

"Correlation does imply causation... but more work is needed."

As you say, if X and Y correlate, there are only three viable hypotheses, all involving causation: (1) X causes Y; (2) Y causes X; (3) some Z causes both X and Y.

The "more work" is simply figuring out which of these choices is correct.

<popperian>In fact we can

<popperian>In fact we can never observe causation nor can we observe correlation, we can only observe sample correlation from which we infer correlation upon which we build a causation model</popperian>

Since you bring it up, what

Since you bring it up, what is "sample correlation?"

Sample to population

Suppose you have a fairly weighted coin, so that there is a 50/50 chance of it coming up heads. If you were to flip it infinitely many times, then with a probability of 1 it would, in the long run, come up heads precisely 50 percent of the time.

But you don't have time to flip the coin infinitely many times. You can flip it, say, 200 times. At the end, if you count up the number of heads and tails, they will probably not exactly match. This is your observed frequency of heads and tails, and it is not the same thing as the probability of heads and tails, but it is a reasonably good indicator of it, and gets better the more times you flip.

Sample correlation is to correlation, as observed frequency is to probability. All you can directly calculate, from a finite number of observations, is observed frequency and sample correlation (if your observation is of two variables). From your observation, you can infer the probability or the correlation, but you cannot directly calculate it from your observations.
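
A small simulation (my own sketch, not part of the original comment) makes the analogy visible: the observed frequency of heads and the sample correlation both wobble around their true values, and both settle down as the number of observations grows. The coin is fair and the true correlation is fixed at 0.5 by construction.

    import numpy as np

    rng = np.random.default_rng(1)
    true_corr = 0.5   # the "population" correlation, chosen for the illustration

    for n in (20, 200, 2000, 20000):
        flips = rng.random(n) < 0.5                    # fair coin: true P(heads) = 0.5
        x = rng.standard_normal(n)
        y = true_corr * x + np.sqrt(1 - true_corr**2) * rng.standard_normal(n)
        print(f"n={n:6d}  heads freq={flips.mean():.3f}  "
              f"sample corr={np.corrcoef(x, y)[0, 1]:.3f}")
    # Both columns drift toward 0.5 as n grows, but neither is ever exactly 0.5.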

Corrections are welcome. IANAS.

You're correct. It matters

You're correct.

It matters because there are many ways in which the sample correlation can differ from the true correlation. The most obvious is bad luck: the sample on which you compute your statistic yields a correlation that does not reflect the limit of what you'd obtain on an ever-growing sample.

If you know the underlying distribution, you can compute the probability distribution of the sample correlation. If your estimator is unbiased, its average is the true correlation. The bigger your sample, the tighter this distribution, and the more confident you can be in the sample correlation as an estimate of the true correlation.

There are sneakier ways, though, and the most common is selection bias. Imagine you badly want to publish a paper on an unsuspected correlation between two phenomena. After a year of research you publish a paper showing a correlation between fingernail growth rate and job satisfaction, and the data shows that the odds of finding this sample correlation if there were no real correlation are slim: one in a million. Good? No, because during that year you analyzed the correlations of 1000 different factors and rejected all the pairs that did not produce interesting correlations and publishable papers. Your paper is really one pick out of roughly 500,000 candidate pairs, so a one-in-a-million confidence level is essentially meaningless.
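
The selection-bias point is easy to demonstrate with a quick simulation (a sketch of my own; the sample sizes and thresholds are arbitrary). Screen enough pure-noise variable pairs and some of them will clear a significance bar that would be impressive for a single pre-registered pair.

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_vars = 200, 1000

    # Null distribution of the sample correlation for ONE pre-registered pair.
    a = rng.standard_normal((50_000, n_obs))
    b = rng.standard_normal((50_000, n_obs))
    a -= a.mean(axis=1, keepdims=True)
    b -= b.mean(axis=1, keepdims=True)
    null_r = (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))
    threshold = np.quantile(np.abs(null_r), 0.999)   # "1 in 1000" level for one pair

    # Now screen every pair among 1000 unrelated noise variables.
    data = rng.standard_normal((n_vars, n_obs))
    pairs = np.corrcoef(data)[np.triu_indices(n_vars, k=1)]   # 499,500 distinct pairs

    print(f"1-in-1000 threshold for a single pair: |r| > {threshold:.3f}")
    print(f"noise pairs exceeding it: {(np.abs(pairs) > threshold).sum()}")
    # Expect roughly 0.001 * 499,500, i.e. about 500 "publishable" correlations
    # from data that contains no real correlation at all.

The fix implied by the comment is to judge the reported significance against the whole search that produced it, not against a single test.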

Correlation implies causation if you have a control group

That way the only difference between the two groups is the independent variable.
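
Here is a toy simulation of that claim (invented numbers, my own sketch): a confounder Z makes X and Y correlate in observational data even though X has no effect on Y, while random assignment of X breaks the link.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Observational world: Z drives both X and Y; X itself does nothing to Y.
    z = rng.standard_normal(n)
    x_obs = (z + rng.standard_normal(n) > 0).astype(float)
    y = z + rng.standard_normal(n)
    print("observational corr(X, Y):", round(np.corrcoef(x_obs, y)[0, 1], 3))  # clearly nonzero

    # Experimental world: X is assigned by coin flip, so Z can't differ between groups.
    x_rand = rng.integers(0, 2, n).astype(float)
    print("randomized corr(X, Y):  ", round(np.corrcoef(x_rand, y)[0, 1], 3))  # about 0

With randomization, any remaining correlation between X and Y can only come from X's effect on Y (or chance), which is the sense in which a control group licenses the causal reading.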

It is an appeal to authority

You can dispute a person's inferences without appealing to names of the classical fallacies. Those names don't, primarily, teach us how to reason, but give names to what we can already do without their help.

And yet, people invoke their names often. Why? One reason is to avoid wasting time reinventing the wheel. If the Greeks (or whoever) said it well, why say it again? But another, and I think all too common, reason is to appeal to the authority of the past.

If I were merely to dispute a particular inference speaking as myself, then the other person might, and in all likelihood will, defend the inference. But if I invoke the authority of the past by naming a fallacy, then that may intimidate him to some degree, since he is no longer simply brushing aside the objections of some particular other person, but trying to justify himself in the face of long tradition.

As mentioned, there is nothing per se wrong with an appeal to authority. However, if this intimidates the other person, who does not want to be thought of as a fool and certainly does not want to dig an even deeper hole for himself if he has already erred, he may back off from an argument without challenging the objection, even when he really should challenge it.

The appeal to authority in

The appeal to authority in this case is more often simply shorthand, so that the interlocutor doesn't have to explain why a fallacy is a fallacy every time one is observed. The more obscure or Latin-esque the name of the fallacy, the more likely a reader who isn't already familiar with the fallacy is to look it up on Google and self-educate. Obscure references in this context serve a similar purpose as hyperlinks or footnotes. At least, that's why I do it.

And so what if it intimidates the other person? It's that person's job to challenge the objection, if the objection is invalid and the fallacy grants exceptions (as they often do). Heck, most of the online fallacy dictionaries explain the exceptions as well, so the other person can easily respond to authority with authority. That's no reason not to take advantage of the shortcut instead of reinventing the wheel every other post.

Is this a disagreement?

Not sure if you're disagreeing. You've specified frequency and importance, but that's not really a disagreement.

Not necessarily, just an

Not necessarily, just an observation.

The fallacy fallacy

Your post actually deals with what has been called the "fallacy fallacy": "Because some argument has the form of a logical fallacy, it is false." Or, in informal logic, "Because some argument has the form resembling an informal logical fallacy, it is, in fact, fallacious."

Taking a couple of the cases you list, let's start with the formal fallacy of Affirming the Consequent. This is a fallacy involving if/then statements. If you know the statement "If A, then B" is true, and you know A is true, you are guaranteed that B is true; that valid pattern is modus ponens. Affirming the consequent runs the inference the other way: knowing that B is true, you conclude A, and that conclusion is not guaranteed.

That's formal logic. The validity of the argument is based on the form of the argument, and the content – the meaning of A and B – doesn't matter.

The fallacy of affirming the consequent is a formal fallacy – it doesn't matter what the consequent is, or what the antecedent is. You can look at the form of the argument and tell it's fallacious.

But that doesn't mean it's false; all it means is that it's not guaranteed to be true. The Fallacy Fallacy occurs when people assume "fallacious" is the same as "false". (Denying the antecedent works the same way.)
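
Since the point is that validity here is purely formal, it can be checked mechanically. A small truth-table sweep (my own sketch) confirms that modus ponens is valid while affirming the consequent and denying the antecedent are not, whatever A and B happen to mean.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    def is_valid(premises, conclusion):
        # Valid = the conclusion holds in every case where the premises hold.
        return all(conclusion(a, b)
                   for a, b in product([True, False], repeat=2)
                   if premises(a, b))

    print("modus ponens:         ", is_valid(lambda a, b: implies(a, b) and a,     lambda a, b: b))
    print("affirming consequent: ", is_valid(lambda a, b: implies(a, b) and b,     lambda a, b: a))
    print("denying antecedent:   ", is_valid(lambda a, b: implies(a, b) and not a, lambda a, b: not b))
    # -> True, False, False: the fallacious forms can still have true conclusions;
    #    they just aren't guaranteed to.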

Let's take a representative informal fallacy, the argumentum ad verecundiam or Appeal to Authority. The textbook form of this argument is, "Esteban X. Pert says we've been visited by aliens from Jupiter. E.X. Pert has a Nobel Peace Prize. Therefore, we've been visited by aliens." This is fallacious because the credential listed does not qualify E.X. Pert as an expert in anything remotely relevant to whether we've been visited by aliens. However, all this fallacy means is that Mr. E.X. Pert's testimony is not evidence in support of alleged alien visitations. Aliens could still have visited anyway.

In addition, sometimes an appeal to an authority is not the fallacy of Appeal to Authority. When I shattered my ankle, I put a great deal of reliance in my orthopedic surgeon's suggestions for how to take care of it.

The "Fallacy Fallacy" exists because people often lose track of just what validity and fallacy actually mean in any sort of discourse.

Just one opinion – Collect the whole set!

Clarify

Karl,

Your first two quoted statements are not equivalent. The second is true, since by definition if an argument takes the form of a fallacy then it is indeed a fallacy. I'm not even sure what your first statement means, since it is ambiguous about what the word "false" applies to: the argument, or the conclusion of the argument.

"But that doesn't mean it's false, all it means is it's not guaranteed to be true."

Are arguments true and false in this sense? An argument is quite distinct from what it is arguing for, and you seemingly confuse the two throughout your comment. Besides, can't a non-fallacious argument also come to a conclusion that is false? Surely, not every non-fallacious argument comes to a conclusion that is true. There are other ways to be mistaken besides fallacy. Everything from mismeasurement, to contamination, to trickery can cause people to come to the wrong conclusions without them committing a fallacy.

So what exactly do you mean by saying that arguments are false?

I'm not sure if you are confusing these things or just miscommunicating.

Fallacies

I'm not impressed

"In this light, the weight that the first logic and argumentation textbooks give to propositional logic is absolutely hilarious. I doubt that there has ever been a single situation anywhere where someone has successfully come to a solution and argued his case to others using propositional logic, for some nontrivial real-world issue."

Because of this statement I'm not too impressed with Kokkrinen. I use logic all the time to eliminate incorrect conclusions. The purpose of this kind of logic, the categorization of fallacies, isn't creation but destruction. To use an analogy, it serves the purpose of "selection" in natural selection.

I also use deductive logic to arrive at creative conclusions. Once I have a theory of how something works, I can deduce solutions to problems via logic.

Besides, I see these kinds of arguments all the time: "X is a danger to our children. We should protect our children. Therefore we should ban X." How can one attack that without "using" propositional logic, or the assumption that others use it? An understanding of propositional logic leads one to see the possible attack points: "Your assumption that X is a danger to children is false", or "No, if we protect our children absolutely from all threats then how can they learn?"

No one would accept an argument that didn't respect propositional logic. To the above argument it would make no sense to say "Y isn't a danger to our children. Therefore banning X doesn't make sense".

Brian, I think you have to

Brian,

I think you have to take the passage you quoted within the context of early 20th century logical positivism's obsession with formal logic as the be-all and end-all of every philosophical problem under the sun, and Gödel's subsequent smashing of that dream.

I think the best example of what the author you quoted is talking about is Anselm's Ontological Argument. A perfectly logical argument that has convinced approximately zero people ever - not for lack of logic, but mostly for definitional issues regarding what concepts like "exist" and "perfect" actually mean.

An understanding of propositional logic leads one to see the possible attack points.

Yes, and that's what it's best used for: locating the weaknesses in other people's arguments in order to criticize them, and locating the weaknesses in one's own arguments in order to strengthen them. But philosophers in not so distant history wanted to do a whole lot more with propositional logic alone, and they failed.

Still not impressed

I took his sentences in the context of his article and didn't see this as purely a criticism of the logical positivists. He made many gross overgeneralizations and showed a lack of comprehension of how the formal logics are used in the real world.

For example, this statement is false:

"Predicate logic and other such logics can be useful only if the problem domain is simple enough to allow such axiomatization."

Why is it false? Because I can use predicate logic iteratively to handle complex systems where I cannot axiomatize. I can also use predicate logic to determine the most efficient way to proceed in solving a problem. I may use such logic to eliminate certain assumptions singly or in groups. Predicate logic is extremely useful even in situations where we are not 100% sure certain assumptions are correct. That's because we can use the logic to work back to which assumptions may have been wrong.

His main mistake is asking too much of predicate logic.

His article was sprinkled with these kinds of errors. His example of the alarm and the cat was just plain silly. The predicate logic is correct in the example. Just because you find your cat playing with the sensor does not mean you can check for a burglar with no more care than if the alarm had never gone off. The fact that the cat was playing with the sensor does not somehow guarantee there isn't a burglar. If that were the case we could simply set up alarms with catnip on the sensors and furnish them with cats to make sure our valuables are safe. In fact this is an old trick: breach some security measure, then provide a plausible cause for its malfunction, after which you can proceed unmolested. The way to deal with this is iterative probing of one's assumptions. Does the cat seem intoxicated with catnip oil? Does the sensor smell of cat food? If so, then how did it get there?
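
That kind of iterative probing can be written down as a small propositional exercise (my own construction, using the alarm-and-cat setup above): enumerate the assumption combinations and keep only those consistent with what was actually observed, then probe the survivors.

    from itertools import product

    names = ["sensor works", "no burglar", "cat away from sensor"]

    def alarm_sounds(sensor_works, no_burglar, cat_away):
        # Model: a working sensor trips the alarm if a burglar or the cat is near it.
        return sensor_works and (not no_burglar or not cat_away)

    observed = True  # the alarm actually went off

    for combo in product([True, False], repeat=3):
        if alarm_sounds(*combo) == observed:
            failed = [n for n, v in zip(names, combo) if not v]
            print("consistent with the observation; failed assumptions:", failed)
    # Every surviving combination has at least one failed assumption. The logic
    # doesn't pick the culprit: finding the cat at the sensor is consistent with
    # the observation, but so is a burglar, which is exactly the point above.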

I also don't think Gödel applies to his article at all. Logical positivists were trying to get to a kind of completeness, which was foiled by, in this case, Gödel's use of self-reference. As a loose analogy, what the logical positivists were trying to get to was the ability to show that statements were either true or false. Analogously, Gödel presented them with a self-referential statement of the form "This statement is false", thereby providing a sentence which was neither true nor false. Of course it was more complex than that, and the last sentences were only analogous.

Nowhere in this article was predicate logic defeated by self-reference, and I can't think of an example where it would be, other than the most esoteric.

The article I quoted did not mention any ontological argument, so maybe you confused it with something else you read. I disagree that Anselm's Ontological Argument is perfectly logical. It appears fallacious in several ways. I'm not even sure it could be written in propositional logic properly.

Fallacies in Anselm's Ontological Argument

BTW, one of the fallacies that is apparent in Anselm's Ontological Argument is that the definition of perfection used in the argument entails existence. Therefore the argument commits the fallacy of question begging, smuggling what it wishes to prove into the premises. It also commits the fallacy of equivocation on the definition of "exists": it uses the same term for something someone imagines and for something that truly exists.

It also smuggles in all sorts of other implicit premises, like the idea that the most perfect being must be singular. I might be able to imagine the concept of the smallest particle, but even if such a thing exists, that doesn't mean it's singular.

He also assumes that the atheist is conceiving of the "most perfect being" as one that doesn't exist. But why can't the atheist conceive of god as something that exists, as a concept? I can certainly conceive of both: a non-existent god or an existent one. This destroys his final step, the one that leads to the supposed contradiction.

I think if you substitute "most perfect fruit" for "most perfect being" and "Great Pumpkin" for "fruit", then the tricks will be more apparent.

About that assumption that existence somehow makes something more perfect, or greater: I can imagine a perfect circle, but all the existent ones I know about are flawed. It seems like perfection is more a barrier to existence than something that forces existence.

So this particular argument is chock-full of logical fallacies. Where did you get the idea that it was valid? I didn't look these up; I just read the argument and spotted these problems. Should I be writing some paper to expose my findings to the world? ;)

You hit on the two main

You hit on the two main problems with Anselm's argument: "The Perfect Island" analogy (we can conceive of an Island for which no greater Island can be conceived, therefore this perfect Island must actually exist somewhere outside our mind) and Kant's objection that existence is not a predicate. But I don't view either of these problems as "logical problems", but as definitional problems. The argument itself continues to contain a certain strange beauty, if only because it demonstrates the limits of pure a priori reasoning without focus on linguistics and a posteriori analogy. Yes, Anselm's argument is question begging with respect to its definitions; that is the nature of a priori statements: they are true by virtue of being tautologies. Which is to say, the logic is valid, the argument is well-formed, but that doesn't mean the conclusion is true or that the premises aren't problematic.

Hehe, I didn't (and still

Hehe, I didn't (and still haven't) read the article to which you were responding, only the excerpt you quoted. That excerpt alone made sense to me in the context of the historical progression of logic.

Magnifying flaws over successes

Sort'a outsizing one particular episode in a 2500 year old tradition that lies at the heart of computer technology. Seems kind'a important to me.

Besides, it's not at the heart of most fallacies, which involve higher-level logics. This is a fact that I had forgotten from classes of over a quarter decade ago. Like Constant said, I use the stuff often without knowing the labeling. So I'm not sure why the guy was denigrating it.

I'm not sure I buy this whole line of reasoning as a valid attack on fallacies. Criticizing tobacco company research is valid precisely because it is not an ad hominem attack in the traditional sense. Ad hominem attacks question the validity of an argument, not the research backing up the argument. It's perfectly valid to question someone's honesty, fallibility, etc. when they are a source of empirical data.

Some people also just don't have the mental capacity to make proper rational decisions, and for them it is perfectly reasonable to base their decisions on issues of trust or imitation. "Don't know why but it works for me. See." is a perfectly valid argument in some cases. This in no way impacts or invalidates fallacies or propositional logic, no more so than showing that one can't use only propositional logic to write musical lyrics. Would you criticize Billy Joel's success on the basis that you can't use his methods to build the hardware and software of a computer?

Perhaps the objection is that propositional logic at least has something to do with argumentation. Well, the same can be said of computers when you consider synthesizers, text generators, and other aspects of computers. You could certainly analyze music in ways using computers that Billy Joel couldn't, and go way beyond certain of his capabilities. There are certain aspects of music, however, for which people like Billy Joel are indispensable. Likewise with propositional logic and fallacies, in that you can't go without them in their area of functionality.

So criticizing a 2500 year old area of math and science on the basis that some group of people tried to stretch it beyond where it was able to go is not a valid way to proceed. In fact, at the time they tried, people weren't quite sure whether it could be stretched in that way. The fact that I can now sit down with my digital DVD player and listen to Billy Joel having been processed purely through propositional logic circuits indicates that it could be stretched fantastically in other directions.

So I'd say the spirit of this whole article is wrongheaded. Fallacies aren't valid, and as much as they might resemble other methods that do work for argumentation, that doesn't mean that pointing out a true fallacy is the wrong way to proceed.

Besides, debate and honest intellectual inquiry are two quite different beasts. What convinces people is quite often different from what is true. Otherwise we wouldn't have things like Communism, Islam, or Christianity, for that matter.

Appeal to Authority

If the vast majority of experts in a particular field believe that X causes Y, that doesn't prove that X causes Y, but learning this should increase your estimate of the probability that X causes Y.

If the vast majority of economists believe that the minimum wage leads to improved results, should that increase your estimate of the probability that they're right? Or do you simply decide that the experts are the minority who have the better arguments?