(1) Reality is objective.
(2) One should always follow reason and never think or act contrary to reason. (I take this to be the meaning of "Reason is absolute.")
(3) Moral principles are also objective and can be known through reason.
(4) Every person should always be selfish.
(5) Capitalism is the only just social system.
It is in holding to these five propositions that Rand's philosophy most contrasts with the prevailing philosophical attitudes of our culture. Our current intellectual culture is shot through with collectivism, irrationalism, and subjectivism.
This is bound to make my disagreement with Objectivism seem small, at least to most non-Objectivists: I agree with 1, 2, 3, and 5. In fact, I regard each of those propositions as either self-evident or else provable beyond any reasonable doubt through philosophical argument and (in the case of #5) historical evidence. I would even go so far as to say that the continuing resistance to these facts is due essentially to evasion. And I regard #1 as so obvious as to be beneath a philosopher to argue.
Thus far the Objectivists and I are in agreement. But that is not enough to constitute a whole philosophy. And in fact I think Objectivism contains several definite errors, errors which can be proven to be such:
When Objectivists say that "the meaning of a concept is all of the concretes it subsumes, past, present, and future, including ones that we will never know about," they are failing to distinguish sense and reference. The need for distinguishing the 'sense' of a word from its 'reference' is shown by examples like this:
Oedipus, famously, wanted to marry Jocaste, and as he did so, he both believed and knew that he was marrying Jocaste. The following sentence, in other words, describes what Oedipus both wanted and believed to be the case:
(J) Oedipus marries Jocaste.

However, Oedipus certainly did not want to marry his mother, and as he did so, he neither knew nor believed that he was marrying his mother. The following sentence, then, describes what Oedipus did not want or believe to be the case:
(M) Oedipus marries Oedipus' mother.

And yet Jocaste just was Oedipus' mother. That is, the word "Jocaste" and the phrase "Oedipus' mother" both refer to the same person. Therefore, if the meaning of a word is simply what it refers to, then "Jocaste" and "Oedipus' mother" mean the same thing. And if that is the case, then (J) and (M) mean the same thing. But then how could it be that Oedipus could believe what (J) asserts without believing what (M) asserts, if they assert the same thing?
Of course, Oedipus did not know that Jocaste was his mother, which explains why he was not illogical in believing (J) without believing (M). But that doesn't answer the question above, and in fact it just creates another problem. If "Jocaste" means the same thing as "Oedipus' mother," then "Jocaste is Oedipus' mother" must mean the same thing as "Jocaste is Jocaste." How could Oedipus fail to know that Jocaste was his mother, when he certainly was not ignorant that Jocaste was Jocaste, if those mean the same thing?
Of course they do not mean the same thing. What the example shows is that (J) and (M) do not express the same thought since Oedipus had the first thought and did not have the second thought. And the only reason for that can be that "Jocaste" and "Oedipus' mother" do not express the same idea (since the other words in the sentences are the same). So there can be two different ideas, referring to the same thing.
The thing that the ideas refer to--the person, existing in physical space--I call the "reference" of the ideas. The reference of a word is the same as the reference of the idea that the word expresses. The sense of a word, however, I identify with the idea that the word expresses. Thus, "Jocaste" and "Oedipus' mother" have the same reference, but different sense. That's what we've just shown.
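The sense/reference distinction can be pictured with a toy model. Everything below (the miniature "world," the tuple encoding of beliefs) is an invented illustration, not anything drawn from the texts under discussion; it only shows how two different senses can pick out one referent, so that belief contexts can distinguish them:

```python
# Toy model of the sense/reference distinction.
# The "world" maps each sense (a description or name) to the object
# it picks out. All entries here are illustrative stipulations.
world = {
    "Jocaste": "person_42",
    "Oedipus' mother": "person_42",  # a different sense, same referent
    "Oedipus": "person_7",
}

def reference(sense):
    """The reference of a sense is the object it picks out in the world."""
    return world[sense]

# Same reference...
assert reference("Jocaste") == reference("Oedipus' mother")

# ...but belief is sensitive to sense, not just reference. We encode a
# belief as a tuple of the senses the believer entertains:
beliefs = {("marries", "Oedipus", "Jocaste")}
assert ("marries", "Oedipus", "Jocaste") in beliefs
assert ("marries", "Oedipus", "Oedipus' mother") not in beliefs
```

The point of the sketch is purely structural: substituting co-referring senses preserves reference, but need not preserve what a subject believes.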
Thus, where Rand says, "a concept means all the concretes it subsumes..." I say, "a concept refers to all the concretes it subsumes."
So we have to distinguish the sense of a word from its reference. And furthermore, there is no reason not to make this distinction. The only reason I can think of for Objectivists' refusal to recognize it is that they think that, in declaring the sense of a word to be something other than the objects the word refers to, I am saying that a word refers to something other than the objects it refers to - i.e., they just don't understand the distinction.
And most of the time when one speaks of the "meaning" of a word, one means its sense, not its reference. Thus, for example, I can say that millions of tourists have come to New York to see the object that the word "the Empire State Building" refers to; but it is not the case that millions of tourists have come to New York to see the meaning of the word "the Empire State Building."
A note on the above: I speak of the sense and reference of a word, not of an idea. The reason for this is that the sense of a word is the idea associated with it. Ideas do not have senses; they are senses. For the same reason, it is incorrect to speak of the meaning of an idea. Mortimer Adler makes all of this clear in Ten Philosophical Mistakes (see esp. chapters 1 and 3).
To some this will seem a small, technical objection to Objectivism. But it has consequences.
Objectivism's rejection of the analytic/synthetic distinction is based on the failure to distinguish the sense and the reference of a word. An analytic statement is defined to be one that is true in virtue of the meanings of the words involved. Peikoff shows in his article on the analytic/synthetic distinction (in ITOE) that, from his theory of meaning, it would follow that no truth can be synthetic. Take an example of a typical, allegedly synthetic statement:
(A) All bachelors are less than 8 feet tall.

and suppose that it is true. Then, since the meaning of "bachelors" includes all the bachelors in the world, including all of their characteristics, including their various heights, including (by hypothesis) the fact that they are all less than 8 feet, to say that there is a bachelor more than 8 feet tall would contradict the meaning of "bachelor". Hence, (A) is analytically true.
Having made the sense/reference distinction, however, we see this is wrong. (A) is analytic only if it is true in virtue of the senses of the words involved (not their reference). Of course, every true sentence is true in virtue of the reference of the subject and the predicate (e.g., the object that the subject refers to having the property that the predicate refers to).
(A) is analytic only if the concept of being less than 8 feet tall is contained in the concept of a bachelor. And this is not the case, since it is possible to think of a 9-foot-tall bachelor.
Note that I am not arguing that since you can imagine a 9-foot-tall bachelor, therefore there might be one. I am not saying this proves anything about how the external world is. I am only saying it proves something about our ideas, which is the only thing at issue in deciding whether a judgement is 'analytic' or 'synthetic': it proves that the idea of a bachelor doesn't contain the idea of being under 8 feet.
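The claim that (A) is synthetic can be put in miniature: model the sense of a concept as a set of contained features, and test whether the predicate's feature is among them. The feature lists below are crude stipulations for the sake of illustration, not a serious conceptual analysis:

```python
# Toy model of concept containment (analyticity). Each concept's sense
# is stipulated as a set of contained features; these lists are
# illustrative inventions, not an analysis anyone has defended.
concepts = {
    "bachelor": {"male", "adult", "unmarried"},
    "rectangle": {"plane figure", "4-sided", "right-angled"},
}

def analytic(subject, predicate_feature):
    """True iff the predicate's feature is contained in the subject's sense."""
    return predicate_feature in concepts[subject]

assert analytic("bachelor", "male")            # "Every bachelor is male"
assert analytic("rectangle", "4-sided")        # "Every rectangle has 4 sides"
assert not analytic("bachelor", "under 8 ft")  # so (A) is synthetic
```

On this picture, analyticity is a relation among senses; the reference of "bachelor" (the actual bachelors and all their heights) never enters into the test.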
The attempt to deny the analytic/synthetic distinction really is perverse. There are sentences like "Every rectangle has 4 sides," "Every bachelor is male," "Every cat is a cat," etc., which certainly appear, prima facie, to have something in common and to be different in some way from "Every rectangle is blue," "Every bachelor is a slob," etc. Every philosopher is able to classify specimens of each category reliably and to produce indefinitely many additional examples of 'analytic' and 'synthetic' propositions that have never been explicitly discussed by any philosopher before ("Every dodecahedron has 12 faces"). Is this not strong evidence that there is some distinction here? If so-called 'analytic' statements really had no characteristic whatsoever in common, and differed from so-called 'synthetic' statements in no way whatsoever, then the philosophers who claim there is a distinction must be classifying statements entirely at random. If that were the case, what would account for their intersubjective reliability?
Finally, note that (contrary to Peikoff's presentation), the analytic/synthetic distinction is not equivalent to the necessary/contingent, a priori/empirical, or certain/uncertain distinctions in the minds of contemporary philosophers. I do not say that analytic = necessary = known a priori = known with certainty; and I do not say that synthetic = contingent = empirical = uncertain. In contemporary philosophy, it has been generally recognized that those are four different distinctions, even though they were sometimes confounded in the past (especially by Hume).
I have so far said nothing about whether synthetic propositions may be known with certainty or whether they may be 'contingent'.
By an item of "empirical knowledge" I mean something that is known that either is an observation or else is justified by observations. A priori knowledge is that which is not empirical - i.e., an item of knowledge which is not an observation and which is not justified by observations.
Note the word "justified". I do not say that a priori knowledge does not depend causally on observations. I do not say that the concepts required to understand it are innate or formed without the aid of experience. I only maintain that a priori knowledge is not logically based on observations. In other words, if x is an item of a priori knowledge, then there is no observation that is evidence for the truth of x - but we still know x to be true.
This distinction is crucial. Perhaps some experiences have caused us to form certain concepts. And perhaps having these concepts enables us to understand the proposition, x. So our ability to understand the proposition depends on observation. But understanding a proposition is very different from being justified in believing it. You can understand something and still not be justified in believing it. For instance, I understand what it means to say "there is life on Mars" - but I have no justification for thinking it to be true. The question of whether our experiences justify a proposition is, therefore, different from the question whether our experiences enable us to understand it.
What I have offered above is what nearly all philosophers mean by "a priori knowledge." I take it that Objectivists deny that there is any a priori knowledge in exactly the sense just defined.
I say that we have a lot of a priori knowledge, to wit:
By "the principles of logic" in this argument, I will mean exclusively principles of inference: that is, principles stating what is and is not a valid or cogent argument. For example, "Modus ponens is valid" is a principle of logic, and it's one that we know. How do we know these things?
(1) Principles of logic are not observations.
You do not perceive, by the senses, the logical relation between two propositions. You may be able to perceive that A is true, and you may be able to perceive that B is true; but what you can not perceive is that B follows from A. You can also, perhaps, observe by introspection (I take introspection to be empirical knowledge) that you actually infer B from A. But again, you do not thereby observe that it was valid to do so.
Validity is not something literally visible, audible, tangible, etc.
(2) The principles of logic can not in general be known by inference.
Some principles of logic might be knowable by inference - if they could be supported by reference to other principles of logic. But it couldn't be the case that all principles of logic are known by inference, because this would require circular reasoning.
The principles of logic say what is and isn't a valid inference. If we didn't know the principles of logic at least implicitly, then we would not be in a position to find out anything by inference, since we wouldn't know which inferences were valid.
For example, to try to infer that modus ponens is valid by using modus ponens would beg the question. To try to infer it by some other kind of inference, would just push the question to how we know that other kind of inference to be valid. And so on. If we are to avoid either circularity or infinite regress, some principles of logic must be foundational.
Now it follows from (1) and (2) that:
(3) The principles of logic are known a priori.
For they are not observations (1) and they are not inferred from observations (2), but they are known. This is the definition of a priori knowledge.
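What the principle "modus ponens is valid" asserts can be spelled out mechanically within classical propositional semantics: there is no assignment of truth values on which the premises are true and the conclusion false. The check below only illustrates the content of the principle; it is not offered as the way we know it, since running and trusting the check itself presupposes logic:

```python
from itertools import product

def implies(p, q):
    """Classical material conditional: P -> Q."""
    return (not p) or q

# "Modus ponens is valid" asserts that no truth-value assignment makes
# the premises (P, P -> Q) true while the conclusion (Q) is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q) and not q
]
assert counterexamples == []  # no counterexample: the form is valid
```

Note the circularity the essay describes: to accept this truth-table check as establishing anything, one must already rely on valid inference, which is exactly why such principles must be foundational.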
Consider the proposition
(B) 1 + 1 = 2,

which I know to be true. Is this proposition based on any observations? If so, what observations?
In order to learn the concept '2', I probably had to make some observations. I might have been shown a pair of oranges and told, "This is two oranges." I might then have been shown two fingers and told, "Here are two fingers." And so on. This might have spurred me to form the concept 'two'. And if not for the observations of the oranges, the fingers, etc., I might never have been able to form that concept.
I mention this, however, only to explain why it is irrelevant. As I previously explained, the issue is not whether observations were necessary in my coming to understand the equation (B) but whether any observation justifies the proposition, i.e., provides evidence of its truth.
How about this, then: I see one orange, over here. Then I see another orange, over there. I put the two oranges together. I count them, and get the result "2". I therefore conclude that 1 orange plus 1 orange = 2 oranges. Perhaps by doing this experiment with a lot of different kinds of objects, I eventually conclude (inductively) that 1 + 1 = 2, regardless of what type of objects are being counted. Thus, observation has confirmed (B). Perhaps by also confirming a lot of other equations, I might also be able to inductively support the axioms of arithmetic.
This idea, of course, involves a confusion about the nature of addition. Addition is not a physical operation. It is not the operation of physically or spatially bringing groups together, and the equation (B) does not assert that when you physically unite two distinct objects, you will wind up with two distinct objects at the end. Indeed, if it did, the equation would be wrong. It is possible, for example, to pour 1 liter of one substance and 1 liter of another substance together, and wind up with less than 2 liters total. (This happens with certain pairs of liquids, such as water and ethanol, whose molecules pack together more closely when mixed.) This does not refute arithmetic.
Addition is solely a mental operation, sc. the mental operation of taking two groups and considering them as one group. Thus, I can take a group of ten people, and another group of seven people that I have identified, and decide to mentally group all of them together. The result is a group of 17 people. That is what it means to say "10 + 7 = 17." I do not physically alter the people in any way.
The equation "74389 + 1983 = 76372" also does not entail the actual existence of at least 76,372 objects; it is not false if there are fewer than that many objects in the world (otherwise, there are almost certainly some false statements of arithmetic that could be deduced from Peano's axioms - addition problems with really large numbers. I assume one would not want to claim that the axioms of arithmetic are false.) It also does not entail that any person has actually ever held 76,372 objects in mind. All it means is that a group of 74389 objects and a group of 1983 objects would be the same thing as a group of 76,372 objects.
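The view of addition as mental grouping can be illustrated with disjoint groups: the size of the combined group is the sum of the sizes of the two groups, and nothing is physically altered by grouping them in thought. The particular names below are invented for the example; note that the groups must be disjoint, just as the liquid-pouring case was not an instance of genuine addition:

```python
# Addition as (mental) grouping: the size of the union of two disjoint
# groups equals the sum of their sizes. The people are fictitious.
group_a = {"Alice", "Bob", "Carol", "Dan", "Eve",
           "Frank", "Grace", "Heidi", "Ivan", "Judy"}   # ten people
group_b = {"Ken", "Laura", "Mallory", "Niaj",
           "Olivia", "Peggy", "Quinn"}                  # seven people

# Considering the two groups as one group alters no one physically.
combined = group_a | group_b
assert group_a.isdisjoint(group_b)  # overlap would spoil the analogy
assert len(combined) == len(group_a) + len(group_b) == 17
```

If the groups overlapped, the combined group would be smaller than the sum, which is the set-theoretic analogue of the mixed-liquids case: a physical or overlapping combination is simply not what "10 + 7 = 17" is about.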
An empiricist still might claim that my observation of two oranges might justify my belief that 1 + 1 = 2. For I might look at each orange, and then try the 'experiment' of mentally grouping them together, and find that the result was a group of two. There are two reasons I do not consider this an empirical justification of the equation:
(1) Because there is no possible experience or sequence of observations that would have counted against (B). This being the case, no observation is truly a test of (B).
(2) In general, if p is my reason for believing q, then if p is false, I don't know q. So if my justification for (B) depends on some observations, then anything that cast doubt on those observations would cast doubt on my knowledge of (B).
To be more specific: Assume that my belief that (B) depends for its justification on my observation of the two oranges. Then if there really weren't two oranges there, I do not know that 1+1=2.
But this is not the case: Even if my experiences with the oranges, the fingers, etc., including all the experiences that helped me form the concepts of '1', '2', and 'addition', were all a long series of hallucinations, I still know that 1+1=2. (Please do not object that my experiences were not hallucinations. That is not the point.) This suggests that the experiences' role was only in giving me an understanding of the equation, not in justifying it. My knowledge of arithmetic is not put in jeopardy by the hypothesis that all my previous observations were false (it survives the brain-in-a-vat scenario, for instance, or Descartes' dream hypothesis). Therefore, my knowledge of arithmetic is not based on those observations in the relevant sense, since hypotheses that cast doubt on my observations do not cast doubt on my knowledge of arithmetic.
Therefore, my knowledge of arithmetic is a priori. The same argument could be made for any mathematical knowledge.
N.B.: In saying the brain-in-the-vat scenario 'casts doubt on' my observations, I simply mean that if I were a brain in a vat, none of my current observations would be knowledge. It is not part of the meaning of this that the brain-in-the-vat scenario is at all plausible or likely actually to be true.
That knowledge of moral principles is also a priori follows from the following two theses:
(1) Moral principles are not observations. The content of every observation is descriptive.
That is, you do not literally see, touch, hear, etc. moral value.
The only possible objection I can think of would be if one thought that the sensations of pleasure and pain are literally perceptions of moral value and evil. I do not think this is the case, though. No doubt, we generally take things that cause us pleasure to be good, and take things that cause pain to be bad. But I think this is because the pleasure itself is good, and the pain itself is bad, not because they are cognitions of goodness and badness. Pain is just a sensation; it isn't a sensation of anything (as there is a sensation of heat, or a sensation of pressure).
Only this explains why, when you're about to undergo an operation, you want an anesthetic (moreover, it is good to have anesthesia). After all, if it isn't the pain itself that is bad, but the pain only makes you aware of something bad, then as long as you know the operation is good for you, the pain involved shouldn't bother you (the pain would then be like a hallucination of badness - but since you know it's just an illusion, you realize nothing bad is really happening).
One might reply that although pain is the sensation of badness, serving to make us aware that what causes it is bad, the pain might also be bad in itself. While there's no contradiction in this, it appears ad hoc. Once we've been forced to admit that pleasure is good and pain is bad, the other account of the relationship between pain and badness appears superfluous and probably originally a confusion. Moreover, even if we take this view, we are then forced to the question, How do we know that pain is bad? Do we also observe this, so that there would have to be a second, meta-pain, which is caused by the first pain? (Note that for my argument I only require one evaluative fact to be a priori.)
One final point: In every other case of a sensory experience of a type of phenomenon (e.g., perception/sensation of heat, of a red car, of a noise), the occurrence of the sensations/perceptual experiences is explained by the existence of the phenomenon that the sensory experience is of. For example, when I see a red car, it is impossible to explain why I have the sensory experience I am having without mentioning the red car. It is because the car is there, and because the car is red, that I am having just this sort of experience.
This is not the case with pain and badness. When I have a sensation of pain, the fact that I have this sensation may be explained by the fact that my arm has been cut, or there is a flame under my hand, etc. It is not necessary to say that the cut on my arm is bad. Its badness adds nothing to the causal explanation. The cut didn't cause pain in virtue of its being bad; it caused pain in virtue of plain old, physical characteristics - just as all sensations are caused by physical phenomena. How cuts cause pain can be explained purely by descriptive physiology and physics, without any ethical claims.
I am not denying that most things that cause pain are bad independently of that. Most things that cause pain are bad in that they threaten our lives or damage our bodies. This fact, though, is not something known immediately by perception. It is something partly inferred by induction, and also inferred partly from our knowledge that death and injury are bad.
(2) Moral principles can not be inferred from descriptive premises. This principle is just an instance of the general fact that you cannot derive a conclusion within one subject matter from premises in a different subject matter. Just as you cannot expect to derive a geometrical conclusion from premises in economics, or derive a conclusion about birds from premises that don't say anything about birds, you should not expect to derive a conclusion about morality from non-moral premises.
More specifically, (2) follows from two sub-premises:
(a) Moral principles can not be deduced from descriptive premises. (This is 'Hume's Law'.)
(a) is a trivial result in the Aristotelian theory of the syllogism (a syllogism requires a middle term), as well as in the more modern systems of logic, provided that moral principles are identified with propositions having only evaluative predicates, and descriptive propositions are identified as those having no evaluative predicates. For a simple example, take an argument of the form
x is D.
Hence, x is good.

where D is any descriptive predicate. This argument, to be valid, requires the assumption that whatever is D is good. And then there will be the question of how we know that major premise. To see the truth of Hume's Law, it is best to examine in particular some attempts to bridge the is/ought gap and see why they fail. This will best enable the reader to see how any such attempt should be answered, and therefore to see that all such attempts must fail:
(i) Communism causes poverty, makes people miserable, and takes away people's freedom.
Therefore, communism is bad.

The premise is apparently a descriptive and empirical fact, while the conclusion is evaluative. Assume the premise is true. My question: Does the conclusion follow from that alone?
No, the conclusion also depends upon the suppressed premises that poverty and misery are bad, and freedom is good. No doubt these additional premises are both true and known to be such, but that does not affect the point. The point is that the evaluative conclusion of the argument rests in part on evaluative premises. The argument therefore does not bridge the is/ought gap.
(ii) Freedom is necessary to our survival.
Therefore, freedom is good.

Again, assume the premise is true, and ask, Does the conclusion follow from that alone?
No, because the argument presupposes that survival is good, and that survival is good is an evaluative premise. If survival is bad, then the conclusion to draw is that freedom is bad, not good.
(iii) I want to live.
Eating is necessary to live (and also will not interfere with anything else I want).
Therefore, I should eat.

This requires the assumption that I ought to act on my desires, and/or that my desire to live is a morally acceptable one. To see this, compare the parallel inference, which is of exactly the same form as (iii), "Adolf Hitler wants to exterminate the Jews. Sending millions of Jews to gas chambers will increase the likelihood of accomplishing this (while not interfering with anything else he wants). Therefore, Adolf Hitler should send millions of Jews to gas chambers." Assume that the premises of that inference were true. Does the conclusion follow? Keep in mind that we are not concerned herein with any non-moral sense of "should", if there is any such thing. The question is, does it follow that it was morally right of Hitler to send millions of Jews to the gas chambers, given that he wanted to kill them and sending them to gas chambers effected that end? Obviously not, since in fact what he did was morally wrong, and therefore he should not have done it. And the fact that he wanted to exterminate the Jews does not render the action justified; if anything, it only makes it the more reprehensible. What this shows is that inferences of the form that (iii) has require the assumption that the agent's desire is morally acceptable. Your merely wanting something does not by itself make the thing good.
Again, my aim is not to doubt or deny any of the suppressed premises involved in these inferences. My aim is only to point out that they exist and therefore that none of these arguments bridges the is/ought gap.
(iv) Social cooperation increases our evolutionary fitness.
Therefore, we should cooperate.

This presupposes that evolutionary fitness is good. One could try to prove this like so:
The process of evolution tends toward the survival of the fittest.
Therefore, fitness is good.

But this presupposes that survival is good and/or that what evolution tends towards is good.
Hopefully, the pattern is clear enough by now, so that it is unnecessary to multiply examples further. If one tries to show that x is good because it produces y, one must presuppose that y is good. If one tries to show that some thing, x, is good because it is a y, one must then presuppose that y's are good. If one tries to show that x is good because it has some characteristic, F, one presupposes that having F is good, and one will be called upon to prove that.
Rand tries to give some kind of argument, or explain how one can give arguments, bridging the is/ought gap in "The Objectivist Ethics". I have not tried to reproduce that argument here, unless (ii) or (iii) is it, because I can not understand it clearly, and because expositions of it that I have heard from Objectivists have varied widely. Most attempts to derive an 'ought' from an 'is' depend upon either equivocation or suppressed premises; all depend on some fallacy, and Rand is no exception. I think that her exposition depends upon the suppressed premise that life, or existence, is good. This, again, is an evaluative premise, so the is/ought gap has not been bridged.
One might try to reply by saying you simply choose to live, and then the other conclusions follow. But this would just be a variation on (iii) above:
I choose to live.
Eating is necessary to live.
Therefore, I should eat.

And this fails because it presupposes that my choice is correct. To see this, again, compare the parallel inference that begins, "Hitler chooses to exterminate the Jews . . ." and ends "Hitler ought to send the Jews to gas chambers," which is invalid. The difference between the two has to be that your choice to live is a right choice, while Hitler's choice to exterminate the Jews is a wrong choice.
Note that the problem here is not to show some way in which you might come to want to follow the principles of ethics. The problem is to explain your knowledge that the principles of ethics are true, based upon some other knowledge. If, as the Objectivists and I both accept, moral principles are genuine items of knowledge, then if they are going to be based on something else, they must be validly inferred from some other item of knowledge. It is not enough to show that you could simply choose to follow the principles of ethics, or make some other choice that commits you to following the principles of ethics, etc. That would not explain your knowing the principles to be true, in the manner in which you know something like the law of gravity. That would just explain your choosing to follow them. And I take it that it is essential to Objectivism that moral principles are genuine knowledge. They are not just things that we choose to believe, in the manner that William James suggested one might choose to believe that God exists - for that sort of thing does not constitute knowledge.
(b) Moral principles can not be inductively inferred from descriptive premises.
The reason for this is simple. Induction is generalizing from experience. It enables you to know general truths. But you could only be led by induction to general moral truths, if the premises of the induction were particular moral truths. If you can not ever recognize a particular good thing in the first place, then induction will be no help to you either. If your premises are particular descriptive facts, then your inductive conclusions are just going to be a bunch of descriptive generalizations.
To take stock, then, we have this argument:
(1) Every observation is descriptive in content.
(2) No evaluative propositions are known on the basis of descriptive propositions alone, for
(a) No evaluative proposition is deduced from descriptive propositions; and
(b) No evaluative proposition is induced from descriptive propositions.
(3) Therefore, moral knowledge requires an a priori basis.
At this point, it will come as no surprise that I think there is a great deal of other a priori knowledge. Here are some examples:
A cause cannot occur later than its effect.
Time is one-dimensional.
If A and B have different heights, then either A is taller than B or B is taller than A.
"Inside" is a transitive relation.
It is not possible for something to be created out of nothing.

Now to explain the nature and source of my disagreement with Objectivism, and also the positive nature of a priori knowledge:
We have four cognitive faculties: the senses, introspection, memory, and reason. The senses provide us direct awareness of concrete (particular) things in the external world. Introspection provides direct awareness of particular phenomena in our own minds. Memory provides us awareness of anything of which we were previously aware through another faculty. And what does reason provide us awareness of?
On the empiricist view (which Objectivists share) of reason, reason does not provide us direct awareness of anything the way the first two faculties do. Instead, reason only operates on cognitions that are provided by the other faculties. Reason takes observations (and memories) as input and then, through a certain process (inference), turns out a certain output. This output, according to empiricists, can include a huge amount of knowledge, from my knowledge that the sun will rise tomorrow, to the most elaborate of scientific theories, but all of it is dependent on receiving some input from the senses and/or introspection.
I say that reason does not only operate on input provided to it by other faculties, but is also a faculty of direct awareness of certain things - namely, all the things listed above. This knowledge that originates in reason is direct in the same sense that perceptions are direct knowledge. It is not innate, for the same reason that observations are not innate - it is acquired only by exercising the faculty (though the faculty is innate in both cases), and therefore not all the things in principle knowable by reason are actually known.
It is important to see the contrast between these two views. Inference is a cognitive process that requires some cognitive input (premises) and produces some cognitive output (conclusions). I and the empiricists will agree that the faculty of reason is what performs inferences. They, however, will say that all of the input is from the senses + introspection. I say that some of the input is also from reason itself, so that reason has two functions.
At the beginning of this section (section 3), I defined "a priori knowledge" only negatively, as that which is not empirical. It is now possible to provide the positive characterization: A priori knowledge is the knowledge of pure reason; that is, it is the knowledge whose source is solely the faculty of reason. Empirical knowledge, by contrast, is the knowledge whose source is observation. Observation (comprising both perception and introspection) may be defined as the direct awareness of particulars (i.e. individual objects or occurrences).
What, then, will we say the objects of pure reason are? Reason's foundational knowledge is the awareness of facts about universals.
I have here two white pieces of paper. They are not the same piece of paper, but they have something in common: they are both white. What there are two of are called "particulars" - the pieces of paper are particulars. What is or can be common to multiple particulars is called a "universal" - whiteness is a universal. A universal is capable of being present in multiple instances, as whiteness is present in many different pieces of paper. A particular doesn't have 'instances' and can only be present in one place at a time (distinct parts of it can be in different locations, though), and particulars are not 'present in' things.
A universal is a predicable: that is, it is the kind of thing that can be predicated of something. A particular can not be predicated of anything. For instance, whiteness can be predicated of things: you can attribute to things the property of being white (as in "This paper is white"). A piece of paper can't be predicated of something; you can't attribute the piece of paper as a property (or action or relation) to something else. The piece of paper can only be a subject of judgements; it can never be a predicate. (Incidentally, identity statements of the form "A is identical to B," where A and B are particulars, are not a counter-example to this. A and B are (is?) the subject of the judgement; the relation of identity is the predicate.) The logical subject of a proposition (or sentence or belief) is what the proposition is about. The logical predicate is what is asserted about the subject.
Note that this is not the same thing as the grammatical subject. In "It's raining," for example, the grammatical subject is "it", but the statement isn't about some entity called 'it'; the statement is about the rain.
One can see, then, that every judgement and so every item of knowledge involves universals, insofar as every judgement has a predicate. "This is white" involves the universal, whiteness. Most words in any natural language refer to universals, and if a language lacked such words, it would be impossible to say anything. One could name particulars, but one could not make any statements about the particulars. All knowledge is the knowledge that something(s) instantiates a certain universal.
Also understand that I don't by a "universal" mean a certain kind of word, idea or concept. I mean the sort of thing that you attribute to the objects of your knowledge: Whiteness itself is the universal, not the word "white" and not the concept 'white'. I do not attribute my concept of whiteness to the paper - I do not think that the paper has a concept in it. I attribute whiteness to the paper - i.e., I think the paper is white. Whiteness is not a concept; it is a color. When I have the concept of whiteness in my mind, I do not have whiteness in my mind (no part of my mind is actually white). I say this because the confusion between concepts and their referents is all too common, both inside and outside Objectivist circles - as, for example, when someone says, "Democracy is a nice concept . . ." Democracy is not a concept, it is a form of government!
Also notice that, although every universal is a predicable, I did not say that universals can not be subjects of judgements. A universal can also be the subject of a judgement, and universals can possess properties of their own. For example, "White is a color" is a statement in which whiteness is the subject.
Now I have said that reason gives us direct awareness of facts about universals: In other words, the knowledge of pure reason is that in which not only the predicate but also the subject is a universal. Observations, in contrast, we defined as direct knowledge in which the subject is a particular (for example, "This paper is white" expresses an observation).
Now the reader can go back and consider the above examples of a priori knowledge:
1. A rule of inference is a proposition describing a certain form of inference as valid, or (this is equivalent) describing a certain form of proposition as bearing a certain relation to another form of proposition. A 'form of inference,' of course, is not a particular (that's one reason you'll never bump into a form of inference on the sidewalk); it is a universal.
2. "1 + 1 = 2" and all the propositions of mathematics are about universals: In this instance, the subjects are two (the universal) and 1+1 (the universal) and the predicate is identity. ("1+1" means the quantity that results from grouping a group of 1 together with another group of 1.) That these are universals is shown by the fact that they can have multiple instances: a pair of oranges is an instance of two; it's also an instance of 1 and 1, of course, since those are identical. Every pair of objects is an instance of the number 2, and every pair of objects is an instance of 1 and 1.
3. Not all moral judgements are about universals, obviously. "Adolf Hitler was an evil man" is a moral statement about a particular. However, the suppressed premises we kept running into in the attempts to derive 'oughts' from 'is's are all about universals. When we conclude that Hitler was evil (in part) because we think that unprovoked hatred is evil and that killing innocent people is wrong, we rely on general moral principles that are about universals: In "Killing innocent people is wrong," the subject is killing innocent people (i.e. a certain type of action), which is a universal. In "life is good," the subject is life. Etc.
The philosophical questions about universals are
(1) Do universals (as defined above) exist?
(2) If not, why does it seem as if they do? (i.e., why do we have all these words and ideas apparently referring to them and knowledge apparently about them?)
(3) If they do, does their existence depend on the existence of particulars?
The people who answer #1 "Yes" are called "realists", and those who answer #1 "No" are called "nominalists". The nominalists then have to go on to answer #2. How they answer it determines what kind of nominalists they are. The realists have to go on to answer #3. Those who answer #3 "Yes" are called "immanent realists" (Rand: "moderate realists"), while those who answer #3 "No" are called "Platonic realists" or "transcendent realists".
That is why the traditional positions on the problem of universals have always been considered to be these three: nominalism, immanent realism, and Platonism. There is no fourth position. This is a simple outcome of the law of excluded middle. In particular, Ayn Rand can not possibly have a position on the problem of universals that is neither nominalist nor realist, unless it is that she either refuses to answer the questions or contradicts herself. Either universals exist, or they don't. If they don't, nominalism is true. If they do, realism is true. And that's that.
I am not going to try to refute nominalism here, because it is just obviously false. It is obvious that there is such a thing as whiteness, and that's all I have to say about that. (David Armstrong does a good job on it though in Nominalism and Realism.)
It also seems clear to me that universals exist in particulars, and so immanent realism is true. And my primary objection to Rand's theory of concepts (in ITOE) is that she presents it as an answer to the problem of universals, and an anti-realist answer, when in fact it is no such thing.
At first it seems as if she is answering question #2, so it seems as if she is a nominalist. Rand starts out by saying that two individual humans do not literally have in common any single attribute; it is not that all people are called "human" because they possess this one quality, 'humanness'. She goes on to explain why it is that we can classify all these different individuals as members of this same category, 'human' (this is where it seems as if an answer to #2 is coming): in essence, she explains that when we group a number of particulars (she calls them "concretes") together, we do so because these objects each possess a value along a certain dimension (a 'measurement' is a thing's place on a certain dimension - as for example "5 feet, 10 inches" is my approximate place on the dimension of length; you can also think of it as the value of a variable). They all possess different values on this dimension (e.g., every person has a different height), but in forming a concept, we abstract away from that, i.e. we mentally isolate only the common characteristic, without paying attention to the specific measurements.
I have no objection to this as a realist theory of how concepts are formed. I do object to it as a non-realist theory or as an answer to question #2 above. If a group of concretes are isolated according to a set of dimensions along which they all vary (each taking different values on these dimensions), the next question to ask is, what about the dimension, itself? Example: if one of the common characteristics is 'length', which all of these objects have different amounts of, what about length itself (i.e. the dimension of length): Is this not a universal? It appears it certainly is, for it is predicable of concrete objects, and multiple distinct particulars all share it. An anti-realist answer to the problem of universals, therefore, has not yet been produced: the explanation of how we classify multiple concretes under the same concept must advert to universals, if not in the first stage (i.e., a universal 'humanness') then in the second stage (i.e., a set of universals, the common dimensions along which humans vary).
Furthermore, the specific values that things have along certain dimensions are also universals, no matter how specific they are. Take a specific length, like 'exactly 5 feet': that is a universal, not a concrete. You will never encounter a five-foot length all by itself, lying on the sidewalk. If you encounter a 5-foot length, you will encounter it only as the length of some concrete object. It is only another way of stating this to say that '5 feet long' is a predicable, not an ultimate subject. There are two tests for a universal:
(1) It can be predicated of concretes.
(2) Multiple things could possess it.
We've just seen that '5 feet long' satisfies #1. It also satisfies #2: multiple things could be 5 feet long simultaneously. (It does not matter whether multiple things actually are 5 feet long. In fact, probably nothing is exactly 5 feet long, unless you count parts of objects like "the first five feet of the floor." The point is there is no reason in principle why there couldn't be a 5-foot long object, and if there were one, there is no reason why there couldn't be two.)
So we see that Rand's theory of concepts adverts to two things that appear to be universals. She does not attempt to explain these things, in turn, in terms of anything else. So it seems that Rand is a realist, specifically an immanent realist, whether she knows it or not.
This is not, per se, a problem with her view. I am a realist too, as I think every sensible person should be. There is no way of providing the sort of 'objective basis' for concepts that Rand is trying to provide without talking about the properties that multiple objects fitting under a single concept have in common. Rand has just described what they have in common in a fairly elaborate way. She has not done away with the notion of there being something in common to multiple objects, and she could not do so without making concepts nothing but arbitrary groupings.
I said earlier that what is wrong with Rand's attempted derivation of ethics is that it requires the evaluative presupposition that life is good, which has not been and cannot be inferred purely from observations. Some Objectivists say that life actually isn't good, but everything which promotes life is good. I think this (i.e. the first part of that claim) is obviously false, besides being a distortion of Rand's views, but not to press that - this view has the same problem as all attempts to bridge the is/ought gap, i.e., it just raises the question, how do we know that what promotes life is good?
One way to answer this might be to say that this is just the meaning of "good", i.e. "good" just means "promotes (my) life."
If you take the Objectivist theory of meaning, however, which rejects the analytic/synthetic distinction and identifies meaning with reference, then this sort of answer cannot be legitimate. It cannot ever be legitimate to answer "How do you know that A is B?" by saying that this is implicit in the meaning of "A". For on the Objectivist theory of meaning, everything that is true of A is implied in the meaning of "A", and everything that is not true of A contradicts the meaning of "A". Therefore, if something's being implied in the meaning of our words was a sufficient explanation for how we knew it, we would be omniscient. That is, if we know every fact that is implied in the meanings of our words (every fact the denial of which is contradictory), then, if the Objectivist theory of meaning is also correct, we know every fact. Since this is not the case, the Objectivist has to say that even the things that are implied in the meanings of our words need to be proven - specifically, they require observational evidence. For example, when asked how we know that gravitational attraction is inversely proportional to the square of the distance between the bodies, it is not correct to say we know this because the denial of it is contradictory. The denial of it is contradictory, on the Objectivist theory, but that does not explain how we know it. To explain how we know it, one would have to detail certain scientific experiments and observations of the solar system. For the Objectivist scientist, to defend a theory by saying the denial of it is contradictory, is just begging the question. We don't know whether it is contradictory until we first find out whether it is true.
Thus, it can not be an adequate answer to my question, "How do you know that what promotes life is good?" to say that the denial of this proposition is contradictory or that it is implied in the meaning of "good", if the Objectivist theory of meaning is correct. For such a reply would simply beg the question - the denial of the proposition in question is a contradiction, on the Objectivist view, if and only if it is true that what promotes life is good. We still need an explanation of how we know it is true - i.e., what observations lead to this conclusion, and exactly what is the form of the inference by which they lead there. In other words, even if "good" means "promotes life", on the Objectivist epistemology and philosophy of language, you still have to prove that this is what "good" means, by empirical (sensory) evidence. I have never seen such a proof.
On the other hand, suppose we take up my theory of meaning, in which there is an analytic/synthetic distinction, and only a small subset of all true propositions are analytic (i.e., such that their truth is implied in the meanings of the words involved and such that their denial is contradictory). In that case, it does not beg the question to say that we know what serves life is good because this is the meaning of good, because what the word means can be known immediately, by reflection (without this leading to omniscience) - at least, you can know what you mean by a word by reflection, although you need empirical evidence to determine whether others mean the same thing. However, the reply now faces a different problem: The claim that "good" means "promotes life" is now simply false, and it is refuted by Moore's 'Open Question Argument'. That is, given that we make a distinction between the analytic and the synthetic, we can repeat the "Jocasta/Oedipus" argument to show that "promotes life" does not mean the same as "good". Consider a person who decides to commit suicide. This person believes (let us suppose) that
(G) Ending his life is good.
But he does not believe that
(P) Ending his life promotes his life.
since he knows that ending his life will destroy his life. It is evident, then, that "good" can not mean the same as "promotes life," for the same reason that "Jocasta" can not mean the same as "Oedipus' mother." (It means something more like "worthy of being chosen" - though I would not claim this is a completely accurate definition either.) It is possible for a person to not know that what promotes life is good, just as Oedipus did not know that Jocasta was his mother. Therefore, some explanation is required of how we find out that what promotes our life is good.
Note that when I say "an explanation of how we know this" is required, I am not expressing doubt about it. I mean simply what I say: given that we know it, how do we know it? Do we know it based on observation, or do we know it a priori? If we do know it, but no observations can be found sufficient to justify it, then we must conclude it is a priori. That is the point of the present discussion.
In more general terms, you can see that this sort of appeal to the meaning of "good" can not be valid, since if it were, it would be a way of 'validating' any claim whatsoever. Any person could take whatever ethical views he has, and claim that they are true in virtue of the meaning of good. I might propose to define "good" to mean "promotes the production of chocolate ice cream," and thence deduce that every person ought to produce as much chocolate ice cream as he can. This is silly, of course. I can't simply claim that this is what "good" means. It is not what "good" means, and if I want to claim that producing chocolate ice cream is good, I need to give a substantive reason for thinking so. And the same holds no matter what is substituted for "chocolate ice cream." Claiming that "good" means "promotes x" is not a way of showing that it is good to promote x.
Similarly, one could use the strategy to validate any descriptive claim. Suppose I want to show that the sky is red. I say, "Well, 'red' means the color of the sky during the daytime." That is not what "red" means, and that does not give a reason for thinking the sky is red. Nor does the parallel validation work if you substitute "blue" for "red". If asked how I know the sky is blue, I also can not merely say, "'Blue' means the color of the sky during the daytime." That isn't the meaning of "blue" either, and it is not a reason for thinking the sky is blue. The only reason for thinking the sky is blue consists in going outside and looking up. You don't define the sky to be blue. You observe its color.
About my proof that everyone should produce chocolate ice cream: an Objectivist might say that the difference there is that he (the Objectivist) has given a correct definition, because it really is good to promote life, whereas the proposed ice-cream definition is not correct. But this just begs the question - how do you know that your definition is correct, and not the chocolate-ice-cream definition? Of course, anyone with any ethical views whatever is going to claim that his views are correct, and therefore, if the Objectivist strategy is permissible, may propose a 'definition' of good that makes his theory of ethics necessarily true, and may respond to all objections in exactly the same manner the Objectivist can.
Similar things might be said about the question, "How do we know that we ought to do only what furthers our own lives, as opposed to what furthers the lives of others?" An Objectivist might claim that this too is implicit in the meaning of "ought", and the above objections apply here too. An altruist might counter - as in fact, many of them do - that it is implicit in the meaning of "ought" that we ought to do what promotes the lives of others. How do we determine who is correct in this dispute?
I say, of course, that both are wrong. Neither conclusion is implied in the meaning of "ought". It is a synthetic claim that we should promote our own good, just as it is a synthetic claim that we should promote the good of others. But even if I am mistaken, and one of the claims is analytic, I think some argument would be required on the part of one or the other party, to show that it was his view that was analytically true. As far as I know, no ethical egoist has ever offered any non-question-begging argument for the conclusion that we ought only to promote our own interests, with perhaps the one exception of the quasi-argument to be considered immediately below. Perhaps they believe this ethical proposition to be self-evident - a possibility we should return to later.
It is important to discuss one, fallacious reason for thinking ethical egoism to be analytically true. It goes something like this:
Altruism, as an ethical theory, says that I should sometimes sacrifice my own good for the sake of something else. This means that I should sacrifice my own values for the sake of something else. That is, I should give up something I value, for the sake of something I do not value. That is, I should give up something I believe to be good, to achieve something I do not believe to be good. But this is obviously irrational.
Ayn Rand comes close to offering this argument, when she discusses the meaning of "sacrifice" (in the bad sense of the word). The problem here is that "my good" does not mean the same as "my values". "My good" means that which benefits me. "My values" means that which I believe to be good. I can very well believe what benefits other people but not myself to be good. The issue between altruism and egoism is not over whether you should pursue what you value. It is over what you should value. The altruist says that you should value other people's lives and happiness. The egoist says that you should value only your own life and happiness (and value others' lives, then, only as means to promoting your own, and only insofar as they do promote your own life, just as you would value your car instrumentally). Both, obviously, hold that you should then proceed to pursue that which you value: the altruist, that you should promote the life and happiness of other people, the egoist that you should promote only your own life and happiness. There is no contradiction involved in either of these.
Now, one might propose to redefine egoism as just the view that one should always promote only what one values (or rather, to remove it from the realm of subjectivism, what one is correct in valuing). In this case, egoism becomes analytic. However, the other result of this redefinition is that no one has ever denied egoism. In particular, the ethics proposed by Jesus, Kant, the Buddha, Mother Teresa, or any of the other people one normally thinks of as proponents of altruism no longer conflict with 'egoism' in the least. None of these people has been so utterly confused as to say that we should pursue things that we do not correctly hold to be good. They have just proposed that it is good to give money to the poor, help to feed the hungry, and so on, spending a large proportion of our time and resources on these sorts of things. In other words, by thus making egoism analytic, one trivializes the thesis to the point where any action whatsoever is potentially consistent with the dictates of egoism taken just by itself, including all the things we usually call "altruistic". One can then redefine "altruism" as well, so that those actions are no longer called 'altruistic', or one can allow that altruism is consistent with egoism.
As a result, while one thus gets an easy validation for one's ethical theory, the ethical theory has become useless: it can not be used to determine whether a given action is right or wrong, because "selfish" will just mean "promotes whatever is good" - we no longer have an independent standard of what is good, which is what we originally thought egoism supplied. If dying to save your community is morally good (or rather, if the agent correctly believes it to be morally good), then that will count as "selfish", and any ethical theory whatsoever will count as a form of egoism. And if this is how one is going to use the word, then I certainly agree with 'egoism'.
One common thread in most discussions of ethics, especially amongst Objectivists, is the attempt to get something for nothing - that is, the attempt to get substantive, action-guiding moral principles (principles that will tell you that some particular action was right or wrong, or that some moral principle advocated by another philosopher was mistaken) out of nothing more than judicious definitions of words and the manipulation of language. This attempt is as delusory in ethics as it is in economics. You can't make a tuna sandwich without any tuna (and it won't help to redefine the word "tuna"), you can't construct a geometrical system without laying down any geometrical axioms, and you can't get a moral conclusion out of an argument without moral premises.
If egoism is self-evident, that would be a reason for egoists' not offering any argument in favor of it. Unfortunately, if this is the case, there is not much one can say about it - one might be unable to show that it really is self-evident, to those who do not find it so, or to explain why it is true; while those who are unable to perceive this truth also may be unable to explain why they do not see it.
I do not find egoism self-evident. I do not even find it the least bit plausible prima facie. From what I can tell (based on what they say), few other people find it plausible, and very few if any find it self-evident. This does not prove that the Objectivist who considers egoism to be obvious on its face is wrong (perhaps it is obvious to him, and perhaps he thereby knows it to be true; and perhaps the other people's mental faculties are defective), but it seems to leave us in at best some kind of stalemate, unless one of us can find arguments to settle the issue.
How do we resolve a dispute when one person says that p is self-evident, and another says that the denial of p is self-evident? (I am not saying an Objectivist egoist would appeal to self-evidence; I am just considering the possibility.) One way is to test the principle in specific cases. That is, by examining certain more concrete examples of the principle, we can get a better view of what it entails. When we do this, it might no longer seem evident. Another way would be to draw out the logical relations of the principle to other principles that we hold. If a general principle that is in question is shown to conflict with other principles that are plausible, that is reason to reject it. By the same token, of course, if it is shown to follow from other, plausible principles, that is reason to accept it.
Both of these methods may be applied to the issue of egoism. As far as I know, they are the only ways to test the thesis of ethical egoism.
Unfortunately, Objectivists usually object to the use of hypothetical examples to test moral principles, on the ground that the hypothetical examples do not represent reality. How, one might ask, can we draw conclusions about how the world really is from purely hypothetical premises, i.e., premises about some imagined but not actual situations?
This objection is vaguely felt by many who object to thought experiments in philosophy in general, but it is a logical error. You can validly deduce a categorical proposition from hypothetical premises. For example:
A -> (B -> C).
B -> not-C.
Therefore, not-A.

is a valid form of inference, where the "->" stands for the "if ... then" relation (i.e. "If x were true, y would be true" - N.B. not the so-called 'material conditional' of first-order formal logic). And this form of inference is relevant to the way hypotheticals are used in philosophy to test moral principles. The typical form of thought-experiment-based arguments in moral philosophy is as follows: "If moral theory T were true, then in situation S, it would be right to do A. But in situation S, it would surely be wrong to do A. Therefore, T is false." Notice that this form of argument is perfectly valid: the conclusion deductively follows from its premises (it's a variant on modus tollens). Notice also that both premises are hypothetical - i.e., both are about what would be right if so-and-so were the case. But the conclusion is categorical.
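Since the conditionals above are subjunctive, their logic cannot be fully captured by a truth table; but the classical core of the pattern, modus tollens (from "P -> Q" and "not-Q", infer "not-P"), can be checked exhaustively under the material reading. The following sketch is only that sanity check on the classical core, not a model of the subjunctive "if ... then":

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent false. (This does NOT capture the subjunctive
    # "if ... then" used in the text; it models only the classical core.)
    return (not p) or q

# Modus tollens is valid iff no assignment of truth values makes both
# premises true while the conclusion is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q) and not (not p)
]
print(counterexamples)  # [] - no counterexample, so the form is valid
```

Running through all four assignments, the only way to make "not-Q" true is q = False, and then "P -> Q" forces p = False, which makes the conclusion "not-P" true - so no counterexample exists.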
Some Objectivists refuse to even consider statements of the form "if A then B" where A is known to be false. I'm not sure whether they think that such statements are never true. If so, this would be another logical error. The proposition, "If A then B" does not assert A. To say, "If you lose your mittens, you will get no pie," is not to assert that you will lose your mittens. Likewise, to assert, "If I were a brain in a vat, I would have no knowledge of the external world," is not to assert that I am a brain in a vat; it is not even to suggest that I might be. This is obvious to anyone who understands the English word "if". In general, to say, "If A were true, then . . ." does not imply that A is true, it does not imply that A is likely to be true, and it does not even imply that A might be true (if anything, with the use of the subjunctive mood, it implies that A is false). For example, I can say, "If I lived in Alaska, I would have more clothes than I presently do." This is true. It is true in spite of the fact that I am not in Alaska, and I know I am not in Alaska. When I say that, I am not implying that I might really be in Alaska right now, and not in New Jersey as it appears.
Some will still want to know, reasonably enough, why thought experiments are useful. Even if they are in principle capable of proving conclusions about actuality, why are they necessary? Why can we not learn at least as well through the consideration of actual or at least realistic examples? Briefly, the reason is that hypothetical thought experiments provide a means for conceptual controls that often cannot be reproduced in reality. Or, in other words, they provide a way of mentally isolating a causal, explanatory, or logical factor for examination on its own which normally, in the real world, cannot be isolated, and to do so while still discussing a concrete situation.
Let me give an example to show what I mean. David Hume once came up with this thought experiment: suppose that in the middle of the night, the paper money in everyone's wallet, safe, or other stash, suddenly doubled in quantity - so there is twice as much money, but no other changes are made. Would the country then suddenly be enormously better off - would we all be twice as wealthy as we are now? No, in fact we would have exactly the same amount of wealth as we presently do, for there would be exactly the same amount of capital around, and the same availability of labor. (Everyone could then double their prices.) What this shows is that increases in the money supply do not translate to increased wealth; it can also be used to explain why increases in the money supply cause inflation.
Of course, such a scenario is impossible: all our money cannot magically double in quantity. But that is not the point. The reason the thought experiment is useful is that this way of thinking of it enables you to mentally isolate just the one factor desired for consideration: the quantity of money. We imagine just the quantity of money changed and nothing else. In the real world, one cannot do this. In the real world, it is not possible to change the money supply uniformly (i.e. increasing everyone's money, without redistribution) and it is impossible to change the money supply without affecting the economy in some other way at the same time. So I cannot cite a historical case in which nothing but the money supply was altered. This is why thought experiments are useful.
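Hume's scenario can be put as a minimal arithmetical sketch (the numbers are purely illustrative, not from any source): if every nominal balance doubles overnight and every seller then doubles prices, real purchasing power is unchanged.

```python
def purchasing_power(money, price_level):
    # Real wealth, in this toy model, is how many units of goods
    # a stock of money can buy at the prevailing price level.
    return money / price_level

# Illustrative numbers only: overnight, everyone's money doubles,
# and (as noted above) everyone can then double their prices.
before = purchasing_power(money=1000, price_level=10)  # 100 units of goods
after = purchasing_power(money=2000, price_level=20)   # still 100 units
print(before == after)  # True: no real wealth was created
```

The point of the sketch is the isolation the thought experiment provides: only the quantity of money (and, in response, the price level) changes, while capital and labor are held fixed, so the unchanged quotient is exactly the "no increase in wealth" conclusion.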
A similar thing is true of thought experiments in moral philosophy. If we want to examine the significance of one morally relevant factor for the evaluation of actions, people, or states of affairs, it is useful to be able to imagine and compare cases which differ only in respect of this one factor of interest, whereas there may be no actual cases of which this is true.
A thought experiment, in short, is not an exercise in fantasy but a tool of logical analysis, necessitated by the need for conceptual clarity (sc. distinguishing different relevant factors from one another in your thought), together with general facts about the nature of reality (sc. that morally relevant, or otherwise explanatorily relevant, characteristics do not come isolated in the real world). It is a way of concretizing abstract reasoning.
Now, if someone gives an argument against your moral theory in which the premises are true, and the conclusion follows logically from the premises, you cannot escape from the argument by refusing to entertain his premises, i.e., refusing to listen to the argument. I am going to give an argument against egoism which has those characteristics. The premises are true hypotheticals, i.e., true "if ... then" statements, and the conclusion that egoism is false follows logically from them. I will not regard the mere fact that my premises are hypothetical as showing that they must be irrelevant to (i.e., cannot entail) their conclusion, which is categorical. I have shown above that hypothetical premises can entail a non-hypothetical conclusion. Nor will I regard the mere fact that the hypotheticals are counterfactual, i.e., that their antecedents are false and known to be so, as showing that the whole "if ... then" proposition must be false. Both of those would be gross logical errors. If, therefore, an Objectivist wishes to answer my argument, he will have to do more than point out one of the aforementioned facts, and he will have to do more than simply refuse to listen to or think about my premises.
Suppose that I am in a hurry to get somewhere. I am walking to work, and if I am late, my boss gets mad at me. Furthermore, I like to get to work on time, because I have a lot of work that I want to get done. It is in my interests to get to work on time, but I am running a little bit late this morning. I presume no Objectivist will object to this so far - i.e., surely it will be granted that it is in my interests to get to work on time. Otherwise, there would be no reason for setting my alarm clock or walking quickly.
Now as I walk down the street, there are a lot of people in my way, slowing me down. I just happen to have in my pocket a hand-held disintegrator ray, though. The gun will quickly disintegrate any person I aim it at. It is believed that victims of disintegration suffer brief but horrible agony while being disintegrated, but after that, no trace of them is left. I hold back on disintegrating the people in my path, though, because some of them might be potential clients for my business. But then I see this homeless guy ahead, just wandering down the street. He is not threatening me, and I could go around him, but that would take a second or two longer, and I'm in a hurry. So I pull out the gun and disintegrate him, and then continue on my way.
Assume that I live in a society in which homeless people are so little respected that my action is both legal and socially acceptable. Homeless people are regularly beaten up, set on fire, etc., with impunity. Passers-by even regard it as an amusing entertainment. So I will not be punished for my action. Assume further that I dislike homeless people and don't like to see them on the street. So I do not feel bad about seeing the homeless guy disintegrated. In fact, it amuses me. Nor will my conscience bother me, because I am an ethical egoist, and so I believe that my action was morally virtuous. Therefore, after destroying the homeless guy, I should feel proud, not guilty.
The question is: Was my action morally right? If egoism is true, it was. I saved some time and mildly entertained myself, just as if I had disintegrated a pile of trash that was lying on the sidewalk getting in everyone's way. The other people in my society, who are themselves also egoists, will thank me for performing this public service, just as they would thank me for removing any other kind of useless clutter from the street. On the egoistic view, a person who does not serve my interests either directly or indirectly is just that - a piece of useless clutter, getting in my way.
Now it seems to me that this is obviously wrong. It is obvious that it is morally wrong to kill a person who is posing no threat to you, and that's not because he might prove useful to you some day. The egoist can respond to this thought experiment in one of two ways:
(1) Reject the intuition: That is, he could claim that yes, it was morally permissible and even praiseworthy to disintegrate the homeless guy. If someone says this, then I have nothing further to say. One who would say this is either insincere or morally corrupt.
(2) Accommodate the intuition: That is, the egoist could argue that for some reason, it was really not in my interests to destroy the homeless person. You never know when a person, presently homeless, might become useful, after all. Some day in the future, for example, he might get a job, and then he might possibly work for my company, or be a client, or otherwise contribute to the economy of my society. Or he might someday be able to be an organ donor, if not for my destroying his body.
Or, to take another tack, this particular homeless person might happen to have friends who might come after me to get me back.
What enables egoists to make replies like this is that it is almost impossible to assess the probabilities of all these possibilities in any definitive manner. However, what needs to be kept in mind is that, on the egoist's view, the fact that the other person is a sentient being, with a life of his own, is not what counts. All that counts is that he has a potential to serve my life, or to hamper it if I destroy him. Therefore, how I treat him need not be, in principle, any different from the way I treat inanimate objects. Sure, if there's a heap of trash lying on the sidewalk, it's possible that the heap of trash will someday be useful for something. It's also possible that destroying it will have some negative effects on me. Some insane trash-lover might get mad at me, though I have no reason to think that this is so. But none of this would prevent me from removing a heap of trash that I found on the sidewalk, if it was getting in my way. You don't save just anything that might be useful. If egoism is true, I should take exactly the same fundamental attitude towards other human beings as towards inanimate objects: if I decide that the likelihood of their being useful to me is sufficiently low, and the likelihood of my suffering ill effects from destroying them is also sufficiently low, then I will go ahead and remove them. Every day I throw away objects that have more likelihood of being useful to me some day than a homeless person on the street does. Every day I take actions, like crossing the street, that involve more risk to my person than is involved in destroying the homeless man in my hypothetical example.
But even if the egoist is able to think of some very plausible harm that I would be likely to suffer from killing another person, I will just modify the example to remove it. In other words, I stipulate that the homeless guy is not a potential client of my company, he is not going to get a job, he does not have a gang of friends to defend him, the passers-by on the street will not be angry with me, etc. And the question is, then does it seem that it's right to kill him?
Many Objectivists misunderstand the way hypothetical counter-examples work. The point is to test a general principle: "The only thing that ought to matter to me, is what promotes my own good." One tries to test this by imagining a specific situation in which an action promotes my own good, but it goes against some other thing that is often held to be valuable. The creator of the counter-example gets to stipulate what goes on in the example. So I get to stipulate, by fiat, that, in the hypothetical situation, I do not receive reprisals for my action, et cetera. The only thing that I do not get to stipulate is the verdict on the example, i.e., would the action thus described be right or wrong. That is where the reader or listener is supposed to exercise his own judgement. If the hypothetical action I describe seems to you to be morally right, then my argument has failed. If it seems to you to be morally wrong, however, then it shows that you are not truly an ethical egoist.
I can state my basic argument more abstractly, and also more starkly, like so:
(1) If egoism is true, then you should perform any action which benefits you (on balance).
(2) Therefore, if egoism is true, then if A benefits you only ever so slightly while killing 4 million other innocent people in gruesome agony, you should do A.
(3) You should not kill 4 million people, etc. just to achieve a minor benefit.
(4) Therefore, egoism is not true. (from 2,3)
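The inference from (2) and (3) to (4) is just modus tollens. For readers who want to see the validity mechanically confirmed, here is a brute-force truth-table check, using propositional placeholders of my own choosing: E for "egoism is true" and K for "you should do A (the mass-killing action)". Premise (2) becomes E → K, premise (3) becomes not-K, and the conclusion (4) becomes not-E.

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# E: "egoism is true";  K: "you should do A (the mass-killing action)".
# Premises: (2) E -> K  and  (3) not K.  Conclusion: (4) not E.
# The argument is valid iff the conclusion holds in every row
# of the truth table where both premises hold.
valid = all(
    (not e)
    for e, k in product([True, False], repeat=2)
    if implies(e, k) and not k
)
print(valid)  # True: no assignment makes both premises true and the conclusion false
```

Only one row of the table satisfies both premises (E false, K false), and in that row the conclusion not-E holds; so no reply to the argument can take the form of denying that (4) follows.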
#1 follows directly from the definition of egoism. Egoism is the view that benefit or harm to myself is the sole reason I can have either for or against any action, i.e. nothing other than my own benefit is ever relevant to assessing what I should do. It follows directly from this that for any A, if A benefits me (on balance), I should do A. In particular, if A benefits me only slightly (on balance), I should do A. It follows from that, finally, that if A benefits me slightly though it kills 4 million other people, I should still do A. That I never have been and never will be presented with such an action does not alter the truth of this conditional. Because egoism holds that my own benefit is the sole morally relevant factor for assessing my actions, and the benefit of others is not at all relevant, it follows that if a situation occurred in which I obtained a small benefit in spite of enormous harm to others, I should ignore the harm to others as not even the least bit relevant. This problem remains unaltered by the supposition that my interests do not actually conflict with those of others.
#3 I consider a self-evident moral fact (as egoism is not). If someone has a moral theory which contradicts (3), then I consider that a sufficient reductio ad absurdum of his theory. I think that some people might say they don't agree with (3), but I find it difficult to believe that anyone would really mean it, and difficult to imagine what sort of moral claim he would accept if he rejected this one. The only way I could see someone not finding (3) obvious (or claiming not to find it obvious) would be if he were caught in some theory of which he was quite convinced and which he came to see contradicted (3). If this happens, though, I say it is time to check your premises. Were they really as certain as (3) itself is? If not, it is more reasonable to reject the theory than to reject (3).
On the issue of premise-checking, I should point out that I have seen alleged proofs that 1=0, that pi=2, that motion is impossible, that knowledge is impossible, and a good number of other absurdities. I think everyone has seen this sort of thing. They are often extremely difficult to refute. When you encounter an argument like this, i.e., an argument with an absurd conclusion, even if you can't explain what is wrong with it, you do not just accept the conclusion. You say there must be something wrong with the argument, whether you can see it or not. When you see Zeno's paradoxes, it would be irrational to conclude that motion is impossible, even if you can't see what is wrong with his arguments. You know that his arguments are wrong, because the conclusion is absurd.
But when it comes to philosophy, people are disturbingly reluctant to reconsider their own arguments - once a person sets down a path of reasoning, he is inclined to accept all of its logical implications, however absurd. This is the only explanation for how a person could hold an ethical theory that contradicts (3). I suggest that it is an adequacy condition on any moral theory that it should have (3) as a logical consequence, just as it is an adequacy condition on any complete physics that it should have the result that rocks exist. If it doesn't get at least that much right, there's no hope for the theory.
Ethical egoism is inconsistent with the idea that individuals have rights, for the same reason that utilitarianism is. The reason is that any principle of rights, properly so called, functions as a moral side constraint on action, not a moral goal. (The terminology is from Robert Nozick, in Anarchy, State, and Utopia.)
A moral goal is some good thing that our actions ought to aim at. Actions are judged, under a moral goal, by how much of this good they produce (if it comes in degrees), or by whether they produce the good or not. For example, the imperative of the hedonist, "Maximize pleasure," expresses a moral goal. So too the imperative of the utilitarian ("Produce the greatest happiness for the greatest number").
Rights are not like that. They do not identify a goal to aim at. Instead, they identify constraints on the permissible ways of pursuing your goals: "Pursue your goals without violating constraint C."
Thus, for example, "Never kill" would be a side constraint. "Minimize the number of killings that occur" would be a moral goal. Notice that these are different: the latter, but not the former, would permit you to sometimes kill people, if doing so was an efficient means to decreasing the overall number of killings in the world.
Now let's define "consequentialism" (a technical term from contemporary moral philosophy) as the view that says that only moral goals exist. That is, according to consequentialism, the only thing that is ever relevant to assessing whether an action is right or wrong is how well it promotes certain goals. Whatever means are most efficient for promoting the desirable goals are what ought to be done. (Of course, consequentialists can differ with one another over exactly what goals are legitimate.) This is the meaning of the slogan, "The ends justify the means." The contrary view, that the ends don't justify the means, holds that there exist constraints on the permissible means you can take, even for the pursuit of legitimate goals.
Egoism is a consequentialist view. It says that the sole factor relevant to the rightness of an action is how much it benefits the agent. Hence, an agent ought always to aim at this one goal, and he should do whatever best promotes it, without qualification.
The principles of individual rights are side constraints - they do not say, for instance, "Do not steal someone else's property, unless it's in your interests to do so." They just say, "Do not steal." That is why it is not an adequate defense, if you are brought on trial for theft, to explain that you expected to benefit by taking the victim's property. Courts do not even listen to that kind of 'defense', nor should they. Again, the non-initiation of force principle does not say, "Exercise force if and only if you can get some benefit by doing so." Rather, whatever benefits you are seeking for yourself, you have to do it within the constraints imposed by other people's rights.
Now, one might maintain that the principles of individual rights are just rules of thumb designed to help you promote your interests - the Objectivist says "Don't steal" because he has found, as a general rule, that stealing hinders one's own interests. This makes the principles consistent with consequentialism, but it has the result that they are not absolute: you should violate them whenever, in the particular circumstances, you find that violating them furthers your interests. Furthermore, it would mean that in order to show that you yourself have a right to do A, you would have to show that allowing you to do A serves everyone else's interests. If in a particular case seizing your property benefits others, then your right to property is in abeyance, because on the present view it no longer functions as a side constraint. Thus, eminent domain cases are in principle justifiable.
An Objectivist will try to argue that in most or all actual cases, it does not benefit others to seize my property. I will not take that up now, since it is very difficult to determine. I will only say that it seems to me that at this point the basic idea of our having rights has been abandoned - if my use of my property has to be justified by the usefulness to others of allowing it, then it is no longer being said that I have a right to property. This brings us to the next point . . .
Egoism is inconsistent with the idea that individuals are ends in themselves. My saying this will surprise some Objectivists, because they usually think that egoism either follows from or entails the proposition that individuals are ends in themselves. Here is why I say this:
If egoism is true, then the sole justification for my doing or refraining from doing anything is that it serves my interests. By the same token, it must be said that the sole justification for any other person's doing A or refraining from A is that it serves his interests.
Now how do rights fit in here? To say that I have a right to A, where A is something I can either do or have (as in "I have a right to free speech" or "I have a right to own a gun") is to say that it is morally wrong for others to forcibly interfere with my doing or having A. It is to say something about what is morally right for other people to do with respect to me. (It doesn't constrain my actions; if I have a right to do A, I may still interfere with my own doing of A.)
Now how can I defend my rights intellectually? How do I show that I have a right to do something? If egoism is true, in order to show this, I would have to show that it is in the interests of other people to allow me to do A! This seems outrageous from an individualist ethics point of view, but the consequence strictly follows: if egoism is true, the only possible justification for claiming that other people should do X would be that it serves their respective interests to do X; so the only justification for claiming that other people should not interfere with my doing A is that it is in others' interests not to interfere with my doing A.
Similarly, the only reason why other people should even allow me to live, is that it is in their interests to allow me to do so. I.e., I have a right to life because my life serves other people.
Why do Objectivists think that the opposite is the case; why do they think that egoism coheres with the principle that individuals are ends in themselves? Well, because they only look at one side of it: they see that egoism means that the justification of my own actions is always that they serve myself. My actions do not need to serve others. However, the other side of the coin is that other people's actions or inactions need take no account of my good and in fact they should not. So while I get to regard my existence as an end in itself, and as serving no one and nothing other than me, other people get to equally legitimately, if egoism is true, regard my life as merely a potential resource serving them, just as they should regard everything in the world. That is, from the point of view of my next door neighbor, my life is only good insofar as it serves my next door neighbor. From the point of view of Mikhail Gorbachev, my life is only significant insofar as it furthers Mikhail Gorbachev's interests. And so on. While this result sounds paradoxical, perhaps even contradictory, it is justly drawn from the theory. What matters to each person is solely what serves that person's interests.
I do not understand how Objectivists are able to maintain that there are no conflicts of interest in a rational society, but they seem to regard it as a fundamental point in their ethics. I suspect they so regard it because they think this principle enables their ethical system to escape objections such as those of sections 5.3.4 and 5.3.2.
Suppose I own a store, a small market. Across the street there is another store of the same kind, owned by Bill. When a customer comes down the street, it is in my interests for the customer to enter my store. It is in Bill's interests for the customer to enter Bill's store. The customer will not enter both stores: if he goes to my store, he will not go to Bill's, and if he goes to Bill's store, he will not go to mine - a conflict of interests, pure and simple.
Since the result that Bill's and my interests have come into conflict follows from just three propositions, there are only three ways for an Objectivist to counter this argument. The Objectivist would have to argue:
(1) That it is not in my interests for the customer to enter my store.
But this is highly implausible. If it isn't in a store-owner's interests for a customer to enter his store, why do they spend money on advertising, try to offer a wider selection or lower price than competitors, et cetera?
(2) That it is not in Bill's interests for the customer to enter his store.
This is implausible for the same reason.
(3) That the two prospective events named in #1 and #2 are not in conflict.
And this is implausible also, on a reasonable construal of "conflict" - namely, if one occurs, the other does not occur. Normally, a customer will enter one or another store but not both. Therefore, not both Bill's and my interests can be satisfied in this case - i.e. they are 'in conflict.'
I do not see how one can hope to avoid this conclusion. Please note that there are no other possible responses to this argument. Any response to the argument that does not argue either (1), (2) or (3) above must be irrelevant, since my conclusion that Bill's and my interests conflict follows strictly from the three premises that it's in my interests for the customer to patronize my store, it's in Bill's interests for the customer to patronize Bill's store, and the customer's patronizing my store conflicts with his patronizing Bill's store. In particular, to point out that I recognize Bill as having property rights, that I shouldn't attack Bill, that it is in my interests to have a free, capitalist society, and so on, is irrelevant. Nothing along those lines refutes (1), or (2), or (3).
G.E. Moore identified the following as the fundamental contradiction of egoism (Principia Ethica, section 59): The egoist says that each person ought rationally to hold, "My own happiness is the sole good": "What egoism holds, therefore, is that each man's happiness is the sole good - that a number of different things are each of them the only good thing there is - an absolute contradiction!" (emphasis Moore's).
This is a criticism that still seems to me, as it did when I first read it, exactly on the mark. Let's look at it more closely, though. The ethical egoist is one who believes that he ought to aim only at promoting his own happiness (it does not matter if we substitute "interests" or anything else for "happiness"). Certainly, then, he thinks that it is good that he should be happy. What does he think everyone else should do?
He might maintain, "Everyone else also ought to serve my interests," but this would be implausible. Then he would have to answer "What's so special about you?" Unless he thinks he himself has some kind of special status, some characteristics that no one else in the world has, he must grant that, if his happiness is good, the happiness of others is also good.
Therefore, to maintain the plausibility of his theory, the egoist has to say that everyone's happiness is good, and that each person ought to aim at that person's own happiness. But if other people's happiness is also good, then the egoist must be hard put to explain why he does not aim at it in the same way he aims at his own. In other words, how can he justify acting as if his own happiness were the only good thing there is, given that he grants that every other person's happiness is good in just the same way that his own happiness is?
We can phrase the conflict another way, in terms of the idea that individuals are ends in themselves. Let A be an egoist, and let B be the egoist's next-door neighbor. The egoist regards his own life as an end in itself, and he says B ought to regard B's life as an end in itself. But, insofar as A is concerned only with furthering his own life, A cannot, himself, treat B's life as an end in itself. A's sole value is A's life; therefore, A can value B's life, if at all, only as a means (i.e., if B's life furthers A's). Similarly, when A recommends to B that B should be an egoist, he is recommending that B regard A as valuable only as a means. This follows necessarily from the supposition that B should regard B's life as the sole end in itself, which is the meaning of egoism. A therefore seems to be caught in a contradiction: A holds that A's own life is an end in itself, but at the same time A thinks that no one else ought to recognize A's life as an end in itself. In a parallel contradiction, A holds that other people are valuable only as means, but he holds that other people are correct in regarding themselves as valuable not merely as means but as ends in themselves. In other words: each individual is correct in a belief which directly contradicts what every other individual correctly believes. A is correct to believe P, but B is correct to believe not-P. Is this not, in Moore's words, "an absolute contradiction"?
Notice that here, the Objectivist doctrine that rational people's interests never conflict, even if it were true, would provide no help. That the life of my next door neighbor should be valuable as an end in itself and that also, it should be valuable only as a means to further my own happiness, is a contradiction, regardless of how well my and my neighbor's happiness may harmonize.
My moral theory is known as "ethical intuitionism". "Intuition", in Western philosophy, refers to the kind of direct awareness that reason provides us - i.e., foundational, a priori knowledge. It does not refer to a kind of supernatural sixth sense, it does not have anything to do with "women's intuition", it does not refer to an inarticulate sense of something caused by one's experience with similar situations. It is a technical term in epistemology.
In my view, value is a universal, a property that some things have, just as 'white' or 'length' are (see section 4). The term "good" or "value" cannot be defined. There must be some terms that are indefinable, because every definition is in terms of simpler concepts, and there cannot be an infinite regress. There is therefore no intrinsic difficulty in my saying that 'good' is one of these indefinable concepts: it is absolutely simple; it does not have any constituents. In the same way, 'white', which you also cannot define, is a simple concept.
And since, in my view, the faculty of reason allows us to grasp universals as such and to understand facts about them, it also allows us to understand the relationships between this universal, goodness, and other universals, such as 'life' or 'happiness'. Ethics, in my view, is a body of rational, a priori knowledge, just as mathematics, logic, and metaphysics are.
If, as I believe, some moral principles are self-evident, then there is no need to derive ethics from biology, physics, or any other descriptive facts. This is how my theory resolves the is/ought problem.
One objection to the above is that moral knowledge cannot be self-evident, since what appears right to some people may not seem right to others. If intuitionism is true, then how, it may be asked, could differences of opinion concerning moral issues occur?
I reply in four parts:
(I) Intuitionism does not hold that all true moral principles are self-evident. Intuitionism holds only that some moral principles are self-evident. As instances, consider the following:
(1) It is unjust to punish a person for a crime he didn't commit.
(2) Happiness is preferable to suffering.
(3) It is wrong to torture other creatures just for the fun of it.
(4) If it is bad for one person to suffer x, then it is worse for two people to suffer x.
(5) It is better to have a longer period of pleasure, rather than a shorter period.
(6) Courage is a virtue, not a vice.

The reader can doubtless extend the list further himself, given these examples. I have been careful not to include any item which is merely my opinion, such as that capital punishment is just, but only things which really are self-evident. It is not merely my opinion that courage is a virtue, as if a reasonable person might instead think it a moral vice.
Is there really a rational person, who understands each of (1) - (6), who disagrees with them? (N.B. if even one of the above propositions is self-evident, my point is made.)
Now it may reasonably be doubted whether more controversial ethical judgements, such as "abortion is murder," can be derived from moral propositions which have the degree of self-evidence of the above. This, however, I do not perceive as an objection to intuitionism. As in all other fields of study, if it is impossible to derive certain propositions from self-evident facts, then those propositions simply cannot be known to be true. This is no objection to ethical intuitionism, any more than the fact that the continuum hypothesis is undecidable is an objection to set theory. I would caution the reader, however, against drawing any such conclusion too hastily. At first glance, and perhaps even after considerable reflection, it is not apparent that the Fundamental Theorem of Calculus can be derived from the axioms of Peano Arithmetic. Similarly, we cannot say at first glance what principles might be derivable from ethical axioms.
(II) The argument from disagreement, if valid, would refute nearly all of philosophy, including nearly all non-intuitionist ethical systems (with exceptions to be noted). For the fundamental challenge is this: If one claims that there is a means of knowing p, which is available to any rational person, then how does one explain the fact that many people, otherwise apparently rational, do not agree with p? Notice that this challenge arises regardless of whether the means of knowledge in question is said to be "intuition", or reasoning, or observation, or anything else. The only feature of it that is essential to the argument from disagreement is that it be a means of knowledge available to all (rational) people. If any such means of knowledge exists, how come everyone has not by now discovered the truth? And notice also that this question is not specific to ethics but applies to any controversial subject, including all of philosophy.
The only (quasi-)philosophical theories that would not be subject to this objection are those that maintain that some individuals have a special access to the truth that others lack - such as divine revelation. Those who hold, for example, that God chooses only certain people, from time to time, to whom to reveal metaphysical and ethical truths by direct inspiration, can answer the argument from disagreement thus: there is continuing disagreement because not everyone has been subjected to the necessary revelatory experience. They may use this reply if, and only if, they claim themselves to be among those who have received divine inspiration (otherwise, they would have no way of explaining how they know ethical truths).
If, therefore, the argument from disagreement is valid, then it refutes Objectivism just as surely as it refutes intuitionism. Not even the moral skeptic is safe from the objection from disagreement. For in asserting moral skepticism, he implies that we have some way of knowing moral skepticism itself to be true. But if this is the case, then how come (again, barring divine revelation as the requisite means of knowledge) not all rational people agree with moral skepticism?
The argument from disagreement is thus a double-edged sword - if one uses it to attack intuitionism, it will cut equally well against one's own moral theories (including itself).
(III) But the argument from disagreement is not valid, even if we discount the point previously made in (I) above. The fact that some people fail to perceive a fact simply does not destroy the possibility that other people do perceive it. To suppose that it does would be to invoke what I call "the idiot's veto":
The Idiot's Veto: The thesis that any individual has the power to block a fact from the realm of objectivity or knowledge, merely by persistently refusing to agree with it, and resisting all efforts to educate him.

This principle would entail, for example, that if there is any person (perhaps the mentally retarded would fit this description) who fails to understand that multiplication is commutative, then the Commutativity Axiom is null and void: i.e., no one can know that multiplication is commutative.
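The commutativity example can be made concrete. In a proof assistant such as Lean, the commutativity of multiplication on the natural numbers is a theorem of the system, checkable mechanically, and its status is entirely independent of whether any given individual assents to it. A minimal sketch (Lean 4 syntax assumed; `Nat.mul_comm` is the core library's proof):

```lean
-- Commutativity of multiplication on the natural numbers.
-- The proof exists whether or not any particular person
-- understands or accepts it.
example (a b : Nat) : a * b = b * a := Nat.mul_comm a b

-- A concrete instance, verifiable by computation alone:
example : 6 * 7 = 7 * 6 := rfl
```

The point is the one made in the text: the doubter's failure to grasp the theorem leaves the theorem, and everyone else's knowledge of it, untouched.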
Needless to say, I do not accept the Idiot's Veto; nor should any objectivist.
(IV) There are, of course, numerous explanations available for the existence of disagreement in ethics, none of which is incompatible with intuitionism:
(a) Different individuals have differing levels of intelligence;
(b) Some have reflected more than others;
(c) Most moral facts are derivative (see point (I) above) and not self-evident;
(d) Many people have various biases, which ethics is particularly prone to bring out, since people's personal interests are often very much at stake in moral issues. In addition, people are prone to be emotional about moral issues, and it is well-known that emotions often tend to bias one's judgement. Take the abortion issue as a case in point: This is one of the most controversial of moral issues. It should be no surprise that it is also an emotionally charged issue, involving as it does reproduction and babies. It takes little insight to see that many if not most of the arguments on abortion are emotional appeals (e.g. images of babies being killed). I therefore find it difficult to believe that the source of the continuing disagreements is primarily intellectual in nature.
(e) Many people base their moral beliefs on religion, just as in previous ages they often based their beliefs about physics, cosmology, and geology on religion. This led to numerous firmly-held but erroneous beliefs about those sciences, and it is really no surprise at all that it has led to the same sort of beliefs about morality. The disagreements due to this cause do not create an objection to moral intuition, any more than they create an objection to science. It just means that it is time to stop basing our beliefs on religion.
(V) Finally, human beings simply are fallible creatures. None of our mental faculties is exempt from error. Errors in perceptual judgements, in memory, and even in introspection periodically occur. Science has frequently made errors. Even attempted mathematical proofs and other deductive reasoning can go wrong - as in the case of the 'proofs' that 1=0, and other fallacies too numerous to mention. But this does not prove that we can never know anything. Therefore, the fallibility of moral intuition does not indicate that it is not a legitimate means of knowledge.
If intuitionism is true, we can resolve disagreements about ethics in the same way (mutatis mutandis) that we can presently resolve disagreements about Objectivism if Objectivism is true: namely, try to find other principles, not in dispute, from which the desired moral conclusion can be derived. If that doesn't work, i.e., there is no common ground between the two parties, then the disagreement cannot be resolved, period. This is not a peculiarity of ethics under the intuitionist theory; this applies to any claim whatsoever, whether in philosophy or outside it: i.e., if you and your interlocutor have no intellectual common ground, if he flatly denies what you take as axiomatic, then your dispute cannot be resolved. So what? Does that prove that nobody can ever know anything? If an Objectivist is unable to convince a Kantian to change his mind and become an Objectivist (and vice versa), does that mean that Objectivism isn't true? No. Does it mean that we'll never know if Objectivism is true? No. Does it mean that reason is impotent to discover philosophical truth? No. Then the possibility of unresolved ethical dispute does not show any of those things about ethics. It does not show that one of the moral principles in dispute isn't true, it doesn't show that we can't know which of them is true, and it doesn't show that intuition is impotent to discover moral truth. Philosophy does not need to provide a technology for inducing belief in all true propositions in everyone. Indeed, the hope of such a method would be delusory.
It is not intuitionism that generates the possibility of unresolvable dispute. It is the volitional nature of consciousness. If no matter what you say, your interlocutor refuses to agree, always demanding proof behind proof behind proof, then there is nothing you can do to make him agree.
Again, however, I would caution the reader against concluding too hastily as to the unresolvability of a particular ethical (or other) dispute. Both philosophical thought experiments and derivation from other principles can have a surprisingly long reach. You cannot conclude that since you have not yet found a way to convince someone of a moral conclusion, you will never find one. Most people are not so stubborn as my imagined skeptic who denies everything, and it is highly unlikely that two people will disagree about every moral issue (Cf. section 5.4.1, part (I)).
1. The free will thesis: The thesis that people have free will.
2. Free will: A person has free will if and only if he sometimes is in situations in which he can choose between two or more available actions, and which action he performs is determined by his choice.
3. The law of causality: The thesis that every event has a sufficient cause.
4. Sufficient cause: A sufficient cause of an effect is a cause that, if it occurs, renders it impossible that the effect fail to occur. I.e., if the cause occurs, the effect must occur. (Distinguished from a necessary cause, in which the reverse is true: i.e., the effect cannot occur unless the necessary cause occurs.)
5. Indeterminism: The denial of the law of causality, esp. the thesis that some human actions lack sufficient causes.
Now here's what's so puzzling about free will:
(1) It seems clear that we have it.
The Objectivists and I are in complete agreement about that. Nearly every voluntary choice that I make is, if any credit is to be given to the data of introspection and experience, an exercise of free will, i.e., a choice between two or more available alternatives, in which the alternative that is actualized depends on my choice.
(2) It seems that free will is incompatible with the law of causality.
For consider any action I perform, A. From the law of causality, we know that there was a cause of this action, call it B, and given that B happened, A has to occur. If A has to occur, then no alternatives to A are truly available, or in other words, I cannot fail to do A; so I lack free will with respect to A.
Now this last conclusion can be avoided if, but only if, it can be successfully argued that I had a choice about whether B occurred. If I had no choice about whether B happened, and given that B happened A has to happen, then I neither have nor ever had any choice about whether A would occur.
Suppose, then, that I do have a choice about whether B occurs. We know from the law of causality, again, that there must be another event, call it C, which was a sufficient cause of B. We can repeat the same argument above: if I had no choice about C, then I had no choice about B, so I have no choice about A. And so on.
This series can be iterated indefinitely. We can see now that there are only three possibilities:
(i) An infinite regress: i.e., there is an infinite series of events stretching back, and I have a choice about each and every one of them.
But surely this is not the case. At some point, the series is going to go back to a point before I was born.
(ii) The series of causes traces back, sooner or later, to an event (call it Z) that I do not have a choice about.
And in that case, I do not have free will. I have no choice about Z; therefore, I have no choice about whatever events necessarily follow from Z. Or in other words, I could not have avoided Z (by hypothesis); therefore, I could not have avoided any of the necessary consequences of Z.
(iii) At some point in the series the law of causality is violated: either we have an event that has no cause, or we have an event that has a cause but the cause is not sufficient for its effect (i.e. the effect could have failed to follow the cause).
Now it's clear these are the only three logical possibilities, since as long as the law of causality holds, the series of causes continues to stretch back; and that series must either eventually reach something I have no choice about, or else be an infinite series of things I do have a choice about.
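The regress can be put schematically. As a sketch (the notation is mine, not part of the argument as stated above): let $e_0 = A$, let $e_{n+1}$ be a sufficient cause of $e_n$, and let $C(e)$ abbreviate "I have, or had, a choice about whether $e$ occurs." The argument of (2) rests on the lemma that lack of choice propagates down a chain of sufficient causes:

```latex
% Lemma (from the definition of sufficient cause):
%   if e_{n+1} is a sufficient cause of e_n, then
%   \neg C(e_{n+1}) \implies \neg C(e_n).
%
% So long as the law of causality holds, the chain e_0, e_1, e_2, \ldots
% continues, and exactly one of (i)-(iii) obtains:
%   (i)   the chain is infinite and \forall n\; C(e_n);
%   (ii)  \exists k\; \neg C(e_k);
%   (iii) some e_k lacks a sufficient cause (causality is violated).
%
% Given (ii), applying the lemma k times yields:
\[
  \neg C(e_k) \;\Longrightarrow\; \neg C(e_{k-1})
  \;\Longrightarrow\; \cdots \;\Longrightarrow\; \neg C(e_0),
\]
% i.e., I have no choice about A.
```

Ruling out (i) and (iii), as the text goes on to do, thus forces (ii), and with it the deterministic conclusion.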
Assuming that (i) is not the case (as the Objectivist theory agrees - for Objectivism holds that there is a primary choice), the only two remaining possibilities are (ii) I have no free will and (iii) the law of causality is violated.
At this point, for Objectivists, the paradox is already apparent - for all Objectivists accept the law of causality. An Objectivist cannot accept (ii) and he cannot accept (iii).
However, other philosophers who wished to defend free will have sometimes chosen to accept (iii). To complete the paradox, we turn to:
(3) It seems that free will is incompatible with indeterminism as well.
For suppose that A is an action of mine that has no sufficient cause. In that case, it would appear that there is no explanation for why A occurred - it just happened at random. This does not seem to make my action free, and at any rate, if this is the nature of 'free will,' it seems that free will is something we would be better off without. F.H. Bradley pillories the notion of contra-causal freedom (in Ethical Studies):
[T]he will is not determined to act by anything else; and further, it is not determined to act by anything at all .... Freedom means chance; you are free because there is no reason which will account for your particular acts, because no one in the world, not even yourself, can possibly say what you will or will not do next.

Contra-causal freedom seems to entail that my actions are unconnected to my beliefs, desires, or personality traits. Therefore, it means that (for instance), regardless of the fact that I do not believe in God and that I have no desire to promote religion or to make a fool of myself, since I have 'free will,' I might, in the next minute, find myself running outside yelling about the coming of the Lord. It seems to me that if this is possible, so far from giving me free will, it means that I do not have free will, specifically because it means that I do not have control over my actions.
And now the paradox is complete: We have free will, but free will is incompatible with determinism, and free will is incompatible with indeterminism. One of these propositions must be false.
An Objectivist's options on the free will puzzle are limited. Objectivism holds the Law of Causality to be a corollary of the Axiom of Identity. Therefore, the law of causality can have no exceptions whatsoever. I do not think that one could reject the law of causality and still call oneself an Objectivist. Nor could one reject the free will thesis and still call oneself an Objectivist. The Objectivist, therefore, must reject (2). And in fact Peikoff is quite explicit on this (in OPAR): Free will is a kind of causality, not an exception to causality. However, this is probably the sketchiest part of the Objectivist philosophy, at least among issues that are of central importance. Endorsement of the free will thesis is an essential tenet of Objectivism, but the Objectivist account of free will, i.e. of how it works, seems to me very poorly thought out.
The problems all come down to the 'primary choice'. What Peikoff calls the 'primary choice' is a choice which does not rest on any previous choice, i.e. it is a choice that is not caused or explained by any other choice, but which can explain other choices. Now it's clear that there has to be such a thing, if there are any choices at all, since otherwise there would be an infinite regress. In the Objectivist theory, the primary choice is always the choice whether to focus one's consciousness. And this is a plausible candidate for a primary choice (though I remain noncommittal on whether there may also be other choices that are primary), for, until one focuses one's consciousness, one is generally not in a position to understand and evaluate one's alternatives for action; therefore, it seems that focusing one's consciousness is a precondition on all (other) choice.
Now the success or failure of the Objectivist theory of free will is clearly going to turn on the primary choice: if it can be successfully maintained that the primary choice is free and does not violate the law of causality, then we might as well grant the rest to Objectivism (i.e. that all or most of the rest of our actions are free) and consider the problem of free will solved. But here is the problem: Is the primary choice caused? That is, suppose I have chosen to focus my consciousness; was there a cause of my doing this? This question is just an instance of the question about free will that we started with. Now either the primary choice is caused, in which case it appears that it is not free, or it is uncaused, in which case it violates the law of causality. The dilemma can not be evaded.
To look at the matter more closely, suppose the Objectivist answers yes, the primary choice is caused. Now by hypothesis, it is not caused by anything else that I have chosen (that is the import of calling it the primary choice). Therefore, it is caused by something I have not chosen. Call this something, whatever it is, "C". By hypothesis, I did not choose C and could not have avoided C's occurrence, and given that C occurred, it was necessary that I focus my consciousness at this time (by the nature of a sufficient cause); therefore, I could not fail to focus my consciousness at this time, i.e. I was not free to refrain.
Notice that this is exactly the deterministic threat that started the problem of free will in the first place, now applied to the 'primary choice' - and Objectivism accepts in the general case that determinism is incompatible with free will. That is, in general, Objectivists accept that for me to have free will, my actions have to not be caused by my environment, my genes, or anything else that is external and outside my control. So they must accept the validity of the present argument - i.e., if the primary choice is caused by external factors, then it is not a free choice.
We already know that the primary choice, by definition, is not caused by any other choice of mine. There therefore appears to be but one alternative left:
Suppose the Objectivist answers no, the primary choice is not caused. Then he gives up the law of causality, and, if the law of causality is a corollary of Identity, he gives up the law of identity. It is no good at this point to protest that choosing freely is part of the nature of man and therefore not a violation of identity - for what that really amounts to, on the present view, is this: It is part of the nature of man to violate the law of causality; i.e., it is part of our identity to violate the law of identity. No amount of stress on the word "nature" in that sentence can remove the contradiction.
Of the law of causality, Peikoff writes, with characteristic explicitness:
[A]n entity must act in accordance with its nature. The only alternatives would be for an entity to act apart from its nature or against it; both of these are impossible. . . . In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity. [OPAR, 14; emphasis in original]

Of free will, he writes:
A course of thought or action is 'free,' if it is selected from two or more courses possible under the circumstances. In such a case, the difference is made by the individual's decision, which did not have to be what it is, i.e., which could have been otherwise. [OPAR, 55]

I admire the clarity of writing (this is not a facetious remark) that makes the contradiction absolutely palpable: the law of causality entails that an entity always has only one course of action possible in any given circumstances; free will entails that we sometimes have more than one course of action possible given our circumstances.
Now perhaps I am being slightly uncharitable. Perhaps I should have read the first passage with an implicit qualifier, "...except in the case of free will." That is, perhaps Peikoff would rephrase his description of the law of causality to say that any entity, except for a consciousness having free will, always has only one course of action possible to it. In that case, the formal contradiction is removed.
But this won't work, and not only because it is merely a Pickwickian way of avoiding the notion of free will as an 'exception' to the law of causality. The more fundamental problem is the Objectivist doctrine that the law of causality is a corollary of identity. Look back at the quotation from Peikoff on causality. The "therefore" in the third sentence is indicating that the conclusion that there is only one possible course of action follows from the facts that an entity cannot act apart from its nature and that an entity cannot act against its nature. If, therefore, Objectivism maintains that human beings, alone among entities, sometimes have more than one possible course of action, it is fair to ask the Objectivist which of those two premises is not true of the human consciousness: Are we able to act apart from our nature, or are we able to act against our nature?
It therefore appears that introducing the theory of the primary choice produces no progress whatsoever on the problem that we started with. We still have exactly the same apparent conflict between free will and the law of causality.
How might one respond to the dilemma? One approach might be to deny that the question of whether it has a cause is applicable to the primary choice, i.e., to claim that for some reason it doesn't make sense to ask what caused the primary choice. But this hardly seems defensible. The primary choice is something that happens, i.e., an event. The Law of Causality says that every event has a sufficient cause. So either the primary choice has a sufficient cause, or the primary choice is an exception to the Law of Causality. If it somehow doesn't make sense to ask whether the primary choice is caused, then it is an even more stark exception to the law of causality, because it does make sense to ask about any other event whether (and by what) it was caused.
A more promising line of approach would be the agent causation approach. This is the view that a free choice is not caused by any other event, but it is caused by the agent. In this view, events can be caused not only (or perhaps not at all) by other events, but also by entities, and one might go on here to define an 'action' as an event that is caused by an entity. This enables one to maintain the law of causality as "every event has a cause" without leading to an infinite regress, since the cause is not always another event. We might then say that an action is free when it is caused by the agent, and not free when it is caused by something external to the agent.
This may be a good way of resolving the problem, but I have doubts about whether it is an Objectivist way of resolving the problem. If the law of causality says only that every event has a cause, and the cause may be an entity, then it says nothing more than that every event is the action of some entity. In that case, it does not rule out the possibility that an entity might have two or more courses of action available to it at some time - which is why this view allows the possibility of free will. But at the same time, this formulation of the law of causality also allows the possibility of chance events, such as are contemplated in most interpretations of quantum mechanics. The radioactive atom is capable of decaying, or not decaying. Whichever it does will be the action of the atom, so the law of causality is not violated (whatever happens has a cause: viz., the atom itself is the cause). So I think that it is essential to the law of causality that not only is every action caused by an entity, but the causal factors present (the entity's characteristics, plus its circumstances) are sufficient to determine which action it performs - something like that is what is required to rule out random events.
Objectivism, as Peikoff expounds it (and I am here assuming, perhaps mistakenly, that Peikoff is being an accurate exponent), wants to hold on to all three of the following claims:
(1) Free will is not an exception to the law of causality.
(2) The law of causality entails, for all entities other than beings having free will, that they only ever have one course of action possible to them.
(3) Beings having free will often have more than one course of action possible to them.
If (2) is true, I do not see how it can be said that (3) does not form an exception to the law of causality.
Finally, as to how I would propose to resolve the problem: I don't claim to know the answer to the problem of free will, but I will just outline the two possible routes that I see:
On the one hand, we could deny that the Law of Causality is a corollary of Identity, and say instead that it's just an empirical generalization formed by the observation of inanimate nature. The observation (introspection) of our own consciousness, however, suggests that it is an exception to this law; or in other words, that the law doesn't extend to consciousness. Though this approach is inconsistent with Objectivism, I don't see compelling reason for rejecting it, as I have never been able to see causality as a corollary of identity. As to Peikoff's argument (quoted above): I do not see why there could not be two courses of action that are both equally consistent with an entity's nature. In that case, the conclusion that there's only ever one course of action possible simply doesn't follow from the (apparently tautological) premise that an entity has to act in accordance with its nature. And no argument has been given to show that there can only ever be one action that accords with an entity's nature.
Alternately, one could argue that the argument in section 6.2, (2) involves an equivocation on the modal notions like "can" and "must". (N.B. modal notions are those that have to do with 'possibility'.) This is what the so-called 'compatibilists' do. According to this tack, there is one sense in which an entity has only one 'possible' action (this is the sense employed in the law of causality), but there is another sense of "possible" (the sense relevant to free will) in which people do often have more than one action 'possible' to them. This is not implausible prima facie, as multiple kinds of modality are already known (among professional philosophers) to exist. As to what the two senses are, here's a crude first pass: perhaps the first sense of possibility is to be explained in terms of consistency with the laws of nature and the present state of the world. Perhaps the second sense of possibility is to be explained in terms of what one would be able to do if one tried (i.e., you 'can' do A = you would do A if you tried to do A). Or perhaps, as Hobbes suggested, the modality relevant to free will should be explained in terms of the absence of external impediments to motion. Also, one might want to bring in things that our faculties were designed for doing. (E.g., you can do A only if (1) A is the kind of thing that your faculties are naturally adapted for doing; (2) there are no external obstacles interfering with your doing A; and (3) if you tried to do A, you would succeed.) My purpose here, however, is not to give a complete theory of free will, but just to point out how one should approach the issue, in contrast to how Peikoff approaches it in OPAR. Now notice that under my last description of what is necessary for A to be 'possible' in the sense relevant to free will, this is different from the first sense of "possible" (consistency with the laws of nature + initial conditions). 
Therefore, the argument I initially gave to show that free will is incompatible with the law of causality fails: it is based on an equivocation.
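The alleged equivocation can be displayed schematically. As a sketch (the symbols and the crude analyses they encode are my own gloss on the compatibilist line above, not Peikoff's or Rand's), write $\mathrm{Can}_1$ for the nomic sense of possibility and $\mathrm{Can}_2$ for the conditional sense:

```latex
\begin{align*}
  \mathrm{Can}_1(S, A) \;&\equiv\; A \text{ is consistent with the laws of nature}\\
  &\qquad\text{together with the present state of the world;}\\[4pt]
  \mathrm{Can}_2(S, A) \;&\equiv\; \text{if } S \text{ tried to do } A,\; S \text{ would succeed in doing } A.
\end{align*}
% On this analysis: the law of causality entails that at most one action
% satisfies Can_1; free will requires only that two or more actions
% satisfy Can_2. No contradiction follows unless Can_1 and Can_2
% are conflated.
```

If something like this is right, the incompatibility argument trades on reading "possible" as $\mathrm{Can}_1$ in the causal premise and as $\mathrm{Can}_2$ in the free-will premise.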