Principles

I thought it might be interesting, insofar as navel-gazing can be interesting, to start sticking all the normative principles that I hold to be true in a list somewhere (here), so I made this page. Let's see to what degree, and in what sense, I am a principled person!

First of all, I should clear up what I count as a principle, and what I mean by "normative." I'll take on the second term first. I think I can settle for a pretty standard definition of normative: something is normative just in case it carries a sense of ought. So a normative principle is a principle which ought not be violated--I hold that there is something wrong with you if you disagree with these, or if you act in such a way that is incompatible with them. So the claims that we should expect to find expressed here are probably relatively uncontroversial ones. Most of this will probably sound pretty unoriginal to someone who has taken a critical thinking class.

And of course there are different shades of normativity: violations of norms of rationality mean you are some amount of crazy (irrational), and violations of moral or ethical norms mean you are some amount of evil. (Are there other flavors? None that presently come to mind.) These norms apply to everybody, by which I mean the total population of mentally competent humans (leaving "mentally competent" as an unanalyzed term).

What do I mean by a principle? I mean this as something like an axiom, something which is not itself provable but is taken to be self-evidently true. It is a real possibility that after I list these things here, I'll be prompted to think on them some more and decide that some or all of them cannot be called "principles," but are consequences of some deeper principle. But that would be a welcome result; the more principles I list here, the more chances I have to nominate something that isn't fundamental for the label of principle.

Note that, while it is true that there may be several good practical reasons which recommend the adoption of these principles, I don't think that suffices to account for their normative character. I am not interested so much in a list of prudential principles. (If there were a normative principle to do as well as you can in every endeavor, then all practical principles would come into the fold, I suppose. But I don't think there is something normatively wrong with under-performers, just room for an improved performance.)

I suspect that I will eventually generate a fairly long list of these principles, but I do very poorly at generating lists on the fly. (I come into this project with only a couple of principles in mind.) I daresay my mind works well in certain ways, but generating anything like exhaustive lists of examples of things is not one of them. The strategy I am employing in filling out this list is to add a monitoring loop to my daily thought process, and earmark anything that slips past that seems to belong here.

One of my motivations in creating this page is so that if I were ever (for some reason) curious to survey the landscape of my beliefs, I could start with a review of these accumulated realizations. I can also use that as a way to investigate whether these principles are truly fundamental, or whether some of them collapse into others. Or perhaps reflection on these putative principles will lead me to conclude that none of them have any kind of axiomatic character. I hardly know the contents of my own mind, and I hereby invoke the power of written language in order to allow me to discover the extent and nature of said contents.



The Principle of Charity
One of the first things they (hopefully) drill into you when you take a critical thinking class is the following really good piece of advice. When you receive an argument that is somewhat vague or ambiguous (and you can't ask for a clarification), your response should assume the strongest possible interpretation of the unclear terms. (This principle helps prevent you from making straw man arguments.)

There are a lot of practical advantages to this, of course. You get a much more interesting exchange if you adhere to this principle. It also seems more likely that a charitable interpretation will hit nearer the intended mark than an uncharitable one, so you can save yourself from wasting a lot of time and energy defending against a threat, so to speak, that does not exist. And even if your charitable interpretation is not on the mark, it may lead others to adopt a more defensible position than they originally held, even if they are unmoved by your argument.

However, despite whatever practical advantages exist for the charitable interpreter, I think the importance of this type of charity runs deeper than mere prudence. When someone consciously attacks a possible sense in which another's words might be taken, they aren't just risking a waste of time and effort. There seems to be a certain kind of dishonesty inherent in such a move. In essence, this principle seems to be an endorsement of that special form of honesty. (I doubt that there is any normative principle which requires honesty, full stop. Practical concerns may override the general obligation that we have to represent our own beliefs in good faith.)

Of course, there are certain types of exceptions to this principle. Academic settings are what I have in mind. Here an uncharitable reading is to be expected, because your argument is being graded partly on its clarity, precision, etc. I consider that training in the art of not requiring charity. Such readers probably know well what the correct charitable reading would be, but deliberately restrict themselves to the information explicitly given. But that doesn't mean there is not a general principle at work, of course.


The Golden Rule
I pretty much can't help but think there's some kind of traditional "golden rule" that applies to everyone. For spit & bubble gum purposes, this is the "Do unto others as you would have them do unto you" rule that your momma hopefully taught you. It is, however, a little bit challenging to formulate this precisely. Let me sketch out what I'm thinking about this, and perhaps I can improve it soon.

I've already said that the, say, target population of these principles is, in essence, normal adult people. So maybe that eliminates the bulk of weirdos who would love to be beaten up at random--who, if we applied that wording literally, would be perfectly in the clear if they beat strangers up at random. Perhaps we can just disqualify those people, and say, "Do unto others as a normal person would have you do unto her."

Is that enough? I'm not sure. Kant (love him or hate him) suggests some significant distinctions that we can make. First, consider the difference between positive and negative obligations toward people. What do I mean there? A positive obligation is something like, "help people out," and a negative obligation is something like, "don't screw people over." What are the limits, though? To borrow from Kant again, we might also distinguish between perfect and imperfect obligations. A perfect obligation holds 100% of the time: don't ever murder anyone. An imperfect obligation is something that you can't ignore, but don't have to drop everything in order to fulfill at every opportunity: you should go help out at your local soup kitchen when you can, but you don't have to stay there 24/7. (Imperfect obligations are a little bit disturbing. Can't one basically put them off indefinitely, as long as one has a good story to tell about why right now just isn't a good time? I guess you can't really fool yourself, though, and I'm not sure who else is fit to be a great judge of your character.)

If we're going to bring Kant into the room, we may as well compare this golden rule to his categorical imperative, I suppose. I don't think they are quite the same. I don't think we can call it a principle (even if we agree with it--and I'm uncertain that I do) that you are only acting morally if you are acting from duty (as opposed to happening to act in a way compatible with duty). Maybe it does seem a little bit better (somehow) to do something because you think it is the right thing to do than to act thus for some other reason. I mean, if duty demands you bring your grandma to the hospital, and you do bring her there, but you did it only because that way you were going to get your hands on something you wanted for yourself (even if that thing is something as innocuous as a happy grandmother), that doesn't seem quite as meritorious as bringing her there just because you think that you should. But on the other hand, it seems that you could be a "naturally" kind person who consistently acts in a helpful manner because you enjoy helping people--it makes you happy--and I'm not so sure that you are in some way worse than a person who says, "Ah! Duty demands that I help this person, if it's not too much of a burden on me--and it's not so big of a burden, so I shall."

Anyway, I don't really want to bring Kant's story in here; I just want to say that there is a principle like the golden rule that I think applies. Not everyone would agree with that. (Nietzsche, for whom I have found I have a lot of respect, would probably have had his sister stop reading this page to him long before this point.) But in my opinion, there is something wrong with you if you act without a certain kind of general empathy.


Requirement of Belief
I need to work this out, but it seems pretty clear that given sufficient evidence for some fact, you ought to believe that fact is true. The question of course is what counts as sufficient evidence. This has to involve something like a survey of evidence for and against a given fact, and the (cognitive) consequences of (dis)belief. Perhaps some general sort of "explanatory value" can be invoked, but I'm not sure that anything outside strength of evidence would really come into play.

One might fear some sort of vicious regress threatens us here. How do the changes in belief get off the ground in the first place, so that we can develop our first beliefs? This of course is where rationalist accounts like Kant's start to shine. But I will have to leave this until I think of something sufficiently clever to say.


Burden of Proof
If I hold some reasoned belief (in the sense I mean to develop for the principle of requirement of belief), and you want to convince me that the truth is otherwise, the onus is on you to convince me to change my mind. You had better give me either (a) a better reason to believe the new idea than my previous one, or (b) as good of a reason plus better explanatory value or prudential utility, or something like that, if I am to change my views.

Maybe this is just part of the previous principle, really. I think that what I am saying here reduces to judging the whole body of evidence available and only picking the most plausible belief (if any) in light of it. But it also seems right to say that there is something wrong with you if you present me with an argument for some conclusion contrary to my belief which I am unmoved by, and then demand that I defend my belief to you in order to justify not adopting your view. That is not how it works.



I have a lot more pondering to do on all of these points. As of right now, these are pretty rough sketches.

There are probably some things to say in the tradition of Grice, who, I recently heard, suggested certain reasonable expectations about communication: that one provide the right level of specificity in the information one gives, honestly represent what one believes, etc.

There are probably also a few surprising non-principles I can mention. (I already mentioned that I don't think there is a categorical obligation against lying.) But who knows, yet, what all I've got in mind here.
