So for the past couple of days you’ve all been very indulgent as I have worked my way through a rhetorical device that I have been pondering for a couple of weeks now. The idea can be summarized as follows:
Many disputes can be expressed as being grounded in two opposing myths: that the world (relative to the topic under discussion) is fundamentally fair, and that the world is fundamentally unfair. Based on those beliefs, moral arguments are developed that either require the preservation of the status quo (f-myth) or its abolition/modification (u-myth). From within each mythical perspective, the opposing argument becomes immoral as a necessary consequence.
What I think this framework (which is really more of a rhetorical device than anything else) allows us to do takes two principal forms. First, it may allow us to gain insight into the positions of people we find in opposition to whatever we are trying to do, connecting the dots between beginning and end rather than just focusing on the end’s immorality. Second, by making explicit all (or at least many) of the steps along the way to the conclusion, it provides us with opportunities to either re-evaluate our own position or attack those of others by injecting different types of evidence into their logical process.
It is important, I think, to jump back to the beginning for a moment and recognize that because these are myth-based beliefs, neither can be said to be more true than the other. Since “fair” is subjectively defined (an assertion that is certainly open to critique), my belief that X is ‘fair’ can be, at its most basic level, no more objectively true than someone’s belief that X is ‘unfair’. This remains the case up until the moment that we can both establish a common definition of ‘fair’. It is then and I think only then that any attempt to counter-argue can have any kind of usefulness.
While I have my own doubts about this, I also imagine that it might be possible to show inconsistencies in people’s arguments by tying one f-myth (or u-myth) based argument to another that the person might not agree with. For example, I would imagine that someone who thinks that affirmative action programs are ‘fair’ would have a hard time arguing that harassment policies are ‘unfair’, since many of the antecedent justifications are identical. Exposure to this type of cognitive dissonance may be a powerful motivator for change, or at least modification.
We also have the potential to use this framework to explore exactly how far back in the line of logic our disagreements go. Our most persuasive arguments, it would seem, would be those that focus on the exact place where our beliefs diverge. Spending our energy at that point of divergence would, I would think, result in a more productive conversation than expending it fighting over conclusions and trying to work backwards from the end. Once again, this is entirely conjecture on my part, but it seems to stand to reason.
I am going to have to test this framework ‘in the field’, so to speak. Not only as a way of interpreting arguments made by others, but over the course of disagreements to see how useful it is in real time. As profoundly uninterested as I am in wasting my breath on the herd of jackals that spew faux outrage at the very idea that women are discriminated against, I am unlikely to insert myself into those particular conversations (especially since I believe the ‘other side’ just doesn’t know it’s already lost). If I am feeling particularly masochistic, I might find myself using this in conversations about whether or not people in social justice conversations have an obligation to adjust their “tone” – a conversation about which there is profoundly deep disagreement even among people who I otherwise like.
In terms of how to test the factual basis for my assumptions (namely, that my ‘fair myths’ idea describes reality in any way, rather than me simply forcing the facts to fit my theory), that’s a much trickier task to tackle. If it is the case that these myths are correlated with (or indeed, the same construct as) system justification, then psychological studies dealing with how fairness is manipulated might shed some light on how to test this model experimentally. In my mind’s eye, I imagine asking people to evaluate the extent to which they endorse a given course of action (e.g., adopting an affirmative action program), and manipulating their levels of system justification (i.e., prompting them to think of it as ‘fair’ or ‘unfair’ experimentally). If my framework is correct, we should see changes in endorsement as a result of making system justification more or less salient.
In terms of how to test the efficacy of this framework as an argument, I am less familiar with what would be an appropriate research design. What I can imagine, though, is that if two parties who have a fundamental disagreement both agree to adopt this style, it would make it far easier to walk your way through an argument without as much acrimony. That being said, I don’t think acrimony is de facto a bad thing, but depending on what you’re trying to do, it can be problematic. It’s possible that this provides us with a method of having conversations that are more productive.
My chief concern in all of this is that I have done nothing particularly useful by spelling this out. I am not at all convinced that I haven’t just restated a bunch of things that are really obvious. If this series has been helpful for you, then I am glad because then it wasn’t a total waste of time. I am not by any means suggesting this as a ‘best practice’, nor do I claim to have created something revolutionary (or, for that matter, even correct), but it was helpful for me to get it down ‘on paper’, so I appreciate your bearing with me. This bit of writing accomplished, I will return you to our regularly-scheduled programming.