Cognitive Semantics – pt. II


This post originally appeared on Facebook on May 25th, 2009

After re-reading my post from a couple of days ago, I realize there is a big piece missing from my discussion of “smart” decision-making: the concept of Value. I allude to it in my example about driving 2 hours to run for 30 minutes, but I feel it needs more explanation.

What I seemed to suggest in my last post is that there is a process of calculation, driven by intellect/sagacity/acumen, that will help people make smart decisions. If I were to try to boil that process down into a mathematical equation, it would look something like this:

Good = A + B + C + D + E + …

Where “Good” is a ranking of how positive or negative the decision is for the decider, and “A – E” are the various likely outcomes of the decision (e.g., A is money spent, B is fun had, C is social status gained, and so on). However, this assumes that all of the outcomes are equally important, which they may not be. For example, a person on welfare can obtain a great deal of social status and fun from buying a brand new car. However, the amount of money spent is prohibitive (indeed, it would be impossible for this person to eat or pay rent or do anything… even buy gas). On the other hand, a millionaire would not mind paying for a new car, but may not gain as much social status (“Oh, you bought a new Camry. How… common. Excuse me, I need to finish my arugula and monocle sandwich”). Clearly each outcome does not carry the same weight for each person. Two people with different values may reach the same (or indeed, different) decisions using the same intellect and acumen, but via very distinct processes.
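To make this concrete, here is a minimal sketch in Python of the naive, unweighted version; the outcome names and scores are invented purely for illustration:

    # Naive "Good" score: every outcome counts equally,
    # regardless of who is deciding. All numbers are invented.
    outcomes = {
        "money_spent": -8,     # A: negative, since spending hurts
        "fun_had": 5,          # B
        "status_gained": 6,    # C
    }

    good = sum(outcomes.values())
    print(good)  # 3, and the same score for every decider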

Perhaps a better equation might look like this:

Good = Aa + Bb + Cc + Dd + Ee + …

Where “a – e” represent the VALUE the decider places on each outcome. Consider our two would-be car buyers: the welfare recipient places a much higher value on making prudent financial decisions (insofar as a $20,000 purchase is concerned) than on achieving social status and having fun, while for the millionaire, social status is real currency. The raw outcomes A, B, and C are the same for both buyers, but a(welfare) is extremely large, while a(millionaire) is much smaller. Conversely, b(welfare) and c(welfare) are small, while b(millionaire) and c(millionaire) are much larger.
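Here is a sketch of the weighted version, again with invented numbers for the two hypothetical buyers; the outcome scores are identical for both, and only the value weights differ:

    # Weighted "Good" score: Good = A*a + B*b + C*c + ...
    # The outcome scores (A, B, C) are the same for both buyers;
    # only the value weights (a, b, c) differ. All numbers are invented.
    outcomes = {"money_spent": -8, "fun_had": 5, "status_gained": 6}

    values = {
        "welfare":     {"money_spent": 10, "fun_had": 1, "status_gained": 1},
        "millionaire": {"money_spent": 1,  "fun_had": 3, "status_gained": 5},
    }

    for person, weights in values.items():
        good = sum(outcomes[k] * weights[k] for k in outcomes)
        print(person, good)
    # welfare -69     -> buying the car is a terrible decision
    # millionaire 37  -> buying the car is a reasonable one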

This form of the equation quickly becomes ridiculous as one realizes that there are an immeasurable number of potential outcomes, ranging from trivial to potentially catastrophic. Giving every conceivable outcome full weight in the decision-making process would give undue preference to decisions with outcomes that are positive but extremely unlikely, and that in practice make no impact at all. A third piece is required, which is the PROBABILITY of each outcome occurring.

An equation might then look like this:

Good = Aap1 + Bbp2 + Ccp3 + Ddp4 + Eep5 + …

Where “p1 – p5” are the probabilities of each outcome occurring.

Fans of British philosophy will recognize this as a re-hashing of Utilitarian calculus, the principle of “the greatest good for the greatest number,” without the ethical connotations. This isn’t confined to moral decisions; it is a suggestion for a crude way in which people make decisions (or should make them). It is fairly evident that the value people place on different outcomes is a significant component of which decisions get made, and one that is completely independent of intellect, wisdom, or intelligence.
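For completeness, here is a sketch of this full form, which amounts to an expected-utility calculation; each outcome contributes its score times the decider’s value weight times its probability of occurring, and all of the numbers are hypothetical:

    # Full "Good" score: Good = A*a*p1 + B*b*p2 + C*c*p3 + ...
    # Each tuple is (outcome_score, value_weight, probability);
    # all numbers are hypothetical.
    decision = [
        (-8, 1.0, 1.00),  # money spent: certain to occur
        ( 5, 3.0, 0.80),  # fun had: very likely
        ( 6, 5.0, 0.30),  # status gained: far from guaranteed
    ]

    good = sum(score * value * prob for score, value, prob in decision)
    print(good)  # 13.0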

So who cares? I guess I just wanted to point out that a decision that seems stupid by some standards (usually my own) might in fact be motivated by the value the decider places on different outcomes. If I don’t think something is important (for example, I don’t put a lot of value on fitting into a crowd), I will question (and usually insult) the decision that someone else has made. However, this is a difference in values, not in cognitive ability.

HOWEVER, this issue of values does not side-step the first post’s point: when making decisions, one should take the time to become aware of the ramifications of one’s decision, and then consider the value one places on each. Making decisions from gut feeling or “emotional reasoning” will lead you to different courses of action from the same set of principles, instead of to the decision that produces the greatest good the greatest number of times.

Obviously decision-making is far more complicated than this, and we are people, not computers. However, if the goal is to make well-informed and prudent decisions, it would benefit us to put more time into thinking about why we do the things we do, rather than just doing them and sorting out the problems afterward.
