Truthseeking about controversial things (status: indecisive rambling)

Suppose you ask me a question that is fraught (politically, or for some other reason), such as “Does the minimum wage do more good than harm?” or “Could a computer be conscious?” or “Is Lockheed Martin stock going to go up?”

(Not that you should ask me any of those things.)

What makes any particular answer I might give “objective” or “honest”? Or alternatively, what makes the process by which I arrive at my answer “objective” or “honest”?

Here are some prima facie plausible answers:

  1. The answer is objective & honest if it is, in fact, justified & true (faithfulness to facts).
  2. The answer is objective & honest if it fairly presents all reliable points of view, weighted by their reliability and by consensus of expert opinion (NPOV).
  3. The answer is objective & honest if it was arrived at by my best effort as an epistemic agent, either eliminating or acknowledging personal biases, conflicts of interest etc., and steelmanning alternative positions (process objectivity).
    1. People who have read Hanson or Taleb may want to turn the “no conflicts of interest” part on its head and assert a need to make a bet or to have “skin in the game”; i.e., a real-world incentive aligned with truth-seeking.
    2. Some have suggested discussing updates on evidence, rather than discussing posteriors directly.
  4. All possible answers are necessarily factional. “Objectivity” is not a coherent goal; the most honest answer simply presents my factional view and reasons for it, while neither attempting to conceal which faction I belong to or the existence of other factional views, nor making special efforts to do them justice (factionalism à la Moldbug).
  5. The answer itself is the wrong level of analysis; you would be better off scrutinizing whether the answerer is an epistemically virtuous and responsible person (virtue epistemology).

I’m not really thrilled with any of these, but I don’t have a great alternative to offer up. (1) is charmingly simple, but too outcome-oriented; I don’t want to condemn a wrong answer that’s due to bad epistemic luck. (2) is exploded by the need to cash out “reliable” and “expert” and “consensus” in ways that aren’t blatantly factional. (3) is the position I am most attracted to (is that too obvious?). My problem with it is not its unattainability – after all, this definition is only meant to be an ideal to aim at. Rather, I fear that each additional attempt to eliminate bias represents another free variable for Rationalization to play with, in service of the Bottom Line. (4) seems unattractively defeatist or self-serving, and “denies the phenomenon” – I can remember occasions on which I believe people were genuinely objective and honest in their presentation of evidence/beliefs to me, although it’s hard to put my finger on what convinced me of this. (5) comes in second place, but I’m skeptical that people are reliably epistemically responsible from day to day or across domains.
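
As a concrete gloss on option 3b (“discuss updates on evidence, rather than posteriors”), here is a toy sketch of Bayes’ rule in odds form. Everything specific in it is made up for illustration: the question, the 3:1 likelihood ratio, and both priors are hypothetical numbers, not claims about the actual minimum-wage literature.

```python
# "Share your likelihood ratios, not your posteriors": two people with
# different priors apply the same reported evidence, and we can see how
# much of their disagreement survives. All numbers are hypothetical.

def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds o:1 to a probability."""
    return odds / (1 + odds)

# Hypothetical question: "does the minimum wage do more good than harm?"
# Suppose a study is reported as a 3:1 likelihood ratio in favor of "yes".
lr = 3.0

priors = {"skeptic": 0.25, "believer": 4.0}  # odds of 1:4 and 4:1

for name, prior in priors.items():
    post = update_odds(prior, lr)
    print(f"{name}: P(yes) {odds_to_prob(prior):.2f} -> {odds_to_prob(post):.2f}")
```

The point of reporting the likelihood ratio rather than a posterior is visible here: the same evidence moves the skeptic from 0.20 to 0.43 and the believer from 0.80 to 0.92, so each listener can apply the update to their own prior instead of inheriting the speaker’s.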

What do you think? I suppose defining objectivity was just a jumping-off point (perhaps an excessively abstract one); I’m more interested in the conditions under which you are willing to trust somebody’s statements on a controversial question.

Author: Simplicio

Engineer, dilettante.

7 thoughts on “Truthseeking about controversial things (status: indecisive rambling)”

  1. The honesty and objectivity of an answer is provided by the ratio (cognitive energy expended in assessing the question and communicating the assessment process clearly)/(cognitive energy expended in assessing and manipulating the questioner).

  2. Why don’t you want to condemn an argument based on bad epistemic luck? I don’t care about the argument’s feelings. It’s not a moral judgement to say that unlucky arguments are still incorrect arguments. Have I misunderstood you?

    1. If you ask me whether a coin flip will be heads or tails and I bluff and tell you heads and am correct, my advice was at best lucky. At the time the decision was made, it was silly to trust it. And it seems to me that is what the question is addressing.

  3. “Could a computer be conscious?”

    The bigger question, as alluded to here https://carcinisation.com/2014/10/11/gnon-and-antignon/, is this: if humans are mere machines, shouldn’t destroying a computer be tried in a court of law as a more serious crime than destroying a human being, since computers are far and away more useful to collective humanity and its progress than any one individual human?

  4. We are seeking reliable advice in complex matters. As such, there are potential problems with both the advice giver and the advice.

    1). Does the advice giver have a background or track record which makes their advice more reliable than the available alternatives (coin flip, guess, vote, etc.)?

    2). Do they have the incentive to be objective (as opposed to selfish interests, signalling to peers, etc.)?

    3). The “expert problem” of complex systems rears its head. An expert in a narrow discipline (say, climate) is rarely also an expert in other interdependent fields (economics, politics, philosophy, human welfare, etc.). This can lead to false confidence in answers.

    I agree with Taleb and others who suggest procedural approaches to the problem. Assuming you want reliable advice, you want to go to those with a proven track record of accomplishment and success, who are well aware of and up front about the limits of their knowledge, and whose interests are personally aligned with the answer (skin in the game).

    This of course makes experts questionable on extremely complex phenomena which are hard to measure, open to contradictory explanations after the fact, or which have long time frames and poor feedback — such as stock advice, solutions to climate change, and the effects of minimum wage changes.
