25 Comments
Dana Ramos:

I just watched the video. Well done! This course should be required in schools so that the indoctrinated students will stand a chance against all the media bias.

Kaizen Asiedu:

Thank you! I wish we had all received this kind of education in school.

Rich:

What kind of bias is it when we are presented with a suffix that means an irrational fear, which then morphs into a shortcut synonym for hate and bigotry; but a parallel suffix is not available to express a reasonable fear, one frequently based upon experience?

Kaizen Asiedu:

Great question Rich.

If I understand you correctly, you’re asking what to do when someone labels an argument with a pejorative or dismissive label, like [name]-phobic, rather than engaging with the argument itself, which could be a reasonable one.

For example: labeling someone as xenophobic for having concerns about illegal immigration, or racist for observing per capita crime rates among African-Americans, etc.

When someone labels an argument pejoratively rather than engaging with it, it’s a form of ad hominem: a rhetorical tactic that makes an argument too toxic to engage with by labeling it, and implicitly labeling you as toxic for making it.

It’s a variation of ad hominem because instead of addressing what someone actually said, it attacks their assumed mental state or motivations.

Here’s how to respond when this happens to you:

‘I understand you may have heard similar-sounding arguments from people who are genuinely prejudiced. But can we focus on the strengths and weaknesses of the points I’m making right now, rather than assumptions about my mental state or motivations?’

This redirects the conversation back to the actual argument instead of getting trapped in a debate about labels.

Ad Hominem is one of the lessons covered in Clear Thinker :)

Rich:

Thanks for the reply. That's sort of it.

However, "Islamophobia" and similar terms operate outside of the personal sphere. "These ______ people are Islamophobic!" Whether they are or not, the label closes off consideration, or discussion, of the possibility that the fear may be valid and justified. In other words, it can be readily expressed that people may have an irrational fear of some particular group or other (add "phobic" and stir); however, REAL fears, lacking any such descriptor, simply vanish. Maybe it would be helpful to turn the idea upside down: how many people are referred to as Nazi-phobic? Perhaps all fears are just imaginary and the products of hatred and intolerance?

David Mandel:

Kaizen, I watched your video with interest. I am a cognitive psychologist who specializes in research on human judgment and decision making. In particular, I have conducted many studies on framing effects and have discussed the topic on multiple occasions with Daniel Kahneman. In prospect theory, framing effects emerge from the psychophysics of valuation and probability weighting. According to the theory, bias from framing is not amenable to training. As well, the extensional equivalence assumption underlying the Asian disease problem you referenced is often violated since linguistic numerical quantifiers are often lower bounded. The literature on debiasing doesn't inspire great optimism. Moreover, some techniques that are meant to improve good judgement seem to make it worse. The question always should be what empirical evidence do you have that your solution works. Would you take a medical treatment that hadn't gone through proper clinical trials? I tend to have skeptical priors since I know that most good ideas fail, but people fail to see that. In recent papers I've dubbed this the goodness heuristic.
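The framing effect in the Asian disease problem can be sketched with prospect theory's value function. This is a toy illustration only: the parameter values are the commonly cited Tversky–Kahneman estimates, probability weighting is ignored for simplicity, and none of the numbers come from the comment above.

```python
# Minimal sketch: prospect theory's value function is concave for gains and
# convex (and steeper) for losses, which flips risk preferences between the
# gain frame and the loss frame of the Asian disease problem.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function (illustrative parameter values)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Gain frame: "200 of 600 saved for sure" vs. "1/3 chance all 600 saved".
sure_gain = value(200)
risky_gain = (1 / 3) * value(600)

# Loss frame: "400 die for sure" vs. "2/3 chance all 600 die".
sure_loss = value(-400)
risky_loss = (2 / 3) * value(-600)

print(sure_gain > risky_gain)   # True: risk-averse in the gain frame
print(sure_loss < risky_loss)   # True: risk-seeking in the loss frame
```

Because the flip falls out of the curvature of the value function itself, not from any reasoning error one could simply be taught to avoid, this is one way to see why bias from framing is said not to be amenable to training.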

Kaizen Asiedu:

Really appreciate this expertise and the challenge! And it’s really cool that you’ve actually discussed this with Kahneman - that’s incredible.

You’re absolutely right - I don’t have randomized controlled trial data on this specific program since it’s new.

My core thesis is simpler than traditional debiasing: that awareness of cognitive biases and logical fallacies itself has value.

When people can recognize ‘oh, this headline is using sensationalism’ or ‘this is a false dichotomy,’ they make more informed choices about how much weight to give that information.

You’re spot on that we can’t eliminate framing effects - that’s not the goal. It’s more like developing pattern recognition for media manipulation techniques, similar to how financial literacy doesn’t eliminate all bad money decisions but helps people spot obvious scams.

We do have a practical testing component where people can track their own pattern recognition improvement through real-world examples, which at least gives them feedback on whether the frameworks are clicking for them personally.

I'll be sharing more about that testing in the coming week.

I’d be genuinely curious about your take: do you think there’s value in media literacy education even if it doesn’t eliminate underlying cognitive biases?

And if you were designing outcome measures for something like this, what would you look at?

Thanks for keeping me honest about the evidence question - it’s exactly the kind of scrutiny this type of work should face.

David Mandel:

Hi Kaizen, I certainly agree that the hypothesis has plausibility -- teach people about logical fallacies and cognitive (and motivational!) biases, and they should be better information consumers. The problem is that we really do not know the depth of expertise needed in such training to yield a just-noticeable difference in judgment quality. To give you an example, I recently conducted a pre-post evaluation of a commercial course designed to improve your calibration (i.e., to reduce over- and under-confidence). The course seemed to work in that overall calibration was better after training, but what we noticed was that this was entirely attributable to a task in which intelligence analysts (our participants) were overconfident to begin with. On another task where they were underconfident to begin with, what do you think we found? That's right--worse calibration. They got even more underconfident! So did the course improve calibration? Not in my view. It probably taught people that to be calibrated, they should give lower confidence ratings than they think they should give, but they didn't really have any better meta-cognitive insight into their calibration. The paper's here in case you're interested: https://doi.org/10.1002/acp.4236. Does that mean calibration training is bunk? Not necessarily, but perhaps the training needs to be practiced not over 3 hours (roughly the length of that course) but over 3 months or 3 years.
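The calibration pattern described above can be sketched numerically. The numbers below are invented for illustration, not data from the cited study; the point is that a blanket "give lower confidence ratings" habit narrows the gap on a task where judges start out overconfident while widening it on a task where they start out underconfident.

```python
# Toy sketch: calibration error = mean confidence minus proportion correct.
# Positive = overconfident, negative = underconfident, zero = well calibrated.

def calibration_error(confidences, correct):
    """Mean confidence rating minus observed accuracy."""
    return sum(confidences) / len(confidences) - sum(correct) / len(correct)

# Task A: overconfident to begin with (87.5% confidence, 50% accuracy).
conf_a, acc_a = [0.9, 0.85, 0.95, 0.8], [1, 0, 1, 0]
# Task B: underconfident to begin with (55% confidence, 75% accuracy).
conf_b, acc_b = [0.55, 0.6, 0.5, 0.55], [1, 1, 0, 1]

shade = 0.15  # the blunt rule implicitly learned: lower every rating

results = {}
for name, conf, acc in [("A", conf_a, acc_a), ("B", conf_b, acc_b)]:
    before = calibration_error(conf, acc)
    after = calibration_error([c - shade for c in conf], acc)
    results[name] = (before, after)
    print(name, round(before, 3), round(after, 3))
```

On Task A the error shrinks toward zero; on Task B it grows more negative, mirroring the "even more underconfident" result, with no gain in metacognitive insight.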

I'll give you another example, where I'm skeptical, and this time it's from my own intervention. Some years ago, I developed a short workshop designed to teach Bayesian reasoning to intelligence analysts. Again, we conducted a pre-post evaluation, and it showed success (the paper's here: https://doi.org/10.3389/fpsyg.2015.00387). Analysts' probability judgments were more coherent after taking the course. But what I didn't do and would still like to do is rerun the test with a post-test several weeks after training. I am much less confident that what analysts learned in the workshop will be retained that long after. This is what I am getting at. It's what Richard Feynman emphasized (e.g., in his classic Caltech commencement speech on cargo cult science). You can't take anything for granted. In this case, the point is: can you generalize from an immediate post-test effect to one that's of practical significance? [No, but people do it all the time.] I think these comments might give you some ideas for how to structure efficacy tests.
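"Coherence" of probability judgments here means conformity to the probability axioms. The sketch below shows two simple coherence checks of the general kind such an evaluation might use; the specific functions and numbers are illustrative assumptions, not the measures from the cited paper.

```python
# Two basic coherence checks on a judge's probability estimates.

def additivity_gap(p_hypotheses):
    """Distance from 1 of the summed probabilities of a mutually
    exclusive, exhaustive set of hypotheses (0 = coherent)."""
    return abs(sum(p_hypotheses) - 1.0)

def total_probability_gap(p_h, p_h_given_e, p_h_given_not_e, p_e):
    """Gap between a direct judgment P(H) and its law-of-total-probability
    decomposition P(H|E)P(E) + P(H|~E)(1 - P(E)) (0 = coherent)."""
    return abs(p_h - (p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)))

# Two exclusive, exhaustive hypotheses judged at 0.7 and 0.5: incoherent.
print(round(additivity_gap([0.7, 0.5]), 2))                 # 0.2

# P(H) judged at 0.9, but the conditional judgments imply 0.55: incoherent.
print(round(total_probability_gap(0.9, 0.8, 0.3, 0.5), 2))  # 0.35
```

Smaller gaps after training would be the "more coherent" result; whether those gaps stay small weeks later is the retention question raised above.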

As for the question about whether I think the project is worthwhile, yes, I do. I just continue to have skeptical priors. I also think the analysis of current events that you do is vitally important, and maybe more so. We need people who can break it down in clear, accessible terms, and you are great at doing that. It's why I subscribed to you in the first place. A mix of those content and process interventions is an interesting idea. In any case, I've gone on long, but I'm happy to take it up offline.

Chief Shamus:

David I didn't understand anything you wrote

David Mandel:

My apologies. The comment was directed at Kaizen, whom I presume understands what I wrote. An empirical paper that goes into the problem with the standard interpretation of framing effects is: https://doi.org/10.1037/a0034207. A recent commentary can be found here: https://doi.org/10.1037/dec0000206. As for the latter point--that empirical evidence of a debiasing strategy is required--since most such strategies fail (or at least, don't replicate successes with high probability), I believe the point is straightforward. It's what Edison learned the hard way.

Chief Shamus:

In layman's terms... people evaluate another person's observations through their existing biases, thereby lessening the chances of being convinced by the other's observations. Am I close?

David Mandel:

This wasn't what I was getting at, but what you described is, roughly speaking, what is known as "naive realism", a term coined by my former postdoc advisor, Lee Ross, who used it to refer to the fact that most people think their perceptions reflect reality, and therefore, to the extent that others disagree with them, they infer that those others are biased or ignorant. Among other things, this fans group polarization.

Some Old Guy:

It kinda boils down to facts vs feelz. I want to know what Kaizen did today, and maybe I want to know how and why he did it. Telling me how Kaizen felt while doing it, or how his sister felt, or how Grandma felt is almost all meaningless to me. Putting up a headline with fee-fees in it is just wasted on me, and I probably won't read the story.

Kaizen Asiedu:

Agreed - there's a lot of ad hominem, mind-reading, and emotionalism in our news when it should be serving us news, not narratives.

Karen:

While watching the video, it occurred to me that framing can work on multiple levels simultaneously. For the "which picture looks happier?" question, I realised immediately that they were identical. Why? Because given the context and rationale of this example, I strongly suspected that it was a trick question. Therefore, the framing became a kind of "meta-framing."

MichelleHasQuestions:

How do I share this?

Kaizen Asiedu:

The easiest way is to send people to clearthinkeracademy.com

MichelleHasQuestions:

Thank you!

Brandon Parks:

Don’t bother

Ebenezer:

This sounds like a great project. I only wish it could be widely accessible instead of being behind a paywall. That would increase the chance of having a positive impact on our political conversation.

Have you thought about creating a YouTube channel for monetization? YouTubers can make a lot of money on ads, with no paywall.

Another idea is to reach out to philanthropic groups which are looking to improve the quality of political discourse. I know there are groups such as BridgeUSA, Forward Party, and Braver Angels. You might be able to get funding from them, or else make friends with them and see if you can get a warm introduction to their philanthropic funders.

I'm sure some conversations with Grok or ChatGPT could generate many more ideas along these lines. In any case, regardless of the course you pursue with this project, I wish you the best of luck!

Andrew Moser:

Looks great, and much-needed in our current discourse! Nice hat tip to Elon on the name as well.

Cathrine:

Kaizen you are the reason I signed up for Substack & I love your work! Thank you!

Gene Michelsen:

God bless you Kaizen.

Brandon Parks:

Fox News is rampant with media bias.

Kent:

Well said!
