DFW
James Russell
Summary: The AFF must quantify the benefits (their severity and likelihood), weigh them against the costs (their severity and likelihood), and present compelling reasoning for a rational policy executive to make a reasonable decision.
There are a lot of different debate paradigms that all claim to provide a holistic conceptual understanding of debate. From my perspective, more than one of these paradigms is logically valid. Adopting one usually relies on some fundamental assumptions that rest outside of debate as an institution. This means there is plenty of room for reasonable disagreement, which is good! Debate would be boring if the answer was cut and dried.
My fundamental assumption centers on pragmatic training: the purpose of policy debate (and speech and debate as a whole) is to build transferable communication skills.
With this in mind, policy debate is reasonably presumed to be a reflection of real-world policy making. Some coaches like to reduce this even further to a congressional simulation, but I think that is a less reasonable presumption. It’s pretty biased, actually - many policies are passed by organizations and bodies very unlike Congress, with very different rules and expectations. It would be presumptuous to reduce debate to simulating the American Congress. Instead, I prefer to stay at just “real-world policy making.”
If we’re debating a real-world policy, we need a real-world judge! A rational policy executive uses basic risk assessment that mirrors most everyday decisions. We intuitively weigh potential risks against potential benefits all the time. Thus, debate is a perfect tool for teaching students pragmatic ways to convince an audience to side with them on issues far beyond just “debate.” The judge is the rational policy executive, and we’re the ones providing the reasoning that should persuade them.
From this point, it is actually remarkably flexible as to what the affirmative “must” prove. The bottom line is they must do enough to convince an ordinary rational policy executive to vote for them. At this point, the round usually gets bogged down in “a standard of proof.” But these discussions often miss the bigger picture. The two most common standards of proof are “beyond a reasonable doubt” and “preponderance.” Beyond a reasonable doubt is usually taken to mean roughly 90% certainty; preponderance, 51% certainty. But certainty of what? It’s up to the debaters to explain the standard of proof that should be used and the context for its use.
For example, suppose we are 90% certain that a SWAT team would successfully execute a warrant. The 10% chance of failure could lead to children dying. The affirmative team could prove their plan would work beyond a reasonable doubt, but as a policy executive I would still have legitimate questions about whether to implement it. I could prefer the current system for now while we gather more information that would raise the probability of success, I could prefer a different tactical approach, or I could be appalled at the morality of SWAT operations that could endanger children and dismiss the plan out of hand.
The secret isn’t in fulfilling a burden of proof without context; it’s in proving enough likelihood of a benefit to contextually outweigh the likelihood and severity of the costs. Basically... impact calculus (which is a formal system, not just a catchphrase - I can clarify more if necessary!). And impact calculus is deliberately designed to be flexible. Sometimes we end up in a grey area where the answer isn’t clear. But a skilled affirmative knows how to leverage impact calculus to justify policy action. A skilled affirmative needs to quantify the value of their policy and quantify the risk alongside it. If you frame that well, rational policy makers will consistently vote in your favor. Voila!
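To make that concrete, here is one simple way the weighing can be written out (this is just a sketch; the actual magnitudes in any round are for the debaters to establish):

Expected benefit = (likelihood the benefit occurs) x (severity/magnitude of the benefit)
Expected cost = (likelihood the cost occurs) x (severity/magnitude of the cost)
Vote AFF when the expected benefit outweighs the expected cost.

Run the SWAT example through it: a 90% chance of a successful warrant service, weighed against a 10% chance of a dead child, can still lose the comparison because the severity on the cost side dwarfs the severity on the benefit side. That’s why a plan can clear “beyond a reasonable doubt” on solvency and still fail to persuade a rational policy executive.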
Incidentally, most teams don’t realize that every policy action has real costs. From opportunity costs to political capital, there are compelling reasons to reject policies with a very small likelihood of success even if there are no apparent disadvantages. It requires a shift in perspective to real-world policy making and the ability to think outside the box. However, it’s also challenging to communicate these ideas to the judge; it requires a high level of communication acumen and a deep understanding of the arguments. I think this has informed NCFCA’s meta this year, where many very skilled teams are running cases with smaller-likelihood benefits but virtually no likelihood of compelling disadvantages. Until negatives learn to explain risk in these scenarios, these cases will be very successful. Not because affirmatives have fulfilled some clear burden of proof (preponderance vs. beyond a reasonable doubt), but because negatives are not adept at creating risk in the judge’s mind. And smart AFFs have picked up on that this year! So of course most rational policy makers will give the plan a try when there isn’t real risk involved!
Y’all’s case is a great example of that. I think there are real solvency concerns that would seriously detract from the likelihood of benefits occurring. But risks to y’all’s plan are difficult to frame. NEG would have to pitch a near-perfect game on solvency or reframe the moral high ground your case takes. Both are challenging avenues of attack. As the year has gone on, y’all have tightened up those avenues and forced negatives to walk a dangerous path. Unless NEGs are master framers, they will struggle to convince the ordinary policy executive to strike down your plan. Why would they? What could possibly go wrong? The worst-case scenario would apparently be that the plan doesn’t work. But if it works at all... well, that’s a much more attractive system! That’s a smart way of leveraging how people pragmatically think in the real world. It’s intuitive impact calculus (which I don’t know if y’all leverage in your rounds, but if you don’t, you should consider it - it may give you language to really drive home y’all’s reasoning in the judge’s mind).
TL;DR: the AFF must quantify the benefits (their severity and likelihood), weigh them against the costs (their severity and likelihood), and present compelling reasoning for a rational policy executive to make a reasonable decision. It’s literally that simple. Failure to do so clearly and concisely will often result in poor reasons for decision, where a rational policy executive wades through minimal knowledge of theory, minimal understanding of context, a maze of argumentation without a clear narrative, and perhaps wide-ranging biases to reach the best decision they know how to make. Small wonder judges make poor decisions ;) But the secret is that it truly isn’t their fault. They usually aren’t given the tools they need to make a better decision.