Title: Dialog Theory for Critical Argumentation
Author: Douglas Walton
Publisher: John Benjamins
How do we ‘export’ argumentation to computer science? Is it possible in the first place? Do we have strong reasons to even try? And how much autonomy are we ready to share with human-like computer systems when it comes to arguing? With these questions we are just scratching mousily the tip of an iceberg some choose to call computational dialectics (cracking, huh?). It must be said that the most influential philosopher to provide answers to these basic questions is undeniably Douglas N. Walton. Dialog Theory for Critical Argumentation (Walton, 2007b) is a book that organizes and reaffirms the importance of these by now well-trodden routes, which a few decades ago felt queer and dusky.
Agency and artificial intelligence are still notions which hopefully the readers of this blog – shoulder to shoulder with the author – have trouble coping with. (Solidarity! That’s what I always say. Or esprit de corps, at worst, that’s what I’d say if I knew any … nevermind). However, it has been illustrated in earlier posts that someone interested in argumentation theory can (fortunately) skim over the technicalities without significantly damaging the big picture. This is what this post will accordingly try to achieve: a big picture of what D. Walton dubs dialog theory and its applications in computer science.
We are then ‘exporting’ argumentation. To argue is, however, a complex language skill that humans learn to identify and exercise mainly through social practice and experience. The road back from practice to ‘form’ is therefore bound to be a difficult one. Aside from obvious regularities (e.g. assertions are made, questions are asked) and norms (e.g. argumentation proceeds by means of language behaviour), as Walton puts it, “the principles making for efficient communication are not explicitly stated anywhere” (Walton, 2007b, p. 18). What we need is a system, a theory. What would such a theory speak of, and what would it need in order to work?
1. A theory of reasoning
The central type of reasoning involved in argumentation could generally be described as defeasible (or plausible) reasoning. Stating this has some major advantages which can hardly be matched by the traditional (deductive or inductive) modes of reasoning: (1) agents constructed on this basis are more flexible, for the result of every inference is liable to be retracted once the environment has provided conflicting information; (2) the analyst can model argumentation pertaining to values, i.e. evaluative statements, which are “inherently defeasible” (2007b, p. 95); (3) the ‘criteria’ of evaluation are not a set of monologic rules of inference but a set of (dialogic) critical questions formulated by a second party; and thus (4) the notions of ‘weight of presumption’ and ‘burden of proof’ can be added to the picture. The statement proved beyond any doubt is an extinct species of outcome, one which now requires little (if any) attention.
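Advantage (1) can be made concrete with a minimal sketch of defeasible inference: a conclusion holds by default until the environment supplies a defeater. The encoding and names below are invented for illustration; this is not Walton's (or anyone's) formal system.

```python
# Minimal sketch of defeasible inference: a default conclusion stands
# until conflicting information (a "defeater") arrives from the environment.

class DefeasibleRule:
    def __init__(self, premise, conclusion):
        self.premise = premise
        self.conclusion = conclusion

def infer(facts, rules, defeaters):
    """Draw every default conclusion whose premise holds and
    which is not contradicted by a known defeater."""
    conclusions = set()
    for rule in rules:
        if rule.premise in facts and rule.conclusion not in defeaters:
            conclusions.add(rule.conclusion)
    return conclusions

rules = [DefeasibleRule("bird(tweety)", "flies(tweety)")]

# With no conflicting information the presumption stands ...
print(infer({"bird(tweety)"}, rules, set()))               # {'flies(tweety)'}
# ... but new information from the environment retracts it.
print(infer({"bird(tweety)"}, rules, {"flies(tweety)"}))   # set()
```

A deductive agent would be stuck with `flies(tweety)` forever; the defeasible one simply lets it go.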
Defining a ‘good argument’ along these lines is consequently not so much a problem of ‘form’ (though an argument must still be structurally correct) as a problem of ‘contribution to’ and ‘orientation towards’ the goals of the dialogue. At this point, we should note that most of the trailblazing was done quite early, in the 1970s and 1980s, starting with Charles Hamblin’s work on fallacies. With this, we arrive at the second requirement.
2. A system of rules
Much of what today we can name a ‘dialectic system’ started with Hamblin’s chapter Formal Dialectics (Hamblin, 1970, pp. 253-282), in which he set about positing a few ground-rules of language behaviour for speakers acting within the frame of a dialogue. Although Hamblin’s intention was to produce a normative framework for the study of fallacies (Walton, 2007b, p. 167), he preserved a certain vagueness as regards the goal of the dialogues he studied.
“Sometimes Hamblin writes as though the purpose of a formal dialectical system is to seek or exchange information. At other times it appears that what he has in mind is more like what would be classified in chapter 1 as a persuasion dialog.” (2007b, p. 79, see also p. 73)
Hamblin constructed different types of dialogues, each with its peculiarities, but made no attempt to group these dialogues into a classification. The “Why-Because System with Questions”, for example, contains the following types of (allowed) moves: the making of assertions, the asking of questions, the retracting of commitments, the request for justification, the making of a resolution request. In a tableau-style view, such a dialogue would look like this:
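A toy rendering of such a system can be sketched in code. The move types below are the ones just listed from the Why-Because System with Questions; the commitment-store bookkeeping is a deliberate simplification for illustration, not Hamblin's actual rule set.

```python
# Toy Hamblin-style dialogue: each speaker has a commitment store,
# and moves update it. Only assertion and retraction change commitments
# in this simplified version; questions, why-requests, and resolution
# requests merely ask something of the other party.

def make_move(stores, speaker, move, statement):
    if move == "assert":
        stores[speaker].add(statement)       # speaker becomes committed
    elif move == "retract":
        stores[speaker].discard(statement)   # commitment is withdrawn
    elif move in ("question", "why", "resolve"):
        pass  # request moves: no commitment change in this sketch
    return stores

stores = {"White": set(), "Black": set()}
make_move(stores, "White", "assert", "S")
make_move(stores, "Black", "why", "S")      # request for justification
make_move(stores, "White", "assert", "T")   # T offered as a ground for S
make_move(stores, "White", "retract", "S")

print(stores["White"])  # {'T'} — S was retracted, T remains
```

The tableau Hamblin uses is essentially a turn-by-turn trace of such moves, with the two commitment stores written out at each step.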
Other attempts to model language by means of formal systems could be cited (Hintikka, Barth & Krabbe etc.), although one should first notice that not all of them are fully oriented towards argumentation, i.e. a type of communication oriented towards persuasion. It should be noted that persuasion, within these formal systems, has little to do with the term as used in psychology; it is understood rather as an outcome of specific moves in a dialogue framework: “persuasion, in this sense, refers to the respondent’s ‘conversion’ so to speak, or the change in his commitments. Before he was not committed to this particular statement, but now he is” (Walton, 2007b, p. 29, my italics).
Assuming that these systems work, that they are capable of providing the canons of ‘(procedurally) good argumentation’, what we now need is someone to follow such rules. Someone or, rather, something.
3. Critical agents
I have indicated some elementary characteristics of agents in this earlier post. What I want to do now is follow Walton in adding some specific features an agent would need in order to function accurately in an argumentative situation. So besides autonomy, situatedness and feedback, a critical agent (i.e. an agent engaged in what Walton titles ‘critical argumentation’) must be, we could say, practical: it must be able to take action on the basis of practical reasoning. The goal-directed type of reasoning could be slightly formalized starting from Aristotle’s practical syllogism:
- G is my goal.
- To bring about G, I need to bring about action A.
- Therefore, I need to bring about action A.
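The syllogism above can be read as a naive means-end loop: the agent selects whatever action it believes brings about its goal. The goal/action encoding below is invented for illustration.

```python
# The practical syllogism as a trivial means-end selection:
# pick the action believed to bring about the goal G.

def choose_action(goal, beliefs):
    """beliefs maps an action to the state it is believed to bring about."""
    for action, outcome in beliefs.items():
        if outcome == goal:
            return action  # "Therefore, I need to bring about action A."
    return None  # no known means to the goal

beliefs = {"open_window": "fresh_air", "turn_on_heater": "warm_room"}
print(choose_action("warm_room", beliefs))  # turn_on_heater
```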
To complicate things a bit further, in order to arrive at a certain goal (G), the action (A) need not be sufficient; several other actions may be necessary to bring about (G). This means that the agent must also be tactical: it should be capable not only of planning (i.e. developing a strategy) but also of recognizing the plans of others, a type of simulative reasoning called, quite intuitively, plan recognition. An agent should be capable of anticipating certain pathways that the dialogue could follow and of taking (or avoiding) those pathways according to its aims (Walton, 2007b, p. 190). Proactivity is an important feature in arguing, as well as in other types of communication, most notably deliberation. Several other features could be discussed. For instance, Walton speaks of a property called deceptiveness – or, the other way around, of qualities of character (i.e. honesty, integrity etc.) – (2007b, p. 200) which could be useful for accounting for ad verecundiam and ad hominem arguments. Now, an agent with all these features will still be useless on its own. What he needs is some brothers with whom to interact. Obviously, he cannot simply argue; he needs to argue with his peers. But before settling the issue of arguing, one should be able to resolve the more general problem of … communication.
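Plan recognition, in its simplest form, amounts to matching an opponent's observed moves against a library of known plans and keeping whichever plans remain consistent. The plan library and move names below are invented for illustration; real plan-recognition systems are considerably more sophisticated.

```python
# A toy version of plan recognition: which known plans are still
# consistent with the moves the other party has made so far?

plan_library = {
    "persuade": ["assert", "why", "assert", "resolve"],
    "probe":    ["question", "question", "why"],
}

def consistent_plans(observed, library):
    """Return the plans whose opening moves match what has been observed."""
    n = len(observed)
    return [name for name, moves in library.items() if moves[:n] == observed]

print(consistent_plans(["assert", "why"], plan_library))  # ['persuade']
```

After two moves the agent has, in effect, recognized the opponent's likely plan and can steer the dialogue accordingly.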
4. A common language

To put things in a nice, abbrev-ed manner:
Establishing a standardized agent communication language (ACL) is therefore a highly important part of MAS development. The first significant widely established attempt was KQML (Knowledge Query and Manipulation Language) proposed by the DARPA (Defense Advanced Research Projects Agency) knowledge-sharing effort. […] The most recent effort is the FIPA (Foundation for Intelligent Physical Agents). […] The standardized ACL developed by FIPA is similar to KQML. (Walton, 2007b, p. 135)
Now, before panicking, the reader should note that these languages are mainly based on more familiar notions like performatives (echoing the pragmatics of speech acts) or conversation policies (echoing Grice’s Cooperative Principle). Some differences exist, nonetheless. For example, FIPA is built on a BDI (belief-desire-intention) model, whereas pragmatics and implicature basically rest on a commitment-based model of human communication. Another difference: while any two assertions (for instance) could be described as pragmatically and semantically identical, they can have different functions depending on the dialogue in which they occur (say, an information-seeking dialogue versus a persuasion dialogue).
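To make the performative idea less abstract, here is the rough shape of a FIPA-ACL message: a performative plus addressing and content fields. The field names follow the FIPA ACL message structure; the agent names and content are invented, and the content language shown is only a placeholder.

```python
# Rough shape of a FIPA-ACL message. The "performative" names the speech
# act being performed (inform, request, query-if, ...); the rest is
# addressing and content. Agent names and content here are made up.

from dataclasses import dataclass

@dataclass
class ACLMessage:
    performative: str        # e.g. "inform", "request", "query-if"
    sender: str
    receiver: str
    content: str
    language: str = "fipa-sl"  # language the content is expressed in
    ontology: str = ""         # vocabulary the content draws on

msg = ACLMessage(
    performative="inform",
    sender="agent-willie",
    receiver="agent-bob",
    content="(committed willie proposition-p)",
)
print(msg.performative, msg.receiver)  # inform agent-bob
```

The same content wrapped in a different performative (`query-if` instead of `inform`) is a different move, which is exactly the speech-act intuition carried over into machine communication.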
5. Things to argue about
Once the above points have been reliably settled, implementing argumentation processes shouldn’t be rocket surgery. What one needs to stipulate is (a) a set of dialogue types – along with the notion of dialectical shift, and (b) argumentation schemes – along with their critical questions. The concept of dialogue type has been explained here and the problems involved in its usage here. The idea of dialectical shift – and the criteria for distinguishing shifts based on embeddings from those that are not – will be discussed in a subsequent post. As regards argumentation schemes, they are based on presumptive reasoning of the kind that shifts the burden of proof thus: “Each argumentation scheme has a matching set of critical questions. If an argument is put forward in a dialog by a proponent, and it meets the requirements of the argumentation scheme, and the premises are acceptable, then a weight of presumptive acceptability is thrown onto the conclusion. If the respondent asks an appropriate critical question, however, that weight of acceptability is withdrawn, until the question is given a satisfactory answer by the proponent.” (Walton, 2007b, p. 226, see also Walton, 1995, pp. 130-162). Further distinctions have been sketched here.
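The presumptive mechanism Walton describes in that quotation can be sketched directly: an argument fitting a scheme carries presumptive weight, the weight is suspended while any critical question stands unanswered, and it is restored once the proponent answers. The class and the sample question below are simplifications invented for illustration.

```python
# Sketch of presumptive acceptability under critical questioning:
# asking an appropriate critical question withdraws the presumption;
# a satisfactory answer restores it.

class SchemeArgument:
    def __init__(self, conclusion):
        self.conclusion = conclusion
        self.open_questions = set()

    def ask(self, question):
        self.open_questions.add(question)       # weight is withdrawn

    def answer(self, question):
        self.open_questions.discard(question)   # weight is restored

    def presumptively_acceptable(self):
        return not self.open_questions          # no question left standing

arg = SchemeArgument("p")
print(arg.presumptively_acceptable())  # True: the presumption stands
cq = "Is the source really in a position to know?"
arg.ask(cq)
print(arg.presumptively_acceptable())  # False: weight withdrawn
arg.answer(cq)
print(arg.presumptively_acceptable())  # True: weight restored
```

The important point the sketch preserves is that acceptability is dialogic: it depends on the state of the exchange, not on the argument's form alone.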
Switching to our Asimov-like foreshadowing, with the help of (Walton, 2007b, p. 191) we will imagine the following scenario. Let us call our agent Willie. Willie acts in his environment not as a program, but as an agent. This means not only that he is waaay smarter, but that he can justify his moves retroactively. In order to do that, he must have acted reasonably when he interacted with other agents, and pursued his goal(s) by means of persuading other agents either to commit to something or to do something in some way. Willie’s friends are agents as well. And be sure they are not at all unintelligent. What they’ll do is try and achieve their own goals in what now becomes a dialogue. In this dialogue, positions are justified by argumentation which shifts the weight of presumptions by means of asking/answering critical questions. Believe it or not, Willie and his friends are even smarter than this. They can each simulate hypothetical variants of dialogues which could supposedly occur and establish a strategy based on the outcomes of such envisioned pathways. The strategy (or plan) will not be definitive, for as the dialogue unfolds, unexpected elements and moves from the other party can come into the battle – which can hardly be called a battle, because what Willie and his friends are doing is arguing cooperatively. In some instances, the type of interaction can even be seen as clear-cut cooperation, when no set of opposite commitments is involved and some goals coincide. Nonetheless, like his friends, Willie can be capable of deceitful behaviour. In such cases, fallacies (and illicit dialectical shifts) occur.
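Willie's lookahead can be caricatured in a few lines: simulate each candidate move's hypothetical continuation and pick the one whose simulated outcome best serves the agent's goal. The moves, the stand-in simulator, and the scoring are all invented here; the point is only the shape of the reasoning.

```python
# A tiny sketch of strategic lookahead in a dialogue: evaluate each
# candidate move by simulating its hypothetical outcome, then pick
# the move with the best score. Moves and scores are made up.

def best_move(moves, simulate, score):
    """simulate(move) yields a hypothetical outcome; score rates it."""
    return max(moves, key=lambda m: score(simulate(m)))

outcomes = {"concede": 0, "ask_critical_question": 2, "assert_p": 1}
move = best_move(
    moves=list(outcomes),
    simulate=lambda m: outcomes[m],  # stand-in for dialogue simulation
    score=lambda outcome: outcome,
)
print(move)  # ask_critical_question
```

As the text notes, the resulting plan is provisional: each unexpected move from the other party calls for re-simulation from the new dialogue state.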
 Throughout this post I will render Walton’s spelling ‘dialog’ as ‘dialogue’, except in quotations and in the book’s title.
 (Hamblin, 1970) For a short history of dialectic, see “What is dialectic?” here. I also found Hamblin’s chapter “The Concept of Argument” here.