A Journal of Rhetoric, Writing, and Culture

Materialist Rhetoric and ‘For Us(e)’ Assessment

James Beasley, University of North Florida

(Published July 24, 2017)

In a February 2016 Chronicle of Higher Education article, “Don’t Be Cruel,” Thomas Batt wrote about the dangers of being too harsh on student papers. While Batt’s article repeats many familiar ideas about the best ways of responding to student work, it presumes that teacher commentary is always generated by the teachers themselves, a position that does not account for teachers’ relation to the institutional apparatus—specifically, the institutional apparatus of state-mandated assessments. In fact, the place of teachers within this apparatus has not been studied in depth; in particular, little attention has been paid to how the narrative of programmatic progress can actually require teachers to grade students’ initial drafts artificially low in order to demonstrate their “improvement” on subsequent state-mandated assessments. After this “improvement” miraculously occurs, writing programs are then able to apply for more funding, staffing, and other incentives, which Christopher Carter defines as “the ideas most useful to hierarchies of decision making and money-gathering” (188).

The artificial lowering or raising of assessments merely to demonstrate improvement has its critics in contemporary assessment scholarship. One of the first researchers to draw attention to this validity problem was Grant Wiggins. In the inaugural issue of Assessing Writing, he wrote, “In performance tests of writing we are too often sacrificing validity for reliability; we sacrifice insight for efficiency; we sacrifice authenticity for ease of scoring” (129). Since 1994, many scholars have sought to demonstrate how assessments that neglect validity are mere exercises at best and methodologically bankrupt at worst, and some of the most important work in assessment right now centers on broader understandings of validity in writing assessments. Kathleen Yancey writes:

Chief among the definitions [of validity] however, are three that work together. First, validity refers to the idea that an assessment measures what it purports to measure. Second, a measure is valid to the extent that it is ecological or consequential: that is, it contributes to a learner. And third, a measure is valid if the interpretations and inferences it leads to are appropriate. (171)  

While the first part of Yancey’s validity definition is fairly standard, she expands the traditional concept of validity to include its effects on students, both in the classroom and beyond. It is worth looking at three current assessment scholars who extend Yancey’s definition in order to demonstrate the particular consequences of that expansion.

The traditional definition of validity, or how an assessment “measures what it purports to measure,” is complicated by the work of William Condon when he writes:

Traditional assessments have tested only for certain aspects of writing performance that the test makers have decided students ought to have learned. The latter standard is far narrower and more arbitrary than the standard set in colleges and universities around the country over the past three decades. As a result, assessment practices—in classrooms and across the curriculum—have emerged that are far more open to discovering what students can do. These assessments are fairer by orders of magnitude than traditional tests. (ix, emphasis added) 

For Condon, the traditional definition of validity is expanded from what the assessors want to measure to what the students demonstrate can be measured. This shift from assessor to student demonstrates the limitations of validity for assessments that “measure what they intend to measure.” In other words, the more limited the measurement, the easier it is to achieve validity. However, if the focus is merely on achieving validity, it is tempting to set the measurement so narrowly that it does not need to take in the variety of student differences.

The second part of Yancey’s definition expands validity to the contributions of the learner in a much more specific way, and Asao B. Inoue’s “Articulating Sophistic Rhetoric as a Validity Heuristic for Writing Assessment” demonstrates the importance of presenting this validity rather than representing this validity. Inoue writes:

How is the assessment and its results working toward the interests of those being assessed, namely students (and secondarily programs and faculty), and not simply reinforcing the interests of those with power (or those who control the ‘land,’ the nomos, of assessment)? Are the interests and needs of students being represented by students, or are these interests merely represented for them? (38-39) 

While the groundbreaking assessment scholarship of Ed White and Brian Huot had given writing assessors more agency in dealing with other stakeholders, Inoue’s call challenges that agency, demonstrating how an assessment that merely benefits programs and faculty could be just as hegemonic as one that benefits the institution or the state. 

The third characteristic of Yancey’s expanded validity definition is that assessments are valid only if they are appropriate in their interpretations and inferences. In other words, collecting data on a program’s improvement is not enough, because that data must also account for how the “improvement” affects other areas of a student’s learning and well-being. Sandra Murphy argues, “the issue of validity is critical because test scores provide the basis for long-term decisions about students concerning placement, selection, certification, and promotion. These long-term decisions can have significant consequences for students and, as we all know, not all of those consequences are good” (228). Artificially creating lower scores for students at the beginning of a semester only works if the students stay around long enough to improve that score. We are starting to see how these artificially lowered assessments of student writing have affected other areas of student development, even university retention itself.

An over-reliance on reliability in high-stakes testing is only one side of the problem, however. Despite the difficult work undertaken by assessment scholars to increase validity, there are simply too few scholars to keep up with the increase in state-mandated assessments. In many ways, the effects of these rearticulations have only made assessment more conspicuous and its effect on writing pedagogies more obtrusive. Using Levi Bryant’s conception of material ontology, this article demonstrates how examining objects in-use can open new possibilities in assessment theory. It also demonstrates how examining objects in-themselves can diminish the authority that assessment tools can have over today’s student texts.

In his 2013 article “Being There: (Re)Making the Assessment Scene,” Chris Gallagher describes the predicament of the current assessment situation in higher education. Gallagher says, “Despite the wealth of compelling research on and descriptions of local assessments, standardized testing continues to make inroads in higher education, and upper administrators, policymakers, and the general public continue to imagine faculty and students as targets of assessments rather than generators of it” (452). Gallagher’s claim is based on the inability of even the most broad-minded assessment specialists to change this situation. Gallagher utilizes Burkean strategies to demonstrate how a focus on the “agent” has minimized the effectiveness of the assessment “scene.” What is also useful in Gallagher’s critique, though, is how he describes changes in assessment’s materiality. Assessment, as a material being, now acts on us, rather than us acting on it. He also describes the corresponding attenuation to materiality as a construct of accountability. For Gallagher, it would seem that the commitment to materiality, through buzzwords such as “accountability” or “transparency,” has become a driver of assessment. Such neoliberal values are written into the large-scale educational policy of the United States, so much so that attempts to critique those values are beyond the abilities of even the best scholars of writing and educational assessment. Gallagher writes the following:

Hewing to a “stakeholder” theory of power that functions quite comfortably within the neoliberal circumference, we jockey for position within a scene that assigns us little agency. Until we disencumber ourselves from the stakeholder theory of power and rewrite the assessment scene, we will not be able to exert leadership for assessment, no matter how mature our assessment expertise. (458)

Not only does Gallagher describe a materiality at work, but he identifies ways in which composition’s assessment history has been complicit in this alteration. In fact, in some cases the assessment proposals that we vilify have their theoretical underpinnings in rhetoric and composition’s own assessment theories. Gallagher historicizes the beginnings of neoliberal values within the attempts of writing program administrators to validate and extend their influence. His history includes even more recent attempts to “rearticulate” and “re-signify” the effect of assessment in the teaching of writing. Gallagher questions whether attempts at “rearticulation” and “re-signifying” have exacerbated neoliberal values that are embedded within new strategies aimed at strengthening the authority of teachers and writing program administrators, most notably in the work of assessment scholar Brian Huot. Gallagher writes the following: 

Huot wants to see compositionists collaborate with educational measurement experts to create a “unified” field of writing assessment and here things get tricky. Huot accedes to the traditional bifurcation between teachers and content experts and testers as assessment experts. If this construction of respective roles prevails, teachers and students will never have the primary role Huot wishes for them—at least not in a technocratic culture like ours. (459) 

For Gallagher, the effect that Huot aspires to—a unified field of assessment—still relies on the “stakeholder” theory of power that maintains the brightness of “accountability” that has plagued educational reform. In Gallagher’s call for the “rewriting of the current assessment scene in ways that reject stakeholder theory” (461), he calls for attention not to “institutional position” but to “location within the central activity of the enterprise and relation to others undertaking this activity” (464). In other words, while both White and Huot have sought to increase the validity and effect of writing program administrators by increasing the brightness, or the importance, of writing assessment, Gallagher introduces the possibility that decreasing the importance of assessment might actually have a greater impact on making assessment both more valid and effective. Gallagher writes, “Network thinking focuses attention on the patterns of relations that shape interactions within it rather than focusing only on the attributes of individuality” (465). One of the ways that the “patterns of relation” can be examined, therefore, is by utilizing Levi Bryant’s ontological materiality.

In this next section, therefore, I demonstrate how specific components of Bryant’s “ontological materiality” can be applied to Brian Huot’s 2002 (Re)Articulating Assessment. In one of the most popular works on redesigning assessment, Huot articulates some of the most accessible positions on subjectivity and materiality, positions that may provide important insights into assessment’s current material brightness. In this work, Huot advocates a more “conscious, theoretical and practical link between the teaching, research and theorizing about writing, recognizing that assessment is a vital component in the act of writing, in the teaching of writing, and in the ways in which we define our students, courses, and programs” (11). In other words, if we do not know how to define our students, courses, and programs, our assessment procedures can do the defining for us. What is hidden in this “rearticulation” is an assumption that our students, courses, and programs should be reduced to definition. What is worse is the assumption that assessment should be the tool to give these quasi-subjects their material borders.

Huot went even further in his attempt to make assessment an even “brighter” object by demonizing holistic scoring as driven by positivism: 

Rather than basing assessment decisions on the abstract and inaccurate notion of writing quality as a fixed entity—a notion that is driven by a positivistic view of reality—we should define each evaluative situation and judge students upon their ability to accomplish a specific communicative task, much like the basic tenets of primary trait scoring. (102)

But in his defense of primary-trait scoring, Huot concludes, “Because assessment is a direct representation of what we value and how we assign that value, it says much about our identities as teachers, researchers and theorists” (11). In other words, “we” should articulate our assessment measures accurately because “we” want them to accurately represent “us.” The very positivism that Huot wishes to escape is the means by which assessment objects are given even brighter prominence. In other words, what seems to happen in the field of rhetoric and composition is that the more theorists attempt to decenter, re-signify, or rearticulate assessment, the brighter assessment becomes as an object, increasing in its significance. While an analysis of “stakeholder” assessment demonstrates the difficulty of escaping positivistic frameworks, Levi Bryant’s distinction between epistemological and realist materialities offers alternatives to those wishing to reduce the effects of assessment measurements. For Bryant, claims about epistemological materiality are “never claims about beings-in-themselves or beings apart from us, but are always and only claims about beings as they manifest themselves to us” (38). This distinction, therefore, illuminates the subject-object hierarchy at work in Huot’s claim that assessment measurements should accurately represent their subjects. In contrast to epistemological materiality, Bryant writes that claims about realist materiality “really are claims about objects and not objects as they are for-us or only in relation to us” (38). Therefore, when Huot critiques assessment plans that utilize holistic scoring in favor of the differing epistemologies of trait-specific scoring, it is easy to see how epistemological materiality itself has prevented any significant change to assessment’s material brightness.

Rather than attempting to understand assessment or even the assessors through realist materiality, Huot doubles down on subjectivity. His rearticulation of assessment re-signifies assessment within the rhetoric of research activity:

If we see our task in writing assessment as research, it not only changes the focus of the activity, it also changes the role of the assessors. Instead of just being technicians who administer the technological apparatus of holistic or other methods of scoring, writing teachers and program administrators become autonomous agents who articulate research questions and derive the methods to answer those questions. (151) 

For Huot, WPAs who act as autonomous agents articulating research questions do not escape the stakeholder theory of power that Gallagher critiques. Christopher Carter writes that the most valuable students in the managed university are just that: more potential managers (190). A stakeholder theory of power makes this managerial view of students far easier to adopt. By utilizing Levi Bryant’s conception of epistemological materialism, however, one can see how this increasing commitment to subjectivity masks the materiality of the object from the subject. Bryant writes:

For the epistemological materialist, objects take on the status of fictions. Because objects can no longer be equated with things-in-themselves, because objects are only ever objects for-us and never things as they are independent of us, objects become phenomena or are reduced to actual or possible manifestations to us. (38)

The more assessors’ commitments to subjectivity increase, the more objects are only objects for us, and the more oblivious the subject becomes to the object’s own potentialities of influence. This has certainly been the case for trait-specific rubrics and the use of trait-specific rubric assessment under the guise of research. The more that trait-specific rubrics are “for us,” whatever that “for us” may be—increased funding, survival of classes or departments, tenure and promotion, research publication—the more difficult it will be to understand the effects that escape “our” needs, such as how they affect student thinking regarding the nature of truth, inquiry, social justice, and many other unforeseeable consequences that are not “for us.” Epistemological materialities will fail at this, but not for lack of trying. Huot’s challenge to “not only turn our research gaze outward toward our students and programs but inward toward the methods we are using to research and evaluate our students in programs” demonstrates how powerful the language of subjectivity and epistemological materiality can be (155). Bryant, however, demonstrates that simply turning one’s gaze in a different direction is problematic simply because it is one’s gaze in the first place, and what we think we distinguish in that “for us” gaze is even more problematic. Bryant writes, “In addition to the unmarked space of a distinction, the distinction itself is a blind-spot. In the use of a distinction, the distinction itself becomes invisible insofar as one passes ‘through’ the distinction to make indications” (21). Distinctions “observed” are therefore passed over for distinctions “for use.” Assessments, therefore, are better understood as distinctions observed rather than distinctions “for use for us.”

This might seem like an impossible task for writing program administrators charged with demonstrating the success of a writing program with limited resources and an extreme “for use” state of urgency. However, recent attenuations to materiality and new media technologies may make such dilemmas easier to negotiate. Jeff Rice describes the epistemological materiality accepted by the discourse of writing program administration:

In some Writing Program Administration discourse on assessment, we see a fairly consistent topos repeated from study to study. We see researchers confirming some commonplace assumptions. These assumptions typically employ topoi in order to offer validations of program worth (i.e. student writing improved from year to year or the program improved as a whole). These topoi are not necessarily bad, but they do point to a repetitive way of explaining assessment results. In many assessment studies, similar topoi circulate: students write for multiple media, multiple audiences, in multiple genres, at different points in time. (30) 

What Rice describes as “repetitive,” the language of object-oriented ontology would describe as “bright.” These topoi are bright because they are repetitive: the same goals, the same results. In fact, the need to find these results requires them to be bright. Rice’s antidote to repetitive goals and their subsequent findings is a study of new media networks and their relationships to materiality. In the same way that the superiority of print culture in the nineteenth century made the materiality of Hill’s assessment program “bright,” the domination of new media and the diminished influence of print culture today could have significant implications for manipulating the effects of programmatic writing assessment. Rice concludes, “Although we live in an age dominated by new media technologies as varied as word processing and social networking, we spend little time considering how the logics and rhetorics of such technologies might shape institutional practices like assessment that attempt to gather information into a space” (28). In other words, networked assessment would examine objects in relationship to other objects, or objects in-themselves, as opposed to objects in relationship to us, or objects for use.

While Rice does not draw a specific link to Bryant’s realist materiality, he reimagines assessment through the network theory promoted by Manuel DeLanda, and the similarities between Bryant and DeLanda hinge on the distinction between objects for-use and objects in-themselves. Rice writes, “In A New Philosophy of Society, Manuel DeLanda argues that ‘in network theory the emphasis is always on relations of exteriority. That is, it is the pattern of recurring links, as well as the properties of those links, which forms the subject of study, not the attributes of the persons occupying positions in a network’” (29). Unlike Huot, whose epistemological materialities focus on increasing subjectivity and its correlating increase in material brightness, Rice’s utilization of network materiality accomplishes three tasks. First, a focus on objects in-themselves darkens the influence of the subject because objects are not seen for their use, but for their non-use. This in turn accomplishes the second goal, the ultimate darkening of the effect of assessment measurements in the teaching of writing. Third, it allows writing program administrators to see the relative brightness of their effects on each other in a twenty-first-century reconsideration of the Burkean formulation: how the what influences the what.

While some might consider the suspension of the subjectivity of the writing program administrator an act of apostasy, a rejection of materiality for use would accomplish the purpose that many writing program administrators have in the first place: the desire to promote effective student writing. The effect of the subjectivity of the writing program administrator is to emphasize the “Program” in WPA rather than the “Writing.” Rice concludes:

Unlike many other forms of assessment, when I began this tracing, I am not beginning from the position of proof (i.e. our students write well, our program is exceptional, we do a lot of work with our students) nor as confirmation of circulated topoi (our students write with multiple media, our students write in different genres, our students write over a period of two to four years). (33)

By de-emphasizing the role of the program and the administrator, the role of writing is recovered. In other words, the effectiveness of a writing program administrator does not have to be tied to institutional values that we have neither sought nor condoned. Rice contends with those who would “teach an assessment that will allow us to assert ourselves with more confidence and competence as writing assessment experts and leaders” (33). WPAs are effective not because they can confirm conclusions that their programs, departments, and institutions already want to demonstrate, but because they can subvert those obvious conclusions and investigate new lines of inquiry that the assessment objects themselves have shown to us.

In order for writing program administrators to avoid seeing assessment objects for their own use, they must be able to suspend the logic of “for use” that assessment has embraced in its recent attempts at “rearticulation.” Rice concludes, “Assessment, therefore, may not have to depend on the outcome of success or failure, good or bad, right or wrong, value or lack of value in order to be meaningful to one’s program or superiors” (31).  Until one suspends the logic of “for use,” it is nearly impossible to imagine any other “for anything,” and the method by which Rice activates an assessment tool’s in-themselves materiality is through Latour’s act of tracing:

What we are searching for is not validation, but rather tracing. I call this lack of dependence on conclusion, one that favors tracing over finality, a new media logic because its focus is on shifts in connectivity (or lack of connectivity) rather than on conclusive moments that remain fixed. (31-32) 

Tracing is the activity of first identifying a specific skill or affect and then following it through a corpus of texts, whether in an individual student’s writing or in the writing of a program, department, university, or even larger groups of texts. In this way, the brightness of assessment tools such as rubrics is diminished and made increasingly unobtrusive, while the writing of the student is made brighter through its unfolding traceability. Even writing program administrators who are required to darken student writing and to brighten assessment tools have developed ways around this seemingly difficult bind. In their article “Democracy, Struggle, and the Praxis of Assessment,” Tony Scott and Lil Brannon conducted two concurrent programmatic assessments, the first protocol following the prescribed commonplaces and the second following faculty and student networks. They write that the “assessment met the basic requirements of the mandated assessment, but allowed them to show how differing values sort students in different ways” (290). If student writing is therefore associated with “bright” objects, then according to Bryant, those writings will “strongly manifest themselves and heavily impact other objects” (“Dark Objects”). If assessment tools such as rubrics become more associated with “dark” objects, then according to Bryant, they will become “so completely withdrawn that they [will] produce no local manifestations and [will] not affect any other objects” (“Dark Objects”). Scott and Brannon describe the effect of darkening their own assessments when they write the following:

[The] report delivered two sets of numbers, without favoring one stance over the other. They also forwarded a detailed account of their qualitative research on what faculty value to show the wide varieties of values that faculty inside the program and across the disciplines bring to bear on students’ drafts. The overall strategy was to provide numbers in satisfaction of the assessment mandate, even as they emphasized that any standard, any set of values and criteria, is constructed, materially situated, and in contention. They wanted to show that students’ work sorts out differently, depending on which value system dominates. (290)

The more that we are committed to assessment measurements “for us,” the less we will understand their impact on our students, our field, and the nature of assessment itself. The more that we are committed to assessment measurements “in-themselves,” such as tracing, the more that today’s students will resemble the networked learners of the twenty-first century rather than the unrealized dreams of twentieth-century assessment stakeholders.

Works Cited

Batt, Thomas. “Don’t Be Cruel.” Chronicle of Higher Education, 3 Feb. 2016.

Bryant, Levi R. “Dark Objects.” Larval Subjects Blog, 25 May 2011. https://larvalsubjects.wordpress.com/about/

---. The Democracy of Objects. Open Humanities Press, 2011. New Metaphysics.

Carter, Christopher. “Bureaucratic Essentialism and the Corporatization of Composition.” Tenured Bosses and Disposable Teachers: Writing Instruction in the Managed University, edited by Marc Bousquet, Tony Scott, and Leo Parascondola, Southern Illinois UP, 2004, pp. 186-192.

Condon, William. Introduction. Race and Writing Assessment, edited by Asao B. Inoue and Mya Poe, Peter Lang, 2012. Studies in Composition and Rhetoric.

Gallagher, Chris W. “Being There: (Re)Making the Assessment Scene.” College Composition and Communication, vol. 62, no. 3, 2013, pp. 450-476.

Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Utah State University Press, 2002.

Inoue, Asao B. “Articulating Sophistic Rhetoric as a Validity Heuristic for Writing Assessment.” Journal of Writing Assessment, vol. 3, no. 1, 2007, pp. 31-54. 

Murphy, Sandra. “Culture and Consequences: The Canaries in the Coal Mine.” Research in the Teaching of English, vol. 42, no. 2, Feb. 2008, pp. 228-244.

Rice, Jeff. “Networked Assessment.” Computers and Composition, vol. 28, no. 1, 2011, pp. 28-39. 

Scott, Tony, and Lil Brannon. “Democracy, Struggle, and the Praxis of Assessment.” College Composition and Communication, vol. 65, no. 2, Dec. 2013, pp. 273-298.

Wiggins, Grant. “The Constant Danger of Sacrificing Validity to Reliability: Making Writing Assessment Serve Writers.” Assessing Writing, vol. 1, no. 1, 1994, pp. 129-139.

Yancey, Kathleen Blake. “College Admissions and the Insight Resume: Writing, Reflection, and Students’ Lived Curriculum as a Site of Equitable Assessment.” Race and Writing Assessment, edited by Asao B. Inoue and Mya Poe, Peter Lang, 2012. Studies in Composition and Rhetoric.