I’ve been involved with writing as a teacher and an author for much of my life. I’ve written for academia and for personal interest. My writing has been evaluated for entrance into programs, my resumes have been scrutinized in the workplace, and my stories have been evaluated by editors for publication, but I have never had my writing evaluated by artificial intelligence (AI). In my training as a teacher, I was never taught to teach writing for an audience that wasn’t human, or to look at writing as artificial intelligence would see it. Yet that is what the students in my classroom will face at the end of their coursework. It will not be up to me to evaluate their writing; they will face a standardized test and an automated essay grader. The purpose of this book is not to debate the use of automated essay graders (AEG) or to weigh their pros and cons. I am writing this book in 2019 because AEG is a fact of life for the students I teach.
Automated essay graders, or Robo-graders as they are sometimes called, are cheaper and faster than human readers, and testing is a rapidly growing industry. Automated essay graders are programmed to “read” for certain types of words that signal the content and structure of an essay. AEG looks for a specific type of organization and is limited in the kinds of essays it can effectively score. Using automated essay graders therefore puts an emphasis on argumentative and informational essays, styles that are evidence-based. Building from that concept, I began researching the essay types and organizational structures that would best suit an automated essay grader. The difficulty in trying to discover “best practices” for helping students prepare for Robo-grading is that much of the information about how the systems are designed is proprietary. It is the intellectual property of the testing industry and not something to be shared. In effect, the testing industry is privatizing educational preparation for the very tests it administers.
As I dug deeper into the topic of AEG, it took on a new meaning for me. At first I felt a kind of sadness about teaching what seems like a joyless writing format. From my perspective, writing has always been a rhetorical art: the transfer of information, feelings, and opinions from one mind to another. Writing has been about communication and interaction, and teaching writing has always been fun for me. With automated essay graders as the final evaluators of my students’ skill level, however, that no longer seems to be the priority, because it is not something artificial intelligence can assess. My students’ futures may depend on the scores they receive on their standardized tests; those scores may affect college placement or workplace job offers. I began to see AEG as an issue of social justice and something I need to understand better, so that I can help students understand the nature of the “audience” they will face when their final writing task is not assessed by a human reader.
In the process of writing this book, I have felt like a detective. The information students need to be successful is not easy to find unless you pay the testing company fees for its private information, and even then, what is provided is not transparent. As a result, I have gone behind the scenes, examining research from the artificial intelligence programming side of the house as well as literature on linguistics as it relates to artificial intelligence. AEG validation rests on comparing the scores artificial intelligence assigns to essays with the scores human readers assign to the same essays. It has been a challenge to find out where the sample essays came from, and the diversity of the essay writers is in question. In addition, the background of the human graders is proprietary and not disclosed.
Based on the research I have been able to find, my goal for this book is to build an understanding of what AEG can assess and to provide tips on the best practices and skills to develop when facing AEG systems. There are many arguments that teaching to a test, and Robo-grading in particular, harms writing instruction, but regardless of those opinions, students are being evaluated by artificial intelligence, and their transition to college or the workplace is being affected. The testing industry is the clear winner in the standardized testing movement: rather than making software recognize “good” writing, it will redefine “good” writing according to what the software can recognize. Considering the resources being put into perfecting Robo-grading, we are likely to see rapid expansion in the use of artificial intelligence as an evaluation tool. It is important to give students a chance to learn to “think” like a Robo-grader.