Section 4

The testing and refining process


The CD we developed for Writing in the Electronic Age was based on principles of user-centered design. We had established that our target audience was first-year students in Humanities, some of whom had considerable knowledge of grammar and editing, while others did not. Since the course is an entry-level prerequisite for other courses in the Multimedia program, it was essential to develop software that would both appeal to the users and fill in gaps in their education in the fundamentals of writing and editing. While I was faced with the necessity of adapting an existing software product to my needs, I did have opportunities to collaborate with the software designers to ensure that the software itself could accommodate the special needs of the course.

 

Although we were not primarily concerned with the software's competitiveness, we did try to analyze how a user would make the software part of his or her learning. We intended it to supplement the composition textbook used in the course, so sections of the software occurred in the same order as they did in the textbook, though in considerably more detail, since the textbook assumed that fundamentals were not an impediment to the learner. In the beta version of the software for the course, we discovered that some questions were too complex for the learners and needed to be broken down into components that could be more easily understood. More advanced users were always free to skip material that pre-tests revealed they already knew, so the class as a whole was not hampered by the difference in pacing.
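The skip-ahead logic can be sketched roughly as follows. This is only an illustration: the CD was not written in Python, and the names (Question, PASS_MARK, lessons_to_take) and the 80 percent threshold are assumptions of mine, not details recorded from the project.

    from dataclasses import dataclass

    PASS_MARK = 0.8  # assumed cut-off for "already acquainted with"

    @dataclass
    class Question:
        prompt: str
        answer: str

    def pretest_score(questions, responses):
        """Fraction of pre-test questions the student answered correctly."""
        correct = sum(q.answer == r for q, r in zip(questions, responses))
        return correct / len(questions)

    def lessons_to_take(lessons):
        """Keep the textbook's order, but drop lessons whose pre-test
        the student has already passed.

        `lessons` is a list of (title, questions, responses) triples.
        """
        return [title for title, questions, responses in lessons
                if pretest_score(questions, responses) < PASS_MARK]

    # Example: a student who aces the parts-of-speech pre-test skips that lesson.
    qs = [Question("Identify the noun in 'The dog barked.'", "dog")]
    print(lessons_to_take([("Parts of speech", qs, ["dog"])]))  # -> []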

 

Throughout the process, our main goal was to understand the needs of our users and design a CD that would be easy to use at home or in the labs, with yearly upgrades to correct problems and build in improvements and new materials. I had determined that students needed to become more acquainted with terminology and to fill in gaps in their understanding of basic grammar and composition, but because the course was now part of the Multimedia program, we could not assume interest in or aptitude for grammar as one of the characteristics of the students taking it. Levels of ability differed more in the new Multimedia program than they had previously, when most of the students came from a background in English. Students themselves identified the main virtues of this software as its immediate feedback and its glossary of terms. These were encouraging even to students who knew the answers but were unsure of the reasons, or who recognized a problem in writing but had no language to describe it. The software enabled us to be repetitive in ways that gave the students plenty of opportunities to pick up information, and in an environment made more pleasant by privacy or, at least, by limited interaction with peers, rather than in large class situations that might prove intimidating.

 

Our testing was done by trial and error. The programmers typically took away small sections of sample material and tried to generate workable lessons. Then my research assistant and others would read through the materials, playing the role of student and making notes of errors and problems of the point-and-click variety. The first version we used in a classroom appeared on a server only; the second was the original CD; the third was a revised version of that CD, shaped by the helpful suggestions of students who completed questionnaires for that purpose, as well as by the input of all the collaborators. Students generally asked for increasing amounts of repetition; in other words, they demanded, "tell me why I was right." Hence, we found ourselves adding more and more feedback, even to reassure those who had somehow picked the right answer. Gordon Roberts, as research assistant, dedicated many hours to thinking of new ways to explain old points and to dreaming up encouraging and otherwise helpful responses. We discovered along the way that "cute" palls very quickly in responses to students. As a result, he created a long list of positive and negative responses that could be used as variants. The aim was to avoid too much slang, but at the same time to create a relaxed and friendly tone, and a piece of software with a sense of humor.
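In outline, the variant responses amount to drawing at random from pools of pre-written positive and negative messages and always attaching the lesson's explanation, so that even a correct answer is told why it was right. A minimal sketch, with invented sample messages and function names (Roberts's actual response lists are not reproduced here):

    import random

    POSITIVE = [
        "Right, and here is why.",
        "Correct. You spotted the problem.",
        "Yes. Keep that rule in mind.",
    ]
    NEGATIVE = [
        "Not quite. Look at the explanation below.",
        "Close, but not this time.",
        "No. Here is what went wrong.",
    ]

    def feedback(is_correct, explanation):
        """Pair a randomly chosen variant with the lesson's explanation,
        so repeated exercises do not keep showing the same response."""
        pool = POSITIVE if is_correct else NEGATIVE
        return "{} {}".format(random.choice(pool), explanation)

Rotating the phrasing while keeping the explanation constant is what lets the software stay friendly without the "cute" wearing thin.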

 

Our main method of evaluating the software was to solicit user feedback in the form of a questionnaire that tried to determine how much the students had used the software and how satisfied they were with it, compared to the textbook or to in-person consultations with the professor. The questionnaire we developed consisted of checklists, comparison scales, and sections for comments. The questionnaire has been used three times, and students have repeatedly preferred the software to the textbook as a means of learning, mainly because it allowed them to test their knowledge quickly and provided quick (and tested) explanations of why an answer they had selected was correct or incorrect. And although we had originally intended the software to be completely self-paced, students insisted on having a schedule of lessons to follow from week to week, a requirement that meant some of our lessons had to be amended to fit within no more than one hour per week of computer time. This recommended hour per week with the software yielded better performance than we had previously recorded when grammar and composition were taught in the classroom by the instructor, perhaps because individual students were able to choose how much to do and how fast, and were free to review the materials or reinforce them with reference to the textbook. The partnering that was encouraged in the labs also meant that students felt more responsible for this component of the course than if they had been left completely on their own at all times.

 

Students were also insistent on the necessity of scoring. Given that few motivations were provided to get students to work through masses of material (from exercises designed to help students recognize parts of speech, to thesis and topic sentence development, through to advice on how to avoid biased language), this was clearly an important matter. None of this software includes any reporting mechanism to the instructors (though that is the province of our ongoing work in WebMILE), and the instructor did not really need more scoring information, so scores are relegated to the status of information for the student's eyes only. Nevertheless, this part of the immediate feedback is crucial to a student's assessment of how well he or she is doing in a given lesson in the course.
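The "student's eyes only" arrangement can be pictured as follows: the running tally lives with the student and is displayed on screen, and nothing is logged or transmitted to the instructor. Again, the class and method names here are hypothetical, offered only to make the design concrete.

    class LessonScore:
        """Per-lesson tally kept locally; never reported to the instructor
        (reporting is left to the later WebMILE work)."""

        def __init__(self, lesson_title):
            self.lesson_title = lesson_title
            self.attempted = 0
            self.correct = 0

        def record(self, is_correct):
            self.attempted += 1
            self.correct += int(is_correct)

        def display(self):
            """Shown to the student after each exercise."""
            pct = 100 * self.correct / self.attempted if self.attempted else 0
            return "{}: {}/{} correct ({:.0f}%)".format(
                self.lesson_title, self.correct, self.attempted, pct)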


