Using Turnitin: Pros and Cons

I recently had an interesting and productive discussion on Twitter with some of my colleagues about the use of Turnitin, and since I’ve been a user of the service for about ten years now, the discussion prompted me to think again about my use of this educational technology and to make explicit, at least to myself, my reasons for using it.

I also think this discussion is important to higher education in general, in that Turnitin is one of many vendors associated with the higher education industry, and a significant one. As of this writing, the front page of its website boasts that it is used by 15,000 educational institutions and thirty million students. Turnitin, like many other vendors, provides products or services designed to support higher education in a number of ways (not all are directly related to instruction), and these vendors all work on a for-profit model.

Since higher education is for the most part non-profit, these partnerships can be uneasy, sometimes exploitative, sometimes at cross-purposes with serving students, but also sometimes beneficial to varying degrees. Some vendors provide excellent products. So if a university chooses to use a vendor to serve its students in any way, it needs to pay close attention to its own reasons for doing so, to the quality of that service, and to how much reliance on the vendor actually benefits students and instructors.

First, a bit of discussion about how Turnitin works. Turnitin is a web-based “student paper processing service” that runs externally to a college or university website. Colleges and universities that use the service have to contact it for a customized quote, so there aren’t any solid figures on how much it costs. The Financial Times, however, estimated in 2012 that it costs about $2.00 per student per year. Other articles have since indicated significant price increases over the last couple of years, so let’s assume, for the sake of discussion, that the service now typically costs $4.00 per student per year. I do not know how much it costs my own institution, and I suspect institutions sign non-disclosure agreements about their specific costs, so if I did know its actual cost I probably would not be allowed to report it. Turnitin can now be integrated with learning management systems (LMSs) such as Moodle so that it appears to be fully integrated into the online component of a student’s course. Despite that appearance, however, it’s still an off-site service. When it is fully integrated into an LMS, students just click a link and upload their papers. When it is used off-site, instructors have to log in to the service, create a course, create a course-specific password, and then either share that password with their students or upload a list of student email addresses to enroll students in the course.

What happens once a student’s paper is uploaded? The instructor can use the service for a number of purposes:

  1. Plagiarism detection. Turnitin was originally created for this purpose. When a student’s paper is uploaded to the website, it is saved in a repository with other student papers and compared to all other papers in that repository. It is also compared to journals, periodicals, publications, and readily accessible material on the internet.
    1. What does it do when it makes this comparison? It generates an “Originality Report” score in the form of a percentage of material on the student’s paper that matches other sources. Matching text is highlighted in different colors by source. Links back to the original sources are also provided.
    2. What it does not do: tell instructors whether the student plagiarized. Remember, we are allowed to quote other people’s works. How we signal those quotations determines whether or not we’re plagiarizing, so a match by itself is not plagiarism. Determinations about plagiarism are always made by the instructor, not the service.
    3. Is plagiarism checking optional? Yes. It’s possible to use the service and opt out of storing student papers, and to opt out of checking them against any specific type of source (such as the repository of student papers, the internet, and publications).
    4. Instructors can also ask the service to ignore small matches, such as three words or fewer, and they can set the number. I always ask it to do this.
    5. Instructors can also ask the service to ignore the paper’s bibliography, which will always come up with matches when a bunch of students are writing about the same material from the same texts. I set this up too.

      How well does this part of the service work? Spottily, but not badly overall. False or irrelevant matches occur fairly regularly. These can be caused by the use of block quotes, as the service seems to look for quotation marks to exclude matches, by the use of long titles (more than three words), and sometimes even by the student’s own header information. If a student puts an incorrect space between a quotation mark and the quoted material, the quotation might be read as a match. Overall, it’s very important that the instructor not just read the originality report score but actually read the student paper before making a determination about plagiarism.

      It also provides the unexpected benefit of telling instructors how much of the student’s paper is quoted, which can be useful pedagogically as well.

  2. Providing feedback on student papers. This is why I use the service, which can perform the following tasks. Take note, though, that some of these features are only available through the external website, not the LMS-embedded version:
    1. Allow instructors to provide voice comments.
    2. Allow instructors to provide their own custom comments on the student paper in the form of little bubbles. Students mouse over the bubbles to see instructor comments.
    3. Allow instructors to pre-set paper comments and drag and drop them onto the student’s paper. The service comes with three or four dozen preset comments, and instructors can create their own as well.
    4. Allow instructors to set up any number of rubrics and score and grade the paper using this rubric.
    5. Link instructor comments to rubric measures. When you do this, the rubric will show the number of instructor comments linked to each rubric point.
    6. Allow instructors to provide long text feedback.
    7. Allow instructors to set up peer-review assignments — students submitting a peer-reviewed assignment will have their paper emailed to two peers, receive two of their peers’ papers by email, and be able to leave comments on their peers’ papers just like their instructor.
    8. Keep the student grade book.
    9. Keep a course blog.
    10. Allow access for teaching assistants to grade papers.
    11. Provide a built-in grammar checker. Every time I’ve used it, it sucked, but it’s still there.
    12. Download feedback and originality reports in the form of .pdf files.
  3. What are the drawbacks to this service? Here’s where we get into the details of my Twitter discussion. Some of these points have also been raised in other discussions of Turnitin around the web.
    1. The service creates the impression that students aren’t to be trusted.
      1. This concern is legitimate, but I think it varies by institution. I have seen places where high premiums were placed on student course evaluations, and as a result many instructors got into the habit of looking the other way at plagiarism. These very dysfunctional institutions worked on an implicit agreement between students and teachers in which teachers looked the other way at cheating and students gave these teachers stellar course evaluations in return (a situation which by itself justifies the tenure system, as this institution did not have tenure). Some students at this institution plagiarized on every paper and then just rewrote it when they got caught — which means that since they were only made to do the work initially assigned, they always came out “ahead” by plagiarizing in the sense of getting a grade for a course without doing any real work. This is an environment devoted to breeding criminals, and its students are stealing from themselves with the institution’s help.
      2. But what about better institutions? Even there, some students will plagiarize, but I think instructor dialogue with students about the service is very important. I really do use it primarily for grading. I’m teaching a 5000/4000-level English course right now in which I can honestly say I have no fear of a single student plagiarizing: I trust each one, personally, that much. But I still use the service because of all of its feedback functions, and I tried to let my students know that. I prefer it to Google Docs or directly emailed Word files.
    2. The service makes instructors grade to the comments. This concern is about instructors only looking for items defined by pre-written comments rather than truly providing individualized feedback based on student need. I think this concern is 100% legitimate, and anyone who chooses to use the service needs to watch out for letting the service take over his or her feedback on student papers. Now that I’ve had this idea planted, I’m going to watch myself grade.
    3. The service exploits students. The argument here is that the service has value only because students are contributing papers to it, and then the service charges students to use it (through their institutions, of course — once the institution pays for a subscription, instructors and students use it for no additional charge).
      1. Defining “exploitation” as uncompensated or under-compensated labor, I think this argument doesn’t quite work for the following reasons:
        1. Student papers typically have no economic value apart from the service except to be sold to other students (a dishonest use). The service itself therefore creates whatever economic value student papers have for it, so it’s hard to say that students are being ripped off.
        2. If a student’s paper does have monetary value (e.g., it can be sold by the student for publication), the service does not prevent students from realizing that value. Turnitin doesn’t own student work. Publish away. Get rich.
        3. The service provides value to student users in the form of a permanent, informal copyright on their work: once a student uploads a paper to the service from an account linked to their own email address, the student’s work is documented as their own. I uploaded my dissertation to Turnitin for this very reason.
        4. Uploading student papers to Turnitin’s repository is optional, as is plagiarism detection, so there’s no necessary link between using the service and uploading a student’s paper to the repository.
        5. The service provides many useful tools apart from plagiarism detection.
        6. The company provides a service in exchange for pay, so it isn’t exploiting students. If we reject this argument, we also have to affirm that teachers are exploiting students by taking a salary for their work. Since everyone deserves to be paid for their work, this service is non-exploitative.
        7. The service only costs students a very small amount: maybe $2.00 to $4.00 a year. If we’re really worried about student exploitation, maybe we should look at sports programs instead.
        8. Students don’t have a choice about use of the service. Yeah…so? They don’t have a choice about writing papers, getting graded, showing up for class, etc. What matters is whether or not these required activities benefit the student. What matters most of all is explaining to students the benefits of required activities. All of them.
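Since so much of the discussion above turns on how text matching behaves (why an unmarked quotation shows up as a match, and why it helps to ignore matches below a set number of words), here is a toy sketch of word n-gram matching. To be clear, this is my own illustrative assumption about how a matcher of this general kind could work; Turnitin’s actual fingerprinting algorithm is proprietary and certainly far more sophisticated.

```python
def ngrams(words, n):
    """All contiguous word n-grams in a list of words, as a set of tuples."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_report(paper, source, n=3, min_words=4):
    """Flag words in `paper` that fall inside an n-gram shared with `source`,
    drop flagged runs shorter than `min_words` (analogous to the 'ignore
    small matches' setting), and return the percentage of flagged words
    (a toy stand-in for an 'originality report' score) plus the flags."""
    p, s = paper.lower().split(), source.lower().split()
    source_grams = ngrams(s, n)
    flagged = [False] * len(p)
    for i in range(len(p) - n + 1):
        if tuple(p[i:i + n]) in source_grams:
            for j in range(i, i + n):
                flagged[j] = True
    # Remove flagged runs shorter than min_words.
    i = 0
    while i < len(p):
        if flagged[i]:
            j = i
            while j < len(p) and flagged[j]:
                j += 1
            if j - i < min_words:
                for k in range(i, j):
                    flagged[k] = False
            i = j
        else:
            i += 1
    pct = 100.0 * sum(flagged) / len(p) if p else 0.0
    return pct, flagged

# A six-word overlap is flagged whether or not it was an honest quotation:
pct, _ = match_report("The quick brown fox jumps over the lazy dog today",
                      "A quick brown fox jumps over fences")
```

Note how the matcher has no idea whether the overlapping words were properly quoted; it only sees the overlap. That is the point made above: the score flags matches, and the instructor decides what they mean.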

That’s my overview of the service. I intend to keep using it for many of the reasons described above. But I want to emphasize that we should use it deliberately, carefully, and consciously. It is not perfect. The bottom line is that Turnitin is just a computer system, and computer systems don’t know how to read. They don’t understand meaning or context. Only instructors can do that. As a result, it’s a supplement to an instructor’s work and care with student papers, not a replacement for instructor care and attention.

Any comments? I’d love to hear from you.

Reading Print Books Is Better than Reading E-Books

Yes, it’s true: the latest research indicates that reading material in print rather than on an e-reader is better for you in the following ways:

  • Increased comprehension. The tactile experience of reading a printed book actually matters. Check out the research.
  • Related to the above, we’re more likely to read every line of printed material. When we read e-books, we tend to read the first line fully and then just the words near the beginning of each line after that.
  • We lose the ability to engage in linear reading if we don’t do it often.
  • Reading printed material for about an hour before bedtime helps us sleep. Reading ebooks keeps us awake.

I read both e-books and print books, and I’m grateful for my e-readers (really, the apps on my iPad) when I’m traveling. It’s easier to carry 1000 books on one iPad than it is to carry five in a backpack.

But I know what the researchers mean by the tactile elements of memory, the feeling of better control over your media with pages, etc. I do remember where to find things in books by their physical location in the book, which isn’t possible with an e-reader: you can only search terms and page numbers. I think the point here isn’t which search method is more efficient, but which reading style engages more of the brain by engaging more of our physical senses.

I’d like you to consider a few things about the way we developed our technologies:

  • The people who developed our technologies didn’t have our technologies. In other words, the people who built the first computer didn’t have computers.
  • The engineers who landed men on the moon did most of their work on slide rules.
  • The computers that they did use had less computing power than our telephones.

LJN Radio Interview — Technically Speaking: Utilizing Technology in Education

I’m happy to announce that my second interview with LJN Radio, “Technically Speaking: Utilizing Technology in Education” is now available on the LJN Radio website.

From the website:

Various uses of technology can be invaluable when it comes to educational success and improved learning. At the same time, people need to be cautious in seeing all forms of technology as an easy fix to how people are taught. Jim Rovira, associate professor of English at Tiffin University, explains to Tim Muma how important it is to ensure students are matched with the appropriate use of technology. Whether it’s taking online classes or utilizing in-class technology for lessons and assessment, it’s imperative educators understand that each student has different needs and will succeed or fail based on the fit of the technology they use.

On Technically Speaking, we explore the latest social media applications for the modern day workplace. Together we’ll discover the hottest technology jobs on the market and keep up with the latest high-tech trends.

Duration: 18 Minutes

Yes, Employers Still Want Writing Skills…

The truth is that employers have been complaining about M.B.A. writing skills for more than ten years now. And not just M.B.A.s.

But the problem is not that writing and communication skills are “difficult skills to teach,” as the article suggests. I think this kind of claim comes from a panacea view of writing instruction: students take a writing class, so they learn how to write. Writing instruction doesn’t usually work that way. Developing writing ability is a matter of cognitive development, not just a matter of taking in information, so it takes time. If a program wants to develop students’ writing skills, students need to be made to read and write and to receive writing instruction in most of their classes, not just their English classes. The problem is that business and other programs don’t invest in practices that develop communication skills, such as high reading and writing requirements.


One Skill Recruiters Say Is Lacking With Recent M.B.A. ….

Evaluating Course Evaluations

There’s an interesting recent study out of UC Berkeley evaluating the validity of student course evaluations as measures of teaching effectiveness. The results are similar to those of many other studies conducted in the past: student course evaluations are not reliable indicators of teaching effectiveness:

Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.

What do student course evaluations measure, then? The authors of this study summarize the findings of previous studies here:

  • Student teaching evaluation scores are highly correlated with students’ grade expectations (Marsh and Cooper 1980; Short et al. 2012; Worthington 2002). WHAT THIS MEANS:
    • If you’re an instructor and want high course evaluations, pass out As like candy.
    • Adjunct instructors, having the least job security and the most job retention anxiety, are most likely to inflate grades to get high course evaluations.
    • Net result: over-reliance on adjunct instructors and on student course evaluations to evaluate teachers leads to grade inflation and low course rigor; i.e., poor educational quality.
  • Effectiveness scores and enjoyment scores are related. In a pilot of online course evaluations in the UC Berkeley Department of Statistics in Fall 2012, among the 1486 students who rated the instructor’s overall effectiveness and their enjoyment of the course on a 7-point scale, the correlation between instructor effectiveness and course enjoyment was 0.75, and the correlation between course effectiveness and course enjoyment was 0.8.
    • WHAT THIS MEANS: If students enjoyed the course, they will rate it highly. But enjoyment by itself isn’t a measure of learning. The instructor may just be a good performer.
    • Conversely, lack of enjoyment doesn’t mean the student didn’t learn. In fact, the types of assessments and activities that promote long-term retention tend to lead to low course evaluations: the practices students like the least actually help them learn and retain the most.
  • Students’ ratings of instructors can be predicted from the students’ reaction to 30 seconds of silent video of the instructor: first impressions may dictate end-of-course evaluation scores, and physical attractiveness matters (Ambady and Rosenthal 1993).
    • WHAT THIS MEANS: student course evaluations are, more than anything else, superficial measures of instructor popularity.
  • Gender, ethnicity, and the instructor’s age matter (Anderson and Miller 1997; Basow 1995; Cramer and Alexitch 2000; Marsh and Dunkin 1992; Wachtel 1998; Weinberg et al. 2007; Worthington 2002).
    • WHAT THIS MEANS: student course evaluations are, more than anything else, racist, elitist, ageist, and sexist superficial measures of instructor popularity.
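As a side note on reading figures like the 0.75 and 0.8 correlations quoted above: a Pearson correlation is simple to compute from paired ratings. The sketch below uses made-up 7-point ratings for six hypothetical students purely for illustration; the study’s actual data are not reproduced here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 7-point "effectiveness" and "enjoyment" ratings, one pair per student:
effectiveness = [7, 6, 5, 6, 3, 4]
enjoyment     = [7, 5, 5, 7, 2, 4]
r = pearson(effectiveness, enjoyment)  # close to +1: students who enjoyed also rated "effective"
```

A value near +1, like the study’s 0.75–0.8, means the two questions are largely measuring the same thing, which is exactly the concern raised above: “effectiveness” ratings may mostly track enjoyment.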

So how do we rate teaching effectiveness? I’d recommend the following:

  • Worry less about evaluating the teacher for promotion and more about gauging effectiveness in order to seek out the most effective strategies for that specific student population.
  • Rely in part on peer evaluations, with teachers in the same field conducting the evaluation. Field-specific knowledge matters, as teaching isn’t just a matter of technique but of careful selection of content.
  • We still do want to hear from students, of course, so use course evaluation tools that focus on teaching effectiveness, such as those provided by the IDEA Center.

Just for the record, I’m an engaging instructor who generally gets high course evaluations, so I’m not worried about myself here. I am, however, worried about how effectively students are being educated. Reliance on student course evaluations, at present, is working against educational quality.

You can read the study below: