Notes from the VisCom Classroom: Reflections on the Semester

It’s the end of the semester. As usual, when I reflect on the past 14 weeks, I’m left with more questions than answers. In this column, I will cover some of the issues that I have been thinking about as an instructor in the University of South Carolina’s School of Journalism and Mass Communication.

Is Learning Randomly Distributed?

We hear a lot these days about failing elementary and high schools and the efforts being made to improve them. One suggestion often mentioned is to link the rewards given to teachers and administrators with the performance of their students. The most drastic example of this is the top-to-bottom replacement of personnel in schools whose students don’t make the grade.

At the college and university level, however, we are constantly being warned against grade inflation. In other words, it is considered a warning sign if too many students get high marks. This can reflect badly on the faculty member teaching the course.

In an effort to counteract this, some professors ration the number of As they award. The most extreme example of this was related to me by a colleague, whose graduate-school professor announced at the beginning of the semester that only one student would receive an A.

The premise behind grading at my university is that a C represents average work. Above-average work earns a student a B, and an A represents excellent work. The logical extension of this, if you believe in statistics and the bell-shaped curve, is that a majority of students will earn Cs, a much smaller number will earn Bs (and Ds), and only a few will earn As and Fs.
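To put rough numbers on that intuition: if grades really did follow a bell curve, with cutoffs at (hypothetically) half a standard deviation and one and a half standard deviations from the class average, the shares work out as in this short sketch. The cutoffs here are my own illustrative choice, not any official grading scale.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical cutoffs, in standard deviations from the class mean:
# F below -1.5, D from -1.5 to -0.5, C from -0.5 to +0.5,
# B from +0.5 to +1.5, A above +1.5.
cutoffs = [("F", -10.0, -1.5), ("D", -1.5, -0.5), ("C", -0.5, 0.5),
           ("B", 0.5, 1.5), ("A", 1.5, 10.0)]

for grade, low, high in cutoffs:
    share = norm_cdf(high) - norm_cdf(low)
    print(f"{grade}: {share:.1%}")
```

Under those assumptions, roughly 38 percent of students would earn Cs, about 24 percent each would earn Bs and Ds, and only about 7 percent each would earn As and Fs, which is exactly the "majority Cs, few As" shape described above.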

Thus, if we apply the same logic being considered for elementary and high schools, it is a bad sign that so many of our students earn merely average grades. But, if we follow the logic of combating grade inflation, it is a good sign that so few of our students earn Bs and As.

How to resolve this seeming contradiction? What if learning is not randomly distributed? In other words, given excellent instruction, is it possible that a majority of students could meet the learning outcomes at levels worthy of As and Bs? Would this then lead inevitably to a toughening of the course requirements, or an increase in the points needed to earn specific grades?

When Cheating Rears Its Ugly Head

I can’t fathom why a university student would risk their career to gain a few extra points on their final grade. Nevertheless, it happens nearly every semester in my Introduction to Visual Communications course. Despite explicit warnings in the syllabus, which I repeat verbally on the first day of class, some students decide to copy the design assignments produced by other students.

When I say “copy,” I don’t mean one student looking over another’s shoulder. I mean sitting down at one of the shared computers in our computer lab, purloining the digital file left on the computer by another student, and putting their name on it.

These duplicates are usually easy to spot, because everything matches perfectly, even the mistakes. Unfortunately, this implicates both students, and both receive a zero grade for the assignment. Both also get reported to our Office of Academic Integrity. Multiple offenses can result in suspension or expulsion.

The reporting process is time-consuming, involves filing paperwork, and may take weeks or months to resolve. As a result, some professors prefer to handle the matter themselves, which means no official record of the incident exists. Whether this informal method deters further cheating, I do not know. Nor do I know if the formal method works any better.

This semester, I taught two sections of this course, each with about 80 students. There are five assignments, which translates into about 800 separate print-outs for me and my graduate assistant to grade. I don’t know if I am smart enough to design “cheat-proof” assignments — but it would certainly help if we could figure out a way to make the digital file vanish from the computer as soon as the student printed it!

Course Evaluations

A colleague passed along an article from the Chronicle of Higher Education (April 25, 2010) called “Rating Your Professors: Scholars Test Improved Course Evaluations,” by David Glenn. The article is about an end-of-semester ritual in which most of us take part. This is the 15 minutes or so we give our students to fill out a survey about the course and the instructor, and to write any comments they so desire.

We pass out the surveys and the Scantron forms. We tell the students how important these evaluations are and how much we appreciate their input, which is both voluntary and anonymous. We designate a trustworthy student to collect the forms and take them to the main office. Then we leave the room.

In truth, as the article states, many faculty members dread these evaluations. For those on the tenure track, the evaluations represent a potential pothole on the road to a lifetime job. For instructors and adjuncts, poor evaluations could result in not being rehired. And I know of at least one tenured faculty member (not at my university) who told me he simply puts his evaluations in a drawer, unread — another “benefit” of tenure.

Some blame course evaluations for pernicious grade inflation (see above). Why risk the negative impact of poor evaluations when you can perhaps head them off at the pass by liberally handing out As and Bs?

One professor interviewed for the Chronicle article dismissed the whole notion of asking students to evaluate their teachers. Comparing students to assembly-line products, he said you might as well ask the cars being produced to evaluate the factory workers who made them.

Better to ask the end users of the “product,” i.e., the folks who hire or otherwise interact with our students after they graduate, he said. Society, not individual students, should ultimately judge our success or failure. As long as our students become productive members of society, we must be doing something right. Perhaps an extreme view?

We are in the process of trying to redesign our evaluations. Currently, we use a single questionnaire for all courses in the school. Print, broadcast, advertising, public relations, visual communications, from the introductory survey course to the senior capstone seminar, all get the same survey.

If the goal is actually to improve course content and teaching methods, this one-size-fits-all approach seems counterproductive. Why not tailor the evaluations to the course content and the teaching method? Shouldn’t we be asking students completing a basic journalistic writing course questions that are different from those we ask students completing an advanced photography course?

Tossing and Turning

As you can see, teaching involves much more than just standing and delivering. Although it is extremely rewarding compared to many other careers, it still presents issues that keep you tossing and turning all night.

Any answers from fellow academics — or anyone else — out there? I’d love to hear from you!
