For Good Reason: Analyzing How Students Define Difficulty in RateMyProfessor.com Comments
Abstract
Can student comments help solve a problem that student ratings helped create? We argue that the comment section of student evaluations of teaching (SET) offers a rich site for studying student perspectives on teaching and learning, particularly how students define and value course and instructor difficulty. Employing rhetorically grounded approaches to computer-assisted corpus analysis, we compared 4,600 RateMyProfessors.com instructor profiles meeting one of two criteria: 1) instructors with high difficulty and high overall quality scores, or 2) instructors with high difficulty but low overall quality scores. We identify recurring argumentative patterns in both corpora. In contrast to SET scholarship, which often assumes students favor ease over all other course characteristics, we find commenters offering a more nuanced evaluation: condemning contrived forms of difficulty while commending authentic ones. Our findings contribute to discussions of student perspectives on learning and their relationship to course evaluations. The results offer evidence in support of the validity hypothesis in SET scholarship and suggest avenues for helping faculty better understand their course evaluations and their students.
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
- Authors retain copyright and grant the Journal of the Scholarship of Teaching and Learning (JoSoTL) right of first publication, with the work simultaneously licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), allowing others to share the work with proper acknowledgement and citation of the work's authorship and initial publication in the Journal of the Scholarship of Teaching and Learning.
- Authors are able to enter separate, additional contractual agreements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in the Journal of the Scholarship of Teaching and Learning.
- In pursuit of manuscripts of the highest quality, multiple opportunities for mentoring, and greater reach and citation of JoSoTL publications, JoSoTL encourages authors to share their drafts and seek feedback from relevant communities, unless the manuscript is already under review or in the publication queue after acceptance. In other words, to be eligible for publication in JoSoTL, manuscripts should not be shared publicly (e.g., online) while under review (after initial submission, or after being revised and resubmitted for reconsideration), or between notice of acceptance and publication. Once published, authors are strongly encouraged to share the published version widely, with an acknowledgement of its initial publication in the Journal of the Scholarship of Teaching and Learning.