Thesis: “Educators will (should) stop insisting on the inequality of outcomes”


In the following series of videos, Dr. Bill Cope of the College of Education at the University of Illinois calls for educators to move away from student assessments that attempt to place students along a bell-shaped curve of educational achievement, seeking instead to move all students toward similar proficiency (mastery).

Parts 3 and 4 describe in detail the social learning platform that we have employed with veterinary students to analyze their ability to write a case analysis, perform peer reviews of each other’s work, and even self-reflect on their own engagement in improving their work. In other words, rather than evaluating only the “final product” at a single time point, the platform follows the student’s progress and participation in identifying communication or content issues in other students’ work, aiming to produce a more uniformly high-quality outcome for all students following review and revision of a first draft.
As a participating instructor and collaborator of Dr. Cope, I should add that our goal was to improve all students’ ability to reflect and think critically about what they know and don’t know, and to teach them the practical aspects of evidence-based medicine. To be honest, our experience with first-year veterinary students was similar to that described by academics attempting to teach evidence-based medicine at 22 North American veterinary schools in the J. Vet. Med. Education paper on which I commented in the previous blog post. That is, perhaps because they are limited to 5-10% of the students’ instructional time (and, yes, of their participation grade), the critical clinical thinking (problem-solving) exercises do stick out “like a sore thumb” within the largely content- and fact-driven curriculum delivered mostly in lectures. As Dr. Cope points out, the final versions of the student case analyses reflect impressive depth of thought, research, and writing for first-year veterinary students. Nonetheless, a fundamental question remains for us: if we don’t alter curricular expectations throughout the four-year experience, can we expect students to carry the lesson of team-based peer review and quality assurance from the first year of vet school into their careers as problem-solving veterinarians?

With that preamble, I encourage you to at least review the first and last of these short videos to gain an impression of what Dr. Cope is encouraging all educators to consider: mastery-based education powered by the affordances of new computational learning platforms.

5.1: Mastery Learning Perspective

5.2: The Not So New School

5.3: Learning Analytics in Scholar – specific example of use of the Scholar learning analytics platform to follow veterinary student progress through a case-analysis writing project with peer review.

5.4: Visualizing Learning

5.5: Summary: New Learning: Because We Can We Should

Is there a lesson for vet medicine?: “Educating in a Neurodiverse World”


If you haven’t taken the opportunity before, I encourage you to take a look at the learning platform developed by TED-Ed, as well as to listen to a very insightful presentation by Brian Kinghorn, accompanied by review and thought questions and links allowing the learner to “Dig Deeper” into the subject. I’ve even added a veterinary medicine-centric open-ended thought question under the “Discuss” section, following those posed by the lesson’s author. I’d be interested in your thoughts, posted via that lesson or as comments to this blog post, on the following:

How should veterinary medicine consider the neurodiversity amongst its students when designing and refining curricula? In your comments, go beyond the issue of students needing testing accommodations, and consider what Brian Kinghorn notes as the full spectrum of neurodiversity, in particular, how this range of students best learns, and whether progression rates and pathways through a curriculum might be customized. Finally, how might considerations of neurodiversity inform how we direct students toward different areas of veterinary medicine?

Teaching Evidence-Based Veterinary Medicine


Shurtz et al. recently published “Teaching Evidence-based Veterinary Medicine in the U.S. and Canada” in the Journal of Veterinary Medical Education.

The authors reported on their survey of faculty and librarians at 22 of the 35 North American veterinary colleges. An average of 4 representatives from each institution responded to questions about where and how they taught evidence-based veterinary medicine (EBVM) in their curricula. The most common skill taught was literature searching (61%), followed by applying evidence to patient care (48%), critically appraising the literature (40%), formulating a clinical question (38%), and assessing the success of clinical application of evidence (19%). Most of these skills were taught in didactic clinical courses, with the exception of critical appraisal, which was generally taught in basic science courses.

The top 3 information resources they taught and/or students used were PubMed/MEDLINE (96%), CAB Abstracts (42%), and Veterinary Information Network (41%). Surprisingly, librarians were engaged in the instruction only 41% of the time.

The survey also asked instructors about the challenges they faced in implementing EBVM instruction. Key problems mentioned were:
1. Students were resistant to using the literature to answer clinical questions.
2. Students often felt that the answers should be provided to them by instructors, not obtained following a search.
3. Faculty and students saw that faculty didn’t often overtly practice EBVM, “relying on their own clinical experiences and expert opinion rather than research-based information.”
4. Students often searched Google and VIN, steering them toward findings not necessarily grounded in research results.
5. Faculty were resistant to assigning students literature searches on clinical rotations.
6. Faculty simply didn’t see enough time for EBVM in the curriculum, as it competed with primary content instruction.

Perhaps one quote provided by a survey respondent summarizes the real challenge:
‘‘The veterinary curriculum is overfull. . . . Having too much material to commit to memory leaves little time for students to focus on the critical role of research in advancing clinical practice.’’

So, it appears that most veterinary medical curricula still hold “content as king.” Shouldn’t we be looking for ways to put processes such as critical thinking and evaluation of current evidence a bit higher on the priority list? Content and knowledge evolve. The practice we give our students during their training in applying information WILL transfer 5-10 years into the future. Much of the content we assiduously lecture to them about will NOT.

The above image is based on the EBM Page Generator (2006) from Dartmouth College and Yale University and the Coursera MOOC “Understanding Clinical Research: Behind the Statistics” (2016).

Competing for a Learner’s Attention – from LearnDash blog


A nice, concise review by Justin Ferriman, CEO of LearnDash, of what it takes today to develop engaging online content.

Of particular note for today’s students, he says:

To combat this challenge you have to design your course in a way that holds user attention in short bursts. Videos need to be concise (eight minutes max) and so does the content (use graphics instead of text where possible).

Let a learner build up some momentum in your course. Make the lessons “snappy”, bite-sized chunks that can be consumed easily. If you have checkpoint quizzes keep them to three questions maximum so users can cruise to the next lesson.

These recommendations certainly apply to today’s busy medical professional student also!

Continuing Reflection on PISA Problem-Solving Test Results


Continuing our analysis of the PISA 2012 problem-solving test results, the OECD chose to compare performance on individual problem-solving tasks to the overall average. Note that this average might not necessarily represent the ideal balance of strengths across problem-solving skill subsets. Students from Europe and North America tended to perform less well than those from top-performing countries primarily on tasks associated with knowledge acquisition. In most countries, there is lower performance in students’ ability to “turn information into useful knowledge,” as evidenced by performance in the dimensions of “exploring and understanding” and “representing and formulating” problems.

Irish and U.S. students performed better on interactive items than students in Nordic and Central European countries. The OECD suggested that educators in the latter areas might need to foster student behaviors such as “being open to novelty, tolerating doubt and uncertainty, and daring to use intuition to initiate a solution.” For many countries (see, for example, Spain, the UK, and Germany in the graphic), no particular areas were stronger or weaker across the spectrum of problem-solving skills.

Again, these findings were intended simply to allow schools in participating countries to examine their curricula for 21st-century relevance. Clearly constructed around real-life skill sets related to problem-solving, this study showed how well prepared 15-year-olds (now potentially university students) are to solve novel problems not addressed by rote curricular offerings. The study directors conclude that problem-solving skills must be strengthened by addressing them in the context of other studies. Teachers, for example, might spend more time reflecting openly with students as a model for students to do the same. Essentially, this supports the metacognitive aspects of learning. So, instructors should encourage students to describe the steps they took toward a solution, and not simply be satisfied with a correct answer!

Reflecting on International PISA Problem-Solving Test


The Programme for International Student Assessment (PISA) tests evaluate the academic skills of a subset of 15-year-old students in countries of the Organization for Economic Cooperation and Development (OECD), consisting of 35 countries worldwide. The organization also tracks economic, educational, and equity data, and a full read of the results document will reveal correlations of PISA results with these parameters. An overall geographical map of 2015 PISA findings is available online, as are the full report and database of country results. The 2015 version of the test, offered every 3 years, included results in science and mathematics, which should be of interest to medical educators.

The next few postings in this blog will attempt to take a deeper dive into some of this data, focusing on its potential relevance to medical educators.

One of the missions of this site is to cultivate a dialogue on the role of intentional training of our students in problem-solving. You’ll even see a learning module on the site entitled Blended Learning and Critical Thinking, which suggests strategies relevant to veterinary educators. The PISA test last evaluated individual problem-solving skills in 2012, and then evaluated collaborative problem-solving in 2015. Let’s take a look first at the individual problem-solving results from the PISA 2012 test. We might be disappointed that these results are “dated,” but note that the students evaluated then are the cadre of students currently in veterinary schools.

Firstly, the test purported to evaluate through a series of static and interactive computer exercises of escalating difficulty the following processes:
“• Exploring and understanding
• Representing and formulating
• Planning and executing
• Monitoring and reflecting”
Some interesting patterns were revealed. Students in Singapore, Korea, and Japan scored highest on the problem-solving section. About 20% of students worldwide were able to solve only simple problems that referred to familiar situations. At the other extreme, only about 11% of 15-year-olds in this sample could systematically explore a complex scenario requiring multiple steps, including consideration of constraints and feedback on results. Problem-solving performance correlates positively with performance in the other assessed subjects, but more weakly than the traditional academic areas of math, science, and reading correlate with one another. We’ll come back to additional data on this later.

Some countries perform more strongly on problem-solving than would be predicted by their performance in these traditional areas. This implies that some educational systems provide proportionally more experience in problem-solving than others. It also implies that we might want to look to these systems for clues, and that each country might consider adjusting its approach to seek the ideal balance of educational priorities.

For example, students in China, Korea, and Singapore perform strongest on “static” problems that require understanding, formulating, or representing new knowledge, compared to other types of problems. In contrast, students in Brazil, Ireland, Korea, and the United States perform best on “interactive” problems requiring discovery of some of the information needed to solve the problem. Static problems are those typically found in textbooks. “Real-life” problems are more interactive than that, and solving them may not benefit as much from rote practice of certain problem types. However, the “static” problem-solving skill set is also needed for solving “real-life” problems.

The static or interactive nature of the problem situation is related to how information is presented. Static problems, where all relevant information is disclosed at the outset, are the typical textbook problems encountered in schools, whereas in most contexts outside of school, the information relevant to solving the problem has to be obtained by interacting with the environment. Static problems can thus be regarded as a special case of interactive problems: the set of skills required to solve static tasks is a subset of the skills required for interactive tasks. In addition, though, students must “be open to novelty, tolerate doubt and uncertainty, and dare to use intuitions (‘hunches and feelings’)” to initiate a solution. The PISA report suggested that students with poorer performance on interactive problems might benefit from opportunities to practice “curiosity, perseverance and creativity.”

Lest educators from any country become complacent about these results, it is sobering to move away from averages and pay attention to the fact that we would certainly want all of our students to perform at a higher level than is represented by the following cumulative percentiles. For more detail on the nature of each level of problem-solving ability, see page 57 of the report and the graphic at the top of this post:
91.8% at Level 1 or above: can explore a problem, but generally only when they have encountered similar situations before.
78.6% at Level 2 or above: can explore and evaluate a small part of a problem, and can test a simple hypothesis in linear fashion.
56.6% at Level 3 or above: can explore a problem and infer simple relationships between its components, can fully deal with one condition, and can develop tests for a hypothesis.
31.0% at Level 4 or above: can explore a “moderately complex problem in a focused way,” understand connections between components of a problem, adjust plans following feedback, and formulate a hypothesis and develop plans to test it.
11.4% at Level 5 or above: can explore a scenario to understand the structure of the underlying information, react quickly to feedback, and think ahead to develop a plan that accommodates given constraints.
2.5% at Level 6: can develop models of diverse problem scenarios, facilitating efficient problem-solving; can set up general hypotheses about a system and propose a way to test them through multiple steps with appropriate monitoring; can quickly establish when inadequate information is available to reach a conclusion; and can modify strategies to accommodate both explicit and implicit constraints.
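Since the percentiles in the list above are cumulative (each figure counts students performing at that level or above), the share of students at exactly each level can be recovered by differencing adjacent figures. Here is a minimal sketch in Python, using only the numbers reported above (the variable names are illustrative, not from the PISA report):

```python
# Percentage of 15-year-olds performing at or above each PISA 2012
# problem-solving proficiency level (figures from the list above).
cumulative = {1: 91.8, 2: 78.6, 3: 56.6, 4: 31.0, 5: 11.4, 6: 2.5}

# Share of students at exactly each level: subtract the next level's figure.
exact = {
    level: round(cumulative[level] - cumulative.get(level + 1, 0.0), 1)
    for level in sorted(cumulative)
}

below_level_1 = round(100.0 - cumulative[1], 1)  # students below Level 1
below_level_5 = round(100.0 - cumulative[5], 1)  # students below Level 5

print(exact)          # {1: 13.2, 2: 22.0, 3: 25.6, 4: 19.6, 5: 8.9, 6: 2.5}
print(below_level_1)  # 8.2
print(below_level_5)  # 88.6
```

Differencing this way shows that the bulk of students (roughly 67%) sit at Levels 2-4, while only 11.4% reach the Level 5 threshold, and 88.6% fall below it.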

Which of these student performance levels would YOU want treating you or your animals? 

For the sake of argument, we’d suggest that minimal performance for a medical professional should be Level 5, if not Level 6! So there is clearly lots of room for development for almost 90% of students worldwide. But do we develop our medical professional students toward these ends? More later on correlations between educational systems and PISA results, and on the 2015 results in collaborative problem-solving.

Article in J. Vet. Med. Education on Development of Clinical Reasoning Skills


A recent study by Vinten et al. in the J. Vet. Med. Education documented the outcome of focus groups among faculty, staff, and recent graduates of the University of Nottingham School of Veterinary Medicine and Science (SVMS) on the adequacy of current training in clinical reasoning skills. One conclusion was that student development of this skill would be enhanced by giving students increased responsibility for the outcomes of clinical decisions within the training program. Furthermore, financial issues were felt to be given short shrift during training, leading to less preparedness when graduates later faced actual practice decisions.

The fundamental message from this study supports the idea of consistently giving students “real-life” practice in clinical decision-making. A parallel question follows naturally, but is not addressed by this study: since veterinarians and other medical professionals are constantly making decisions, some with quite high stakes, why are we as medical educators not carving out more time for students to practice these skills with cases that include all of the messy aspects of reality? If we did, we would include cases with distracting information (inconsistent or missing lab data), weave in the uncomfortable reality of a client’s financial situation, and confront the other ethical dilemmas of professional life. The dilemma for many veterinary educators becomes: what can be cut from the curriculum to make room for this kind of training?