
The Competency Manifesto: Part 1

Part 1: What Are Competencies and Their Challenges to Medical Education and Admissions?

 
Among the students I have known as a peer or a professor, the phrase "book smart but common sense dumb" could apply to many of them (trust me, I was one).  For quite a long time, pre-health students placed their faith in the maxim that if you make the grades and test scores, just about everything else will come easier for you.  While I cannot say that advice holds any less weight today, there are always a few people with amazing credentials who turn out to be much less than their profiles suggested.
For a number of years, this push to look at the candidate as "more than just numbers" has permeated industry and education.  Numbers do not predict performance in areas where performance cannot easily be quantified.  Patient satisfaction with interactions with health care providers did not improve, and for those in under-resourced environments, some could politely describe the professional-patient relationship as "indifferent."
Medical education finally realized that professionalism expectations for behavior and conduct had to be part of the equation.  In 2001, the Accreditation Council for Graduate Medical Education (ACGME) adopted six core competencies, which began a domino effect of viewing medical education as the acquisition of competencies.  Residency directors must now show that their residents have acquired and further developed these competencies based on direct observation and feedback, in addition to any board exams or other opportunities to present cases before faculty and peers.
The competency list shocked medical schools into thinking about the competencies they were developing in their students, and it even permeated the training of postdoctoral researchers, where I had a role developing the professionalism domain.  Now competency expectations have finally emerged in undergraduate science education and pre-health admissions, and the challenge falls on students, applicants, advisors, faculty, and admissions officers to develop curricula that address those competencies and tools that evaluate competency acquisition.

What is a competency?

“…competencies will be more meaningful if trainees understand specifically how they relate to important professional activities in their own specialty.”  (Acad Med 86(2):161-165, February 2011).

“Competency is the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.”  (JAMA 287:226-235, 2002; cited by AAMC/HHMI Scientific Foundations for Future Physicians, 2009).


In short, a competency encompasses not just the acquisition of knowledge but also the consistent application of that knowledge, especially in a relevant context, in a way whose mastery can be assessed or measured.  For example, my students could recite the central dogma back to me on exams, but a more competent individual can show how it is fundamental when discussing a current research topic or explaining how certain drugs in HIV treatment interfere with these processes.  To use another example, knowing all the words in a dictionary won't help you if you don't know how to put those words together in the right situations ("where is your bathroom?").  The more competent an individual is with a language, the more complex one's sentences can be to convey ideas or messages to others.

AAMC/HHMI Scientific Foundations for Future Physicians

To extend this analogy, the future of medicine requires health professionals to be fully conversant in an increasingly complex scientific and technological world.  If one were to try to teach "everything" about medicine to a class of medical students, one can argue that it would clearly take more than four years without breaks.
In a world where a significantly updated smartphone or tablet is released every year, much of the information taught to students will likely be irrelevant or proven wrong within ten years after graduation.  Clearly, it is not so important for students to have biblical knowledge of medicine – as was expected of physicians in the early 20th century – but to recognize how to manage the four or five new bibles of knowledge that will be created and re-edited during their professional lives.
In 2009, a blue-ribbon panel of scientists, clinicians, and educators convened by the Association of American Medical Colleges and the Howard Hughes Medical Institute published a set of science competencies (foundations) for future physicians (SFFP).  In a way, these competencies update the decades-old discrete, didactic, course-based curriculum taught to all health professionals (gross anatomy, biochemistry, histology, etc.), which was less effective in helping physicians (and their patients) with clinical applications.  With the scientific competencies, students are now expected to develop not only a general understanding of those scientific principles but also to apply that knowledge in their clinical experiences (as often seen in systems-based and case-based curricula).
Medical schools must be able to assess their students' acquisition of these competencies in their curricula when it comes to promoting and graduating students, and to align them with the competencies expected for residency training.  For individuals in medical education who had previously been constrained in finding innovative ways to train new physicians, the SFFP serves as a gauntlet thrown down.
Individuals who aspire to enter professional school are not exempt from the SFFP.  The panel summarized expected competencies for entering students, including foundational competency in mathematics, research methodology, physics, chemistry, and four major areas of biology and biochemistry.  Many medical schools have since begun to adopt these competencies as admissions expectations for incoming students, which means that grades – while still very important – may need to be supplemented by scholarly activity (research) to demonstrate a higher level of competency.
As a result, undergraduate institutions are challenged to adjust their "pre-health curricula" and consider creating innovative courses or interdisciplinary majors where students can further apply their knowledge and become more competent in these areas.  In an ideal world, this means that science professors' letters of recommendation should be more explicit in discussing each applicant's competency development from their perspective, so mentoring relationships become even more critical to an applicant's profile.  Finally, essay and interview questions can be crafted or evaluated to test the level of scientific competency development, going beyond "tell me about your research."

The near future

The competency train is not stopping with science.  Competencies can also be applied to an understanding of the cultural or political issues that undergird the delivery of health care, not to mention the psychological or personal behaviors that correlate with a successful long-term career in the profession. The AAMC Behavioral and Social Sciences Expert Panel is expected to release a report this year on this subject.  Indeed, I am already measuring all these competencies among pre-health advisees and impending applicants for their institutional letter, which I will discuss in a future article.
Emil Chuck, Ph.D., is the Health Professions Advisor and Term Assistant Professor of Biology and Bioengineering at George Mason University.  He has worked with Kaplan Test Prep & Admissions as an admissions consultant, student advisor, and test prep instructor.  There are no conflicting relationships that are relevant or associated with the information in this article.

Mark • February 10, 2011
Has there been any testing to see if students admitted under this new rubric perform any better or worse as students and physicians than students admitted using the old criteria?
Emil Chuck
I'm not sure what you mean by "testing," and I don't want to give away what I'm writing about in my next article on this topic. I will say that competencies are being used in clinical settings to address performance (determine who performs better or worse). It is much more a mindset of the ideals we wish to see in education overall, and in medicine for sure. I am fairly sure that most health professions programs have some statement about successful learning competencies for all students and trainees at that institution, but competencies for entering medical students (the premed-to-med transition) are starting to be articulated in admissions beginning with the entering class of 2012. One thread has already pointed this out regarding Johns Hopkins, and I know Harvard has had this articulated for a few years already. I would be interested in the perspectives of medical students (in all health professions fields) on how competencies are viewed or used in their education.
blah • February 11, 2011
Pre-Health advisors? Hahahaha! They haven't the slightest clue of what's really going on, for the most part. Good luck getting them on board with something that is more intricate. Maybe someone should be testing their competencies, because the ones I have known are seriously clueless. Anyhow, great article otherwise, although it will be a slow implementation process, especially at the undergrad level.
Mark • February 11, 2011
You're the one who is advocating for the change; it's up to you to show that these new admissions and teaching methods produce a better outcome. Are the students admitted under this rubric getting better Step scores? Better clinical evals? Better ratings from residency directors? Doing more research? How are you testing that the changes you are making are not just meaningless hoops to jump through and keywords to learn for premeds and medical students?
Emil Chuck
@Mark: Those are great questions, but they should be asked at the medical school/residency level, since I would not have access to board or shelf scores. I'm saying that whatever misgivings you have about this are probably moot for those who really have the power to make these changes (and I don't have that power). These competencies have been adopted within the last five years, so research that adequately addresses your questions is not yet available. Any longitudinal study is also going to take a very long time to get at the answers you are looking for.
Dave • February 12, 2011
The fact that these ideas seem to trend toward "quantifying the unquantifiable" scares me. I see this system easily being abused by administrators, because I see no way in which it can be objective enough. We should get right down to the real question here: should professors be able to shatter an individual's life goals just because they don't like him/her? Well, this is what you will see more of if this system is adopted (though honestly, I see this already happening to some degree in the 3rd year of medical school, when completely subjective clinical evaluations can make or break your clinical grades, which count for so much in deciding which residencies you can get into and where).
For many of us, unfortunately, everything up to (and in many ways including) medical school has been a means to an end. I say "including medical school" because many would argue that medical school is a means to getting a good residency position in your chosen field (which then provides you with all the tools you need to become a good doctor). The unfortunate thing about this is that rarely do any of us really achieve the higher levels of Bloom's taxonomy throughout our learning. Colleges, medical schools, and residency programs are obsessed with quantifying candidates in some way (grades, national exam scores, publications, numbers, numbers, numbers), and we as candidates must be obsessed with giving them what they want every step of the way. This makes rote memorization and regurgitation the norm. Why is there rarely more than that? Because it's hard to do. Because it's time-consuming to do. Probably most importantly, because it's much more expensive for the administration side to facilitate. Just look at the way most medical schools teach. Pack a lecture hall with close to 200 people so that they can passively "receive knowledge" (the bottom echelon of Bloom's taxonomy). McGill University had the right idea by implementing a curriculum made up entirely of problem-based learning, and some US medical schools adopted this same methodology. This is a great way to facilitate learning because it provides much more social interaction in small groups, it enables higher-order thinking to analyze, synthesize, and evaluate ideas (the higher echelons of Bloom's taxonomy), and you generally remember things much better if you take a higher stake in both the question and the answer. Much of this, unfortunately, has been dropped here in the US because medical schools quickly learned that it is much more difficult, time-consuming, and expensive to do it the right way compared to the much more efficient traditional large-lecture format.
Problem-based learning at most institutions is used minimally and as nothing more than a cheap trick to be able to tell applicants that they offer the best of all worlds.
With all due respect, I see this "competency fad" as nothing more than a cheap patch job to plug a deluge of teaching malfeasance on the part of medical schools. If you want to do the right thing, quit trying to treat us all like numbers and instead prove to us you really care about giving us all the tools we need to do the best job for our patients. Going back to the McGill model would be a great start (will it happen? No! Why? $$$$$$$$$$$$$$$).
Dale • February 13, 2011
@Dave: You are exactly right, and this subjective testing is where 90% of your grades and 100% of your evaluations come from in the 3rd and 4th years of medical school. Basically, in medical school, if the person evaluating you does not like you for whatever reason, they can put whatever they want on your record and give you any grade they want. It can make smart students fail and vice versa. Also, in reality, at most institutions there is no action the student can take against this abuse. I would advocate more objective testing, as this is the only fair way to evaluate a student across the board. If they want more competent students, how about trying to teach for a change rather than reading off a PowerPoint for 3 hours at a time? What one evaluator thinks is a competent student can leave the next evaluator with a completely different opinion based on the way they interpret problems and solutions. The only thing this will increase is politics in academics, not competency.
Drew • February 13, 2011
Great article. As an undergraduate student who recently studied abroad at the University of Sydney, Australia, I found that a form or variant of the McGill system was utilized in all the science courses; laboratory assignments were not boring prelab write-ups and report writing. Instead, laboratory assignments were all problem-based. Each week, we were given a specific problem or situation, and students in groups of 4 had a week's time to analyze it and develop a plan or method to approach the problem, not necessarily answer it. A huge focus lay in the development of a strategy and checking it for loopholes or mistakes. These assignments stimulated idea sharing as well as emphasized the notion of planning ahead while making adjustments accordingly. Even lecture-based material was reviewed in groups and stimulated by broad discussion of hypothetical situations, problems, and personal experiences. This was a large deviation from my "public" education in the USA, where lecture halls were 200-300 students scribbling into notebooks while a professor stood in the front reading from a PowerPoint. My stint in Sydney was my most productive term, and that continued as I returned to the USA.
I understand Dr. Emil Chuck's concern and his advocacy for change. There is a need for education (especially in the undergraduate level) to emphasize what my Sydney professor called "critical thinking." However, I sympathize with "Dave" as well; slapping on a new grading rubric will not do much. "Competency" must be cultivated and it stems from interest in the subject coupled with creativity, not forced by a checklist. This rubric can be broadly interpreted and I feel that many students today will take advantage of this.
My pet peeve with this article is how this rubric of competencies is framed as the responsibility of the student to achieve, that students must attain this level of growth. While this is correct, I feel that it should also be directed at the universities and the institutions as well: to ignore the "numbers" and provide the necessary tools and materials to stimulate competency. I mean c'mon, be honest. No one wants to be in class 3-4 hours a day scribbling notes from a PowerPoint. Why not take a class trip to the school hospital and stimulate fascination? If it could be done in Sydney and in pre-collegiate education, then it should definitely be done at the collegiate level. Maybe different methods of evaluating students may work, such as the new interview processes that many medical schools are utilizing or new personal essay questions. The point is: this competency list is good, but it should not be viewed as a checklist of chores. Let's not forget that ancient physicians, though taught by rote memorization and didactic learning, were tutored in a system similar to a master-apprentice relationship, which grew from the pupil's interest in the subject. Didactic learning hasn't changed, but the system has.
Hank • February 13, 2011
So, the author is advocating that we train competent doctors and scientists? What a ground-breaking revelation! I wish that I had thought this one up. Geez man, you're going to win a Nobel or two, I bet.
How does inventing a new buzz-word to dress up a common sense notion count as development? Call me a cynic, but it seems like this will result in more pointless "educational" activities that detract from my time for actual learning; that is, "book learning," which provides a foundation for actual competence.
The main goal of the "Competency Manifesto" is to provide employment for education "experts."
Ro • February 14, 2011
@Hank
Okay Hank, it seems you have figured out that "book learning is the foundation for actual competence." Now can you explain diabetes, for example, to a 5-year-old kid in terms he can understand? Half of doctors can't, and therein lies the problem. There is a difference between learning the material and actually making another person understand it.
Give a person a paper-and-pencil test and then give them an oral exam immediately afterward. You will definitely see a difference. I welcome new ways of assessing applicants.
And half the people who commented don't get the idea that yes, you are trying to play the impress-the-admissions-committee game. This applies to all jobs in the real world. Sometimes you may only get a CV or a 10-minute presentation to explain why a hospital should hire you or undertake a project with you. This is how biased the world is!
Dave • February 14, 2011
@Ro
What you don't get is that this obsession with standardization and assigning numbers to people is exactly what holds us back and prevents us from adopting a deep approach to learning. It makes us adopt more "surface" and "strategic" approaches to learning (do a Google search for "deep, surface, and strategic approaches to learning" in case you aren't familiar with these terms, and note that the "deep approach to learning" is the ideal method for higher education because it helps learners achieve the higher echelons of Bloom's taxonomy). Though it would be great and ideal to learn topics in depth (and in ways that enable you to explain diabetes to a 5-year-old), most of us learned a long time ago that you can get to where you want to go a lot quicker and easier in college and medical school by "book learning," as you say, and simply jumping through the hoops. With the time constraints and the amount of material you have to memorize for these standardized exams, you aren't going to find very many people (no matter how motivated they may be) who will sacrifice that dream of getting into medical school and that competitive residency at their ideal location just because learning theory has shown that learning things in a different way (the deep approach) is better for you in the long run. You go with what you know works most efficiently to help you attain your goals, and these goals are almost always just the sum of achievement of very short-term goals (which makes the surface approach of rote memorization and regurgitation ideal). We will continue to invest 100% of our efforts into what it takes to achieve those "numbers" because we know what our goals in life are and we know what it takes to achieve them (rote memorization and regurgitation).
Your last paragraph is a very stereotypical cliche of "life's not fair, 'stuff' happens, deal with it!" Yeah, well tell that to your 5 year old who wants to learn about diabetes (or even worse, your patient who needs to learn what to do about his/her diabetes and his/her physician can't do an adequate job of explaining it). Changing the color of the hoops we need to jump through isn't going to solve anything. The change needs to be a transformative change at all levels regarding the way we are EDUCATED (not just changing the way we are evaluated). And by the way, do you really think faculty at universities and medical schools are really going to invest the time and money needed to proctor oral exams for everyone for every course??? Oral exams and essay questions used to be a lot more common (and yes, it is a lot harder to "wing it" and guess on these exams), but they have fallen out of favor because educational institutions have realized long ago that multiple choice exams are a lot quicker, easier, and cheaper to administer and grade. Not only that, but it is a lot harder to assign nifty little numbers that mean anything to admissions committees in relation to such subjective exams containing oral and essay questions.
Just don't forget to look at the administrative side. Their goal here is $$$$$ and transforming people into numbers. If they want better students and doctors along the way, then you need to give the administrative side (and the general public in need of good doctors) your own advice: "life's not fair, 'stuff' happens, deal with it!"
Emil Chuck
Just a few points of clarification on some of the comments above.
First, I'm not the one calling for these changes. I am just saying these changes are coming to a head.
@Dave: I really appreciate your responses, but I wanted to get information from you regarding the following: "Much of this, unfortunately, has been dropped here in the US because medical schools quickly learned that this is much more difficult, time-consuming, and expensive to do it the right way compared to the much more efficiently traditional large-lecture format. Problem-based learning at most institutions is used minimally and as nothing more than a cheap trick to be able to tell applicants that they offer the best of all worlds." Could you tell me what information you have about schools dropping such curricula? I see most of them moving towards something similar to a hybrid of PBL and lecture-based, with many more adopting collaborative classrooms and small group teaching. Indeed many of them have faculty who are kicking and screaming about these changes in the same way you've expressed skepticism. (And I can definitely understand why.)
Second, I am not sure what one can do to completely remove elements of subjectivity without other processes in place to contest evaluations. Trainees are not in a position where they feel comfortable challenging a superior they don't click with, whether during a 200-person lecture or a clerkship rotation. I agree this is where all the issues of perceived (or real) abuse and exploitation come into play. This is why I make it very clear that mentoring is going to contribute even more to future success.
I agree you cannot just replace standardized exams with oral exams all the time, but medical schools can videotape your every interaction with patients and have a panel of anonymous faculty grade you. The board exams cover clinical skills, and that includes these "soft skills."
Dave • February 15, 2011
@Emil Chuck
You are very naive if you think that just because medical schools say they incorporate problem-based learning in with their large lectures as part of their curriculum, this is anything other than a sales ploy. This is definitely the case at my medical school. In researching the schools I applied to, I was always told the same thing: "oh yeah, we do both large lectures and PBL," and I thought to myself, "I can't go wrong because I will get the best of both worlds. After all, what if I find out that I really don't like PBL that much?" Come to find out (and this isn't just my experience; I've talked to many medical students across the country about this and most say the same thing), PBL is utilized minimally and in many cases isn't even a graded component of the curriculum. It just exists so that medical schools have the ability to tell applicants "yeah, we do PBL." At my medical school during the first two years, we had large lectures anywhere from 4-8 hours per day, 5 days per week, and small-group PBL sessions were only one day a week for just 2 hours!!! How's that for incorporating PBL into the curriculum??? I know there are some schools that offer a 100% PBL curriculum (I think Drexel is one of them, and they offer it to a limited number of students who request it), but I haven't heard of it being "the rule rather than the exception" as you seem to suggest. Most places seem to do it similar to how my institution does it. You know what most of us do when we look at PBL as a 2-hour-per-week commitment? We try to short-change that experience and spend as little time as possible devoted to that style of learning. We just do the bare minimum that is required because we know the real money is in the larger lectures where the grades really matter. Most of us see PBL as a joke.
No, you really need to invest the students completely in PBL in order to make it work. This has to be the primary mode of teaching, not just a sales pitch gimmick like how it seems to be used at most medical schools. I've met many attending physicians who came from medical schools that used PBL 100% of the time as their teaching method. I remember asking one of them before applying to medical school: "what are lectures like in medical school" and he said to me: "I don't know, I didn't have any". This was at a US medical school that still exists, but they got rid of their 100% PBL curriculum (which I alluded to in my previous post that you quoted, and it happened at other medical schools too just the same). I asked him why they switched and he told me "in order to make it work the right way you need highly-trained facilitators who do this as essentially a full-time job, that is a very costly proposition for medical schools on a tight budget, which most of them are".
As for the rest of your comments, you misunderstand my stance on this. My overall premise is that I think we are simply over-evaluating students, and it needs to stop, because believe it or not, you can accomplish really great learning without the feeling that you are on the clock and your grade is on the line. Look at what even you are saying up until this point: you are so obsessed with evaluating everything in education that you can't just let people learn without assigning some kind of number to it. You are just pandering to the admissions committees if you can't move away from this obsession with evaluating and grading everything.
As for your last comment on "board exams covering clinical skills and that includes these 'soft skills'" here is what I say about board exams. USMLE Step 1 = just memorize First Aid for Step 1 and you will do really well. USMLE Step 2CS = just spend a few days and read the 10-page section explaining what you are expected to do and you will pass. USMLE Step 2CK = just take 4 weeks and study Crush USMLE Step 2CK and you will do really well. The board exams are meaningless and I beat them every time because I am a good test-taker.
Joe • February 15, 2011
While I am only a student starting my first year of medical school and, consequently, cannot comment on my experience with board exams, I think the push for medical schools to move away from concrete numbers and evaluate candidates on the mentioned soft skills is growing. And in regards to PBL in the US, Lake Erie College of Osteopathic Medicine has two branches (Bradenton, FL and Greensburg, PA) that teach in 100% PBL. They also have some of the lowest tuition rates in the nation with a first-time pass rate well above the national average. Something to think about.
Ro • February 15, 2011
@Dave
Things are changing. PBL is being integrated into more schools. The school in my home state of Hawaii, the John A. Burns School of Medicine, has PBL, and it is nearly 50% of the curriculum. They have PBL sessions for 3-4 hours on Tuesdays and Thursdays every week. So time and $$$$ are not always issues, though I do agree that they can be.
I also agree that standardized tests do not really tell us anything, and that is the point I am trying to make. I have never been a good test taker, and I doubt I ever will be. Try to see it from their side as well. How do you best evaluate a student? You can throw in all the educational tools in the world, but do you really know they are learning the material needed and not just what they want? This is one flaw of PBL, because it gives a case study and students just think all they need to do is solve it. If the case study is on one disease of the heart, students should be studying more on the heart and the diseases that affect it, not just that one disease. So how do you fix misunderstandings like that in the learning process? This is why I do feel new ways of evaluation need to be made. However, I don't want them to be tacked on to old ways (like paper-and-pencil testing) but instead to replace them! Now my question to you is: how would you make sure students are learning the right material?
FYI, I never liked complete "book learning," and please read the comment above mine, because it was he who believed in it, not me. I wanted to make the point that learning should be more than strict book knowledge and more about taking that knowledge and being able to relate it to other people. Yes, a doctor should be able to explain type 1 diabetes to a five-year-old, because he or she may get the disease.
Emil Chuck
Also, to show how it's viewed when it comes to residency and graduate medical education, here's a 2009 editorial from Radiology on competency-based training: http://radiology.rsna.org/content/252/2/322.full (Radiology 252:322-323, August 2009; doi: 10.1148/radiol.2522081999).
Emil Chuck
One more report that was released today: Vision and Change in Undergraduate Biology Education – A Call to Action. http://www.visionandchange.org/
