Last Updated on June 27, 2022 by Laura Turner
Part 4: How Competencies Are Evaluated
(Part of this article is based on another article I have published: “Competency-based holistic evaluation of prehealth applicants” (The Advisor [NAAHP publication] 29(2): 30-36, 2009).)
If you’ve ever applied for a job with the federal government, USAJobs.gov will often ask you to self-assess your competency development as follows:
A – Lacks education, training or experience in performing this task
B – Has education/training in performing task, not yet performed on job
C – Performed this task on the job while monitored by supervisor or manager
D – Independently performed this task with minimal supervision or oversight
E – Supervised performance/trained others/consulted as expert for this task
While there is no universal standard for evaluating everyone’s competencies, it is clear that with the movement to identify competencies in interprofessional health care teams, a rating scale like this will be the ruler against which all trainees are measured. This has already extended to evaluating applicants to health professional programs.
The AAMC/HHMI Scientific Foundations for Future Physicians report notes four key steps to proper implementation of competencies for future curricular design reforms: identification of competencies, determining the components of the competency and expected performance levels, assessment of the competency, and overall assessment of the process (p.36).
In previous articles (Part 1, Part 2, and Part 3), I identified the competency domains expected of all individuals seeking health professional training and described how those domains are viewed by pre-health advisees preparing an application compared with their evaluators. In this article, I describe how our evaluation system attempts to follow the third step, assessment of the competency. I will detail the data we collect in our pre-application process to evaluate each advisee’s performance level and how that evidence influences our institutional evaluation of each applicant.
“Competency is the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.” (JAMA 287:226-235, 2002; cited by AAMC/HHMI Scientific Foundations for Future Physicians, 2009).
In a process where thousands of applications are filed and reviewed, individuals who develop the ability to communicate and articulate their own competencies in the application process will clearly have an advantage. In the evaluation system developed at George Mason University, each pre-applicant rates their own competency development, and those self-ratings are compared against the ratings given by solicited references and independent interviewers. In addition to the interview feedback, the committee evaluates several essays (of unrestricted length) that probe the reflection or experience justifying each applicant’s competency self-assessments. Usually each essay focuses mainly on one or two competency domains, but a respondent’s essay will often strike multiple domains at once. Furthermore, the content of the solicited letters of recommendation and the interview feedback is reviewed for specific evidence that justifies each referee’s assessment of the applicant.
With this in mind, consider a fictional letter of recommendation from a professor supporting an applicant, to show how our rubric standards are applied. Professor Smith recommends the applicant, citing the following evidence: “achieved the third highest grade in the class,” “asks interesting questions,” “often helps other students in lab,” and “writes well-organized reports.” In contrast, Professor Jones comments, “even though the student would earn an A+ in my class, I recall her questions in class revealed a sincere interest in the topic, especially when she asked how what I was teaching was related to a breaking news story about an invention…”. In a holistic evaluation, Professor Smith’s evidence suggests an academic foundation that is more likely “proficient” (the individual has completed sufficient training to reliably reproduce a core set of knowledge and skills, but must receive further training when confronted with new situations where that training is applied; see The Competency Manifesto: Part 3 for additional details), while Professor Jones’s evidence suggests “confident” (the individual is competent in an above-average set of knowledge and skills and demonstrates appropriate confidence in adapting to new situations that test that skill set; see The Competency Manifesto: Part 3 for additional details).
Competencies are measured and compared by reviewing the content of all written materials submitted to our committee, including those based on oral communication. The results of this 360-degree assessment of the applicant’s competencies are reviewed but are not automatically factored into the final recommendation. While our competency assessment tends to be based on a threshold standard, the assigned recommendations are also normative, comparing each applicant’s competencies against what is considered acceptable for matriculants. Determining “acceptable” has its limitations: a candidate our committee rates highly for one school may be average at another, which is why the competency domain of institutional fit cannot be addressed or factored in with this process.
Differences in Acceptable Competency Levels in Admissions
Not all health professions programs require institutional evaluations when a process like mine is available to an applicant, so the best that can be done is to showcase differences among applicants relative to the central application services for medicine and dentistry.
The most annoying question in pre-health advising is “How many students get in?” With competency-based admissions, the better question is “How competent must I be to get in?” For the last admissions cycle (entering 2010), the data are sorted by committee recommendation level against how far each applicant progressed through the admissions cycle (received an interview, waitlisted, or accepted). Overall, one or more applicants rated “recommend” or above were accepted into at least one of the health professional programs to which they applied.
As one would expect, the percentage of accepted applicants increases as the recommendation rating becomes more favorable. However, separating applicants into those applying to allopathic medical schools (AMCAS) and those applying to dental schools (AADSAS) reveals interesting differences. First, among dental school applicants, there was a remarkable difference in acceptance percentage by recommendation rating (100% “enthusiastic” vs. 43% “strong”) even though the groups’ average application GPAs (3.61 vs. 3.57) and DAT scores were very similar. This suggests that the competency-based recommendation rating correlates well with admissions decisions on the characteristics schools seek in applicants.
In contrast, the competitiveness of allopathic medical school admissions seems to attract individuals with more developed competencies. Whereas dental schools accepted 100% of applicants with “enthusiastic” ratings, medical schools accepted only 27% of our institution’s applicants with that rating. Indeed, the select few allopathic applicants with “highly recommended” ratings were accepted at a higher proportion than the other groups (75%), even though the average GPA of the highly recommended group was lower than that of the “enthusiastic” group (3.58 vs. 3.73). Traditionally, our advisees have not fared well on the MCAT, so the comparable acceptance percentages among our “enthusiastic,” “strong,” and “confident” groups may be related to the MCAT scores each group compiled (26.8 “highly,” 25.1 “enthusiastic,” 26.5 “strong,” 24.9 “confident”). More interesting were the comparable statistics for accepted applicants (GPA 3.57, MCAT 27.4) versus those waitlisted (3.64, 27.7). In general, the admissions process for allopathic medicine seems to value both a threshold level of performance on the MCAT and competencies that are clearly superior and maturely developed.
Future Signs of Admissions Trends
It must be mentioned that one cannot generically characterize all medical school admissions processes the way this analysis tempts one to do. Each school assesses these competencies and its own institutional fit separately, so a highly desirable pre-health institutional evaluation rating is no guarantee of admission to the health professional school of one’s dreams. In addition, the evaluation ratings are determined months before an applicant is typically interviewed at a school, so competencies may continue to develop throughout the months of the admissions process.
What it does suggest is that accurate self-assessment, development, communication, and demonstration of one’s pre-professional competencies in a more holistic admissions process may be a better indicator of one’s chances of admission to a health professional program. Furthermore, the threshold for competency development may differ for every major type of health professional program one considers for a career, so an honest review of the competencies expected of individual professions in a health care team would help thousands of pre-health students (and professional students) find satisfying future careers and specialties in the health care workforce. More importantly, a competency-based admissions process places greater weight on life experience, maturity, and personal growth. Instead of trying to fit preparation for a professional career into a four-year box, it should become personally more acceptable to realize that taking more time to explore one’s passions and interests may strengthen the competencies that are still weak at 21 years of age.
Table 1. EY2010 admissions final decisions and average GPA by committee recommendation.
Table 2. EY2010 AADSAS admissions final decisions
Table 3. EY2010 AMCAS admissions final decisions
Emil Chuck, Ph.D., is Director of Advising Services for the Health Professional Student Association. He brings over 15 years of experience as a health professions advisor and an admissions professional for medical, dental, and other health professions programs. In this role for HPSA, he looks forward to continuing to play a role for the next generation of diverse healthcare providers to gain confidence in themselves and to be successful members of the inter-professional healthcare community.
Previously, he served as Director of Admissions and Recruitment at Rosalind Franklin University of Medicine and Science, Director of Admissions at the School of Dental Medicine at Case Western Reserve University, and as a Pre-Health Professions Advisor at George Mason University.
Dr. Chuck serves as an expert resource on admissions and has been quoted by the Association of American Medical Colleges (AAMC).