This project focused on determining an appropriate evaluation tool for a clinical practicum. Patient safety is at the core of any clinical practicum, so developing an original tool that incorporates patient safety was of paramount importance. The project explored various types of evaluation tools in the review of literature and used the Quality and Safety Education for Nurses (QSEN) core competencies, along with objectives previously developed for a medical-surgical clinical practicum, to revise the clinical practicum evaluation tool currently in use. The implication of this project is that the tool developed can be used not only for a practicum with a pass/fail grading format but can also be easily adapted to include a Likert-type scale for a practicum that assigns a letter grade.
A problem that has plagued nurse educators for several decades is how to determine whether a student is competent to pass the clinical portion of nursing education. Educators have debated how to assess competence, considering approaches such as rubrics, portfolios, and non-standardized clinical evaluation forms. No one tool has been deemed right or wrong. Another issue facing schools of nursing is whether to assign a letter grade to the clinical component of the nursing program or to evaluate it on a pass/fail basis. This project did not seek to determine which tool to use, but rather to revise a current evaluation tool. A Bachelor of Science in Nursing (BSN) program in the Louisville, KY area has identified the need to revise the tool used to evaluate the second-semester medical-surgical nursing clinical. The current evaluation tool is seven pages long and contains very specific grading criteria. The lead educator would like the tool to be condensed to two or three pages, if possible, and to contain broad grading criteria. The review of literature will explore portfolios, rubrics, and objective structured clinical evaluation (OSCE), among other tools currently being used to determine clinical competence.
Review of Literature
There is varying discussion among educators as to the best way to evaluate a clinical practicum. Some educators believe that a clinical practicum should be evaluated on a pass/fail basis, while others argue that the amount of work involved in completing a clinical practicum warrants a letter grade. This review of literature will discuss various options that are currently available to evaluate clinical practicums.
Levett-Jones, Gersbach, Arthur, and Roche (2011) discuss the Structured Observation and Assessment of Practice (SOAP) model for evaluating clinical competency. “The SOAP model is a six hour holistic assessment of nursing students’ clinical knowledge, skills, behaviors, attitudes, and values undertaken in clinical context” (p. 65). SOAP has been integrated into the final semester of the third year of the nursing program. The nursing students are observed by assessors, university-employed registered nurses (RNs) who were “selected because of their clinical experience, highly developed interviewing skills, and demonstrated ability in observation, analysis, interpretation, and evaluation of assessment data. The assessors attend a two day training workshop where they are introduced to the purpose and process of the SOAP and provided with opportunities to practice the assessment using videos of students with standardized patients” (p. 66). Since the introduction of SOAP, the “percentage of students who have been identified as competent on this occasion has risen from 35 percent in 2004 to 50 percent in 2009; the percentage identified as competent pending completion of remediation has risen from 39 to 45 percent; and the percentage identified as not competent on this occasion has decreased from 26 to five percent” (p. 68).
Walsh, Jairath, Paterson, and Grandjean (2010) discuss the use of a Clinical Performance Evaluation Tool (CPET) developed to accurately measure the Quality and Safety Education for Nurses (QSEN) competencies. A core group of undergraduate faculty, most of whom were part of the QSEN team for the school of nursing, developed the CPET. The CPET was designed to incorporate the six primary QSEN competencies; enhancements derived from a literature review focused on the American Association of Colleges of Nursing core competencies; the mission statement and terminal objectives of the school of nursing; and desirable features of evaluation tools used by other nursing schools, identified through a nationwide survey. The CPET has three components. The first component is a “one page checklist that evaluated student performance regarding the six QSEN competencies. The second component is a key that allows clinical instructors to determine the specific way in which QSEN competencies pertain to the clinical course. The third and final component of the CPET consists of a short form describing the guidelines applying the CPET tool for each clinical faculty member and for the students” (pp. 518-519). The CPET was pilot tested in June 2008 on 25 students taking their first adult medical-surgical nursing course. Content validity was established by mapping CPET content to the QSEN competencies, and reliability was enhanced by the development of written guidelines for CPET use in evaluation and for CPET completion. Sensitivity was also a major consideration, and it was determined that the CPET would be graded using a pass/fail approach.
Tanicala, Scheffer, and Roberts (2011) discussed the struggle of determining clinical competence when the behavior of the student is considered borderline. These authors found that “nurse educators have struggled for some time over the issues and inconsistencies of assessing and evaluating students’ clinical behaviors” (p. 155). “An inductive, qualitative approach using focus groups was selected for Phase I based on a six-step, systematic approach recommended by Krueger (1998): a) sequencing questions to maximize useful data; b) electronic and note-taking data collection; c) coding data patterns; d) participant verification of data; e) debriefing the moderators after each focus group; and f) planning for dissemination of the results” (p. 156). The focus groups consisted of nurse educators from public and private schools of nursing with full-time and part-time appointments, a variety of clinical specialties, and nursing degrees ranging from the bachelor’s to the doctorate. Eight of the participants represented colleges and universities in a metropolitan area, and the remaining three represented colleges and universities in a suburban area. Analysis of the data revealed the major theme to be context and patterns, with five subthemes: communication, ethics, thinking, safety, and standards (course and professional). The researchers constructed a survey from the findings of the preliminary work, which was pilot tested with 26 expert baccalaureate clinical educators. The results of the pilot study led to the construction of a 12-item survey consisting of clinical scenarios requiring faculty decision-making about student clinical behaviors.
Karayurt, Mert, and Beser (2008) conducted a study to develop a scale to assess the clinical performance of nursing students. The study was conducted at a Turkish university school of nursing during 2002-2004 and included 52 third-year students and 45 fourth-year students. The students’ performance was evaluated repeatedly by the lecturers who taught medical diseases and pediatrics, psychiatry, surgery, and internal disease. The sample included 350 performance evaluations of 97 students. After reviewing the performance evaluations, a clinical performance scale was developed that included the following items: “planning nursing care, using nursing processes to offer nursing care and interventions directed towards fulfilling professional roles” (p. 1125). The initial tool contained 77 items. However, after an expert group of 17 lecturers (who provided guidance in the clinical practice setting) reviewed the tool, it was revised to contain 53 items.
Scarpa and Connelly (2011) discussed criterion-based performance assessment for advanced practice nurses (APNs) using a synergistic theoretical nursing framework. A nursing-focused evaluation tool was developed based on a generic APN job description. The criterion-based evaluation tool contains five sections and is based on the six core competencies considered essential roles and behaviors of the APN, as well as the seven practice domains and core competencies of nurse practitioners. Because APNs practice under collaborative agreements with physicians, it was determined that peer evaluations should also be completed.
Oermann, Yarbrough, Saewert, Ard, and Charasika (2009) conducted research to determine how faculty members grade clinical performance. These authors emailed more than 21,000 members of the National League for Nursing (NLN) inviting them to participate in a survey. A total of 1,573 responses were included in the final data set: 128 faculty from diploma programs, 866 from associate degree programs, 563 from baccalaureate degree programs, and eight from entry-level master’s degree programs. The results indicated that 93 percent of the faculty used observation as the evaluation method of choice. Other evaluation tools included written assignments, skills testing, student contributions to clinical conferences, student self-assessments, simulations, and preceptor evaluations. The survey also found that 83 percent of the faculty used a pass/fail grading system and that in 88 percent of the associate degree programs, students had to pass both the clinical and theory components. The faculty reported continual evaluation of students in the clinical setting.
Rentschler, Eaton, Cappiello, McNally, and McWilliams (2007) discussed the utilization of objective structured clinical evaluation (OSCE). These authors found that OSCE has been used to evaluate the clinical performance of medical students since 1975 and that the process is both valid and reliable in many settings. OSCE “uses a simulated and standardized format to measure synthesis of knowledge and clinical skills. It also provides an innovative learning experience for students. Individuals are trained to be standardized patients to provide a controlled clinical situation that is realistic and nonthreatening” (p. 135). A benefit of OSCE is that it provides a formative evaluation for both the school and the student. Forty-nine of the 54 seniors of a BSN program agreed to participate. After the OSCE was completed, the students were asked to fill out a post-OSCE evaluation tool containing six items rated on a three-point Likert scale. The results of the post-OSCE evaluation indicated that the students found the experience positive and the case studies realistic, and a majority reported feeling confident in their knowledge, interpersonal skills, and clinical skills.
Walsh, Bailey, and Koren (2009) also discussed the use of OSCE. These authors performed an integrative review of 41 previous research projects that met the following inclusion criteria: the project presented some form of comparison or evaluative analysis that included a study of function, correlations, factor analysis, cost, reliability, and/or validity testing. The authors found that 18 studies from 1968 to 2001 described the use of some form of OSCE design in nursing education. The results of their research indicated that “when compared to subjective teacher-ratings of student performance or the multiple choice question test, the OSCE is a superior evaluation of clinical competence, as it facilitates the assessment of a complex repertoire of skills, knowledge and attitudes viewed as the underpinnings required for competent clinical practice” (p. 1568). Walsh et al. also found that OSCE can be used for both formative and summative evaluation, generally provides more timely feedback, has been shown to be reliable and sensitive to differences in levels of medical education, and allows weaknesses in a curriculum to be identified.
Chernecky, Miller, Garrett, and Macklin (2012) discussed the effectiveness of the new ABCs pedagogy for clinical evaluation. In the ABCs pedagogy, A stands for anatomy/physiology, B for best care, C for complications, D for drugs, and E for evidence-based practice. The pedagogy is evaluated using a five-point Likert scale through course evaluations and can be used with both graduate and undergraduate students and faculty. The ABCs include “avenues for critical thinking including interpretation and analysis of laboratory data, evaluation of best care and individualized interventions, and inference and explanation of sequelae such as graft rejection” (p. 62). This form of evaluation is one way for professors or preceptors to help students develop clinically and to support the goals of nursing education. The outcomes expected for the ABCs have been found to be the same as those for other clinical nursing pedagogies. In course evaluations where the ABCs were used, 98 percent of undergraduate students, 88 percent of graduate students, and 88 percent of staff nurses and advanced practice nurses rated the pedagogy as excellent.
Walsh, Seldomridge, and Badros (2008) discussed the development of a practical evaluation tool for preceptor use. The ultimate goal of their research was to encourage realistic evaluation of student clinical performance. After a meeting with faculty, preceptors, and clinical managers, the following performance and clinical indicators emerged: accountability, attitude, judgment, knowledge, confidentiality, communication, assessment, interventions, skills, and medications. These researchers suggested using a rubric because the “use of rubrics can be a way to improve objectivity in grading and help students understand why they received a particular grade and when used repeatedly throughout an experience, a rubric provides formative assessment as a person being evaluated knows exactly what to do to achieve the highest level” (p. 114). The authors developed the Clinical Internship Evaluation Tool (CIET), a 42-item instrument that assesses 18 professional behaviors and 24 patient management behaviors. The CIET dramatically decreased the amount of time preceptors reported spending on student evaluation: before the CIET, completing the clinical evaluation tool took two to three hours; after implementation, preceptors spend only 30 to 60 minutes.
Nicholson, Gillis, and Dunning (2009) also suggested using scoring rubrics as a means of evaluating clinical performance. These researchers studied the clinical performance of nurses in an operating room. Nicholson et al. used both holistic and analytical rubrics developed to align with the Australian College of Operating Room Nurses (ACORN) standards for perioperative nurses. Three video clips capturing varying performance of nurses acting as instrument nurses in the operating suite were used as prompts by expert raters, who judged the performance using the rubrics. The results of this study showed that the holistic rubrics led to more consistent judgments than the analytical rubrics, yet the analytical rubrics provided more diagnostic information for interpretation purposes.
Bashford, Schaffer, and Young (2012) discussed the use of a competency-based assessment (CBA). These authors found that competency is imperative for safe patient care and healthy work environments and that using a CBA can decrease the time spent in orientation. This type of assessment is used with newly hired RNs prior to unit orientation to assess competency related to “basic nursing skills and incorporates patient safety through assessment of clinical decisions related to safe medication usage, recognition of response to changes in patient condition and medication reconciliation” (p. 63). The authors used a mixed-methods study that included a comparative experimental design, investigator field notes, and surveys. Thirty-one newly hired RNs were included in the study. Eighty-seven percent of the participants indicated that the time spent completing the CBA was useful in identifying strengths and learning needs, and half of the participants reported that the CBA helped them recognize areas of knowledge and concepts they needed to update.
Kear and Bear (2007) suggest using portfolio evaluation as a clinical grading tool because portfolio-based learning includes qualitative perspectives and the samples of work reflect personal and professional accomplishments. Portfolios are also used for reflective learning and as an innovative way of documenting student learning and evaluating clinical competence, and they foster critical thinking skills through reflective writing. Portfolio evaluation was incorporated into the curriculum content and delivery of a revised RN-to-BSN program. A portfolio evaluation tool (PET) was developed to qualitatively measure the students’ perception of achieving the overarching goals and curriculum objectives of the program. A total of 29 students completed the portfolio evaluation; however, three were excluded because the students did not adequately complete their self-evaluations. The results indicated that 88.5 percent of the students met the goal of preparing them for graduate school and 90 percent demonstrated behavioral criteria that met the program objectives. The results also indicated a need for improvement: 69.2 percent did not clearly demonstrate confidence in public speaking or an understanding of the impact of the changing health care delivery system on patient care and nursing practice, and 65.4 percent did not demonstrate the ability to communicate effectively in the role of patient advocate.
Diem and Moyer (2010) discussed the development of a tool to evaluate public health nursing clinical education at the baccalaureate level. The study grew out of a collaborative program between two schools of nursing that addressed placement challenges in 2004 by recruiting additional placements providing access to underserved community groups, whether or not a nurse was available to mentor the students. The research was a two-phase study that included both qualitative and quantitative measures: the development of items to be included in the tool was the first phase, and testing of the tool was the second phase. The first phase used qualitative measures to identify items for tools measuring clinical confidence and student satisfaction. One hundred seventy students were given the opportunity to participate; however, only 58 percent chose to participate in the identification of these items. “The students were asked a set of four open-ended questions to determine [which aspects of] the public health clinical coursework were important to their learning as an individual and as a team member and the words they used to describe their skills and learning” (p. 288). The final tool included 17 items rated on a five-point Likert scale. The results indicated that 68 percent of the students were satisfied or highly satisfied with all aspects of the clinical experience, whereas 32 percent were less satisfied. The results also indicated that 58 percent did not want a change, 32 percent wanted to increase the amount of time with community members, and the remaining 10 percent wanted various increases or decreases in time with different people.
Durkin (2010) discussed the development and implementation of an independence rating scale and evaluation process for the nursing orientation of new graduates at Children’s Hospital Boston (CHB). At CHB, a spreadsheet tool was developed that presented each domain and set of critical behaviors for each clinical area, rated on a scale of competence as independent, supervised, assisted, marginal, or dependent. Because 100 percent was determined to be an unrealistic expectation for novice nurses, a minimum score of 70 percent for each competency and 80 to 100 percent for critical behaviors was required. The results of this study indicated that the development of a goal-attainment rating scale based on independence level has not proven instrumental in preparing new graduate nurses to practice effectively.
According to this review of literature, there is no one method that is preferable for evaluating clinical practicums. Each tool has advantages and disadvantages. The key concept to keep in mind when determining which clinical evaluation tool to use is what the evaluator is attempting to evaluate.
Author: Jason Hawkins RN, BSN, MSN
Copyright 2015 Onlyanurse.com