ORIGINAL ARTICLE
Year : 2019  |  Volume : 8  |  Issue : 4  |  Page : 1408-1413  

Design and implementation of clinical competency evaluation system for nursing students in medical-surgical wards


1 Department of Medical-Surgical, School of Nursing and Midwifery, Iran University of Medical Sciences, Tehran, Iran
2 Department of Intensive Care and Cardiovascular Perfusion Technology, School of Nursing and Midwifery, Iran University of Medical Sciences, Tehran, Iran
3 Department of Medical-Surgical, School of Nursing and Midwifery, International Campus, Iran University of Medical Sciences, Tehran, Iran

Date of Web Publication: 25-Apr-2019

Correspondence Address:
Dr. Sepideh Nasrollah
School of Nursing and Midwifery, Iran University of Medical Sciences, Rashid yasemi St., Valiasr Ave., Tehran
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jfmpc.jfmpc_47_19

  Abstract 


Background: In nursing, evaluating students' clinical competency is essential, and a valid and reliable evaluation system is necessary for this purpose. The aim of this study was to design a clinical competency evaluation system for nursing students in medical-surgical wards and to determine its validity and reliability. Methods: This cross-sectional study was conducted on nursing students who were taking their practicum courses in medical-surgical wards. First, the educational objectives and applicable evaluation tools were determined. Then, three tools, Direct Observation of Procedural Skills (DOPS), Mini Clinical Evaluation Exercise (Mini-CEX), and Clinical Work Sampling (CWS), were selected as appropriate. Finally, the evaluation system was designed and its validity was confirmed using the content validity index (CVI) and content validity ratio (CVR). Reliability of the tools was calculated using Cronbach's alpha coefficient. Results: The CWS tool had CVI = 0.91 and CVR = 0.93, the DOPS tool had CVI = 0.98 and CVR = 0.94, and the Mini-CEX tool had CVI = 0.93 and CVR = 1. These results indicated desirable validity of the designed evaluation system, and all items had an appropriate CVR. Reliability was also higher than 0.7. A significant difference was found between the results of students' evaluation using the School's current evaluation method and the designed evaluation system. The designed evaluation system was accepted from the perspective of teachers and students. Conclusion: The designed evaluation system had high reliability and validity, and its application satisfied the majority of teachers and students. Therefore, it can be used as a useful system for assessing clinical competencies in medical-surgical wards.

Keywords: Clinical competence, evaluation, medical-surgical wards, nursing


How to cite this article:
Rafii F, Ghezeljeh TN, Nasrollah S. Design and implementation of clinical competency evaluation system for nursing students in medical-surgical wards. J Family Med Prim Care 2019;8:1408-13

How to cite this URL:
Rafii F, Ghezeljeh TN, Nasrollah S. Design and implementation of clinical competency evaluation system for nursing students in medical-surgical wards. J Family Med Prim Care [serial online] 2019 [cited 2019 Jul 18];8:1408-13. Available from: http://www.jfmpc.com/text.asp?2019/8/4/1408/257103




  Introduction


Evaluation is an inseparable part of education, and student evaluation, as a subset of evaluation in educational activities, is considered the most important component of university education.[1],[2] Educational evaluation is an important element of the teaching-learning process; in formulating and designing a curriculum, the second step after determining educational goals is determining comprehensive evaluation methods.[3] Evaluation provides the opportunity to identify weaknesses and strengths, so that effective steps can be taken to reform the educational system by promoting positive aspects and eliminating failures.[1],[2] Creating an evaluation system that uses evaluation techniques and tools to assess the outcomes of the educational curriculum is important in nursing schools.[4] The evaluation system should be valid, reliable, sustainable, objective, practical, and cost-effective; based on the level and scope of learning; acceptable from the perspective of learners and teachers; and should have an educational effect on learning and on the future performance of learners.[1],[5],[6] The learning levels of Miller's pyramid, with his recommended evaluation methods for each level from the bottom up, are: knows (written evaluations), knows how (written evaluations), shows how (evaluations such as the OSCE), and does (direct observation, portfolio, logbooks, peer review).[1],[7] One of the learning outcomes in nursing is the creation of clinical competency in students.
Clinical competency includes understanding of knowledge; clinical, technical, and communication skills; and the ability to solve problems through the use of clinical judgment.[8] However, each evaluation method has limitations, and assessing clinical competency at the top of Miller's pyramid with only one evaluation tool does not yield high validity and reliability; different tools and methods must be used.[9] Evidence suggests that the tools and evaluation methods currently used in nursing schools do not have the validity and reliability needed to evaluate the performance and clinical competences of students [10] and, in some circumstances, cannot recognize the theoretical and practical knowledge of students.[11],[12] Van der Vleuten believes it is naive to assume that a single tool can comprehensively assess student learning, and that this mentality reduces the quality of evaluation. He stated that evaluation is one of the important challenges of educational design and must be treated as a systemic program; a programmatic vision and the use of various evaluation methods can help in the accurate and correct implementation of assessment.[13] Since the ultimate goal of nursing education is to train competent nurses and ensure that patients receive high levels of care, the most important goal of clinical education in nursing is to improve the practical skills and clinical competencies of nursing students.[14] Although there are various tools for evaluating clinical competences, and although determining clinical competency is one of the responsibilities of nursing schools, an effective and comprehensive evaluation system for assessing the clinical competences of students has not yet been developed.[15] The aim of this study was to design and implement an effective, valid, and reliable evaluation system for assessing the clinical performance of nursing students.
Due to the number and variation of clinical teaching wards, this study was conducted in the cardiovascular, respiratory, endocrine, and infection wards, which had the greatest scope of educational and skill objectives. Reliability and validity are the two major indicators in the design of an evaluation system. A desirable system has validity, that is, it is capable, adequate, and suitable for measuring what it is intended to measure.[3] To measure the validity of a system, the content validity method is used, which determines whether the system appropriately and adequately covers the content of the measured scope.[16] On the other hand, a system must have reliability, meaning that its results are stable, repeatable, dependable, and accurate.[17],[18] In this study, after designing the evaluation system, its reliability and validity were examined.


  Materials and Methods


This was a cross-sectional study. The study population consisted of fourth-year nursing students who were taking their practicum courses in the cardiovascular, respiratory, endocrine, and infection medical-surgical wards, as well as the clinical instructors responsible for the clinical education and training of the students at the Iran University of Medical Sciences. A total of 30 students and 4 clinical instructors were included in the study, which was conducted in the first semester of the academic year 2017-2018. The study data were collected in three stages based on the study objectives. First, a full description of the educational objectives in the cardiovascular, respiratory, endocrine, and infection wards was determined, then reviewed and amended after a survey of expert opinions. A list of applicable tools and evaluation methods was drawn up and, through in-person meetings with expert panels as well as written questionnaires, basic information was collected on the practicality, cost-effectiveness, and applicability of the tools in the practicum course, their educational impact on students' learning and future performance, and their acceptance from the perspective of learners and teachers. Finally, three tools, namely Direct Observation of Procedural Skills (DOPS), Mini Clinical Evaluation Exercise (Mini-CEX), and Clinical Work Sampling (CWS), were determined to be appropriate [Table 1] and were used in designing the evaluation system. To ensure that the designed evaluation system contained the important and essential criteria for evaluating the learning objectives of nursing students, the content validity ratio (CVR) was used. Reliability was calculated using Cronbach's alpha coefficient, and validity was determined using the content validity index (CVI) and the content validity ratio (CVR).
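The two validity indices named above follow standard formulas: Lawshe's CVR is (n_e - N/2) / (N/2), where n_e is the number of experts rating an item "essential" and N is the panel size, and the item-level CVI is the proportion of experts rating an item relevant. A minimal sketch; the panel counts below are hypothetical illustrations, not the study's data:

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def cvi(n_relevant: int, n_experts: int) -> float:
    """Item-level content validity index: proportion of experts
    rating the item relevant (e.g. 3 or 4 on a 4-point scale)."""
    return n_relevant / n_experts

# Hypothetical item: 12 of a 13-member panel rate it essential,
# and all 13 rate it relevant.
print(round(cvr(12, 13), 2))  # above the 0.54 cut-off for N = 13
print(round(cvi(13, 13), 2))
```

CVR ranges from -1 (no one says essential) to +1 (everyone does); a value of 0 means exactly half the panel rated the item essential.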
The validity and reliability of the designed system, including its CVI and CVR, were calculated and found to be acceptable (reliability higher than 0.7, CVI higher than 0.79, and CVR higher than 0.80). In the second stage, concurrent with the start of the first semester of 2017-2018, the designed system was introduced to the students taking the practicum course that semester and to the clinical instructors responsible for their clinical education. All students at the mentioned wards were assessed and evaluated with the designed evaluation system during their practicum course. The practicum course lasted 18 days and, during this time, the students were supervised, trained, and assessed by the same instructors. Finally, instructor and student feedback on the newly designed evaluation system was collected using a questionnaire and their satisfaction was determined.
Table 1: Features that were evaluated for the tools of designed evaluation system



Ethical considerations: Since the designed evaluation system was in its testing period and its reliability had not been determined before implementation, decisions about students' scores were based on the evaluation method currently in use at the school in order to preserve the students' rights. Furthermore, to maintain the integrity and confidentiality of information, the names and scores of students were not published at any stage of the study.

Data analysis method: After implementation of the study and collection of the questionnaires and students' scores, data were analyzed using SPSS statistical software version 16. Descriptive statistics and measures of central tendency were used to describe the study subjects and to determine the participants' level of satisfaction with the designed evaluation system. Inferential statistics were used to answer the study questions, and a paired t-test was used to compare the results of students' evaluation under the designed evaluation system and the current evaluation method.
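The paired t-test the authors ran in SPSS compares each student's score under the two methods via the mean and standard deviation of the per-student differences. A small pure-Python sketch of the statistic; the scores below are hypothetical, not the study's data:

```python
import statistics
from math import sqrt

def paired_t(x: list[float], y: list[float]) -> tuple[float, int]:
    """Paired t statistic and degrees of freedom for two score sets
    from the same students: t = mean(d) / (sd(d) / sqrt(n))."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical scores for five students under the two methods.
new_sys = [17.5, 16.0, 18.2, 15.8, 17.0]
current = [18.5, 17.2, 19.0, 17.5, 18.1]
t, df = paired_t(new_sys, current)
print(round(t, 2), df)
```

In practice the p-value would come from the t distribution with n - 1 degrees of freedom; with SciPy available, `scipy.stats.ttest_rel` returns both the statistic and the p-value directly.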


  Results


A total of 30 fourth-year students, comprising 19 females (63.33%) and 11 males (36.67%), were included in the study. Four female instructors with master's degrees in nursing, over 10 years of experience in clinical education, and clinical teaching experience in medical-surgical wards were responsible for training the students. The first objective of this study was to determine the validity of the designed evaluation system for assessing the clinical performance of students in the medical-surgical wards (cardiovascular, respiratory, endocrinology, and infection). Content validity was used to determine the validity of each tool in the newly designed system. First, to ensure that the criteria necessary for evaluating the educational objectives of nursing students in the medical-surgical wards had been included in each evaluation tool, the content validity ratio (CVR) was used. According to the critical value table provided by Lawshe, which is based on the number of members in the panel of experts [19] (13 in this study), the acceptable CVR for declaring an item necessary at the significance level of P < 0.05 is 0.54. As seen in [Table 1], all items in the three designed evaluation tools measuring professional characteristics, procedural skills, and clinical skills had a CVR greater than 0.54. Therefore, it can be said with 95% confidence that all items were necessary and important for the clinical evaluation of nursing students in the medical-surgical wards. In the next step, the content validity index (CVI) was used to obtain information and make decisions about the review, amendment, removal, or replacement of any item of the evaluation tools. For this, the experts declared their opinions about the relevance, clarity, and simplicity of each item.

According to the standards, each item with a score greater than 0.79 was considered appropriate; if its score was between 0.70 and 0.79, it required review and amendment; and if its score was less than 0.70, it had to be removed.[20] As seen in [Table 1], all items in the three designed evaluation tools measuring professional characteristics, procedural skills, and clinical skills had a CVI greater than 0.79. Therefore, according to the experts' opinions, no item was removed or changed. In addition, based on a questionnaire survey scored on a 5-point Likert scale, all tools of the designed evaluation system scored above the scale average, indicating the satisfaction of the experts [Table 2].
Table 2: Content validity ratio and index of the tools of designed evaluation system



The second objective of this study was to determine the reliability of the designed evaluation system in assessing the clinical performance of the students. For this purpose, it was necessary to test the system first. Thus, the evaluation system, whose validity had been confirmed, was implemented for all students who had the practicum course in the desired wards in the first semester of the academic year 2017-2018. Cronbach's alpha coefficient was used to determine the reliability of the new evaluation system. In this method, one of the most important methods for calculating the reliability of a tool, the internal consistency of the items is examined.[3] Accordingly, it was determined that all parts of the designed evaluation system had good reliability (more than 0.7), and the reliability of the system as a whole was likewise calculated and confirmed (greater than 0.7) [Table 3].
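Cronbach's alpha can be computed from the per-item score variances and the variance of students' total scores: alpha = k/(k-1) * (1 - sum of item variances / variance of totals), where k is the number of items. A brief sketch with hypothetical ratings, not the study's data:

```python
import statistics

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for internal consistency.
    `items` holds one score column per checklist item,
    with one entry per student in each column."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-student totals
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Hypothetical ratings of five students on three checklist items.
scores = [
    [4, 3, 5, 4, 2],
    [4, 2, 5, 3, 2],
    [5, 3, 4, 4, 3],
]
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.7, the threshold used in this study, are conventionally taken to indicate acceptable internal consistency.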
Table 3: Reliability of the tools of designed evaluation system



To examine the difference between the evaluation results obtained with the newly designed evaluation system and those obtained with the current evaluation method used in the school, the scores of students in all wards were compared and statistically tested. The findings showed a significant difference between the scores in all dimensions. A negative or positive t value means only that the average scores of students under the designed evaluation system were lower or higher than under the current evaluation method; the sign itself does not reflect the ability of the designed evaluation system to recognize the performance of the students [Table 4].
Table 4: Comparison of the students' evaluation results in the cardiovascular, respiratory, endocrine, and infection wards using the current method of assessment and designed evaluation system



The level of satisfaction of the students and instructors with the newly designed evaluation system was examined using a questionnaire survey. The new system was accepted by both the teachers and the students and was described by them as highly useful and practical. Satisfaction with the designed evaluation system was significantly higher than with the current evaluation system (P < 0.0001) [Table 5].
Table 5: Comparison of the satisfaction of students and teachers from the School's current evaluation methods and designed evaluation system




  Discussion


Student evaluation is important because of its implications. Effective evaluation of students not only plays an important role in screening them but also increases their motivation to learn and helps teachers assess their own activities. Educational experts attribute multiple goals and positive results to student evaluation, which include (1) promoting students' ability through guidance and motivating them to learn knowledge, skills, and professional abilities; (2) identifying students who are clinically incompetent and preventing their entry into service, thereby protecting people and patients in health centers from inappropriate and even life-threatening care; (3) establishing criteria for selecting clinically competent students and admitting them to higher educational levels; (4) identifying the strengths and weaknesses of educational programs and the curriculum by providing feedback to teachers and administrators; and (5) identifying and resolving the barriers to student learning.[5],[9] Considering the clinical nature of the nursing profession and society's need for competent nursing staff, nursing schools must ensure that their students have the professional competences to undertake their duties [21] by establishing a comprehensive, effective, and efficient evaluation system. Therefore, nursing schools should use a set of tools and evaluation techniques to assess the educational curriculum, because evidence suggests that using only one method or one evaluation tool to judge the clinical competency of students is not appropriate.[1] In this regard, in the current study an evaluation system was designed and implemented to comprehensively assess the competency of fourth-year nursing students practicing in the cardiovascular, respiratory, endocrine, and infection medical-surgical wards.
The results of this study showed that the designed evaluation system was accepted by both the teachers and the students, considering various factors: the ability of the new system to properly evaluate the clinical skills, procedural skills, and behavior and professional characteristics of the students; its ability to distinguish between students with different levels of clinical competence; its independence from the personal views of teachers; and its ability to evaluate, with clarity, the goals and capabilities necessary for the students' clinical performance. Thus, the designed evaluation system was appropriate for assessing the clinical competency of students. Several studies have indicated that the tools and evaluation methods currently used in nursing schools are not adequately capable of assessing the clinical competency of students and are not accepted by students and teachers.[10] Ensuring validity and reliability is a major challenge in designing an evaluation system. Van der Vleuten stated that validity and reliability are among the main criteria for an evaluation system and noted that using an evaluation system without validity and reliability is a threat to education. When a new evaluation system is developed, apart from the effort put into its design, sufficient information about its validity and reliability should be offered so that people can judge the quality of the system.[22]

The validity of an evaluation system depends on various factors that fall into two groups: internal and external. Internal factors affecting validity include the system's manual, the quality of the questions, the arrangement of questions, and the duration of the test. External factors include implementation, scoring, and consideration of the psychological characteristics of students. The reliability of an evaluation system is influenced by several factors, including the number of questions and the duration of the test, the sample size, the similarity of content and understandability of the questions, and the scale of measurement.[3] The results of the study showed that the designed evaluation system had an appropriate content validity ratio (CVR) and content validity index (CVI), and high overall content validity and reliability. Thus, according to this study, the simultaneous use of multiple evaluation methods increased the reliability and validity of the designed evaluation system. Similarly, a study by Karayurt et al. (2009), which designed a scale to assess nursing students at a university in Turkey, found that using multiple methods and tools to evaluate the clinical performance of students decreases the weaknesses and limitations of each method and increases reliability and validity.[14] Accordingly, in recent decades, educational and evaluation researchers have emphasized creating comprehensive and multidimensional evaluation systems.[1],[23],[24] However, according to education experts, most of the methods and tools used in nursing schools do not have acceptable reliability and validity.[10] Examining the satisfaction level of the stakeholders (clinical instructors and students) is also an important criterion in designing an evaluation system.
The study by Zeraati and Alavi, entitled "Designing and validity evaluation of Quality of Nursing Care Scale in Intensive Care Units," also found that the satisfaction of clinical teachers and students with a newly designed evaluation system is important if it is to be used by them.[25] The findings of the present study showed that the majority of the students and teachers were satisfied with the designed evaluation system and dissatisfied with the current system of evaluation. Thus, the simultaneous use of multiple evaluation methods increased the satisfaction of both the teachers and the students. In other studies, however, the majority of nursing students complained about the clinical evaluation process; for example, a study by Imanipour et al., entitled "Development of a comprehensive clinical performance assessment system for nursing students: A programmatic approach," showed that 57% of nursing students thought the clinical evaluation was inappropriate.[26]


  Conclusions


The findings of this study showed that the evaluation system designed to assess the clinical performance of nursing students in the cardiovascular, respiratory, endocrine, and infection medical-surgical wards had high validity and reliability. In addition, the majority of teachers and students accepted and were satisfied with the newly designed evaluation system. They found it more reliable, useful, and practical than the previous evaluation methods, and capable of conducting clinical evaluation in line with the goals and feedback of the educational system. Given these positive results, the use of this evaluation system in the clinical evaluation of nursing students is suggested.

Acknowledgments

The authors would like to thank the authorities and hospital staff affiliated with the Iran University of Medical Sciences who helped with this study.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Shumway JM, Harden RM. The assessment of learning outcomes for the competent and reflective physician: AMEE guide no. 25. Med Teach 2003;25:569-84.
2. Swanwick T. Understanding Medical Education: Evidence, Theory and Practice. 1st ed. Malaysia: Wiley-Blackwell; 2010.
3. Seif AA. Educational Measurement, Assessment and Evaluation. 3rd ed. Tehran: Doran Publisher; 2004.
4. Norcini JJ. ABC of learning and teaching in medicine: Work based assessment. BMJ 2003;326:753-5.
5. Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387-96.
6. Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O. Using standardized narratives to explore new ways to represent faculty opinions of resident performance. Acad Med 2012;87:419-27.
7. Amin Z, Eng KH. Basics in Medical Education. 1st ed. Singapore: World Scientific; 2004.
8. Schroeter K. Competency Literature Review. Colorado: Competency & Credentialing Institute; 2009.
9. Joyce B. Developing an Assessment System: Facilitator's Guide. Accreditation Council for Graduate Medical Education. [Cited 2012 Jun 14]. Available from: http://www.acgme.org/outcome/e-learn/FacManual_module3.pdf.
10. Wishnia GS, Yancy P, Silva J, Kern-Manwaring N. Evaluation by exception for nursing students. J Nurs Educ 2002;41:495-7.
11. Ibrahimi A. Assessment and comparison of teachers' and senior nursing students' views about clinical teaching problems in school of nursing and midwifery of Isfahan university [dissertation]. Iran: School of Nursing and Midwifery, Tehran University of Medical Sciences; 1992.
12. Mojabi S. Evaluation of senior students' view about clinical evaluation method in nursing schools and their suggestions [dissertation]. Iran: School of Nursing and Midwifery, Iran University of Medical Sciences; 1986.
13. Van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ 2005;39:309-17.
14. Karayurt O, Mert H, Beser A. A study on development of a scale to assess nursing students' performance in clinical settings. J Clin Nurs 2009;18:1123-30.
15. Dolan G. Assessing student nurse clinical competency: Will we ever get it right? J Clin Nurs 2003;12:132-41.
16. Waltz CF, Strickland OL, Lenz ER. Measurement in Nursing and Health Research. 3rd ed. New York: Springer Publishing Co; 2005.
17. Kubiszyn T, Borich G. Educational Testing and Measurement: Classroom Application and Practice. 7th ed. United States: John Wiley & Sons, Inc; 2003.
18. Kaplan RB, Saccuzzo DP. Psychological Testing: Principles, Applications, and Issues. 7th ed. United States: Wadsworth; 2009.
19. Lawshe CH. A quantitative approach to content validity. Pers Psychol 1975;28:563-75.
20. Lynn MR. Determination and quantification of content validity. Nurs Res 1986;35:382-5.
21. Zubeir A, Chong YS, Khoo HE. Practical Guide to Medical Student Assessment. Singapore: World Scientific; 2006.
22. Polit DF, Beck CT. The content validity index: Are you sure you know what's being reported? Critique and recommendations. Res Nurs Health 2006;29:489-97.
23. Van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, et al. A model for programmatic assessment fit for purpose. Med Teach 2012;34:205-14.
24. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach 2011;33:478-85.
25. Zeraati M, Alavi NM. Designing and validity evaluation of quality of nursing care scale in intensive care units. J Nurs Meas 2014;22:461-71.
26. Imanipour M, Jalili M. Development of a comprehensive clinical performance assessment system for nursing students: A programmatic approach. Jpn J Nurs Sci 2016;13:46-54.



 
 