Development of a Blended Clinical Education Faculty Development Program in Postgraduate Medical Education and Evaluation with the Kirkpatrick Model
Original Article
VOLUME: 6 ISSUE: 1
P: 50 - 58
March 2025


Hamidiye Med J 2025;6(1):50-58
1. Health Sciences University Türkiye, Antalya Training and Research Hospital, Clinic of Otorhinolaryngology, Antalya, Türkiye
2. Akdeniz University Faculty of Medicine, Department of Medical Education, Antalya, Türkiye
Received Date: 08.11.2024
Accepted Date: 14.03.2025
Online Date: 27.03.2025
Publish Date: 27.03.2025

ABSTRACT

Background

The structure of postgraduate medical education differs as it is mostly a competency-based education focused on opportunity-based learning. The content of faculty development programs (FDPs) is mostly intended to help enhance knowledge and skills and to adapt to the developing and changing educator roles by responding to the needs of educators. The aim of this study was to develop a blended FDP for postgraduate medical educators, implement the program, and evaluate it using the Kirkpatrick model.

Materials and Methods

Clinical educators of internal medicine and obstetrics and gynecology were included in the study. The program was carried out as asynchronous training delivered in 5 modules through an online learning management system, followed by face-to-face training. The program was evaluated with the Kirkpatrick model.

Results

The Likert scale scores were: achievement of session objectives 4.74, training techniques-methods 4.75, program content 4.77, performance of trainers 4.72, duration-methods of sessions 4.89, and satisfaction 4.85. Post-test scores were significantly higher than pre-test scores in both the online and face-to-face training (p<0.001 and p=0.001, respectively). When the educational activities carried out in the clinics before and after the training were compared, journal club activity increased significantly by the end of the program (p<0.001), and a significant increase was detected in the appointment of educational consultants to residents (p=0.001).

Conclusion

This study is the first blended clinical FDP developed, implemented and evaluated with the Kirkpatrick model for postgraduate medical education. According to the results, the program was found to be successful at every stage of its evaluation.

Keywords:
Faculty development program, Kirkpatrick model, postgraduate medical education

Introduction

To increase training motivation among trainers, adequate faculty development programs (FDPs) should be implemented to train them in the required competencies, in line with institutional policies and the desired academic excellence (1). It has been reported that the goal of quality education can only be achieved after trainers' skills are enhanced (2). Khan et al. (3) emphasized that being an expert in medicine and a successful surgeon is not enough to be a successful educator, and that additional FDPs are needed. In surgical branches, attention has been drawn to training the trainers in effective surgical teaching, with a focus on resident evaluation and feedback, coaching in the trainer role, and teaching skills during surgery (4).

Trainers working in training and research hospitals (TRHs) in our country are responsible for training residents, conducting research, and managing an intense clinical workload. To this end, educators are trying to become involved in FDPs to improve their academic and professional skills. There is no regular and compulsory FDP for clinical trainer training in TRHs, but trainers receive this training based on opportunities available and through their personal efforts. FDPs existing in our country are designed for undergraduate medical education, though postgraduate education may differ in method and content. To this end, the following research questions were asked:

• What are the needs of clinical educators involved in postgraduate medical education regarding “training the trainer”?

• If the FDP is developed and implemented:

• What is the satisfaction level of the participants at the end of the training?

• What are the participation and success rates in the training event?

• What is the level of participants’ usage of the training content given in the clinic after the training?

• How is the applied training reflected in clinical education?

Materials and Methods

Approval for the study was obtained from the Akdeniz University Faculty of Medicine Clinical Research Ethics Committee (approval number: KAEK-569, dated: 02.09.2021), and the participants were informed and consent was received.

An internal medicine (IM) department and a surgical department, both with the highest number of residents at Health Sciences University Türkiye, Antalya Training and Research Hospital, were selected for the program. Seventeen trainers from IM and all trainers from obstetrics and gynecology (OG) agreed to participate in the study. The demographic characteristics of the participants are shown in Table 1.

Program Development

The blended clinical FDP was developed by taking into consideration the Kern program development steps (5).

Determining the Current Situation

Before starting the FDP, an online survey was administered to a total of 50 residents in IM (26) and OG clinics (24) to evaluate the educational activities of residents in the clinics.

Needs Assessment

To develop the training program, it is essential to first determine the needs specific to postgraduate medical education. For this purpose, a structured focus group meeting was held with the IM and OG educators who agreed to participate in the study. A structured focus interview form consisting of ten basic questions, each with opening questions, was used. During the interview, the researcher first introduced himself and the project, shared information about the purpose of the research and the expected duration of the interview, and obtained verbal approval for recording. The interviews were held with 19 of the 24 educators, in 3 groups of 5-7 people, each lasting 30-45 minutes.

While creating the training program, in addition to these interviews, articles on FDPs published to date in the PubMed, Science Direct, and Google Scholar electronic databases were reviewed. However, no FDPs specifically developed for postgraduate medical education could be found. The content of the training program was created in line with the data obtained.

This study aimed to develop a blended FDP by combining the advantages of online education with the strengths of face-to-face education.

Educational Strategies

The Canvas learning management system was chosen in this study because its web-based interface is suitable for small groups and it provides opportunities such as a mobile application, exams, video uploading, and feedback to discussions. The training videos were prepared by medical education experts in the field who determined the learning goals and objectives. Online courses consist of 5 modules, which can be entered and repeated at any time: Module 1: National Core Education Program (NCEP), Competence-Competency, Learning Goal Writing, Medical Specialization Board Curriculum Creation and Standards Determination System; Module 2: Adult Learning Principles; Module 3: Educational Roles and Feedback in the Clinic; Module 4: Educational Methods in Clinical Education; Module 5: Assessment and Evaluation in the Clinic.

Program Implementation

To implement the educator development training, internal support from Department of Medical Education faculty members, external technical support from Computer Engineering, and administrative support from the Chief Physician's Office were included at every stage. To foresee problems that might be encountered during implementation, a pilot application was carried out with 2 participants to obtain information about the program content and the evaluation methods used, and any problems in the program were eliminated.

The educator development program started with the online training. Twenty-four participants entered the system with their username and password and were monitored through the system logs. Before starting the course content, each participant completed a pre-test of 15 multiple-choice questions, prepared in line with the learning goals of the training videos by the trainers who developed the online training. The participants were given 30 minutes to answer, and the pre-test answers were kept confidential. After completing all training modules, participants answered the post-test, after which they could see their exam results, their answers to the questions, and the correct answers in the system. The online courses were kept open for active participants for 3 months.

Face-to-face training was organized three months later for the participants who completed the online training. A pre-test consisting of 15 questions, in line with the aims and objectives of the face-to-face education content, was conducted before the session, and a post-test was conducted after the program. Sixteen participants (12 in IM, 4 in OG) completed the face-to-face training (face-to-face training content: Assessment and Evaluation in the Clinic, Use of Audiovisual Tools, Effective Presentation Planning, Interactive Training Techniques, Body Language).

Program Evaluation

Using the Kirkpatrick program evaluation model, for the first level evaluation, a 5-point Likert online satisfaction survey was administered at the end of the training program to all participants who attended both the face-to-face and online meetings. For the second level evaluation, pre-test and post-test results in the online and face-to-face training were compared. For the third level evaluation, four months after the training, residents in the participating clinics were asked to complete the same online survey forms used before the program, to obtain their opinions on the educational activities implemented in their clinics.

Statistical Analysis

For the qualitative evaluation of the research, the video recordings of the structured focus group interviews were watched and the conversations were transcribed. The researchers first coded the data, then organized the codes and subcategories, and finally defined and interpreted the findings. In addition to the researcher, the opinions of two lecturers who are experts in their field were obtained. Expressions with common meanings were combined, and themes were formed. The accuracy of these themes was confirmed by evaluating the opinions of the trainers.

Categorical variables were analyzed with Pearson's chi-square and Fisher's exact test. The suitability of the data for normal distribution was checked with the Shapiro-Wilk test. The Mann-Whitney U test and Kruskal-Wallis test were used to analyze differences in continuous variables between independent groups. Bonferroni correction was performed in post-hoc tests. The pre-test and post-test results of the trainers, in online and face-to-face training, were compared with the Wilcoxon signed-rank test. The IBM SPSS 23.0 software package (IBM Corp., Armonk, NY) was used to analyze the data, and p-values less than 0.05 were considered statistically significant.
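The paired pre-test/post-test comparison described above can be illustrated with a small self-contained sketch. The scores below are made-up placeholders, not the study's data, and the pure-Python exact Wilcoxon signed-rank test is an illustrative stand-in for the SPSS procedure the authors used (practical for small samples such as the 24 trainers here).

```python
from itertools import product

def wilcoxon_signed_rank(pre, post):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples."""
    # Paired differences; zero differences are dropped, as in the standard test.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    abs_vals = sorted(abs(d) for d in diffs)
    def rank(v):
        positions = [i + 1 for i, x in enumerate(abs_vals) if x == v]
        return sum(positions) / len(positions)
    ranks = [rank(abs(d)) for d in diffs]
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    # Exact null distribution: under H0 each sign pattern of the differences
    # is equally likely, so enumerate all 2^n patterns (feasible for small n).
    total = sum(ranks)
    count = 0
    for signs in product((0, 1), repeat=n):
        wp = sum(r for s, r in zip(signs, ranks) if s)
        if min(wp, total - wp) <= w:
            count += 1
    return w, count / 2 ** n

# Hypothetical pre- and post-test scores for 10 trainers (not the study's data).
pre = [62, 55, 70, 48, 66, 59, 73, 51, 64, 57]
post = [67, 63, 73, 59, 72, 68, 75, 63, 71, 67]
w, p = wilcoxon_signed_rank(pre, post)
print(f"W = {w}, p = {p:.5f}")  # every trainer improved, so p is well below 0.05
```

In practice a library routine (e.g. `scipy.stats.wilcoxon`) would be used; the point of the sketch is that the test compares the same individuals before and after training, which suits the repeated-measures design of the FDP evaluation.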

Results

Participant Focus Group Interview Results

The video and audio-recorded interviews were transcribed, and transcript codes, categories, and sub-themes were determined through descriptive analysis. The coding done within this framework was grouped into four main themes: educational roles, educational methods, program scope, and assessment-evaluation. Quotations from the participants' opinions are presented in a way that conceals the participants' identities. To facilitate interpretation, the interview group, the participant code, and the time at which the statement was made are provided for each quotation.

Educational Roles

Although it was not expressed conceptually, the participants stated that the trainer should be less of a lecturer and more of a guide and role model. Regarding the educational role:

Group 3 (P17, 21:05): Residents struggle with preparing presentations. I think it is our duty as educators to guide the residents on these issues, such as how to write an article, how to read it, and where to publish it, before reaching the thesis stage.

Program Scope

Participants generally stated that they prepared a training program compatible with the NCEP for residents. They reported that they were able to prepare the content, but did not know how to write a purpose or objective.

Group 1 (P4, 34:04): I think the real problem is that the educator does not know what, how much, and how to teach.

Educational Methods

In the interviews with the participants, they stated that they mostly used lectures, seminars, journal clubs, and multidisciplinary case presentations as training methods. They reported that bedside training and rounds were held regularly.

Group 1 (P5, 27:04): As a young educator, I sometimes cannot control the back rows while teaching. The residents are already so tired, and they start falling asleep at the fifth minute. I need to engage them in the lesson, but I do not know how to achieve that.

Assessment and Evaluation

Participants reported that multiple-choice midterm exams were the assessments most frequently administered in clinical evaluations. It was observed that the participants were not knowledgeable about how to conduct structured exams.

Group 1 (P5, 08:49): Assessment and evaluation are the areas where we are completely lacking. I learned numerous checklists in assessment and evaluation in my previous training, but I cannot apply them. Today, I have difficulty when someone asks me to prepare a question.

Program Evaluation Results Based on Kirkpatrick Model

First Level (Reaction); Participants’ Satisfaction Evaluation Results

At the end of the training program, all participants were asked to evaluate the program in all aspects according to the propositions (Table 2).

Participants’ opinions regarding the FDP are included based on the propositions.

“Unlike the trainer training I had previously received, topics related to resident training were explained. This made it very useful, and the fact that some of the courses were online made our job easier.”

“The asynchronous online training eliminated the time constraint, and I had the chance to watch the videos over and over again.”

Second Level (Learning); Test Results of Participants

For the second level evaluation, the change in knowledge among the FDP participants before and after the training was evaluated (Table 3). It was determined that there was a significant increase in the post-test scores of the participants in both online and face-to-face training (p<0.001 and p=0.001). No significant relationship was observed between the participants' pre-test and post-test scores and their gender, title, or department (p>0.05).

Third Level (Transfer); Opinions of Residents on Educational Activities Implemented in Clinics

Forty-two of the residents (participation rate: 84%) responded to the survey. When the educational activities implemented in the clinics before and after the FDP were compared (Table 4), journal club activity was seen to have increased significantly (p<0.001).

Likert scale scores before and after the FDP were compared to evaluate the residents' clinical training (Table 5); statistically significant score increases were observed.

Discussion

This study covers the development of a blended educator development program for postgraduate medical education, and the implementation and evaluation of the program based on the Kirkpatrick model. The program was evaluated from multiple perspectives by collecting and analyzing quantitative and qualitative data concurrently.

The structure of postgraduate medical education differs in the clinical environment, as it is mostly competency-based education focused on opportunity-based learning (6, 7). Therefore, instructors need to be actively involved. While FDPs at medical faculties are becoming widespread in our country, access to these trainings is left to individual preference, which poses cost, time, and accessibility obstacles, especially for clinicians who take part in postgraduate training and carry heavy workloads. The majority of the trainers in our study (83.3%) had not participated in an FDP before and had been trained as trainers through the traditional master-apprentice model.

The content of FDPs is mostly intended to support the development of knowledge and skills and to help educators adapt to developing and changing educator roles, responding to their needs in parallel with developments in medical education (8). In a study analyzing the literature on FDPs for clinicians, it was reported that educational development programs focused mainly on teaching skills, covering topics such as training methods, curriculum development, implementation and evaluation, research methodology, presentation skills, evidence-based medical teaching, and quality improvement, as well as the use of technology tools, communication skills, and role modeling (9). In 2007, the Stanford University Department of Anesthesiology reported developing a program for its faculty members to improve and strengthen the training of residents (10).

When planning an FDP, the diversity of trainers should be recognized and attention paid to the responsibilities of participants. Organizing and categorizing faculty members by their titles helps ensure that each faculty member benefits fully from the program and that faculty members work effectively as a group. Additionally, FDP activities should offer different content for different levels of trainers in order to ensure maximum satisfaction (11). This is important, as the content appropriate for a junior faculty member just starting out in academic life is likely to differ from that for a senior faculty member. For this reason, we included participants with different academic titles in our study. While creating the content, the literature analysis, the review of FDPs conducted in our country, and the focus group interview data from the IM and OG clinic trainers who constitute the population of our study contributed greatly to the training program.

The time and location requirements of face-to-face training can be a barrier when participants and instructors have busy schedules and heavy workloads. Online learning provides a ubiquitous and self-paced learning experience, whereas face-to-face learning encourages adherence to pre-planned formal instruction. Online learning not only eliminates the time and space logistics problems of face-to-face education; it also reduces costs and increases the effectiveness of education by increasing participation (12). In our study, we found that while online courses were 100% completed, the participation rate in face-to-face training was 66.6%. In their study, Yilmaz et al. (13) aimed to determine how junior and senior faculty members of medical departments at a Turkish university perceived the facilitators and barriers in a new blended educator development program. Lack of time was seen as the most critical barrier to participation in the program, while setting goals for personal development and gaining skills in teaching were presented as key enabling factors in the blended program (13).

Program developers devote significant effort to program design and implementation, but less effort to evaluation (14). A significant gap in the academic literature is the lack of discussion and analysis of how FDPs can be implemented to help medical educators improve their skills in all areas of performance. In this context, it has been stated that the Kirkpatrick model can be used not only to evaluate a health program but also to evaluate a comprehensive FDP (14, 15). When creating a new FDP, the target group and the method for measuring the results of Kirkpatrick's four levels should be determined in advance for each stage. Attention is drawn to the importance of establishing goals and measurable performance criteria early in the planning process (16). In our study, while developing the program as suggested by the literature, we determined the program evaluation model (Kirkpatrick) in advance and identified measurable criteria, such as the satisfaction survey, pre-test, and post-test. For the first-level evaluation, the 24-question satisfaction survey, which addressed every aspect of the program, showed that the participants were generally satisfied. In addition, the statements left by the participants at the end of the program, indicating that the program was very useful, that they would like to participate again, and that they watched the videos repeatedly, show that the program succeeded at the first level. For the second level, we found a significant increase in the post-test scores of the trainers in both online and face-to-face training (p<0.001 and p=0.001). Turning the knowledge, skills, and attitudes gained after the training into behavior and transferring them to real life constitutes the third level of the Kirkpatrick program evaluation. This level must be evaluated after a certain period of time has passed, to allow behavioral change to develop.
In our study, a survey was conducted on residents to determine to what extent the training activities and achievements of FDP participants were transferred to their work areas, 4 months after the end of the program. When the survey results before and after the FDP were compared, it was seen that there were significant gains in certain educational activities. However, we think that long-term evaluations are needed to understand why there was no obvious methodological change in assessment and evaluation.

Studies that used the Kirkpatrick model to evaluate FDPs were examined in a systematic review article (17). Researchers found that participants reported a positive change in their attitudes after participating in such educator development activities and demonstrated greater knowledge of their teaching skills. However, few studies have evaluated FDPs against outcomes addressing changes at the highest level of Kirkpatrick's model (18). Most studies in the literature have examined changes in small groups of educators rather than on a large scale, revealing only short-term changes in behavior (19). Additionally, most of the data collected in these studies were based on feedback; thus, they did not contain conclusive evidence about the impact of FDPs on student performance or instructors' teaching skills (16). The most important limitation of our study is that the longest follow-up of the FDP, according to the Kirkpatrick program evaluation model, extends only to the fourth month after the program; the long-term results are unknown, and the results could not be directly observed on the job.

Conclusion

This study is the first blended clinical FDP developed, implemented, and evaluated using the Kirkpatrick program evaluation model for postgraduate medical education. According to the research data, we determined that the program was successful, achieving statistically significant results at every stage of the Kirkpatrick program evaluation of educator training programs. In order to maintain clinical teaching skills in the long term, the training must be repeatable and its reflections on the field must be closely monitored.

Ethics

Ethics Committee Approval: Approval for the study was obtained from the Akdeniz University Faculty of Medicine Clinical Research Ethics Committee (approval number: KAEK-569, dated: 02.09.2021).
Informed Consent: The participants were informed and consent was received.

Acknowledgment

The authors would like to thank Antalya Training and Research Hospital TUEK for supporting the project. The authors also wish to thank Assoc. Prof. Dr. Alper Bilge, faculty member of Computer Engineering at Akdeniz University, for the consultancy and technical support he provided in the online trainings.

Authorship Contributions

Concept: H.E., E.G., Design: H.E., E.G., Data Collection or Processing: H.E., Analysis or Interpretation: H.E., Literature Search: H.E., E.G., Writing: H.E.
Conflict of Interest: No conflict of interest was declared by the authors.
Financial Disclosure: The authors declared that this study received no financial support.

References

1
Wilkerson L, Irby DM. Strategies for improving teaching practices: A comprehensive approach to faculty development. Acad Med. 1998;73:387-396.
2
Khan CB, Chishti S. Effects of staff training and development on professional abilities of university teachers in distance learning systems. Q Rev Dist Educ. 2012;13:87-94.
3
Khan N, Khan MS, Dasgupta P, Ahmed K. The surgeon as educator: fundamentals of faculty training in surgical specialties. BJU Int. 2013;111:171-178.
4
Deal SB, Alseidi AA, Chipman JG, Gauvin J, Meara M, Sidwell R, et al. Identifying priorities for faculty development in general surgery using the Delphi consensus method. J Surg Educ. 2018;75:1504-1512.
5
Kern D, Thomas PA, Howard DM, Bass EB. Curriculum development for medical education. A six-step approach. Baltimore: The Johns Hopkins University Press; 1998:28-37.
6
Sandhu D. Postgraduate medical education challenges and innovative solutions. Med Teach. 2018;40:607-609.
7
Eyigor H, Kara CO. Otolaryngology residents’ attitudes, experiences, and barriers regarding the medical research. Turk Arch Otorhinolaryngol. 2021;59:215-222.
8
Steinert Y. Faculty development in the new millennium: key challenges and future directions. Med Teach. 2000;22:44-50.
9
Alexandraki I, Mooradian AD. Academic advancement of clinician educators: Why is it so difficult? Int J Clin Pract. 2011;65:1118-1125.
10
Macario A, Tanaka PP, Landy JS, Clark MS, Pearl RG. The Stanford Anesthesia Faculty Teaching Scholars Program: Summary of faculty development, projects, and outcomes. J Grad Med Educ. 2013;5:294-298.
11
Alexandraki I, Rosasco RE, Mooradian AD. An evaluation of faculty development programs for clinician-educators: a scoping review. Acad Med. 2021;96:599-606.
12
Slavit D, Sawyer R, Curley J. Filling your PLATE: A professional development model for teaching with technology. TechTrends. 2003;47:35-38.
13
Yilmaz Y, Durak Hİ, Yildirim S. Enablers and barriers of blended learning in faculty development. Cureus. 2022;14:e22853.
14
Alhassan AI. Implementing faculty development programs in medical education utilizing Kirkpatrick’s model. Adv Med Educ Pract. 2022;13:945-954.
15
Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28:497-526.
16
McCutcheon S, Duchemin AM. Overcoming barriers to effective feedback: a solution-focused faculty development approach. Int J Med Educ. 2020;11:230-232.
17
Phuong TT, Cole SC, Zarestky J. A systematic literature review of faculty development for teacher educators. High Educ Res Dev. 2018;37:373-389.
18
Cheung VK, Chia NH, So SS. Expanding scope of Kirkpatrick model from training effectiveness review to evidence-informed prioritization management for cricothyroidotomy simulation. Heliyon. 2023;9:1826.
19
McLean M, Cilliers F, Van Wyk JM. Faculty development: Yesterday, today and tomorrow. Med Teach. 2008;30:555-584.