ABSTRACT
Background
This study aimed to assess the quality, reliability, and educational value of pneumatic retinopexy (PR) videos on YouTube.
Materials and Methods
This retrospective, cross-sectional analysis evaluated the first 250 YouTube videos identified using the keyword “Pneumatic Retinopexy”. Data collected included the number of views, likes, dislikes, video duration, content type (surgical or non-surgical), purpose, and upload source. Sources were categorized as healthcare professionals or patients. Video quality and educational value were assessed using the modified DISCERN (mDISCERN), Health on the Net Foundation (HONcode), Journal of the American Medical Association (JAMA), and Global Quality (GQ) scoring systems.
Results
Of the 250 videos screened, 194 were included. Median scores were 2 (range: 0-5) for mDISCERN, 2 (range: 0-8) for HONcode, 1 (range: 0-4) for JAMA, and 3 (range: 1-5) for GQ. Healthcare professionals uploaded 83.5% (n=162) of videos, while patients uploaded 16.5% (n=32). Videos uploaded by healthcare professionals received significantly higher quality ratings (p<0.001). Surgical content videos were longer and demonstrated higher quality scores compared to non-surgical videos (p<0.05). Correlation analysis revealed that higher numbers of views, daily view rates, and comments were positively associated with increased like rates.
Conclusion
This study demonstrates that the most reliable and educationally valuable PR videos on YouTube are primarily uploaded by healthcare professionals. Enhancing the availability of high-quality PR content on YouTube may significantly improve educational outcomes for both patients and healthcare providers.
Introduction
Rhegmatogenous retinal detachment (RRD) is the most common type of retinal detachment (RD) and can lead to significant visual sequelae. Among procedures used to treat RRD, pneumatic retinopexy (PR) is unique in that it can be performed in an office setting rather than an operating room (1).
There are several clear advantages of PR, including faster visual recovery, avoidance of systemic anesthesia, reduced risk of cataract formation, and lower procedural costs (1-4). However, PR may not be appropriate for eyes with certain high-risk conditions, such as aphakia, extensive lattice degeneration, or proliferative vitreoretinopathy. Clinical studies have demonstrated that PR achieves anatomical outcomes comparable to pars plana vitrectomy, and it may be preferable in specific patient groups due to its lower morbidity. Although randomized clinical trials and medium-sized observational studies support PR as an effective treatment, further large-scale studies are necessary to confirm these findings (2, 5-9).
In recent years, the internet has become an important source of medical information, with patients frequently utilizing it as a resource for obtaining health-related knowledge. YouTube currently ranks as the second most visited website worldwide (10). Usage of this platform continues to grow significantly, with an average of two billion active users per month and over one million videos uploaded daily (11). Medical videos on YouTube are frequently viewed, and approximately 80% of users discuss the information they acquire from these videos with their physicians (12). Moreover, 75% of patients report that YouTube videos influence their treatment decisions, particularly for chronic medical conditions (13). Despite these advantages, the use of YouTube for medical information has certain problematic aspects, such as patient-uploaded content, opinions shared without sufficient knowledge or expertise, promotional materials, inadequate information on contraindications and complications, and the absence of a regulated review process (14).
Although YouTube hosts a substantial amount of content offering information on various medical conditions and their treatment methods, no study has yet evaluated videos specifically related to PR. Therefore, the aim of this study is to assess the reliability, quality, effectiveness, and utility of YouTube videos pertaining to PR.
Materials and Methods
This retrospective, record-based, cross-sectional study was conducted by searching YouTube (www.youtube.com) on 15 September 2023 using the keyword “Pneumatic Retinopexy”. To minimize personalization of the search results, no personal YouTube or Google accounts were used, and browser and computer caches were cleared beforehand. A total of 250 videos were initially screened; only English-language videos were included, and each video was evaluated only once. All videos were independently reviewed by two ophthalmologists (S.E., M.U.), and any discrepancies were resolved by a third ophthalmologist (M.K.). Since the data were collected from publicly accessible videos and no patient-specific information was involved, approval from the local research ethics committee and patient consent were not required.
The study evaluated the following parameters: number of views; video duration (minutes); video age (time from upload until 15 September 2023); numbers of likes, dislikes, and comments; daily views; video type (with or without subtitles); content type (surgical vs. non-surgical); purpose (clinical knowledge, treatment procedure, or postoperative period); and source (patients, doctors, hospital institutions, or commercial health channels). The exclusion criteria are summarized in Figure 1.
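The study does not report exact formulas for the derived engagement metrics. Purely as a minimal illustrative sketch, assuming daily views are calculated as total views divided by video age in days (field names and the like-rate metric are hypothetical, not part of the study dataset), they could be computed as follows:

```python
from datetime import date

SEARCH_DATE = date(2023, 9, 15)  # date of the YouTube search

def derived_metrics(views: int, likes: int, upload_date: date) -> dict:
    """Compute video age and daily view rate for a single video.

    Assumes daily views = total views / video age in days; the exact
    definition used in the study is not stated, so this is illustrative.
    """
    age_days = max((SEARCH_DATE - upload_date).days, 1)  # avoid division by zero
    return {
        "video_age_days": age_days,
        "daily_views": views / age_days,
        "likes_per_1000_views": 1000 * likes / views if views else 0.0,
    }

# Example: a video uploaded on 1 January 2022 with 12,000 views and 150 likes
print(derived_metrics(12_000, 150, date(2022, 1, 1)))
```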
The quality and educational value of the videos were assessed using the Health on the Net Foundation (HONcode), modified DISCERN (mDISCERN), Journal of the American Medical Association (JAMA), and Global Quality (GQ) scoring systems. The HONcode was developed to enhance transparency and the trustworthiness of health information dissemination. Websites adhering to HONcode principles have been demonstrated to provide high-quality health information to users (15, 16). In this study, video quality was evaluated based on the eight original HONcode principles, with each principle scored as 1 for adherence and 0 for non-adherence, yielding a total HONcode score of 0 to 8. The JAMA scoring system was utilized to assess the reliability of video content (17). This widely used evaluation tool consists of four categories: authorship, attribution, disclosure, and currency, with each category scored as either 0 or 1 and a maximum score of 4 indicating the highest quality. The DISCERN instrument helps users evaluate the quality of written health information. In this study, video reliability and transparency were assessed using a modified five-point DISCERN scale (18), with each of five criteria adapted from the original DISCERN questionnaire scored as 0 or 1, yielding a total score of 0 to 5. Additionally, a GQ score was assigned to each video, rating overall quality on a five-point scale, with 1 representing poor quality and 5 representing excellent quality (Table 1) (19).
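To make the scoring ranges concrete, the sketch below aggregates the four instruments for a single video, assuming HONcode and JAMA items are recorded as binary adherence flags and mDISCERN and GQ totals are entered directly; the class and field names are illustrative, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class VideoScores:
    """Per-video ratings under the four instruments used in the study."""
    honcode_items: list[bool]   # 8 HONcode principles, 1 point each if met
    jama_items: list[bool]      # authorship, attribution, disclosure, currency
    mdiscern: int               # 0-5, modified five-item DISCERN total
    gq: int                     # 1-5 Global Quality score

    def totals(self) -> dict:
        assert len(self.honcode_items) == 8 and len(self.jama_items) == 4
        return {
            "HONcode": sum(self.honcode_items),  # 0-8
            "JAMA": sum(self.jama_items),        # 0-4
            "mDISCERN": self.mdiscern,           # 0-5
            "GQ": self.gq,                       # 1-5
        }

# Example: a video meeting 3 HONcode principles and 1 JAMA criterion
example = VideoScores([True, True, True, False, False, False, False, False],
                      [True, False, False, False], mdiscern=2, gq=3)
print(example.totals())
```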
Statistical Analysis
In this study, continuous variables were expressed as mean ± standard deviation or median (minimum-maximum), while categorical variables were summarized using frequencies and percentages. The normality of continuous variables was assessed using the Kolmogorov-Smirnov and Shapiro-Wilk tests. The Mann-Whitney U test was used for comparisons between two independent groups, whereas the Kruskal-Wallis H test was applied for comparisons involving three or more independent groups, depending on the data distribution. Categorical variables were analyzed using Pearson’s chi-square test, Fisher’s exact test, or the Fisher-Freeman-Halton test, as appropriate. Relationships between variables were evaluated by Spearman correlation analysis. All statistical analyses were conducted using IBM SPSS Statistics (version 28). A 95% confidence level was adopted, and p-values less than 0.05 were considered statistically significant.
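The analyses were performed in SPSS. Purely as an illustrative sketch using synthetic data (not the study dataset), the equivalent nonparametric comparisons and Spearman correlation could be run in Python with SciPy as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic example: GQ scores for physician- vs patient-uploaded videos
gq_physician = rng.integers(1, 6, size=40)
gq_patient = rng.integers(1, 6, size=15)

# Normality check (Shapiro-Wilk), as described in the Methods
print(stats.shapiro(gq_physician))

# Two independent groups: Mann-Whitney U test
print(stats.mannwhitneyu(gq_physician, gq_patient, alternative="two-sided"))

# Three or more groups: Kruskal-Wallis H test
gq_hospital = rng.integers(1, 6, size=20)
print(stats.kruskal(gq_physician, gq_patient, gq_hospital))

# Monotonic association between engagement metrics: Spearman correlation
views = rng.integers(100, 100_000, size=55)
likes = (views * rng.uniform(0.005, 0.02, size=55)).astype(int)
print(stats.spearmanr(views, likes))
```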
Results
In this study, we screened a total of 250 YouTube videos, of which 194 met the inclusion criteria and were included in the analysis. The median duration of the included videos was 9.4 minutes, and 49.5% were longer than 10 minutes. Table 2 provides a descriptive summary of the characteristics of the 194 analyzed videos.
Table 3 compares video characteristics across the upload sources. Significant differences were observed between groups in the number of comments, video purpose, surgical content, and all scoring metrics. Pairwise comparisons revealed statistically significant differences across all scoring metrics between videos uploaded by patients and those uploaded by physicians or hospital institutions.
Table 4 summarizes the comparative analysis of videos according to their content type. Notable differences were observed among groups regarding video length, number of comments, and all scoring parameters. Videos containing surgical content tended to have longer durations and demonstrated higher quality scores. Additionally, a positive and statistically significant correlation was identified between the number of likes and both the total number of views and the daily view ratio. Table 5 presents the correlation coefficients among all analyzed variables.
Discussion
The widespread use of YouTube, combined with the ease and free nature of video uploading, has made the platform a prominent resource for individuals seeking to share or access information. However, despite its potential benefits, YouTube can also facilitate the dissemination of inaccurate or potentially harmful information. For this reason, numerous studies, in ophthalmology and other specialties, have evaluated the reliability and quality of content available on YouTube (20-26). In an analysis of YouTube videos retrieved with the keyword “Meniscus”, Kunze et al. (20) concluded that videos related to meniscus injuries were generally of poor quality and low reliability. In another study focusing on retinitis pigmentosa, only 31.5% of the videos were found to contain valuable and scientifically accurate information (22). Sahin et al. (23) similarly reported the presence of negative, contradictory, and misleading information in YouTube videos related to retinopathy of prematurity. As a consequence of such misinformation, some patients may refuse specific treatments, while others may have unrealistic expectations regarding treatment success rates.
Previous studies have employed various scoring systems to assess the accuracy and reliability of online videos. In our study, the median mDISCERN, GQ, JAMA, and HONcode scores were 2, 3, 1, and 2, respectively. Similarly low quality scores have been reported in studies examining videos related to refractive and vitreoretinal surgeries, aligning closely with our results (27, 28).
Our analysis revealed a significant difference in the number of comments according to the source of the uploaded videos, with videos uploaded by patients receiving more comments (p=0.005). This may be because viewers with similar medical conditions prefer to engage with and learn from the experiences of other patients, who typically communicate without complex medical terminology. Consistent with our findings, previous research also indicates that videos uploaded by physicians, despite their higher reliability, tend to attract fewer views (29-31). The extensive scientific content, detailed explanations, and longer duration of physician-uploaded videos may contribute to their lower engagement rates, as reflected in fewer views and comments.
It has been established that videos uploaded by healthcare professionals are generally rated higher in terms of quality and reliability compared to those uploaded by patients. Additionally, patient-uploaded videos predominantly focus on postoperative experiences, whereas those uploaded by physicians and other healthcare providers typically emphasize the treatment process itself. This difference may stem from the fact that patients commonly share videos to explain their motivations for undergoing surgery and offer recommendations for postoperative head positioning, while healthcare professionals’ videos regarding PR typically adopt a more scientific approach, covering topics such as etiology, surgical techniques, treatment options, potential complications, and prognosis.
In our study, we identified a significant correlation between video length and both JAMA and GQ scores, which aligns with previous findings reported in the literature (32, 33). Specifically, longer videos typically offered more comprehensive explanations regarding surgical techniques, clinical information, postoperative care, and potential complications, suggesting they might possess greater educational value.
The daily view count is widely considered a critical indicator for evaluating a video’s relevance to current topics. Nevertheless, it has been proposed that integrating daily views with likes, dislikes, and comments may provide a more comprehensive and objective assessment (34). Our findings revealed a positive correlation between the daily view count and the total number of likes, dislikes, and comments, thus supporting this integrated assessment approach.
Study Limitations
This study has certain limitations that must be acknowledged. Firstly, the videos were evaluated at a single point in time. Given the dynamic nature of YouTube content, videos and the information they contain may evolve, potentially yielding different outcomes if assessed at a later date. Secondly, our analysis exclusively included English-language videos, which may limit the generalizability of our results. However, English remains the predominant language used on the internet.
Conclusion
In conclusion, this study is the first in the literature to evaluate the quality, utility, and reliability of YouTube videos concerning PR. Our findings indicate that videos labeled “Pneumatic Retinopexy” on YouTube generally demonstrate low content quality and reliability. To enhance the reliability and educational value of these videos as sources of information, it is essential that all relevant procedural details be accurately presented by qualified healthcare professionals.