Assessment of confidence in medical writing

Writing effectively and publishing study results in peer-reviewed journals are essential for sharing findings globally. Success in this process builds personal credibility and aids career advancement. To succeed, authors must adhere to guidelines from organizations such as the International Committee of Medical Journal Editors (ICMJE) and master the English language skills that academic writing demands. These challenges are magnified for non-native English speakers and for those unfamiliar with publication ethics, and such obstacles are common in health research.

Graduate programs in health sciences often overlook writing and publishing skills, necessitating extracurricular workshops. However, the impact of such sessions on participants’ confidence has remained unassessed for lack of a reliable measurement tool. Confidence is vital for authors’ productivity and success. A standardized instrument could assess confidence both in writing the different sections of a medical article and in using appropriate academic English. Such a tool could guide workshop content and help evaluate workshop effectiveness.

To address this need, a study was conducted to develop and assess the first measurement tool for authors’ confidence in medical writing and English language skills.

In our search for an instrument to assess confidence in medical writing and English language usage, we conducted a thorough literature review but found no suitable tool. Therefore, we created a new one by compiling items from various sources. We used our experience from conducting medical writing workshops and interacting with participants to identify common areas of confidence and concern. Additionally, we consulted medical journal editors and other experts in the field to gather input on potential items for the tool. We collaborated with university professors who had extensive experience in facilitating medical writing workshops to ensure comprehensive coverage of relevant topics.

This process resulted in a pool of 50 items, which were refined through multiple revisions to eliminate redundancy and ensure clarity. Eventually, we settled on a 37-item tool with two domains. A panel of five experts, including Editors-in-Chief of medical journals, workshop presenters, and educators, assessed the items. Based on their feedback, one item related to writing clinical implications was removed because it was deemed potentially irrelevant to all types of medical articles.

We developed a Google form to assess confidence levels in two areas: selecting appropriate content for medical articles and using academic English. Each area had 18 items rated on a 5-point Likert scale. The form takes less than 8 minutes to complete.

To test the form’s reliability, we contacted participants from previous workshops and explained the purpose of our study. We obtained their consent for participation and assured them of data confidentiality. Participants were offered priority for future workshops as an incentive. We sent the form via email and WhatsApp, asking them to complete it within 2 days. After 1-2 weeks, we followed up with a reminder. The form also collected demographic information.

We created a tool with two sections to gauge confidence in writing a standard medical article and in using proper English. Each section contains 18 items. The first section assesses confidence in writing the various parts of a medical article, from the cover letter to the conclusion. The second section evaluates confidence in using active verbs, short sentences, simple words, gerunds, conjunctions, and related constructions. The tool showed high reliability, with internal consistency (Cronbach’s α) of about 0.97 and test-retest reliability of 0.926. These results suggest the tool measures a single construct of confidence in academic writing, with all items contributing to that construct, and that scores are unlikely to change without actual changes in writing confidence.

With a Content Validity Index (CVI) of 0.75 and a convergent validity of 0.79, the tool demonstrates acceptable validity. Factor analysis confirmed unidimensionality, indicating that all items measure the same property, which strengthens the scale’s construct validity. Unidimensionality also supports the reliability estimates, since Cronbach’s α presumes that the items reflect a single construct. These findings support the selection of the domains and the overall tool’s suitability.
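As an illustration of the kind of unidimensionality check described above, the eigenvalues of the inter-item correlation matrix can be inspected: when one factor dominates, the first eigenvalue is far larger than the rest (a scree-type criterion). This sketch uses simulated, hypothetical data, not the study's; dedicated factor-analysis software would give a fuller picture.

```python
import numpy as np

# Hypothetical responses: 300 respondents, 12 Likert-style items that all
# load on a single latent "confidence" trait plus item-specific noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=300)
items = latent[:, None] + 0.6 * rng.normal(size=(300, 12))

# Eigenvalues of the inter-item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]

# Share of total variance carried by the first factor; a dominant first
# eigenvalue is consistent with unidimensionality.
explained = eigvals[0] / eigvals.sum()
print(round(explained, 2))
```

With these simulated loadings, the first eigenvalue accounts for well over half of the total variance, the pattern one would expect from a unidimensional scale.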

This study presents the first reliable and valid tool for assessing participants’ confidence in medical writing, including confidence in using academic English, in workshops and similar educational settings. Strengths include recruitment of experts to establish content validity, assessment of internal consistency and test-retest reliability, and confirmation of construct validity through factor analysis. Limitations include the inability to test criterion and discriminative validity, because no comparable tool exists; future studies could address this by comparing new instruments against this one. A larger sample was also not feasible, despite considerable recruitment efforts.

Many studies of the impact of educational sessions on medical writing have relied on qualitative approaches or surveys to gauge participants’ feelings and apprehensions, and quantitative studies often report no information on the psychometric properties of the instruments used. One study, by Cargill et al., assessed participants’ confidence in a writing program and did report the instrument’s psychometric properties. Another, by Gardner et al., used a mixed-methods approach but provided no psychometric information on its instruments.

To ensure content validity, an expert panel reviewed the items and a Content Validity Index (CVI) was calculated. Only one item required adjustment: it was deemed limited to clinical studies, and removing it enhanced the tool’s usability across contexts. The CVI was 0.75, indicating acceptable content validity.
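The CVI calculation can be sketched as follows. This is an illustration of the standard convention (an item-level CVI is the proportion of experts rating the item relevant, commonly 3 or 4 on a 4-point relevance scale, and the scale-level CVI averages the item-level values); the ratings below are hypothetical, not the study's data.

```python
def item_cvi(ratings):
    """Proportion of experts rating the item relevant (3 or 4 on a 4-point scale)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(all_ratings):
    """Scale-level CVI: the average of the item-level CVIs."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

# Five hypothetical experts rating four hypothetical items:
ratings = [
    [4, 4, 3, 3, 4],  # 5 of 5 experts rate it relevant -> I-CVI 1.0
    [4, 3, 2, 3, 3],  # 4 of 5 -> I-CVI 0.8
    [2, 3, 4, 1, 3],  # 3 of 5 -> I-CVI 0.6
    [3, 2, 2, 3, 2],  # 2 of 5 -> I-CVI 0.4
]
print(round(scale_cvi(ratings), 2))  # average of 1.0, 0.8, 0.6, 0.4 -> 0.7
```

Items with low item-level CVIs are the candidates for revision or removal, as with the single clinically limited item described above.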

Cronbach’s α was used to assess internal consistency, with values above 0.90 indicating strong consistency. The tool’s high number of items likely contributed to this. Test-retest correlations were also high, suggesting reliability.
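The two reliability statistics mentioned here can be computed as below. This is a generic sketch on simulated, hypothetical data (not the study's): Cronbach's α from the standard variance formula, and test-retest reliability as the correlation between total scores from two administrations.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents answering 18 items driven by one trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = latent[:, None] + 0.5 * rng.normal(size=(200, 18))
print(round(cronbach_alpha(items), 2))  # high alpha expected for such items

# Test-retest: correlate total scores across two administrations, here
# simulated by re-measuring the same respondents with extra noise.
retest = items + 0.3 * rng.normal(size=items.shape)
r = np.corrcoef(items.sum(axis=1), retest.sum(axis=1))[0, 1]
print(round(r, 2))
```

As the text notes, α also rises with the number of items, so a 0.97 value on an 18-item section partly reflects scale length as well as item homogeneity.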

Developing a reliable and valid measurement tool for assessing confidence in medical writing is crucial. This tool can accurately measure confidence levels in writing medical articles and using appropriate English language. Future research could focus on confidence in publishing issues beyond writing, such as journal selection and ethical concerns.


Source:

Astaneh B, Raeisi Shahraki H, Astaneh V, Guyatt G (2024) Assessment of confidence in medical writing: Development and validation of the first trustworthy measurement tool. PLoS ONE 19(4): e0302299. https://doi.org/10.1371/journal.pone.0302299