Search Articles

Search Results (1 to 10 of 32 Results)


Convolutional Neural Network Models for Visual Classification of Pressure Ulcer Stages: Cross-Sectional Study

Taking random cropping as an example, the original image was resized to 512×512×3 and, after random cropping, reduced to 256×256×3. The augmented image was then resized to a fixed 224×224×3 before being input into the networks so that the network model could recognize it. Image augmentation: (A) original image, (B) horizontal flip, (C) vertical flip, and (D) random cropping. The PI images are RGB color images (Figure 3), with pixel intensity values in the range [0, 255] [16].
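The resize–crop–resize pipeline described in this snippet can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the nearest-neighbour resize stands in for whatever interpolation the study used, and the landmark sizes (512, 256, 224) are taken from the snippet.

```python
import numpy as np

def random_crop(img: np.ndarray, size: int, rng=None) -> np.ndarray:
    """Randomly crop a size x size patch from an H x W x C image."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return img[top:top + size, left:left + size]

def resize_nearest(img: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbour resize to size x size (stand-in for bilinear)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

original = np.zeros((512, 512, 3), dtype=np.uint8)  # image reset to 512x512x3
flipped_h = np.flip(original, axis=1)               # (B) horizontal flip
flipped_v = np.flip(original, axis=0)               # (C) vertical flip
cropped = random_crop(original, 256)                # (D) random 256x256x3 crop
network_input = resize_nearest(cropped, 224)        # fixed to 224x224x3
print(network_input.shape)  # (224, 224, 3)
```

In practice a library such as torchvision (`RandomCrop`, `Resize`, `RandomHorizontalFlip`) would implement these steps with proper interpolation.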

Changbin Lei, Yan Jiang, Ke Xu, Shanshan Liu, Hua Cao, Cong Wang

JMIR Med Inform 2025;13:e62774

Using Deep Learning to Perform Automatic Quantitative Measurement of Masseter and Tongue Muscles in Persons With Dementia: Cross-Sectional Study

However, manual and semi-automatic techniques are labor-intensive and time-consuming, making the image processing task for large studies difficult, expensive, and, most importantly, impractical to apply in a clinical setting. Therefore, in the present study, we aimed to use MRI scans of the head opportunistically to develop an automated deep learning method to evaluate sarcopenia.

Mahdi Imani, Miguel G Borda, Sara Vogrin, Erik Meijering, Dag Aarsland, Gustavo Duque

JMIR Aging 2025;8:e63686

Discrimination of Radiologists' Experience Level Using Eye-Tracking Technology and Machine Learning: Case Study

Rather, errors primarily stem from the methods radiologists use to visually inspect the image, referred to as perceptual errors [4]. In other words, perceptual errors in radiology are mistakes that occur during the visual inspection and interpretation of medical images. They are distinct from cognitive errors, which involve incorrect reasoning or decision-making based on observed information.

Stanford Martinez, Carolina Ramirez-Tamayo, Syed Hasib Akhter Faruqui, Kal Clark, Adel Alaeddini, Nicholas Czarnek, Aarushi Aggarwal, Sahra Emamzadeh, Jeffrey R Mock, Edward J Golob

JMIR Form Res 2025;9:e53928

Performance of an Electronic Health Record–Based Automated Pulmonary Embolism Severity Index Score Calculator: Cohort Study in the Emergency Department

Image of the best practice alert presented to clinicians, depicting the electronic Pulmonary Embolism Severity Index (ePESI) score. CT: computed tomography; DOAC: direct oral anticoagulant; HR: heart rate; Hx: history; O2 Sat: oxygen saturation, as measured by pulse oximetry; PE: pulmonary embolism; RR: respiratory rate; SBP: systolic blood pressure; Suppl O2: supplemental oxygen; Temp: temperature.
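For background, the scoring the ePESI automates can be sketched from the standard published PESI criteria: age in years plus fixed point increments for sex, comorbidities, and vital-sign thresholds. This sketch uses the original PESI point values, not the authors' EHR implementation, whose details are not given in the snippet.

```python
def pesi_score(age, male, cancer, heart_failure, chronic_lung_disease,
               hr, sbp, rr, temp_c, altered_mental_status, o2_sat):
    """Original PESI: age in years plus standard point increments."""
    score = age
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if hr >= 110 else 0          # HR: heart rate
    score += 30 if sbp < 100 else 0          # SBP: systolic blood pressure
    score += 20 if rr >= 30 else 0           # RR: respiratory rate
    score += 20 if temp_c < 36 else 0        # Temp: temperature
    score += 60 if altered_mental_status else 0
    score += 20 if o2_sat < 90 else 0        # O2 Sat: pulse oximetry
    return score

def pesi_class(score):
    """Map a PESI score to risk class I-V."""
    for upper, cls in ((65, "I"), (85, "II"), (105, "III"), (125, "IV")):
        if score <= upper:
            return cls
    return "V"

# Example: 70-year-old man, no comorbidities, HR 115, otherwise normal vitals.
s = pesi_score(70, True, False, False, False, 115, 120, 18, 37.0, False, 96)
print(s, pesi_class(s))  # 100 III
```

The appeal of an EHR-based calculator, as the study evaluates, is that every input above is already captured as structured data at triage.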

Elizabeth Joyce, James McMullen, Xiaowen Kong, Connor O'Hare, Valerie Gavrila, Anthony Cuttitta, Geoffrey D Barnes, Colin F Greineder

JMIR Med Inform 2025;13:e58800

Multiparametric MRI Assessment of Morpho-Functional Muscle Changes Following a 6-Month FES-Cycling Training Program: Pilot Study in People With a Complete Spinal Cord Injury

…client/server framework for the continuous collaborative improvement of deep-learning-based medical image… Reference 34: MRtrix3: a fast, flexible and open software framework for medical image processing and visualisation. Reference 35: Complex diffusion-weighted image estimation via matrix recovery under general noise models. Reference 38: Advances in functional and structural MR image analysis and implementation as FSL.

Alfonso Mastropietro, Denis Peruzzo, Maria Giovanna Taccogna, Nicole Sanna, Nicola Casali, Roberta Nossa, Emilia Biffi, Emilia Ambrosini, Alessandra Pedrocchi, Giovanna Rizzo

JMIR Rehabil Assist Technol 2025;12:e64825

Enhancing Medical Student Engagement Through Cinematic Clinical Narratives: Multimodal Generative AI–Based Mixed Methods Study

These images were generated using the Leonardo.ai platform [26], which harnesses the capabilities of the Stable Diffusion XL image–generating technology (Figure 3 and Multimedia Appendix 4). In an effort to maintain transparency and distinguish between real and AI-generated content, all images depicting real people were marked with an “AI-generated image” icon. This icon, chosen for its symbolic significance, is the spinning top from the movie “Inception.”

Tyler Bland

JMIR Med Educ 2025;11:e63865

Evaluating Bard Gemini Pro and GPT-4 Vision Against Student Performance in Medical Visual Question Answering: Comparative Case Study

The introduction of the image recognition feature further expands the horizon, opening up a new realm of applications in medical clinical practice and research [4]. Previous studies on LLMs have demonstrated their ability to pass medical licensing examinations [5-7]. However, these studies were often limited by the models’ restricted image analysis capabilities, leaving some questions unanswered [7].

Jonas Roos, Ron Martin, Robert Kaczmarczyk

JMIR Form Res 2024;8:e57592

The Application of Mask Region-Based Convolutional Neural Networks in the Detection of Nasal Septal Deviation Using Cone Beam Computed Tomography Images: Proof-of-Concept Study

Uniformity was maintained while cropping the coronal image fields in all the CBCT scans so that the anatomical landmarks were consistent. Each image was then cropped to a 200 × 400–pixel region, extending from the crista galli superiorly to the hard palate inferiorly, and 5 mm laterally from the lateral nasal wall on both sides (Figure 1A and 1B). The files were saved in JPEG format. Two maxillofacial radiologists classified the nasal septum images as normal or deviated.
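The landmark-anchored crop described above amounts to slicing a fixed window out of each coronal slice. A minimal sketch, assuming hypothetical pixel coordinates for the landmarks (the study derived them anatomically, from the crista galli and hard palate):

```python
import numpy as np

def crop_septum_region(slice_2d, top_row, bottom_row, mid_col, half_width):
    """Crop a fixed field between two row landmarks, centred on the septum."""
    return slice_2d[top_row:bottom_row,
                    mid_col - half_width:mid_col + half_width]

coronal = np.zeros((600, 600), dtype=np.uint8)  # placeholder CBCT coronal slice
# Hypothetical landmarks: crista galli at row 100, hard palate at row 500,
# septum midline at column 300; half-width 100 px gives a 200 x 400 px region.
patch = crop_septum_region(coronal, 100, 500, 300, 100)
print(patch.shape)  # (400, 200)
```

Keeping the window size fixed across scans, as the authors did, means every JPEG fed to the Mask R-CNN covers the same anatomical extent.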

Shishir Shetty, Auwalu Saleh Mubarak, Leena R David, Mhd Omar Al Jouhari, Wael Talaat, Natheer Al-Rawi, Sausan AlKawas, Sunaina Shetty, Dilber Uzun Ozsahin

JMIR Form Res 2024;8:e57335