Search Articles

Search Results (1 to 10 of 12 Results)

Feasibility and Acceptability of Pediatric Smartphone Lung Auscultation by Parents: Cross-Sectional Study

Flowchart showing the number of lung sound recordings and children considered throughout this study. The proportion of agreement among the three annotators regarding the quality of the physicians’ lung sound recordings (91%) was similar to that for the quality of parents’ lung sound recordings (85%), corresponding to a Fleiss κ of 0.66 (95% CI 0.59-0.72) and 0.57 (95% CI 0.51-0.63), respectively (Table 1).

Catarina Santos-Silva, Henrique Ferreira-Cardoso, Sónia Silva, Pedro Vieira-Marques, José Carlos Valente, Rute Almeida, João A Fonseca, Cristina Santos, Inês Azevedo, Cristina Jácome

JMIR Pediatr Parent 2024;7:e52540
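The snippet above quantifies annotator agreement with Fleiss κ (0.66 and 0.57). As a minimal sketch of how that statistic is computed from per-item category counts (pure Python; the function name and data layout are our own illustration, not code from the study):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among a fixed number of raters.

    ratings: one row per rated item; each row lists how many raters
    assigned the item to each category, e.g. [[3, 0], [2, 1], ...]
    for 3 raters and 2 categories (such as good/poor recording quality).
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])  # assumed constant across items
    n_categories = len(ratings[0])

    # Mean observed per-item agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items

    # Chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in ratings) for j in range(n_categories)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)

    return (p_bar - p_e) / (1 - p_e)
```

In the study's setting, each row would hold how many of the three annotators rated one recording as good versus poor quality; values near 0.6, as reported, are conventionally read as moderate to substantial agreement beyond chance.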

Peer Review of “Cross-Modal Sensory Boosting to Improve High-Frequency Hearing Loss: Device Development and Validation”

It is the person’s overall aversiveness to sound. Anyway, the data were not collected, so there is little to be done. “What was the rationale for the specifications for the audiogram?” “This was simply a general inclusion criterion to make certain we were capturing garden-variety presbycusis.” It would be useful for this to be mentioned. “The authors are associated both with Stanford University and the company Neosensory, which makes this device. This information is in the paper.”

Robert Eikelboom

JMIRx Med 2024;5:e55554

Authors’ Response to Peer Reviews of “Cross-Modal Sensory Boosting to Improve High-Frequency Hearing Loss: Device Development and Validation”

Response: With practice, the integration of vibrations with sound becomes automatic, not requiring constant awareness of which actuator is vibrating and the phoneme assigned to it. “The user is then able to understand...” Isn’t this yet to be shown, or is evidence provided in the next paragraph? If so, this needs to be made clearer. Response: We have added citations to previous published work to make this clear.

Izzy Kohler, Michael V Perrotta, Tiago Ferreira, David M Eagleman

JMIRx Med 2024;5:e55510

Cross-Modal Sensory Boosting to Improve High-Frequency Hearing Loss: Device Development and Validation

Individuals with high-frequency hearing loss struggle to hear consonants with higher-frequency sound components, such as s, t, and f. As a result of the hearing loss, speech is reported as sounding muffled, most noticeably in noisy environments. Commonly, people with high-frequency hearing loss will report that they can hear but cannot understand [8].

Izzy Kohler, Michael V Perrotta, Tiago Ferreira, David M Eagleman

JMIRx Med 2024;5:e49969

Development and Validation of a Respiratory-Responsive Vocal Biomarker–Based Tool for Generalizable Detection of Respiratory Impairment: Independent Case-Control Studies in Multiple Respiratory Conditions Including Asthma, Chronic Obstructive Pulmonary Disease, and COVID-19

Approaches like these use acoustic features (“how you sound”) and linguistic analysis (“what you say”). Several groups have investigated voice-based identification of COVID-19, applying machine learning– and artificial intelligence–based methods to labeled data sets of voice recordings from patients with COVID-19 and control groups [9-23].

Savneet Kaur, Erik Larsen, James Harper, Bharat Purandare, Ahmet Uluer, Mohammad Adrian Hasdianda, Nikita Arun Umale, James Killeen, Edward Castillo, Sunit Jariwala

J Med Internet Res 2023;25:e44410

Real-Time Detection of Sleep Apnea Based on Breathing Sounds and Prediction Reinforcement Using Home Noises: Algorithm Development and Validation

When an apnea event occurs, no breath sound can be heard because breathing has ceased; when the apnea ends, the airway reopens and a loud breath sound can be produced. In hypopnea, unlike snoring, the airway narrows without vibrating, so the breathing sound is expected to become quieter and more irregular. It should therefore be possible to detect respiratory events from the sounds produced during sleep.

Vu Linh Le, Daewoo Kim, Eunsung Cho, Hyeryung Jang, Roben Delos Reyes, Hyunggug Kim, Dongheon Lee, In-Young Yoon, Joonki Hong, Jeong-Whun Kim

J Med Internet Res 2023;25:e44818
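The reasoning in the snippet above (apnea as sustained near-silence, hypopnea as quieter, irregular breathing) can be illustrated with a toy amplitude screen. This is only a sketch of the intuition, assuming a raw list of audio samples; the paper's actual detector is a learned model, and every name below is our own:

```python
def detect_apnea_events(samples, sample_rate, window_s=1.0,
                        quiet_ratio=0.1, min_duration_s=10.0):
    """Flag apnea-like silences: runs of near-silent windows lasting
    at least min_duration_s. Returns (start_s, end_s) tuples.
    Illustrative only, not the published algorithm."""
    win = int(window_s * sample_rate)
    # RMS energy of each non-overlapping window
    rms = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        rms.append((sum(x * x for x in chunk) / win) ** 0.5)
    if not rms:
        return []
    # "Quiet" is judged relative to the loudest breath in the recording
    threshold = quiet_ratio * max(rms)
    events, run_start = [], None
    for i, r in enumerate(rms + [float("inf")]):  # sentinel closes final run
        if r < threshold and run_start is None:
            run_start = i
        elif r >= threshold and run_start is not None:
            if (i - run_start) * window_s >= min_duration_s:
                events.append((run_start * window_s, i * window_s))
            run_start = None
    return events
```

A fixed relative threshold like this would be confounded by household sounds overlapping the silence, which is precisely the problem the paper's “prediction reinforcement using home noises” is designed to address.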