Usability Testing of a Patient-Centered Mobile Health App for Supporting and Guiding the Pediatric Emergency Department Patient Journey: Mixed Methods Study

Original Paper

1Faculty of Medicine, University of Geneva, Geneva, Switzerland

2Division of Medical Information Sciences, University Hospitals of Geneva, Geneva, Switzerland

3Department of Pediatric Emergency Medicine, Geneva Children's Hospital, University Hospitals of Geneva, Geneva, Switzerland

*these authors contributed equally

Corresponding Author:

Jessica Rochat, MSc

Faculty of Medicine

University of Geneva

Rue Gabrielle-Perret-Gentil 4

Geneva, 1205

Switzerland

Phone: 41 793925251

Email: Jessica.Rochat@unige.ch


Background: Patient experience in emergency departments (EDs) often remains suboptimal and can be a source of stress, particularly in pediatric settings. In an attempt to support patients and their families before, during, and after a visit to a pediatric ED, a mobile health (mHealth) app was developed by a multidisciplinary team based on patient-centered care principles.

Objective: This study aims to evaluate the usability (effectiveness, efficiency, and satisfaction) of a new mHealth app, InfoKids, by potential end users through usability testing.

Methods: The app was assessed through an in-laboratory, video-recorded evaluation in which participants had to execute 9 goal-oriented tasks, ranging from account creation to the reception of a diagnostic sheet at the end of the emergency care episode. Effectiveness was measured based on the task completion rate, efficiency based on the time on task, and user satisfaction according to answers to the System Usability Scale questionnaire. Think-aloud usability sessions were also transcribed and analyzed. Usability problems were rated for severity and categorized according to ergonomic criteria.

Results: A total of 17 parents participated in the study. The overall completion rate was 97.4% (149/153). Overall, participants demonstrated good effectiveness, with tasks successfully completed in 88.2% (135/153) of cases (95% CI 83%-93%). Each task, with the exception of the first, created difficulties for some participants but could still be completed by most. Users reported an overall good to excellent perceived usability of the app. However, the ergonomic evaluation identified 14 usability problems occurring 81 times. Among these, 50% (7/14) were serious, as their severity was rated as either major or catastrophic, indicating areas of improvement for the app. From the usability improvements suggested by participants, mitigation measures were listed to further improve the app and avoid barriers to its adoption.

Conclusions: Usability of the InfoKids app was evaluated as good to excellent by users. Areas of improvement were identified, and mitigation measures were proposed to inform its development toward a universal app for all ED patients visiting a digitalized institution. Its contribution could also be useful in paving the way for further research on mobile apps aimed at supporting and accompanying patients in their care episodes, as research in this area is scarce.

JMIR Pediatr Parent 2022;5(1):e25540

doi:10.2196/25540


Background

An emergency department (ED) visit is often the first point of contact between patients and a health care institution and thus a showcase of its efficiency. Providing patients with a positive experience should be a high priority [1] and is one of the fundamental determinants of health care quality [2]. In a recent meta-synthesis, Graham et al [1] conceptualized a model of the most commonly identified drivers of the ED patient experience: interpersonal and informational communication, patients’ expectations and empowerment, recognition of emotional needs, actual and perceived waiting times, competent care, and physical and environmental needs [1]. A similar conceptual framework was developed by Sonis et al [3,4]. The same drivers have been observed in studies identifying the determinants of patient and family experience in pediatric EDs [5-11]. This highlights the essential nature of these drivers and the attention that should be paid to them when implementing an intervention to improve the adult or pediatric ED patient experience and ED efficiency. Several recent reviews have demonstrated a strong correlation between a positive ED patient experience and a range of benefits at the individual and institutional levels. These include increased therapeutic compliance [12]; improved clinical outcomes [1,13,14]; outpatient [15], inpatient [16], and staff satisfaction [12]; reduced complaints and medicolegal risks [17]; institutional profitability and reputation in the community [12,18,19]; and other health care system goals [13].

Unfortunately, the hectic, unpredictable, crowded, demanding, and time-pressured environment of the ED may adversely affect patient experience [13]. In particular, there is strong pressure from public and institutional leaders to alleviate overcrowding and long waiting times experienced in the ED [20]. Overcrowding because of nonurgent visits negatively impacts the quality of care and patient safety (prolonged waiting times, delays in diagnosis and treatment, delays in treating seriously ill patients, and medication errors). It also affects the costs of care and patient experience. For hospitals, crowding results in loss of revenue because of patients leaving the ED without being seen, diversion of EDs secondary to patient dissatisfaction, and shifting of the market share to competitors [21]. Moreover, overcrowding exposes ED staff to stressful and unpredictable work-related events, resulting in decreased productivity and increased turnover [22,23].

The body of literature assessing conventional intervention strategies aimed at improving these specific ED issues is highly heterogeneous [24-34]. Proposed interventions vary widely and often require major structural or organizational changes that are not necessarily easily scalable to all hospitals. Importantly, few address the aforementioned drivers of the ED experience in an integrative manner along the entire patient journey. Successfully addressing these dimensions requires enlisting patients and families as allies in designing, implementing, and evaluating care systems through patient-centered care approaches [35]. One solution to the serious challenges facing the ED today may be found in information technologies, which have the potential to both reduce institutional burdens and improve patients’ experience [36]. Supported by the rapid spread of mobile devices in the community and their innovative features (eg, versatile connectivity, on-board computing and communication capabilities, privacy, and small size), mobile apps may provide such a solution within easy reach of end users. However, to date, there is a lack of studies on the potential use of mobile apps to individually support the entire emergency care journey. On the basis of this finding and guided by the principles of patient- and family-centered care [5,35], we developed InfoKids [37], an integrated eHealth solution composed of 3 modules connecting patients, caregivers, and administrative clerks through a web and mobile app, with the aim of supporting the entire emergency care process, thus facilitating caregiving and administrative work and streamlining the arrival of patients in the ED [38]. This system is freely available at Geneva University Hospitals, Geneva, Switzerland, for pediatric patients and is expected to be redesigned soon to cover the entire population seeking ED care (ie, adult, geriatric, and gynecologic patients) in a service area of more than 1 million individuals. Before scaling up the app to such a large population, an essential step in determining the potential success of this patient-centered eHealth intervention was to assess its capacity to meet end users’ needs and improve health care at our institution before clinical effectiveness testing [39-42].

Objective

This study aims to evaluate the usability of the InfoKids mobile app in supporting the patient’s entire ED journey through quantitative and qualitative usability metrics in a laboratory setting. We then aim to identify potential problems related to its use and formulate mitigation measures to inform both the development of its upcoming version, a universal app for all ED outpatient consultations in our hospitals, and future mobile app development in this medical field by other research groups.


Study Design

The usability of the app was assessed through a scenario-based, summative evaluation of human-computer interactions using a mixed methods approach [43]. Multitask quantitative and qualitative usability metrics were used and are described in detail in subsequent sections.

Definition of Usability

Usability is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [44]. The usability of a mobile app can be measured by the completeness and success with which users solve specified tasks centered on the main features of the app. Conversely, systems with poor usability can lead to low goal-achievement efficiency or to the technology not being used [45].

Participants and Setting

The study was conducted in a medical informatics usability laboratory room at Geneva University Hospitals to standardize the intervention and technically facilitate measurements. The evaluation framework was a user-task-system interaction, deliberately omitting the users’ real environments [43]. Tasks were performed on an LG G5 mobile phone with a 5.3-inch screen, a resolution of 2560×1440 pixels, and the Android 7 operating system. According to recommendations on the minimum sample size required to conduct a summative evaluation, at least 15 participants were to be recruited [46]. Participants were recruited through advertisements posted in Facebook groups and displayed at the Geneva University Medical Center. Participation was open to adults with children of pediatric age (0-16 years). Exclusion criteria were non–French-speaking persons and prior use of the app.

InfoKids Mobile App

Overview

The app was developed by a multidisciplinary team using a user-centered design approach to support each dimension of patient-centered care [37], an important approach when developing a mobile health (mHealth) tool for patients. Patient-centered care is primarily defined as considering the needs and values of each patient and helping them engage more actively in shared decision-making about their care [35,47]. Such patient involvement is a key element of high-quality health care [48].

A needs analysis guided by the Picker Institute’s patient-centered care dimensions was conducted among patients and their relatives to identify the specific requirements for the app [49]. System specifications were also identified and translated into functionalities based on the collected needs of pediatric emergency physicians and nurses and on observations of the workflow of caregivers and administrative clerks (Figure 1) [37]. Observations were performed to map out a generic patient journey [37,50]. Improvements were identified from this upstream work and incorporated into the app. In the earlier stages of the app’s development, heuristic evaluations were performed by 3 ergonomics experts following the guidelines of Nielsen and Mack [51] to identify any problems and correct them before proceeding with usability testing. In its current version, the InfoKids mobile app is designed to support parents throughout their entire journey in the pediatric ED, that is, from the onset of the first symptoms to the return home. The interface was designed with hedonic elements to make it more enjoyable and thereby increase its acceptance. The app is available to the local community as a free download from the Apple App Store and Google Play Store.

Figure 1. The InfoKids app process.
Preconsultation Stage

The app guides parents through a hierarchical organization of symptoms with medical advice on the actions to take; that is, manage the symptom at home, visit a private practitioner, or visit the ED. The classification of symptom terminology was established through a card-sorting study [52]. First, this allows parents to make better decisions on how to deal with symptoms and whether to consult. Second, the app contains educational videos aimed at answering the most common questions that parents may have when visiting the ED. Third, it emotionally supports patients by avoiding unrealistic expectations through a real-time display of ED waiting room occupancy. Occupancy is represented by the metaphor of a road on which patients are represented as queuing cars (Figure 2). According to the Canadian Triage and Acuity Scale [53], 5 levels of emergency are represented by the 5 lanes displayed on the screen. Each patient is represented by a car in the sequential order of arrival from right to left in each lane, the leftmost being the most recent arrivals. Patients with the highest level of urgency are represented by an ambulance rather than a car. Notably, the same view is displayed on a large television screen hanging on the wall of the ED waiting room. The app also provides a graphic forecast of daily occupancy based on statistics from the 5 previous days. This allows a better distribution of visits throughout the day by offering patients the possibility to consult during the least busy periods (Figure 3) and to better perceive expected wait times before being seen by a physician. Finally, the app provides guidance to the hospital location through GPS features and informs the hospital in real time of the patient’s upcoming arrival.
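As an illustration of the lane metaphor and the forecast computation, the short Python sketch below models a 5-lane CTAS waiting-room view and a 5-day average forecast. This is a reconstruction for explanatory purposes only; the class, field, and function names are our own assumptions and do not reflect the actual InfoKids implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from statistics import mean

CTAS_LEVELS = [1, 2, 3, 4, 5]  # 1 = highest urgency (red) ... 5 = lowest (blue)

@dataclass
class WaitingPatient:
    arrival: datetime
    ctas_level: int  # Canadian Triage and Acuity Scale level (1-5)

@dataclass
class WaitingRoomView:
    # One lane per CTAS level; patients are rendered right to left in order
    # of arrival, with level-1 patients drawn as ambulances rather than cars.
    lanes: dict[int, list[WaitingPatient]] = field(
        default_factory=lambda: {level: [] for level in CTAS_LEVELS}
    )

    def add(self, patient: WaitingPatient) -> None:
        lane = self.lanes[patient.ctas_level]
        lane.append(patient)
        lane.sort(key=lambda p: p.arrival)  # keep the sequential arrival order

def hourly_forecast(history: dict[int, list[int]]) -> dict[int, float]:
    """Forecast hourly occupancy as the mean of the 5 previous days.

    `history` maps an hour of day (0-23) to the occupancy counts observed
    at that hour on each of the last 5 days.
    """
    return {hour: mean(counts) for hour, counts in history.items()}
```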

Figure 2. Screenshot of the InfoKids mobile app displaying the emergency department occupancy in the waiting room in real time. The Canadian Triage and Acuity Scale categorizes patients by both injury and physiological findings and ranks them by severity from 1 (highest, red) to 5 (blue). By clicking the 144 icon, the user is connected directly to the national emergency call center. HUG: Hôpitaux Universitaires de Genève.
Figure 3. Screenshot of the InfoKids mobile app displaying forecasts of daily occupancy based on the statistics of the previous 5 days. The vertical graduation from green (bottom) to red (top) indicates the expected daily occupancy rate from low to high. HUG: Hôpitaux Universitaires de Genève.
Per-Consultation Stage

When parents decide to consult, they can inform the ED of their arrival with a simple click. By doing so, administrative entries recorded in advance within the app are automatically and securely communicated to the hospital. This aims to empower the patient as warrantor of the quality of the administrative data stored in the clinical information system and to reduce the risk of patient misidentification [54]. It also aims to improve the efficiency of ED organization by shifting the paradigm from an impromptu influx of patients arriving at the door to anticipated arrivals, allowing better management of medical resources. In addition, after triage and when appropriate, patients with nonurgent conditions are offered the possibility to leave the ED temporarily without losing their position in the waiting queue; they are then called back by semiautomated phone messages as soon as a physician is available. These features enable the hospital to act upstream to regulate patient flow and overcrowding through a more judicious allocation of health care resources, such as a more rational distribution of caregivers and consultation rooms.

Postconsultation Stage

At the time of discharge, the app automatically sends an informative sheet based on the patient’s diagnosis, thus ensuring personalized follow-up. Each sheet offers clear explanations regarding the current condition or trauma, the appropriate treatment, the prerequisites for a return to the community, and the symptoms that require medical attention. The quality and safety of the information provided rely on a core information library supplied by pediatric emergency physicians and endorsed by Geneva University Hospitals. All these features (Table 1) are explained in an in-app tutorial composed of pop-ups and videos.

Table 1. Summary of InfoKids functionalities per stage of consultation.
Stage | Functionality | Goal | User actions
Preconsultation | Creation of a user profile | Securely share patient information with the hospital. | Enter parent and child legal information (identity, postal address, insurance, etc) and health records.
Preconsultation | Tutorial | Explain how to use the app and how a consultation at the ED takes place. | Browse the tutorial.
Preconsultation | Real-time visualization of ED waiting room occupancy | Assist in deciding on the most appropriate time to consult at the ED. | Visualize occupancy and forecasts.
Preconsultation | Symptoms decision tree classifier | Help with the decision to consult and improve the patient experience. | Identify the symptoms and obtain advice on how to manage them.
Preconsultation | Guidance | Find the ED location (GPS). | Follow the GPS.
Per-consultation | ED informed upon patient arrival | Anticipate the patient’s arrival. | Confirm departure.
Per-consultation | Symptoms, chronic illnesses, allergies, and usual treatments entered by the parent are automatically communicated to the ED | Empower the patient as warrantor of the quality of the administrative data stored in the clinical information system; reduce patient misidentification. | Enter the child’s administrative and personal data in the app beforehand; this information is sent automatically, by a simple click, when departure to the ED is announced.
Per-consultation | Temporarily leave the ED while waiting for a scheduled consultation | Reduce the waiting time and improve the patient experience. | Accept the legal discharge document allowing a temporary leave from the ED.
Postconsultation | Personalized diagnostic sheet | Improve therapeutic adherence and the patient experience. | Access the diagnostic and therapeutic follow-up.

ED: emergency department.

Procedure

Participants were invited by email to individual sessions. The study procedure was explained to them upon arrival at the evaluation laboratory, and written informed consent was obtained from all participants. After completing a baseline questionnaire on demographics and experience with smartphones, the participants were asked to imagine that they had heard about the InfoKids app and to follow a scripted, timed, and standardized scenario. The scenario was developed to sequentially guide the user through 9 goal-oriented tasks covering the main functionalities of the app (Textbox 1; Multimedia Appendix 1). The sequence of tasks reflected the sequence of actions that parents seeking medical advice for a sick child with worrying symptoms at home would have to perform. For reasons related to the study design and use of the app, the possibility of temporarily leaving the ED while waiting for a scheduled consultation was not evaluated and will be the subject of further research. For greater realism, the dates and times were adapted to the time of the experiment. No training on the app was offered before the evaluation to avoid preparation bias, and participants were not given any assistance in completing the tasks. Study investigators intervened only to encourage participants to keep talking during the intervention, thus avoiding biased results and minimizing disruption of participants’ thoughts. The participants were informed that their interactions with the app and their verbal exchanges would be video recorded.


  • Task 1: open the app, enter your personal data as requested, and accept the terms of use.
  • Task 2: create a profile for your child and close the app (Multimedia Appendices 2 and 3).
  • Task 3: imagine that 2 days later, your child has a cough and you are seeking medical advice. Open the app and look for advice (Multimedia Appendix 4). Read the tips on what you can do at home to manage the situation on your own. Also read the tips on when you should go to the pediatrician in the next 24 hours. Close the app.
  • Task 4a: 1 week later, you plan to go to the pediatric emergency room because of the worsening of your child’s cough and health condition. You are wondering about the current emergency room occupancy and want to see how busy the waiting room is (Figure 2). The date is (date of examination), current time is (time of examination). Are there many people in the emergency department (ED)? Can you describe what the cars represent on the screen? Can you describe what the different lines represent?
  • Task 4b: Does occupancy in the ED over the last few days allow you to predict whether the wait on that day will be long? Can you describe what the graph represents (Figure 3)?
  • Task 5: you decide to go to the ED with your child. Inform the ED of your arrival and return to the home page (Multimedia Appendix 5).
  • Task 6a: you are seeking information on the location of the ED. Go to the tutorial to find information on how to use the mapping tool (GPS).
  • Task 6b: after viewing the tutorial, indicate the location of the ED building on the map and return to the home page (Multimedia Appendix 6).
  • Task 7: you went to the emergency room and came home. You receive a notification on your app regarding the diagnosis made in the ED and read it. What is the physician’s diagnosis? What home care information is necessary?
Textbox 1. Goal-oriented test tasks.

To understand participants’ thoughts, the concurrent think-aloud method was applied by asking them to verbalize their thinking during task completion [55]. Upon completion of the scenario, the moderator debriefed each participant following a semistructured interview grid, with the aim of assessing the overall experience with the tool and desired usability improvements, and conducted a retrospective think-aloud to analyze the difficulties encountered and understand their causes [56]. Finally, to assess user satisfaction, the participants were asked to complete the System Usability Scale (SUS) questionnaire [57,58].

Scenario

A pediatric emergency physician (JNS) wrote a credible and standardized scenario based on these tasks, which was then screened and approved by 2 ergonomists (JR and AR) at the evaluation laboratory. In the scenario, the participant decides to install the app in the eventuality that an ED visit might become necessary. Shortly after, the participant (ie, the parent) needs to use the app for the first time following the onset of a cough in their child. A week later, when the cough and the child’s health have deteriorated, the parent has to use the app again to be guided and supported in going to the ED with the child.

Usability Analysis

Quantitative Evaluation

The participant’s task performance was measured by the following metrics:

  1. Effectiveness is defined as the accuracy and completeness with which users achieve the specified goals [44]. It was calculated in three different ways (see the computational sketch after this list):
    • Task completion rate (TCR) per participant, that is, the percentage of tasks successfully completed, whether with ease or difficulty [59]:

TCR per participant = (number of tasks completed successfully / total number of tasks undertaken) × 100 (1)

    • TCR per task, that is, the percentage of participants who successfully completed a given task, whether with ease or difficulty [59]:

TCR per task = (number of participants who completed the task successfully / total number of participants) × 100 (2)

    • Distribution of task success by task, defined as the proportion of participants completing a task at each of three possible levels of achievement: (1) completed with ease, when the user completed the task without any errors or difficulties; (2) completed with difficulty, when the task was completed but with difficulties that the participant could resolve; and (3) failed to complete, when the task was left incomplete or abandoned or the participant gave incorrect answers.
    In all three measures, a task that could not be started and evaluated (eg, because of a problem with the Wi-Fi connection) was coded as nonavailable.
  2. Efficiency is defined as the level of resource use required for users to achieve specified goals in relation to accuracy and completeness [44]. This is calculated in three different ways:
    • Time on task is defined as the average amount of time (in seconds) taken to complete a given task from the moment the participant finished reading the instructions until the task was completed (whether with ease or with difficulty) or abandoned.
    • Time-based efficiency (TBE) is defined as the time spent by users in absolute value to ensure the accurate and complete achievement of tasks using the 2 equations described in a study by Ben Ramadan et al [59].
    • Overall relative efficiency (ORE) is defined as the ratio of the time spent by effective users to ensure accurate and complete achievement of tasks to the total time taken by all users (ie, including the time spent by ineffective users) using the 2 equations described in a study by Ben Ramadan et al [59].
  3. Satisfaction was measured by administering the SUS questionnaire designed by Brooke [57,58], a highly robust and versatile tool for measuring participants’ subjective assessment of usability [60]. The SUS is a 10-item questionnaire (Multimedia Appendix 7) with 5 response options per item, based on the level of agreement from 1 (strongly disagree) to 5 (strongly agree). Following the Brooke scoring system [57,58], for the odd-numbered statements 1, 3, 5, 7, and 9 (positively worded items), the score contribution equals the scale position minus 1 (eg, strongly agree: 5−1=4). For the even-numbered statements 2, 4, 6, 8, and 10 (negatively worded items), the score contribution equals 5 minus the scale position (eg, strongly agree: 5−5=0). Each score contribution thus falls within the range of 0 to 4. The contributions are summed and multiplied by 2.5 to convert the original 0-40 range to 0-100. Although scores range from 0 to 100, they are not percentages of usability and should be interpreted only in terms of their percentile ranking. To obtain an SUS score of 100, a respondent must answer 5 to all odd-numbered items and 1 to all even-numbered items. A score is generally considered good from 75 upward and fair between 50 and 75; a score below 50 reveals strong disagreement in terms of satisfaction [60]. As the participants were French speaking, the French translation of the questionnaire was used [61]. As age could be a potential confounder correlated with usability scores [60], we also analyzed SUS scores according to 2 age categories (≤40 years and >40 years).
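As a worked illustration of these metrics, the following Python sketch implements the TCR equations and the SUS scoring rules exactly as described above; the time-based and overall relative efficiency functions reflect our reading of the 2 equations attributed to Ben Ramadan et al [59] (success-weighted goals per second, and the share of total task time spent by successful users) and should be treated as an assumption rather than a verbatim reproduction. All function names are ours.

```python
def tcr(successes: int, undertaken: int) -> float:
    """Task completion rate (equations 1 and 2), as a percentage."""
    return successes / undertaken * 100

def sus_score(responses: list[int]) -> float:
    """SUS score from ten 1-5 Likert responses, per the Brooke scoring system."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # rescale the 0-40 sum to 0-100

# For the efficiency metrics: n[j][i] = 1 if user j completed task i
# successfully (0 otherwise); t[j][i] = time (seconds) user j spent on task i.
def time_based_efficiency(n: list[list[int]], t: list[list[float]]) -> float:
    """Average goals achieved per second over R users x N tasks."""
    R, N = len(n), len(n[0])
    return sum(n[j][i] / t[j][i] for j in range(R) for i in range(N)) / (N * R)

def overall_relative_efficiency(n: list[list[int]], t: list[list[float]]) -> float:
    """Percentage of total task time that was spent by users who succeeded."""
    R, N = len(n), len(n[0])
    time_on_success = sum(n[j][i] * t[j][i] for j in range(R) for i in range(N))
    total_time = sum(sum(row) for row in t)
    return time_on_success / total_time * 100

# A perfect respondent (5 on all odd items, 1 on all even items) scores 100.
assert sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) == 100.0
```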
Qualitative Evaluation

Qualitative data from the concurrent and retrospective think-aloud sessions and the debriefings were used to assess the overall experience with the tool, identify usability problems, understand the causes of difficulties, and identify usability improvements. Usability problems encountered by the participants during the tasks were rated using the Nielsen severity scale [62] and categorized using the ergonomic criteria of Bastien and Scapin [63]. The Nielsen scale ranges from 0 to 4, with higher scores indicating more severe problems (0=no usability problem; 1=cosmetic problem that does not need to be addressed unless extra time is available on the project; 2=minor usability problem, fixing this should be given low priority; 3=major usability problem, important to fix and should be given high priority; and 4=usability catastrophe, imperative to fix before releasing the product). The Nielsen criteria [51] used to rate the severity of usability problems are (1) the frequency of occurrence of a problem (common or rare?), (2) its impact on the user’s experience (easy or difficult for users to overcome?), and (3) its persistence (a one-time problem on first use or one that will persist and bother users?). As some studies have shown that severity ratings are subjective and can vary significantly from one assessor to another [64], the ratings were conducted independently by 2 ergonomists and, in case of disagreement, averaged [65]. By convention, both ergonomists classified any usability problem that led to task failure as a usability catastrophe. The Bastien and Scapin [63] method consists of a list of 18 ergonomic criteria generally used to identify and understand the most well-known interface problems. The categorization of usability problems following these criteria was performed independently by both ergonomists; in case of disagreement, the evaluators discussed until they reached a consensus.
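The consolidation rule for severity ratings can be made explicit with a minimal sketch; the function below is our own formalization (hypothetical name, not part of any published protocol) and also shows how the intermediate 2.5 ratings reported in the Results arise.

```python
def consensus_severity(rating_1: float, rating_2: float, led_to_failure: bool) -> float:
    """Combine 2 independent Nielsen severity ratings (0-4).

    By the convention described above, a problem that led to task failure
    is classified as a usability catastrophe (4); other disagreements
    between the 2 ergonomists are resolved by averaging.
    """
    if led_to_failure:
        return 4.0
    return (rating_1 + rating_2) / 2

# A "minor" (2) vs "major" (3) disagreement yields the intermediate
# 2.5 severity score that appears for some problems in Table 6.
assert consensus_severity(2, 3, led_to_failure=False) == 2.5
```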

Data Collection

Participants’ task performance was video and audio recorded to retrospectively analyze the usability of the app. Video and audio captures were acquired with an Elmo L-12iD document camera placed above the phone. Morae software (TechSmith Corporation) was used to analyze the recordings of participants’ interactions with the app. Subsequently, the recordings and usability metrics were transcribed onto Microsoft Excel spreadsheets. The SUS paper questionnaires were collected immediately after the intervention and likewise transcribed onto Microsoft Excel spreadsheets. Two researchers (JR and VGR) analyzed the success rate and duration of each task independently of each other; in case of disagreement, they discussed until they reached a consensus. All data collected were anonymized.

Data Analysis

Descriptive statistics were used to summarize continuous measures, and frequency counts were used to summarize categorical measures; the significance level was set at .05. Mean SUS scores were compared across age categories to examine the relationship between user characteristics and satisfaction. Data were analyzed, and figures were created, with GraphPad Prism 9 and Microsoft Excel.

Ethics Approval and Consent to Participate

The study was submitted to the Regional Research Ethics Committee (Req-2021-00505), which waived the need for further evaluation by issuing a no objection statement, as such projects did not fall within the scope of the Swiss federal law on human research [66]. Only data from a fictitious patient were used in this study. Written informed consent was obtained from all participants before the intervention. No participants’ medical information was used. Participants were not identifiable on video and audio recordings. Participants’ data and results obtained through the intervention were deidentified and assigned an individual identifying code that did not contain identifying information. Data were secured by protected access passwords at Geneva University Hospitals on secured hard disks. This study was conducted in accordance with the Declaration of Helsinki [50] and principles of Good Clinical Practice [51].


Participant Characteristics

Between June and September 2017, a total of 17 parents participated in the study. Their baseline demographic characteristics are shown in Table 2.

Table 2. Demographic characteristics of study participants (N=17).
Characteristics | Values, n (%)

Gender
  Woman | 15 (88)
  Man | 2 (12)

Age categories (years)
  21-30 | 3 (18)
  31-40 | 4 (24)
  41-50 | 8 (47)
  51-60 | 2 (12)

Number of children
  1 | 9 (53)
  2 | 5 (29)
  3 | 2 (12)
  4 | 1 (6)

Parents with a child aged (years)
  0-3 | 6 (35)
  3-6 | 7 (41)
  6-9 | 7 (41)
  9-12 | 1 (6)
  12-15 | 3 (18)

Already visited the Geneva pediatric ED
  Yes | 12 (71)
  No | 5 (29)

Type of phone
  iOS | 7 (41)
  Android | 9 (53)
  Windows Phone | 1 (6)

Possession of a smartphone
  <1 year | 0 (0)
  1 to 2 years | 0 (0)
  >2 years | 17 (100)

Frequency of mobile app use
  Often (daily) | 17 (100)
  Regularly (several times per week) | 0 (0)
  Sometimes (once to several times per month) | 0 (0)
  Rarely (once to several times per year) | 0 (0)
  Never | 0 (0)

ED: emergency department.

Quantitative Evaluation

Effectiveness Per Participant

The overall completion rate (tasks completed and failed) was 97.4% (149/153). A total of 4 participants did not perform a task: 2 (50%) ignored task 6a, 1 (25%) experienced a problem with the Wi-Fi connection in task 6b, and 1 (25%) experienced a software bug in task 7. The mean overall success rate, defined as the percentage of tasks that participants completed successfully (whether with ease or difficulty), was 88.2% (135/153; SD 10.63%; 95% CI 83%-93%). An analysis of almost 1200 usability tasks showed that the minimum accepted average TCR was 78% [67]. In this study, the TCR per participant ranged from 67% to 100% (Figure 4).
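The paper does not state how the 95% CI was computed; as a check, a simple normal-approximation (Wald) interval on the pooled success proportion, sketched below, reproduces the reported range.

```python
from math import sqrt

def wald_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / total
    se = sqrt(p * (1 - p) / total)  # standard error of the proportion
    return p - z * se, p + z * se

low, high = wald_ci(135, 153)
print(f"{135 / 153:.1%} (95% CI {low:.0%}-{high:.0%})")  # 88.2% (95% CI 83%-93%)
```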

Figure 4. Task completion rate per participant for the 9 assigned tasks. Task completed represents the percentage of tasks successfully completed by a participant, whether with ease or difficulty. Failed to complete defines the percentage of tasks that participants failed to complete. Nonavailable represents the percentage of missing data when a task could not be started and evaluated.
Effectiveness Per Task

Of the 9 assigned tasks, 4 (44%) were achieved by all participants (Figure 5); 2 (22%; tasks 4b and 7) reached a TCR per task of 94%; 1 (11%; task 6a) reached a TCR of 82%; and 2 (22%) scored below 78%: task 6b, with a value of 71%, and task 4a, with a value of 53%. Of note, all tasks with a TCR of less than 100% were related to either browsing through the pages of the app or understanding the information displayed. Figure 5 shows that task 4a appeared to be the most complicated; tasks 4b, 6a, and 6b also proved problematic for some participants.

Figure 5. Task completion rate per task (N=17 participants). Task completed represents the percentage of participants who successfully completed the task, whether with ease or difficulty. Failed to complete defines the percentage of participants who failed to complete the task. Nonavailable represents the percentage of missing data when a task could not be started and evaluated.
Task Success Distribution Per Task

The observed task success distribution is shown in Figure 6. Task 1 was completed with ease by all participants (17/17, 100%), followed by task 4b (13/17, 76%). Tasks 2 and 7 were completed with ease by 71% (12/17) of the participants and task 3 by 65% (11/17), but tasks 6a, 6b, 5, and 4a were completed with ease by only 47% (8/17), 41% (7/17), 24% (4/17), and 6% (1/17) of the participants, respectively. Apart from task 1, all tasks led to difficulties, with a completed-with-difficulty rate ranging from 18% to 76%. Participants encountered failures during 4 tasks (4a, 4b, 6a, and 6b), with a failed-to-complete rate ranging from 6% to 47%.

Figure 6. Task success distribution per task (N=17 participants). Completed with ease represents the percentage of participants who completed the task with ease. Completed with difficulty represents the percentage of participants who completed the task with difficulties. Failed to complete defines the percentage of participants who failed to complete the task. Nonavailable represents the percentage of missing data when a task could not be started and evaluated.

Efficiency: Time on Task

The mean overall time on task across all tasks was 101.26 (SD 44.07) seconds. Tasks 1, 2, and 4a had a higher time on task than the other tasks (Table 3). Tasks 1 and 2 were the longest because they required several pieces of data to be entered into the app. By contrast, the most complicated task (ie, task 4a) was the third most time-consuming despite requiring little data entry.

Table 3. Time on task per study task.
Task | Time on task (seconds), mean (SD)
Task 1: create a parental account | 142.58 (38.96)
Task 2: create a child profile | 182.56 (58.64)
Task 3: find the symptoms page | 96.86 (46.36)
Task 4a: find and understand the waiting times page | 138.61 (71.4)
Task 4b: find and understand the forecast page | 55.93 (34.27)
Task 5: inform of the departure to the ED | 80.08 (58.98)
Task 6a: find the tutorial page | 59.93 (27.06)
Task 6b: find the map page | 62.34 (47.16)
Task 7: find the diagnostic sheet | 92.46 (52.84)

ED: emergency department.

Multimedia Appendices 8 and 9 show the TBE and ORE for each performed task, respectively. Multimedia Appendix 8 shows that tasks 1, 2, and 4a had the lowest TBE; thus, creating the parental account and the child’s profile were not the most efficient tasks. Task 4a showed the lowest efficiency overall, with both the lowest TBE (0.0065 tasks per second) and the lowest ORE (50.2%).

Satisfaction: SUS Questionnaire

The mean overall SUS score was 80.88 (SD 8.57; Table 4), showing that the usability of the InfoKids app was perceived as good to excellent [68] (Figure 7). The detailed scores indicate that of the 17 participants, 4 (24%) assessed the app as fair, 5 (29%) as good, and 8 (47%) as excellent. Mean SUS scores were similar across the 2 age categories: ≤40 years (mean 82.14, SD 9.94) and >40 years (mean 80.00, SD 8.42; Mann-Whitney U=29.5; P=.60).
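For readers wishing to reproduce this comparison, the sketch below runs the same two-sided Mann-Whitney U test with SciPy. The grouping of the 17 SUS scores is our illustrative reconstruction: it matches the reported group sizes (7 vs 10) and means (82.14 vs 80.00), but the true per-participant ages are not published, so the resulting U and P values may differ from those reported.

```python
from scipy.stats import mannwhitneyu

# Hypothetical assignment of the Table 4 SUS scores to the 2 age groups,
# chosen only to match the reported group sizes and means.
sus_age_40_or_under = [92.5, 87.5, 87.5, 75, 80, 87.5, 65]      # mean 82.14
sus_age_over_40 = [70, 87.5, 75, 90, 90, 80, 90, 77.5, 70, 70]  # mean 80.00

u_stat, p_value = mannwhitneyu(sus_age_40_or_under, sus_age_over_40,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.2f}")
```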

Table 4. System Usability Scale (SUS) questionnaire results.

Participant | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | SUS score (sum×2.5; maximum 100)
P1 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 92.5
P2 | 3 | 3 | 4 | 4 | 3 | 4 | 3 | 4 | 4 | 3 | 87.5
P3 | 3 | 2 | 2 | 4 | 3 | 3 | 3 | 2 | 3 | 3 | 70
P4 | 4 | 3 | 3 | 4 | 3 | 3 | 4 | 4 | 3 | 4 | 87.5
P5 | 3 | 3 | 3 | 4 | 3 | 2 | 3 | 4 | 3 | 2 | 75
P6 | 3 | 3 | 3 | 4 | 3 | 2 | 3 | 4 | 3 | 4 | 80
P7 | 3 | 1 | 2 | 4 | 3 | 3 | 3 | 4 | 3 | 4 | 75
P8 | 4 | 3 | 3 | 4 | 4 | 4 | 3 | 4 | 4 | 3 | 90
P9 | 3 | 3 | 4 | 4 | 3 | 3 | 4 | 4 | 4 | 3 | 87.5
P10 | 4 | 0 | 4 | 0 | 3 | 1 | 4 | 4 | 3 | 3 | 65
P11 | 4 | 3 | 3 | 4 | 3 | 4 | 3 | 4 | 4 | 4 | 90
P12 | 3 | 3 | 3 | 2 | 3 | 3 | 4 | 4 | 4 | 3 | 80
P13 | 4 | 3 | 3 | 3 | 4 | 3 | 4 | 4 | 4 | 4 | 90
P14 | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 3 | 3 | 77.5
P15 | 4 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 1 | 87.5
P16 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 3 | 70
P17 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 3 | 3 | 70
Values, mean (SD) | 3.35 (0.59) | 2.65 (0.84) | 3.06 (0.54) | 3.35 (1.08) | 3.24 (0.42) | 3.06 (0.8) | 3.41 (0.49) | 3.65 (0.76) | 3.41 (0.6) | 3.18 (0.78) | 80.88 (8.57)
Values, median (IQR) | 3 (3-4) | 3 (3-3) | 3 (3-3) | 4 (3-4) | 3 (3-3) | 3 (3-4) | 3 (3-4) | 4 (4-4) | 3 (3-4) | 3 (3-4) | 80 (75-87.5)

Q1-Q10 values are per-item score contributions (0-4) after conversion following the Brooke scoring system.
Figure 7. Overview of the modified System Usability Scale rating table with inserted value ranges [68].

Qualitative Evaluation

Usability Problems

The think-aloud method identified 14 usability problems with a total of 81 occurrences. Table 5 describes the frequency of usability problems per task and the frequency with which each usability problem led to task completion with difficulty or to failure. A total of 9 usability problems led only to difficulties in completing a task, whereas 5 led to both difficulties and failures.

Table 5. Frequency of 14 usability problems, difficulties, and failure.
Tasks and usability problems | Frequency of the usability problem (n=81), n (%) | Frequency with which it led to task completion with difficulty (n=62), n (%) | Frequency with which it led to failure to complete the task (n=19), n (%)

Task 1: create a parental account
  None | 0 (0) | 0 (0) | 0 (0)

Task 2: create a child profile
  Participants expected to access the child’s profile by clicking directly on the card | 1 (1) | 1 (2) | 0 (0)
  Participants wondered if information had been properly saved | 4 (5) | 4 (6) | 0 (0)

Task 3: find the symptoms page
  Participants did not directly find the symptoms list | 5 (6) | 5 (8) | 0 (0)
  Participant did not directly find the cough symptom | 1 (1) | 1 (2) | 0 (0)

Task 4a: find and understand the waiting times page
  Participants did not directly find the waiting times page | 13 (16) | 8 (13) | 5 (26)
  Participants faced difficulties understanding the meaning of the cars and the different colored lines | 10 (12) | 2 (3) | 8 (42)

Task 4b: find and understand the forecast page
  Participants had difficulties in finding the page | 4 (5) | 3 (5) | 1 (5)

Task 5: inform of the departure to the ED
  Participants had difficulties in finding this feature | 7 (9) | 7 (11) | 0 (0)
  Participants did not understand that they had to select the child | 13 (16) | 13 (21) | 0 (0)

Task 6a: find the map tutorial page
  Participants expected to access the map tutorial directly in the map page | 6 (7) | 5 (8) | 1 (5)
  Participants had difficulties in finding the map tutorial because of a pop-up hiding the button | 2 (2) | 2 (3) | 0 (0)

Task 6b: find the location of the ED
  Participants did not understand the meaning of the “H” icon indicating the location of the ED on the map | 9 (11) | 5 (8) | 4 (21)

Task 7: find the diagnostic sheet
  Participants did not directly find the page | 4 (5) | 4 (6) | 0 (0)
  Participants had difficulties in finding the section to access the diagnostic sheet | 2 (2) | 2 (3) | 0 (0)

ED: emergency department.

The identified usability problems were rated by severity. Of the 81 occurrences of usability problems, 2 (2%) were rated with a severity score of 1 (cosmetic), 22 (27%) were rated 2 (minor), 17 (21%) were rated 2.5 (between minor and major), 11 (14%) were rated 3 (major), and 29 (36%) were rated 4 (catastrophic; Table 6; Multimedia Appendix 10 [62,63]). None of the participants experienced major or catastrophic usability problems when completing tasks 1, 3, and 7, whereas tasks 2, 4a, 4b, 5, 6a, and 6b were the most problematic. In terms of time on task, task 2 took the longest to complete because of the time required to access and fill in its page, although its completion rate was optimal and its usability problems were rated as minor. Task 4a, the third most time-consuming, owed its duration to the many usability problems graded as catastrophic.

Table 6. Severity scores, identification of usability problems, frequency, percentage, and related task.

Usability problems | Value (n=81), n (%) | Related task

Severity score 1 (cosmetic) | 2 (2) | N/A
  Access the child’s profile | 1 (1) | 2
  Find the cough symptom | 1 (1) | 3

Severity score 2 (minor) | 22 (27) | N/A
  Select the child | 13 (16) | 5
  Message hiding the button to access the map tutorial | 2 (2) | 6a
  Find the symptom page | 5 (6) | 3
  Find the diagnostic sheet: select the history section | 2 (2) | 7

Severity score 2.5 (between minor and major) | 17 (21) | N/A
  Find the waiting times page | 13 (16) | 4a
  Find the diagnostic sheet: reach the information page | 4 (5) | 7

Severity score 3 (major) | 11 (14) | N/A
  Record the information entered | 4 (5) | 2
  Find the page to inform about departure to the ED | 7 (9) | 5

Severity score 4 (catastrophic) | 29 (36) | N/A
  Find the forecast page | 4 (5) | 4b
  Find the location of the ED | 9 (11) | 6b
  Understand the waiting times page | 10 (12) | 4a
  Find the map tutorial | 6 (7) | 6a

Severity score: 1=cosmetic, 2=minor, 3=major, and 4=catastrophic. N/A: not applicable. ED: emergency department.

Most identified problems (34/81, 42%) were related to the significance of codes criterion, whereas 35% (28/81) were related to the compatibility criterion, 21% (17/81) to the guidance criterion, and 2% (2/81) to the explicit control criterion (Table 7).

Table 7. Ergonomic criteria associated with the identified usability problems, with the number of occurrences and frequency.

Ergonomic criteria | Usability problems | Occurrences (n=81), n (%)

Guidance | | 17 (21)
  Guidance—prompting | Select the child | 13 (16)
  Guidance—immediate feedback | Recording of information entered | 4 (5)

Explicit control | | 2 (2)
  User control | Message hiding the button to access the map tutorial | 2 (2)

Significance of codes | | 34 (42)
  | Find the symptom page | 5 (6)
  | Find the cough symptom | 1 (1)
  | Find the waiting times page | 13 (16)
  | Find the forecast page | 4 (5)
  | Find the location of the ED | 9 (11)
  | Find the diagnostic sheet: select the history section | 2 (2)

Compatibility | | 28 (35)
  | Access the child’s profile | 1 (1)
  | Understand the waiting times page | 10 (12)
  | Find the page to inform about departure to the ED | 7 (9)
  | Find the map tutorial | 6 (7)
  | Find the diagnostic sheet: reach the information page | 4 (5)

ED: emergency department.

Debriefing Interviews

All participants (17/17, 100%) reported positive feedback regarding their overall experience with the app. More specifically, when asked about the strengths of the app in an open question, 71% (12/17) of participants emphasized the usefulness of the proposed features, such as the information on waiting times, the advice according to symptoms, the diagnostic sheet, and the ability to inform the ED of their arrival. Moreover, 65% (11/17) noted the ease of use owing to the quickly accessible menu and its intuitiveness.

Regarding app improvements and mitigation measures, 35% (6/17) of participants expressed several needs: (1) improved ED geolocalization on the map; (2) rewording the history section to diagnostic history to find the sheet more easily; (3) improved explanation of the meaning of the 5 colored emergency lanes; and (4) placement of the I am coming to the ED button on the home page to facilitate its access. Participants also wished for new features, such as information about laboratory results and the treatment plan in the diagnostic sheet (3/17, 18%), the ability to exchange with the ED directly through the app via a chat option (1/17, 6%), and the ability to share the diagnostic sheet with another family member (1/17, 6%).


Principal Findings

In this study, we report an overall good to excellent perceived usability of a patient-centered mHealth app aimed at covering the entire emergency care process by supporting patients before, during, and after an ED visit. Given the high percentage of patient-centered assigned tasks that participants successfully completed, we observed a good overall rate of understanding of how the app worked. Participants found most of the features useful, particularly the recommendations provided according to their child’s symptoms, access to information related to waiting times and the diagnosis made in the ED, and the ability to inform the ED of their upcoming arrival. However, the ergonomic evaluation identified 81 occurrences of 14 usability problems, of which 50% (7/14) were serious, as their severity ratings were either major or catastrophic. These results indicated areas for app improvement. From the usability improvements suggested by participants and ergonomists, mitigation measures were listed to further improve the app and avoid barriers to its adoption (Table 8).

Table 8. Identified usability problems and mitigation measures.
App feature and identified usability problem | Mitigation measure

Editable list of children
  The edit button on the child’s profile was not obvious enough | The whole patient profile card should be made clickable.

Child’s profile page
  Uncertainty as to whether the entries for chronic illnesses and regular medications were saved in the app | Entries for chronic conditions and regular medications should be visible on the patient’s profile page.

Browsing through the pages or menus
  Difficulty in locating the ED departure announcement button | The “I am coming to the ED” button should also be placed on the home page.
  Difficulty in locating the diagnostic sheet | The “history” page should be renamed “diagnostic history.”
  Difficulty in locating the map tutorial | The map tutorial should be placed directly in the map page. The tutorial could start automatically when the map is used for the first time, as in many apps.
  Difficulty in locating the waiting times page, the forecast page, and the symptom page | Tree-testing and card-sorting techniques should be used to improve the information architecture and nomenclature. A search bar should also be added.

Symptoms decision tree
  Difficulty in browsing through the symptom decision tree | A search bar and more redundancy should be added.

Real-time display of the ED waiting room
  The meaning of “occupancy” in the waiting room was not clear to nonacquainted users | The busy screen should be redesigned with a more explicit graphic representation and a caption. Representing patients by avatars rather than cars could be more intuitive for the user.

Geolocation and guidance to the ED
  Geolocation markers were not explicit enough on the map page | Icons are images, and images can be polysemic, so their understanding can vary from one person to another. To reduce this effect, a locator pin with a textual indication could be used; it could also be enlarged and bounced to attract the user’s attention.

ED departure feature “I am coming to the ED”
  No prompt indicating that the user must select the child to be announced on departure to the ED | A selection checkbox should be set up so that users understand that they need to select a child.
  The pop-up message confirming the patient’s departure to the ED should be easy to hide | The chevron must be enlarged to make it more visible.

ED: emergency department.

App attrition has emerged as an area of particular concern in the recent literature on new technological innovations [69,70]. Even when apps are evidence based, this does not guarantee that they will be used consistently over time. As with other health information technologies, the benefits of apps can only be achieved if end users intend to adopt them [71]. Poor usability and a lack of user-centered design have been described as 2 drivers of low adoption rates of mobile apps [45]. Although usability has been identified as a key component of good practice in the development of digital apps [72], only a small fraction of medical apps publish their usability evaluation results, despite their growing number [42]. These apps mainly address health conditions or diseases such as mental health [45,73], cancer [74,75], nutrition [76], diabetes [77,78], chronic disease self-management [79,80], and child health [81-86], among others [42].

However, no app more broadly addresses accompanying patients throughout their entire ED care journey (ie, before, during, and after their visit) while providing personalized health information and support to manage illness or trauma. We found only 2 studies describing the usability evaluation of prototype apps providing a personalized treatment schedule and an indoor navigation service for outpatients [87,88]. Moreover, both apps seem to be limited to this sole in-hospital purpose, without patient-centered information regarding the disease, and restricted to the Android operating system. A study by Westphal et al [89] described a very promising web-based system for providing real-time information to ED patients regarding the procedures that they may encounter during their journey. However, similar to the 2 previous studies, this system focused only on the patient’s journey within the hospital and did not address the patient’s experience over the entire course of care.

The InfoKids app aims to bridge these gaps. Importantly, it is intended for wider use within our institution: through the current iterative process of development and evaluation, it is intended to soon become a more universal tool connecting the whole population seeking ED care (ie, adult, geriatric, and gynecologic patients) in a service area of more than 1 million people. In this sense, this study contributes to this iterative development process. Given its interconnection with the hospital’s computerized system, the app has the potential to ensure better coordination, continuity, and transition of care, thus improving both the patient experience and hospital efficiency.

Strengths

This study had several strengths. First, to our knowledge, this is the first report of findings from the usability evaluation of an app supporting the longitudinal patient care transition from home up to ED discharge. Second, the mixed methods approach combining different types of usability methods was another strength, already identified by studies recognizing the utility of using qualitative and quantitative approaches for app usability testing [72,90]. Third, the 9 goal-oriented tasks assessed were centered on the main features of the app. Having users perform a set of tasks centered on features representative of those they would normally use in clinical care has been identified as a good way to determine the usability of an app, its features, and its workflow [91]. Fourth, this study adds to the literature that recommends more usability studies focused on patient-centered apps [72,91-97]. It also contributes to the effort to publish usability studies based on academic development and patient-centered care, rather than a purely commercial development approach [42].

Limitations

Our study had some limitations. First, we used an artificial laboratory environment, which has a low degree of fidelity; the generalizability and transferability of the results to real-life settings may therefore be limited. Furthermore, the results were based on an assessment of usability with participants who were naïve to the app. It can thus be assumed that prior in-depth use of the app would have improved perceived usability and avoided certain problems of comprehension and navigation. Interestingly, the tutorial that was supposed to prevent these problems seems to have been a source of difficulty in itself, if only in locating it within the app. It might therefore be judicious in future versions to replace it with contextual help offered on each page, rather than a long tutorial to memorize or search for. These assumptions should be addressed in future studies.

Second, the small sample of 17 users might not have been large enough to reveal all usability issues. However, 5 users have been reported to be sufficient to reveal 85% of usability problems, and 15 users to uncover almost 97% of problems [98] (see the illustrative model at the end of this section).

Third, only one scenario was proposed to users, with a task order set arbitrarily by the investigators, which raises concerns about the applicability of the results to other clinical situations or navigation patterns in the app. This scenario was chosen to test most of the functions of the app according to a logical workflow that parents wishing to consult with their child in the emergency room would follow. However, it cannot be excluded that other scenarios would have generated other navigation schemes and other usability problems or facilitators. For example, if task 4a (evaluated as the most complicated task) had not been interposed between the choice of symptoms (task 3) and the announcement of departure to the ED (task 5), it is possible that no navigational problems would have occurred. It might be interesting in a future study to test the usability of the app using only several standardized scenarios without predefined tasks, leaving tasks and navigation to users' discretion, as in real life.

Finally, as the InfoKids app is intended to be used in an emergency (or a situation perceived as such by parents), the quiet and nonstressful laboratory environment used in this study may itself be a limitation. Guidelines for conducting usability testing recommend establishing a calm and relaxed atmosphere in which users can work without feeling stressed [99-101], although stress in usability testing has rarely been studied to date. One of the few existing studies, by Janneck and Dogan [99], compared a usability test performed in a laboratory under calm and relaxed conditions with a test situation in which several stressors (time pressure, noise, and social pressure) were applied. They observed that participants under stressful conditions demonstrated poorer performance in the execution and accuracy of tasks and rated the usability and user experience of the software much more negatively. However, although various situations tend to elicit different patterns of stress responses, there are also individual differences in perceived and behavioral stress responses to the same situation [102]. Future research assessing the impact of stressors on the usability of InfoKids would therefore provide valuable input for future development in the adult setting.
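As a rough illustration of the sample-size figures cited above, these discovery rates are commonly derived from the cumulative problem-discovery model popularized by Nielsen and Landauer, which assumes that each participant independently detects any given usability problem with a fixed probability λ (approximately 0.31 on average in their data; the 97% figure for 15 users reported by Faulkner [98] is an empirical mean rather than a model prediction):

proportion of problems found after n users = 1 - (1 - λ)^n

With λ=0.31, 5 users would be expected to uncover 1 - 0.69^5 ≈ 84% of problems and 15 users approximately 99.6%, in line with the orders of magnitude cited above.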

Conclusions

The usability of mHealth apps is an important factor in their adoption and use. This study addresses a gap in the literature by reporting findings from the usability evaluation of a patient-centered mobile app designed to support the entire emergency care process by assisting patients before, during, and after an ED visit. Our results show that the usability of the current version of InfoKids was rated as good to excellent by users. However, areas for improvement were identified, and mitigation measures were proposed. These usability problems will be addressed in updated releases of InfoKids and will inform the development of its next version as a universal app for all patients seeking ED care. The next step will be to determine whether this mobile app benefits ED patient experience and ED efficiency in a real-life patient environment and under clinical conditions. Given the paucity of research in this area, we conclude that our findings could also pave the way for further research on mobile apps aimed at supporting and accompanying patients throughout their care episodes.
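For readers less familiar with the scale, the good to excellent rating derives from standard SUS scoring [57] mapped onto the adjective anchors of Bangor et al [68]:

SUS = 2.5 × [Σ over odd items (s_i - 1) + Σ over even items (5 - s_i)]

where s_i is the 1-to-5 response to item i, yielding a score between 0 and 100. For example, a respondent answering 4 to every positively worded (odd) item and 2 to every negatively worded (even) item would obtain 2.5 × (15 + 15) = 75, between the adjective anchors for good (approximately 71) and excellent (approximately 85) [68].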

Acknowledgments

The authors would like to thank the Private Foundation of Geneva University Hospitals for funding to develop the InfoKids app and Rosemary Sudan for providing editorial assistance.

Conflicts of Interest

FE has business interests in a company that may be affected by the research reported in this paper. FE has fully disclosed those interests and has an approved plan in place for managing any potential conflicts arising from that involvement.

Multimedia Appendix 1

Description of the expected app handling tasks and the criteria used to determine completion for each task.

DOCX File, 14 KB

Multimedia Appendix 2

Screenshot of the family page. On the family page, each child is represented by a card containing the child’s photo and first name, with an edit button at the bottom right of the card. This button must be clicked to access the child’s profile and edit the child’s data. As many children as necessary can be added.

PNG File, 443 KB

Multimedia Appendix 3

Screenshot of the child’s profile page. The chronic illnesses button must be clicked to view the child’s chronic illnesses.

PNG File, 143 KB

Multimedia Appendix 4

Screenshot of the symptom page.

PNG File, 260 KB

Multimedia Appendix 5

Screenshot of the child selection page.

PNG File, 108 KB

Multimedia Appendix 6

Screenshot of the H icon indicating the location of the emergency department on the map.

PNG File, 1043 KB

Multimedia Appendix 7

The 10-item System Usability Scale questionnaire.

DOCX File, 12 KB

Multimedia Appendix 8

Time-based efficiency per task (N=17 participants).

PNG File, 6 KB

Multimedia Appendix 9

Overall relative efficiency per task (N=17 participants).

PNG File, 7 KB

Multimedia Appendix 10

Details regarding the usability problems identified.

DOCX File, 19 KB

  1. Graham B, Endacott R, Smith JE, Latour JM. 'They do not care how much you know until they know how much you care': a qualitative meta-synthesis of patient experience in the emergency department. Emerg Med J 2019 Jun 19;36(6):355-363. [CrossRef] [Medline]
  2. Male L, Noble A, Atkinson J, Marson T. Measuring patient experience: a systematic review to evaluate psychometric properties of patient reported experience measures (PREMs) for emergency care service provision. Int J Qual Health Care 2017 Jun 01;29(3):314-326 [FREE Full text] [CrossRef] [Medline]
  3. Sonis JD, Aaronson EL, Castagna A, White B. A conceptual model for emergency department patient experience. J Patient Exp 2019 Sep 21;6(3):173-178. [CrossRef] [Medline]
  4. Sonis JD, White BA. Optimizing patient experience in the emergency department. Emerg Med Clin North Am 2020 Aug;38(3):705-713. [CrossRef] [Medline]
  5. Dudley N, Ackerman A, Brown KM, Snow SK, American Academy of Pediatrics Committee on Pediatric Emergency Medicine, American College of Emergency Physicians Pediatric Emergency Medicine Committee, Emergency Nurses Association Pediatric Committee. Patient- and family-centered care of children in the emergency department. Pediatrics 2015 Jan;135(1):255-272. [CrossRef] [Medline]
  6. Perret S, Gehri M, Pluies J, Rossi I, Akre C. [Families' experiences and satisfaction with a pediatric emergency service]. Arch Pediatr 2017 Oct;24(10):960-968. [CrossRef] [Medline]
  7. Barbarian M, Bishop A, Alfaro P, Biron A, Brody DA, Cunningham-Allard G, et al. Patient-reported experience in the pediatric emergency department: what matters most? J Patient Saf 2021 Dec 01;17(8):1166-1170. [CrossRef] [Medline]
  8. Parra C, Vidiella N, Marin I, Trenchs V, Luaces C. Patient experience in the pediatric emergency department: do parents and children feel the same? Eur J Pediatr 2017 Sep;176(9):1263-1267. [CrossRef] [Medline]
  9. Parra C, Carreras N, Vergés A, Trenchs V, Luaces C. Patient experience in a Spanish pediatric emergency department. Pediatr Emerg Care 2020 Aug;36(8):456-459. [CrossRef] [Medline]
  10. Bal C, AlNajjar M, Thull-Freedman J, Pols E, McFetridge A, Stang AS. Patient reported experience in a pediatric emergency department. J Patient Exp 2020 Feb;7(1):116-123 [FREE Full text] [CrossRef] [Medline]
  11. Byczkowski TL, Fitzgerald M, Kennebeck S, Vaughn L, Myers K, Kachelmeyer A, et al. A comprehensive view of parental satisfaction with pediatric emergency department visits. Ann Emerg Med 2013 Oct;62(4):340-350. [CrossRef] [Medline]
  12. Natesan P, Hadid D, Harb YA, Hitti E. Comparing patients and families perceptions of satisfaction and predictors of overall satisfaction in the emergency department. PLoS One 2019;14(8):e0221087 [FREE Full text] [CrossRef] [Medline]
  13. Sonis JD, Aaronson EL, Lee RY, Philpotts LL, White BA. Emergency department patient experience: a systematic review of the literature. J Patient Exp 2018 Jun 29;5(2):101-106 [FREE Full text] [CrossRef] [Medline]
  14. Kelley JM, Kraft-Todd G, Schapira L, Kossowsky J, Riess H. The influence of the patient-clinician relationship on healthcare outcomes: a systematic review and meta-analysis of randomized controlled trials. PLoS One 2014;9(4):e94207 [FREE Full text] [CrossRef] [Medline]
  15. Welch SJ. Twenty years of patient satisfaction research applied to the emergency department: a qualitative review. Am J Med Qual 2010;25(1):64-72. [CrossRef] [Medline]
  16. Davenport PJ, O'Connor SJ, Szychowski JM, Landry AY, Hernandez SR. The relationship between emergency department wait times and inpatient satisfaction. Health Mark Q 2017;34(2):97-112. [CrossRef] [Medline]
  17. Cydulka R, Tamayo-Sarver J, Gage A, Bagnoli D. Association of patient satisfaction with complaints and risk management among emergency physicians. J Emerg Med 2011 Oct;41(4):405-411. [CrossRef] [Medline]
  18. Richter JP, Muhlestein DB. Patient experience and hospital profitability: is there a link? Health Care Manage Rev 2017;42(3):247-257. [CrossRef] [Medline]
  19. Aaronson EL, Mort E, Sonis JD, Chang Y, White BA. Overall emergency department rating: identifying the factors that matter most to patient experience. J Healthc Qual 2018;40(6):367-376. [CrossRef] [Medline]
  20. Mohiuddin S, Busby J, Savović J, Richards A, Northstone K, Hollingworth W, et al. Patient flow within UK emergency departments: a systematic review of the use of computer simulation modelling methods. BMJ Open 2017 May 09;7(5):e015007 [FREE Full text] [CrossRef] [Medline]
  21. Stead LG, Jain A, Decker WW. Emergency department over-crowding: a global perspective. Int J Emerg Med 2009 Sep 30;2(3):133-134 [FREE Full text] [CrossRef] [Medline]
  22. Arora M, Asha S, Chinnappa J, Diwan AD. Review article: burnout in emergency medicine physicians. Emerg Med Australas 2013 Dec 09;25(6):491-495. [CrossRef] [Medline]
  23. Adriaenssens J, De Gucht V, Maes S. Causes and consequences of occupational stress in emergency nurses, a longitudinal study. J Nurs Manag 2015 Apr;23(3):346-358. [CrossRef] [Medline]
  24. Austin EE, Blakely B, Tufanaru C, Selwood A, Braithwaite J, Clay-Williams R. Strategies to measure and improve emergency department performance: a scoping review. Scand J Trauma Resusc Emerg Med 2020 Jun 15;28(1):55 [FREE Full text] [CrossRef] [Medline]
  25. Hesselink G, Sir O, Schoon Y. Effectiveness of interventions to alleviate emergency department crowding by older adults: a systematic review. BMC Emerg Med 2019 Nov 20;19(1):69 [FREE Full text] [CrossRef] [Medline]
  26. Morley C, Unwin M, Peterson GM, Stankovich J, Kinsman L. Emergency department crowding: a systematic review of causes, consequences and solutions. PLoS One 2018;13(8):e0203316 [FREE Full text] [CrossRef] [Medline]
  27. Rasouli HR, Esfahani AA, Farajzadeh MA. Challenges, consequences, and lessons for way-outs to emergencies at hospitals: a systematic review study. BMC Emerg Med 2019 Oct 30;19(1):62 [FREE Full text] [CrossRef] [Medline]
  28. Elder E, Johnston AN, Crilly J. Review article: systematic review of three key strategies designed to improve patient flow through the emergency department. Emerg Med Australas 2015 Oct;27(5):394-404. [CrossRef] [Medline]
  29. De Freitas L, Goodacre S, O'Hara R, Thokala P, Hariharan S. Interventions to improve patient flow in emergency departments: an umbrella review. Emerg Med J 2018 Oct;35(10):626-637. [CrossRef] [Medline]
  30. Ming T, Lai A, Lau P. Can team triage improve patient flow in the emergency department? A systematic review and meta-analysis. Adv Emerg Nurs J 2016;38(3):233-250. [CrossRef] [Medline]
  31. Abdulwahid MA, Booth A, Kuczawski M, Mason SM. The impact of senior doctor assessment at triage on emergency department performance measures: systematic review and meta-analysis of comparative studies. Emerg Med J 2016 Jul;33(7):504-513. [CrossRef] [Medline]
  32. Gonçalves-Bradley D, Khangura JK, Flodgren G, Perera R, Rowe BH, Shepperd S. Primary care professionals providing non-urgent care in hospital emergency departments. Cochrane Database Syst Rev 2018 Feb 13;2:CD002097 [FREE Full text] [CrossRef] [Medline]
  33. Holden RJ. Lean Thinking in emergency departments: a critical review. Ann Emerg Med 2011 Mar;57(3):265-278 [FREE Full text] [CrossRef] [Medline]
  34. Sayah A, Lai-Becker M, Kingsley-Rocker L, Scott-Long T, O'Connor K, Lobon LF. Emergency department expansion versus patient flow improvement: impact on patient experience of care. J Emerg Med 2016 Feb;50(2):339-348. [CrossRef] [Medline]
  35. Barry MJ, Edgman-Levitan S. Shared decision making--pinnacle of patient-centered care. N Engl J Med 2012 Mar 01;366(9):780-781. [CrossRef] [Medline]
  36. Institute of Medicine. Hospital-based emergency care: at the breaking point. Washington, DC: National Academies Press; 2007:81-164.
  37. Ehrler F, Siebert JN, Rochat J, Schneider F, Galetto A, Gervaix A, et al. Connecting parents to a pediatric emergency department: designing a mobile app based on patient centred care principles. Stud Health Technol Inform 2017;244:13-17. [Medline]
  38. Siebert J, Gervaix A, Ehrler F. InfoKids: a transversal and longitudinal solution enhancing patients and caregivers experience in emergency departments by disrupting the care process paradigm. World Hosp Health Serv 2018;54(2):5-9 [FREE Full text]
  39. Bruce C, Harrison P, Giammattei C, Desai S, Sol JR, Jones S, et al. Evaluating patient-centered mobile health technologies: definitions, methodologies, and outcomes. JMIR Mhealth Uhealth 2020 Nov 11;8(11):e17577 [FREE Full text] [CrossRef] [Medline]
  40. Chiu TM, Eysenbach G. Stages of use: consideration, initiation, utilization, and outcomes of an internet-mediated intervention. BMC Med Inform Decis Mak 2010 Nov 23;10:73 [FREE Full text] [CrossRef] [Medline]
  41. Paz F, Pow-Sang JA. A systematic mapping review of usability evaluation methods for software development process. Int J Software Engin Appl 2016 Jan 31;10(1):165-178. [CrossRef]
  42. Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform 2019 Jun;126:95-104. [CrossRef] [Medline]
  43. Yen P, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012;19(3):413-422 [FREE Full text] [CrossRef] [Medline]
  44. Ergonomics of human-system interaction - Part 11: Usability: definitions and concepts (ISO 9241-11). International Organization for Standardization. 2018.   URL: https://www.iso.org/standard/63500.html [accessed 2021-10-10]
  45. Inal Y, Wake JD, Guribye F, Nordgreen T. Usability evaluations of mobile mental health technologies: systematic review. J Med Internet Res 2020 Jan 06;22(1):e15337 [FREE Full text] [CrossRef] [Medline]
  46. Wiklund ME, Kendler J, Strochlic AY. Usability Testing of Medical Devices. Boca Raton (FL): CRC Press; 2016:154-188.
  47. Epstein RM, Street RL. The values and value of patient-centered care. Ann Fam Med 2011;9(2):100-103 [FREE Full text] [CrossRef] [Medline]
  48. National Academy of Engineering and Institute of Medicine. Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington, DC: National Academies Press; 2005:115-138.
  49. Rochat J, Ehrler F, Siebert J, Galetto A, Gervaix A, Lovis C. Needs analysis of patients and relatives during their visit to the pediatric emergency department. In: Journées Francophones d'Informatique Médicale - Proceedings. 2016 Presented at: Journées Francophones d'Informatique Médicale; June 28-29, 2016; Geneva, Switzerland   URL: https://archive-ouverte.unige.ch/unige:125015/ATTACHMENT01
  50. McCarthy S, O’Raghallaigh P, Woodworth S, Lim YL, Kenny LC, Adam F. An integrated patient journey mapping tool for embedding quality in healthcare service reform. J Decision Syst 2016 Jun 16;25(sup1):354-368. [CrossRef]
  51. Nielsen J, Mack RL. Heuristic evaluation. In: Usability Inspection Methods. New York, NY: John Wiley & Sons; 1994:25-62.
  52. Rochat J, Siebert J, Galetto A, Lovis C, Ehrler F. Communication of children symptoms in emergency: classification of the terminology. Stud Health Technol Inform 2017;235:456-460. [Medline]
  53. Beveridge R, Clarke B, Janes L, Savage N, Thompson J, Dodd G, et al. Implementation Guidelines for The Canadian Emergency Department Triage & Acuity Scale (CTAS). 1999.   URL: https://www.colleaga.org/sites/default/files/ctased16.pdf [accessed 2021-10-10]
  54. Hospital: national patient safety goals. The Joint Commission. 2021.   URL: https:/​/www.​jointcommission.org/​-/​media/​tjc/​documents/​standards/​national-patient-safety-goals/​2021/​simplified-2021-hap-npsg-goals-final-11420.​pdf [accessed 2021-01-01]
  55. Jaspers MW, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform 2004 Nov;73(11-12):781-795. [CrossRef] [Medline]
  56. McDonald S, Zhao T, Edwards HM. Dual verbal elicitation: the complementary use of concurrent and retrospective reporting within a usability test. Int J Hum-Comput Interact 2013 Oct 03;29(10):647-660. [CrossRef]
  57. Brooke J. SUS: a 'quick and dirty' usability scale. In: Usability Evaluation in Industry. Boca Raton, FL: CRC Press; 1996.
  58. Brooke J. SUS: a retrospective. J Usab Stud 2013;8(2):29-40 [FREE Full text]
  59. Ben Ramadan AA, Jackson-Thompson J, Schmaltz CL. Usability assessment of the Missouri Cancer Registry's published interactive mapping reports: round two. Online J Public Health Inform 2019;11(2):e3 [FREE Full text] [CrossRef] [Medline]
  60. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum-Comput Interact 2008 Jul 30;24(6):574-594. [CrossRef]
  61. Lallemand C, Gronier G. [30 fundamental methods for designing optimal experiences]. In: Méthodes de Design UX. Paris, France: Eyrolles; 2018:520-539.
  62. Nielsen J. Severity ratings for usability problems. Nielsen Norman Group. 2006.   URL: https://www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems/ [accessed 2022-02-24]
  63. Bastien J, Scapin D. A validation of ergonomic criteria for the evaluation of human‐computer interfaces. Int J Hum-Comput Interact 1992 Apr;4(2):183-196. [CrossRef]
  64. Tullis T, Albert B. Chapter 4 - Performance metrics. In: Measuring the User Experience (2nd Edition). Boston, MA: Morgan Kaufmann; 2013:63-97.
  65. Nielsen J. Reliability of severity estimates for usability problems found by heuristic evaluation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.: Association for Computing Machinery; 1992 Presented at: SIGCHI Conference on Human Factors in Computing Systems; May 3 - 7, 1992; Monterey California USA p. 129-130. [CrossRef]
  66. Federal Act on Research involving Human Beings (Human Research Act, HRA). The Federal Assembly of the Swiss Confederation. 2011.   URL: https://www.fedlex.admin.ch/eli/cc/2013/617/en [accessed 2021-10-10]
  67. Sauro J. What is a good task-completion rate? Measuring U. 2011 Mar 21.   URL: https://measuringu.com/task-completion/ [accessed 2021-10-10]
  68. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usab Stud 2009;4(3):114-123. [CrossRef]
  69. Meyerowitz-Katz G, Ravi S, Arnolda L, Feng X, Maberly G, Astell-Burt T. Rates of attrition and dropout in app-based interventions for chronic disease: systematic review and meta-analysis. J Med Internet Res 2020 Sep 29;22(9):e20283 [FREE Full text] [CrossRef] [Medline]
  70. Peiris D, Miranda JJ, Mohr DC. Going beyond killer apps: building a better mHealth evidence base. BMJ Glob Health 2018;3(1):e000676 [FREE Full text] [CrossRef] [Medline]
  71. Gordon WJ, Landman A, Zhang H, Bates DW. Beyond validation: getting health apps into clinical practice. NPJ Digit Med 2020;3:14 [FREE Full text] [CrossRef] [Medline]
  72. Zapata BC, Fernández-Alemán JL, Idri A, Toval A. Empirical studies on usability of mHealth apps: a systematic literature review. J Med Syst 2015 Feb;39(2):1. [CrossRef] [Medline]
  73. Liverpool S, Mota CP, Sales CM, Čuš A, Carletto S, Hancheva C, et al. Engaging children and young people in digital mental health interventions: systematic review of modes of delivery, facilitators, and barriers. J Med Internet Res 2020 Jun 23;22(6):e16317 [FREE Full text] [CrossRef] [Medline]
  74. Bender JL, Yue RY, To MJ, Deacken L, Jadad AR. A lot of action, but not in the right direction: systematic review and content analysis of smartphone applications for the prevention, detection, and management of cancer. J Med Internet Res 2013 Dec 23;15(12):e287 [FREE Full text] [CrossRef] [Medline]
  75. Ginossar T, Shah SF, West AJ, Bentley JM, Caburnay CA, Kreuter MW, et al. Content, usability, and utilization of plain language in breast cancer mobile phone apps: a systematic analysis. JMIR Mhealth Uhealth 2017 Mar 13;5(3):e20 [FREE Full text] [CrossRef] [Medline]
  76. Ferrara G, Kim J, Lin S, Hua J, Seto E. A focused review of smartphone diet-tracking apps: usability, functionality, coherence with behavior change theory, and comparative validity of nutrient intake and energy estimates. JMIR Mhealth Uhealth 2019 May 17;7(5):e9232 [FREE Full text] [CrossRef] [Medline]
  77. Larbi D, Randine P, Årsand E, Antypas K, Bradway M, Gabarron E. Methods and evaluation criteria for apps and digital interventions for diabetes self-management: systematic review. J Med Internet Res 2020 Jul 06;22(7):e18480 [FREE Full text] [CrossRef] [Medline]
  78. Fu H, McMahon SK, Gross CR, Adam TJ, Wyman JF. Usability and clinical efficacy of diabetes mobile applications for adults with type 2 diabetes: a systematic review. Diabetes Res Clin Pract 2017 Sep;131:70-81. [CrossRef] [Medline]
  79. Scott IA, Scuffham P, Gupta D, Harch TM, Borchi J, Richards B. Going digital: a narrative overview of the effects, quality and utility of mobile apps in chronic disease self-management. Aust Health Rev 2020 Feb;44(1):62-82. [CrossRef] [Medline]
  80. Woods L, Duff J, Cummings E, Walker K. Evaluating the development processes of consumer mHealth interventions for chronic condition self-management: a scoping review. Comput Inform Nurs 2019 Jul;37(7):373-385. [CrossRef] [Medline]
  81. Richardson B, Dol J, Rutledge K, Monaghan J, Orovec A, Howie K, et al. Evaluation of mobile apps targeted to parents of infants in the neonatal intensive care unit: systematic app review. JMIR Mhealth Uhealth 2019 Apr 15;7(4):e11620 [FREE Full text] [CrossRef] [Medline]
  82. Carmody JK, Denson LA, Hommel KA. Content and usability evaluation of medication adherence mobile applications for use in pediatrics. J Pediatr Psychol 2019 Apr 01;44(3):333-342 [FREE Full text] [CrossRef] [Medline]
  83. Fiks AG, Fleisher L, Berrigan L, Sykes E, Mayne SL, Gruver R, et al. Usability, acceptability, and impact of a pediatric teledermatology mobile health application. Telemed J E Health 2018 Mar;24(3):236-245. [CrossRef] [Medline]
  84. Webb MJ, Wadley G, Sanci LA. Improving patient-centered care for young people in general practice with a codesigned screening app: mixed methods study. JMIR Mhealth Uhealth 2017 Aug 11;5(8):e118 [FREE Full text] [CrossRef] [Medline]
  85. DeForte S, Sezgin E, Huefner J, Lucius S, Luna J, Satyapriya AA, et al. Usability of a mobile app for improving literacy in children with hearing impairment: focus group study. JMIR Hum Factors 2020 May 28;7(2):e16310 [FREE Full text] [CrossRef] [Medline]
  86. English LL, Dunsmuir D, Kumbakumba E, Ansermino JM, Larson CP, Lester R, et al. The PAediatric Risk Assessment (PARA) mobile app to reduce postdischarge child mortality: design, usability, and feasibility for health care workers in Uganda. JMIR Mhealth Uhealth 2016 Feb 15;4(1):e16 [FREE Full text] [CrossRef] [Medline]
  87. Yoo S, Jung S, Kim S, Kim E, Lee K, Chung E, et al. A personalized mobile patient guide system for a patient-centered smart hospital: lessons learned from a usability test and satisfaction survey in a tertiary university hospital. Int J Med Inform 2016 Jul;91:20-30 [FREE Full text] [CrossRef] [Medline]
  88. Zini F, Ricci F. Guiding patients in the hospital. In: Proceeding of the Advances in User Modeling: UMAP 2011 Workshops. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012 Presented at: Advances in User Modeling: UMAP 2011 Workshops; July 11-15, 2011; Girona, Spain. [CrossRef]
  89. Westphal M, Yom-Tov GB, Parush A, Carmeli N, Shaulov A, Shapira C, et al. A patient-centered information system (myED) for emergency care journeys: design, development, and initial adoption. JMIR Form Res 2020 Feb 25;4(2):e16410 [FREE Full text] [CrossRef] [Medline]
  90. Alwashmi MF, Hawboldt J, Davis E, Fetters MD. The iterative convergent design for mobile health usability testing: mixed methods approach. JMIR Mhealth Uhealth 2019 Apr 26;7(4):e11656 [FREE Full text] [CrossRef] [Medline]
  91. Usability 101: introduction to usability. Nielsen Norman Group.   URL: https://www.nngroup.com/articles/usability-101-introduction-to-usability/ [accessed 2021-10-10]
  92. Slater H, Campbell JM, Stinson JN, Burley MM, Briggs AM. End user and implementer experiences of mHealth technologies for noncommunicable chronic disease management in young adults: systematic review. J Med Internet Res 2017 Dec 12;19(12):e406 [FREE Full text] [CrossRef] [Medline]
  93. Lesselroth B, Monkman H, Adams K, Wood S, Corbett A, Homco J, et al. User experience theories, models, and frameworks: a focused review of the healthcare literature. Stud Health Technol Inform 2020 Jun 16;270:1076-1080. [CrossRef] [Medline]
  94. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. International Organization for Standardization. 2018.   URL: https://www.iso.org/standard/77520.html [accessed 2021-10-10]
  95. Rowland SP, Fitzgerald JE, Holme T, Powell J, McGregor A. What is the clinical value of mHealth for patients? NPJ Digit Med 2020 Jan 13;3(1):4 [FREE Full text] [CrossRef] [Medline]
  96. Nouri R, Kalhori SR, Ghazisaeedi M, Marchand G, Yasini M. Criteria for assessing the quality of mHealth apps: a systematic review. J Am Med Inform Assoc 2018 Aug 01;25(8):1089-1098 [FREE Full text] [CrossRef] [Medline]
  97. Vo V, Auroy L, Sarradon-Eck A. Patients' perceptions of mHealth apps: meta-ethnographic review of qualitative studies. JMIR Mhealth Uhealth 2019 Jul 10;7(7):e13817 [FREE Full text] [CrossRef] [Medline]
  98. Faulkner L. Beyond the five-user assumption: benefits of increased sample sizes in usability testing. Behav Res Methods Instrum Comput 2003 Aug;35(3):379-383. [CrossRef] [Medline]
  99. Janneck M, Dogan M. The influence of stressors on usability tests - an experimental study. In: Proceedings of the 9th International Conference on Web Information Systems and Technologies (WEBIST 2013).: SciTePress; 2013 Presented at: 9th International Conference on Web Information Systems and Technologies (WEBIST 2013); May 8-10, 2013; Aachen, Germany p. 581-590. [CrossRef]
  100. Schrier JR. Reducing stress associated with participating in a usability test. Proc Hum Factors Soc Annu Meet 2016 Aug 06;36(16):1210-1214. [CrossRef]
  101. Sova D, Nielsen J. 234 tips and tricks for recruiting users as participants in usability studies. Nielsen Norman Group. 2010.   URL: https://media.nngroup.com/media/reports/free/How_To_Recruit_Participants_for_Usability_Studies.pdf [accessed 2021-10-10]
  102. Schneiderman N, Ironson G, Siegel SD. Stress and health: psychological, behavioral, and biological determinants. Annu Rev Clin Psychol 2005;1:607-628 [FREE Full text] [CrossRef] [Medline]


ED: emergency department
ORE: overall relative efficiency
SUS: System Usability Scale
TBE: time-based efficiency
TCR: task completion rate


Edited by S Badawy; submitted 05.11.20; peer-reviewed by R Marcilly, O Ogundaini, M Rahimian; comments to author 29.12.20; revised version received 18.03.21; accepted 21.12.21; published 15.03.22

Copyright

©Jessica Rochat, Frédéric Ehrler, Johan N Siebert, Arnaud Ricci, Victor Garretas Ruiz, Christian Lovis. Originally published in JMIR Pediatrics and Parenting (https://pediatrics.jmir.org), 15.03.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Pediatrics and Parenting, is properly cited. The complete bibliographic information, a link to the original publication on https://pediatrics.jmir.org, as well as this copyright and license information must be included.