Working documents

  • From A to B with ease: User-centric interfaces for shuttle buses
    Alam, M. S., Martens, M., Bazilinskyy, P.
    Submitted for publication.
    User interfaces are crucial for easy travel. To understand user preferences for travel information during automated shuttle rides, we conducted an online survey with 51 participants from 8 countries. The survey focused on the information passengers wish to access and their preferences for using mobile, private, and public screens during boarding and travelling on the bus. It also gathered opinions on the usage of Near-Field Communication (NFC) for shuttle bus confirmation and viewing assistance to help passengers stand precisely where the shuttle will arrive, overcoming navigation and language barriers. Results showed that 72.6% of participants indicated a need for NFC and 82.4% for viewing assistance. Preferences for shuttle bus schedules on mobile screens correlated strongly with preferences for route details (r = 0.55) and next-stop information (r = 0.57), suggesting that passengers who value one type of information are likely to value related kinds too.
  • Generating realistic traffic scenarios: A deep learning approach using generative adversarial networks (GANs)
    Alam, M. S., Martens, M., Bazilinskyy, P.
    Submitted for publication.
    Diverse and realistic traffic scenarios are crucial for testing systems and human behaviour in transportation research. This study uses Generative Adversarial Networks (GANs) for video-to-video translation to generate a variety of traffic scenes, capturing the nuances of urban driving environments and enriching realism and breadth. One advantage of this approach is the ability to model how road users adapt and behave differently across varying conditions depicted in the translated videos. For instance, certain scenarios may exhibit more cautious driver behaviour, while others may involve heavier traffic and faster speeds. Maintaining consistent driving patterns in the translated videos improves their resemblance to real-world scenarios, thereby increasing the reliability of the data for testing and validation purposes. Ultimately, this approach provides researchers and practitioners with a valuable method for evaluating algorithms and systems under challenging conditions, advancing transportation models and automated driving technologies.
  • Robot-like in-vehicle agent for a level 3 automated vehicle
    Zeng, X., Bazilinskyy, P.
    Submitted for publication.
    With the rapid development of automotive technology and artificial intelligence, in-vehicle agents have large potential to solve the challenges of explaining the system status and the intentions of an automated vehicle. A robot-like in-vehicle agent was developed to explore communication through gestures and facial expressions with a driver in an SAE Level 3 automated vehicle. An experiment with 12 participants was conducted to evaluate the prototype. Results showed that both facial expressions and gestures can reduce workload and increase usefulness and satisfaction. However, gestures seem to be more functional and preferred by drivers, while facial expressions seem to be more emotional and preferred by passengers. Furthermore, gestures are easier to notice but harder to understand on their own, whereas facial expressions are harder to notice but more attractive.


  • It is not always just one road user: Workshop on multi-agent automotive research
    Bazilinskyy, P., Ebel, P., Walker, F., Dey, D., Tran, T.
    Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). Stanford, CA, USA (2024)

    In the future, roads will host a complex mix of automated and manually operated vehicles, along with vulnerable road users. However, most automotive user interfaces and human factors research focus on single-agent studies, where one human interacts with one vehicle. Only a few studies incorporate multi-agent setups. This workshop aims to (1) examine the current state of multi-agent research in the automotive domain, (2) serve as a platform for discussion toward more realistic multi-agent setups, and (3) discuss methods and practices to conduct such multi-agent research. The goal is to synthesize the insights from the AutoUI community, creating the foundation for advancing multi-agent traffic interaction research.
  • Exploring holistic HMI design for automated vehicles: Insights from participatory workshop to bridge in-vehicle and external communication
    Dong, H., Tran, T., Verstegen, R., Cazacu, S., Gao, R., Hoggenmüller, M., Dey, D., Franssen, M., Sasalovici, M., Bazilinskyy, P., Martens. M.
    Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems. Honolulu, USA (2024)

    Human-Machine Interfaces (HMIs) for automated vehicles (AVs) are typically divided into two categories: internal HMIs for interactions within the vehicle, and external HMIs for communication with other road users. In this work, we examine the prospects of bridging these two seemingly distinct domains. Through a participatory workshop with automotive user interface researchers and practitioners, we facilitated a critical exploration of holistic HMI design by having workshop participants collaboratively develop interaction scenarios involving AVs, in-vehicle users, and external road users. The discussion offers insights into the escalation of interface elements as an HMI design strategy, the direct interactions between different users, and an expanded understanding of holistic HMI design. This work reflects a collaborative effort to understand the practical aspects of this holistic design approach, offering new perspectives and encouraging further investigation into this underexplored aspect of automotive user interfaces.
  • Putting ChatGPT vision (GPT-4V) to the test: Risk perception in traffic images
    Driessen, T., Dodou, D., Bazilinskyy, P., De Winter, J. C. F.
    Royal Society Open Science, 11:231676 (2024)

    Vision-language models are of interest in various domains, including automated driving, where computer vision techniques can accurately detect road users, but where the vehicle sometimes fails to understand context. This study examined the effectiveness of GPT-4V in predicting the level of ‘risk’ in traffic images as assessed by humans. We used 210 static images taken from a moving vehicle, each previously rated by approximately 650 people. Based on psychometric construct theory and using insights from the self-consistency prompting method, we formulated three hypotheses: (i) repeating the prompt under effectively identical conditions increases validity, (ii) varying the prompt text and extracting a total score increases validity compared to using a single prompt, and (iii) in a multiple regression analysis, the incorporation of object detection features, alongside the GPT-4V-based risk rating, significantly contributes to improving the model's validity. Validity was quantified by the correlation coefficient with human risk scores, across the 210 images. The results confirmed the three hypotheses. The eventual validity coefficient was r = 0.83, indicating that population-level human risk can be predicted using AI with a high degree of accuracy. The findings suggest that GPT-4V must be prompted in a way equivalent to how humans fill out a multi-item questionnaire.
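    The aggregation logic behind hypotheses (i) and (ii), averaging scores from repeated or varied prompts before correlating them with human ratings, can be sketched as follows. This is a minimal illustration with simulated numbers: in the study each model score came from a GPT-4V prompt, whereas here they are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, n_prompts = 210, 3

# Stand-in data: population-level human risk scores per image, and noisy
# model ratings (one column per prompt variation). In the real study each
# column would come from a GPT-4V API call; here it is simulated.
human = rng.uniform(0, 10, n_images)
model = human[:, None] + rng.normal(0, 3, (n_images, n_prompts))

# Validity of a single prompt vs. the total score over prompt variations.
r_single = np.corrcoef(model[:, 0], human)[0, 1]
r_total = np.corrcoef(model.mean(axis=1), human)[0, 1]
print(f"single prompt r = {r_single:.2f}, aggregated r = {r_total:.2f}")
```

    Because the per-prompt noise partially averages out, the total score correlates more strongly with the human ratings than any single prompt does, which is the mechanism the hypotheses rely on.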
  • Changing lanes toward open science: Openness and transparency in automotive user research
    Ebel, P., Bazilinskyy, P., Colley, M., Goodridge, C. M., Hock, P., Janssen, C., Sandhaus, H., Srinivasan, A. R., Wintersberger, P.
    Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). Stanford, CA, USA (2024)

    We review the state of open science and the perspectives on open data sharing within the automotive user research community. Openness and transparency are critical not only for judging the quality of empirical research, but also for accelerating scientific progress and promoting an inclusive scientific community. However, there is little documentation of these aspects within the automotive user research community. To address this, we report two studies that identify (1) community perspectives on motivators and barriers to data sharing, and (2) how openness and transparency have changed in papers published at AutomotiveUI over the past 5 years. We show that while open science is valued by the community and openness and transparency have improved, overall compliance is low. The most common barriers are legal constraints and confidentiality concerns. Although research published at AutomotiveUI relies more on quantitative methods than research published at CHI, openness and transparency are not as well established. Based on our findings, we provide suggestions for improving openness and transparency, arguing that the motivators for open science must outweigh the barriers.
  • Exploring the correlation between emotions and uncertainty in daily travel
    Franssen, M., Verstegen, R., Bazilinskyy, P., Martens, M.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). Nice, France (2024)

    Our mental state influences how we behave in and interact with the everyday world. Both uncertainty and emotions can alter our mental state and, thus, our behaviour. Although the relationship between uncertainty and emotions has been studied, research into this relationship in the context of daily travel is lacking. Emotions may influence uncertainty, just as uncertainty could trigger emotional responses. In this paper, a study is presented that explores the relationship between uncertainty and emotional states in the context of daily travel. Using a diary study with 25 participants, emotions and uncertainty experienced during daily travel across multiple modes of transport were tracked for a period of 14 days. The diary format allowed participants to record time-sensitive experiences in their relevant context over a longer period, giving detailed insights into and reflections on the emotions and uncertainty they experienced during day-to-day travel. Daily logs were made in the m-Path application: participants logged their transportation modes, their emotions using the Geneva Emotion Wheel, and the uncertainty they experienced while travelling. Results show that emotions and uncertainty influence one another simultaneously, with no clear causality. Specifically, this study observed a significant correlation between negative valence emotions (disappointment and fear) and uncertainty, which emphasises the importance of managing uncertainty and negative valence emotions in travel experiences.
  • Combining internal and external communication: The design of a holistic Human-Machine Interface for automated vehicles
    Verstegen, R., Gao, R., Bernhaupt, R., Bazilinskyy, P., Martens, M.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). Nice, France (2024)

    In this paper, we explore the field of holistic Human-Machine Interfaces (hHMIs). Currently, internal and external Human-Machine Interfaces are researched as separate fields. This separation can lead to non-systemic designs that operate in different fashions, make the switch between traffic roles less seamless, and create differences in understanding of a traffic situation, potentially increasing confusion. These factors can limit the adoption of automated vehicles and lead to less seamless interactions in traffic. For this reason, we explore the concept of hHMIs, combining internal and external communication. This paper introduces a working definition for this new type of interface. It then explores considerations for the design of such an interface: the provision of anticipatory cues, interaction modalities and perceptibility, colour usage, building upon standardisation, and the usage of a singular versus a coupled interface. Finally, we apply these considerations in an artefact contribution in the form of an hHMI concept. This interface communicates anticipatory cues in a unified manner to internal and external users of the automated vehicle and demonstrates how the proposed considerations can be applied. By sharing design considerations and a design concept, this paper aims to stimulate the field of holistic Human-Machine Interfaces for automated vehicles.
  • Slideo: Using bicycle-to-vehicle communication to intuitively share intentions to automated vehicles
    Verstegen, J., Bazilinskyy, P.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). Nice, France (2024)

    In urban environments, cycling is an important method of transportation due to being sustainable, healthy and less space-intensive than motorised traffic. Most literature on interactions between automated vehicles (AVs) and vulnerable road users (VRUs) focuses on external Human-Machine Interfaces positioned on AVs and telling VRUs what to do. Such an interface requires cyclists to actively look for and interpret the information and can reduce their ability to make their own decisions. We designed a physical bicycle-to-vehicle (B2V) interaction that allows cyclists to share the intention to turn with AVs through vehicle-to-everything (V2X) communication. We explored four concepts of interaction with hands, feet, hips, and knees. The final concept uses haptic feedback in each handle. The test with nine participants explored the clarity of the feedback and compared two variations: (1) providing feedback in the beginning, during and at the end and (2) giving feedback only at the beginning and end. Results indicate that the general meaning of both variants is clear and that the preferred variation of feedback is up to personal preference. We suggest that B2V interactions should be possible to personalise.


  • Exterior sounds for electric and automated vehicles: Loud is effective
    Bazilinskyy, P., Merino-Martínez, R., Vieira, E. O., Dodou, D., De Winter, J. C. F.
    Applied Acoustics, 214, 109673 (2023)

    Exterior vehicle sounds have been introduced in electric vehicles and as external human–machine interfaces for automated vehicles. While previous research has studied the effect of exterior vehicle sounds on detectability and acceptance, the present study takes a different approach by examining the efficacy of such sounds in deterring people from crossing the road. An online study was conducted in which 226 participants were presented with different types of synthetic sounds, including sounds of a combustion engine, pure tones, combined tones, and beeps. Participants were presented with a scenario where a vehicle moved in a straight trajectory at a constant velocity of 30 km/h, without any accompanying visual information. Participants, acting as pedestrians, were asked to hold down a key when they felt safe to cross. After each trial, they assessed whether the vehicle sound was easy to notice, whether it gave enough information to realize that a vehicle was approaching, and whether the sound was annoying. The results showed that sounds of higher modeled perceived loudness, such as continuous tones with high frequency, were the most effective in deterring participants from crossing the road. The tested intermittent beeps resulted in lower crossing deterrence than continuous tones, presumably because no valuable information could be derived during the inter-pulse intervals. Tire noise proved to be effective in deterring participants from crossing while being the least annoying among the sounds tested. These results may prove insightful for the improvement of synthetic exterior vehicle sounds.
  • Predicting perceived risk of traffic scenes using computer vision
    De Winter, J. C. F., Hoogmoed, J., Stapel, J., Dodou, D., Bazilinskyy, P.
    Transportation Research Part F: Traffic Psychology and Behaviour, 84, 194-210 (2023)

    Perceived risk, or subjective risk, is an important concept in the field of traffic psychology and automated driving. In this paper, we investigate whether perceived risk in images of traffic scenes can be predicted from computer vision features that may also be used by automated vehicles (AVs). We conducted an international crowdsourcing study with 1378 participants, who rated the perceived risk of 100 randomly selected dashcam images on German roads. The population-level perceived risk was found to be statistically reliable, with a split-half reliability of 0.98. We used linear regression analysis to predict (r = 0.62) perceived risk from two features obtained with the YOLOv4 computer vision algorithm: the number of people in the scene and the mean size of the bounding boxes surrounding other road users. When the ego-vehicle’s speed was added as a predictor variable, the prediction strength increased to r = 0.75. Interestingly, the sign of the speed prediction was negative, indicating that a higher vehicle speed was associated with a lower perceived risk. This finding aligns with the principle of self-explaining roads. Our results suggest that computer-vision features and vehicle speed contribute to an accurate prediction of population subjective risk, outperforming the ratings provided by individual participants (mean r = 0.41). These findings may have implications for AV development and the modeling of psychological constructs in traffic psychology.
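    The prediction pipeline described above, a multiple linear regression from computer-vision features and ego-vehicle speed to population-level risk, can be sketched as below. The feature values are synthetic stand-ins rather than the paper's data, and plain least squares is used in place of whatever statistics package the authors employed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # one row per rated traffic image (synthetic stand-in data)

# Predictors per image: number of detected people, mean bounding-box size,
# and ego-vehicle speed (all values invented for illustration).
n_people = rng.integers(0, 10, n)
mean_box = rng.uniform(0.01, 0.3, n)
speed = rng.uniform(10, 120, n)

# Synthetic "perceived risk": more people / larger boxes raise risk,
# higher speed lowers it (mirroring the self-explaining-roads finding).
risk = 0.4 * n_people + 5.0 * mean_box - 0.02 * speed + rng.normal(0, 0.5, n)

# Multiple linear regression via least squares, with an intercept column.
X = np.column_stack([np.ones(n), n_people, mean_box, speed])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
predicted = X @ coef

# Validity = Pearson correlation between predicted and observed risk.
r = np.corrcoef(predicted, risk)[0, 1]
print(f"speed coefficient: {coef[3]:.3f}, validity r = {r:.2f}")
```

    With data built this way, the fitted speed coefficient comes out negative, reproducing the counterintuitive sign the abstract highlights.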
  • Holistic HMI design for automated vehicles: Bridging in-vehicle and external communication
    Dong, H., Tran, T., Bazilinskyy, P., Hoggenmueller, M., Dey, D., Cazacu, S., Franssen, M., Gao, R.
    Adjunct Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). Ingolstadt, Germany (2023)

    As the field of automated vehicles (AVs) advances, it has become increasingly critical to develop human-machine interfaces (HMIs) for both internal and external communication. Critical dialogue is emerging around the potential necessity for a holistic approach to HMI design, providing a unified and coherent experience for the different stakeholders interacting with AVs. This workshop seeks to bring together designers, engineers, researchers, and other stakeholders to delve into relevant use cases, exploring the potential advantages and challenges of this approach. The insights generated from this workshop aim to inform further design and research in the development of coherent HMIs for AVs, ultimately for more seamless integration of AVs into existing traffic.
  • Breaking barriers: Workshop on open data practices in AutoUI research
    Ebel, P., Bazilinskyy, P., Hwang, A., Ju, W., Sandhaus, H., Srinivasan, A., Yang, Q., Wintersberger, P.
    Adjunct Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). Ingolstadt, Germany (2023)

    While the benefits of open science and open data practices are well understood, experimental data sharing is still uncommon in the AutoUI community. The goal of this workshop is to address the current lack of data sharing practices and to promote a culture of openness. By discussing barriers to data sharing, defining best practices, and exploring open data formats, we aim to foster collaboration, improve data quality, and promote transparency. Special interest groups will be formed to identify parameter sets for recurring research topics, so that data collected in different individual studies can be used to generate insights beyond the results of the individual studies. Join us at this workshop to help democratize knowledge and advance research in the AutoUI community.


  • Blinded windows and empty driver seats: The effects of automated vehicle characteristics on cyclist decision-making
    Bazilinskyy, P., Dodou, D., Eisma, Y. B., Vlakveld, W. V., De Winter, J. C. F.
    IET Intelligent Transport Systems, 17, 1 (2022)

    Automated vehicles (AVs) may feature blinded (i.e., blacked-out) windows and external Human-Machine Interfaces (eHMIs), and the driver may be inattentive or absent, but how these features affect cyclists is unknown. In a crowdsourcing study, participants viewed images of approaching vehicles from a cyclist’s perspective and decided whether to brake. The images depicted different combinations of traditional versus automated vehicles, eHMI presence, vehicle approach direction, driver visibility/window-blinding, visual complexity of the surroundings, and distance to the cyclist (urgency). The results showed that the eHMI and urgency level had a strong impact on crossing decisions, whereas visual complexity had no significant influence. Blinded windows caused participants to brake for the traditional vehicle. A second crowdsourcing experiment aimed to clarify the findings of Experiment 1 by also requiring participants to detect the vehicle features. It was found that the eHMI ‘GO’ and blinded windows yielded high detection rates and that driver eye contact caused participants to continue pedalling. To conclude, blinded windows increase the probability that cyclists brake, and driver eye contact stimulates cyclists to continue cycling. Our findings, which were obtained with large international samples, may help elucidate how AVs (in which the driver may not be visible) affect cyclists’ behaviour.
  • Crowdsourced assessment of 227 text-based eHMIs for a crossing scenario
    Bazilinskyy, P., Dodou, D., De Winter, J. C. F.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). New York, USA (2022)

    Future automated vehicles may be equipped with external human-machine interfaces (eHMIs) capable of signaling whether pedestrians can cross the road. Industry and academia have proposed a variety of eHMIs featuring a text message. An eHMI message can refer to the action to be performed by the pedestrian (egocentric message) or the automated vehicle (allocentric message). Currently, there is no consensus on the correct phrasing of the text message. We created 227 eHMIs based on text-based eHMIs observed in the literature. A crowdsourcing experiment (N = 1241) was performed with images depicting an automated vehicle equipped with an eHMI on the front bumper. The participants indicated whether they would (not) cross the road, and response times were recorded. Egocentric messages were found to be more compelling for participants to (not) cross than allocentric messages. Furthermore, Spanish-speaking participants found Spanish eHMIs more compelling than English eHMIs. Finally, it was established that some eHMI texts should be avoided, as signified by compellingness, long responses, and high inter-subject variability.
  • Get out of the way! Examining eHMIs in critical driver-pedestrian encounters in a coupled simulator
    Bazilinskyy, P., Kooijman, L., Dodou, D., Mallant, K. P. T., Roosens, V. E. R., Middelweerd, M. D. L. M., Overbeek, L. D., De Winter, J. C. F.
    Proceedings of International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) (2022)

    Past research suggests that displays on the exterior of the car, known as eHMIs, can be effective in helping pedestrians to make safe crossing decisions. This study examines a new application of eHMIs, namely the provision of directional information in scenarios where the pedestrian is almost hit by a car. In an experiment using a head-mounted display and a motion suit, participants had to cross the road while a car driven by another participant approached them. The results showed that the directional eHMI caused pedestrians to step back compared to no eHMI. The eHMI increased the pedestrians’ self-reported understanding of the car’s intention, although some pedestrians did not notice the eHMI. In conclusion, there may be potential for supporting pedestrians in situations where they need support the most, namely critical encounters. Future research may consider coupling a directional eHMI to autonomous emergency steering.
  • Identifying lane changes automatically using the GPS sensors of portable devices
    Driessen, T., Prasad, L., Bazilinskyy, P., De Winter, J. C. F.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). New York, USA (2022)

    Mobile applications that provide GPS-based route navigation advice or driver diagnostics are gaining popularity. However, these applications currently do not have knowledge of whether the driver is performing a lane change. Having such information may prove valuable to individual drivers (e.g., to provide more specific navigation instructions) or road authorities (e.g., knowledge of lane change hotspots may inform road design). The present study aimed to assess the accuracy of lane change recognition algorithms that rely solely on mobile GPS sensor input. Three trips on Dutch highways, totaling 158 km of driving, were performed while carrying two smartphones (Huawei P20, Samsung Galaxy S9), a GPS-equipped GoPro Max, and a USB GPS receiver (GlobalSat BU343-s4). The timestamps of all 215 lane changes were manually extracted from the forward-facing GoPro camera footage, and used as ground truth. After connecting the GPS trajectories to the road using Mapbox Map Matching API (2022), lane changes were identified based on the exceedance of a lateral translation threshold in set time windows. Different thresholds and window sizes were tested for their ability to discriminate between a pool of lane change segments and an equally-sized pool of no-lane-change segments. The overall accuracy of the lane-change classification was found to be 90%. The method appears promising for highway engineering and traffic behavior research that use floating car data, but there may be limited applicability to real-time advisory systems due to the occasional occurrence of false positives.
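    The thresholding idea described above, flagging time windows whose net lateral translation exceeds a set threshold, can be sketched as follows. The window size, threshold, and trajectory here are illustrative assumptions, not the paper's actual parameters or data.

```python
import numpy as np

def detect_lane_changes(lateral_offset, window=50, threshold=2.5):
    """Flag time windows whose net lateral translation exceeds a threshold.

    lateral_offset: per-sample lateral distance (m) from the map-matched
    road centreline; window: samples per window; threshold: metres of net
    sideways movement counted as a lane change. All values illustrative.
    """
    hits = []
    for start in range(0, len(lateral_offset) - window, window):
        seg = lateral_offset[start:start + window]
        if abs(seg[-1] - seg[0]) > threshold:
            hits.append(start)
    return hits

# Synthetic trajectory: drive straight, shift ~3.5 m (one lane) at sample 120,
# with Gaussian noise standing in for GPS error.
offset = np.zeros(300)
offset[120:] += 3.5
offset += np.random.default_rng(1).normal(0, 0.2, 300)

print(detect_lane_changes(offset))  # flags the window containing sample 120
```

    The trade-off the abstract mentions is visible here: lowering the threshold catches more true lane changes but lets GPS noise through as false positives, which is what limits real-time advisory use.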
  • Stopping by looking: A driver-pedestrian interaction study in a coupled simulator using head-mounted displays with eye-tracking
    Mok, C. S., Bazilinskyy, P., De Winter, J. C. F.
    Applied Ergonomics, 105, 103825 (2022)

    Automated vehicles (AVs) can perform low-level control tasks but are not always capable of proper decision-making. This paper presents a concept of eye-based maneuver control for AV-pedestrian interaction. Previously, it was unknown whether the AV should conduct a stopping maneuver when the driver looks at the pedestrian or looks away from the pedestrian. A two-agent experiment was conducted using two head-mounted displays with integrated eye-tracking. Seventeen pairs of participants (pedestrian and driver) each interacted in a road crossing scenario. The pedestrians' task was to hold a button when they felt safe to cross the road, and the drivers' task was to direct their gaze according to instructions. Participants completed three 16-trial blocks: (1) Baseline, in which the AV was pre-programmed to yield or not yield, (2) Look to Yield (LTY), in which the AV yielded when the driver looked at the pedestrian, and (3) Look Away to Yield (LATY), in which the AV yielded when the driver did not look at the pedestrian. The driver's eye movements in the LTY and LATY conditions were visualized using a virtual light beam. Crossing performance was assessed based on whether the pedestrian held the button when the AV yielded and released the button when the AV did not yield. Furthermore, the pedestrians' and drivers' acceptance of the mappings was measured through a questionnaire. The results showed that the LTY and LATY mappings yielded better crossing performance than Baseline. Furthermore, the LTY condition was best accepted by drivers and pedestrians. Eye-tracking analyses indicated that the LTY and LATY mappings attracted the pedestrian's attention, while pedestrians still distributed their attention between the AV and a second vehicle approaching from the other direction. In conclusion, LTY control may be a promising means of AV control at intersections before full automation is technologically feasible.
  • The effect of drivers’ eye contact on pedestrians’ perceived safety
    Onkhar, V., Bazilinskyy, P., Dodou, D., De Winter, J. C. F.
    Transportation Research Part F: Traffic Psychology and Behaviour, 84, 194-210 (2022)

    Many fatal accidents that involve pedestrians occur at road crossings, and are attributed to a breakdown of communication between pedestrians and drivers. Thus, it is important to investigate how forms of communication in traffic, such as eye contact, influence crossing decisions. Thus far, there is little information about the effect of drivers’ eye contact on pedestrians’ perceived safety to cross the road. Existing studies treat eye contact as immutable, i.e., it is either present or absent in the whole interaction, an approach that overlooks the effect of the timing of eye contact. We present an online crowdsourced study that addresses this research gap. 1835 participants viewed 13 videos of an approaching car twice, in random order, and held a key whenever they felt safe to cross. The videos differed in terms of whether the car yielded or not, whether the car driver made eye contact or not, and the times when the driver made eye contact. Participants also answered questions about their perceived intuitiveness of the driver’s eye contact behavior. The results showed that eye contact made people feel considerably safer to cross compared to no eye contact (an increase in keypress percentage from 31% to 50% was observed). In addition, the initiation and termination of eye contact affected perceived safety to cross more strongly than continuous eye contact and a lack of it, respectively. The car’s motion, however, was a more dominant factor. Additionally, the driver’s eye contact when the car braked was considered intuitive, and when it drove off, counterintuitive. In summary, this study demonstrates for the first time how drivers’ eye contact affects pedestrians’ perceived safety as a function of time in a dynamic scenario and questions the notion in recent literature that eye contact in road interactions is dispensable. These findings may be of interest in the development of automated vehicles (AVs), where the driver of the AV might not always be paying attention to the environment.


  • What driving style makes pedestrians think a passing vehicle is driving automatically?
    Bazilinskyy, P., Sakuma, T., De Winter, J. C. F.
    Applied Ergonomics, 95, 103428 (2021)

    An important question in the development of automated vehicles (AVs) is which driving style AVs should adopt and how other road users perceive them. The current study aimed to determine which AV behaviours contribute to pedestrians’ judgements as to whether the vehicle is driving manually or automatically as well as judgements of likeability. We tested five target trajectories of an AV in curves: playback manual driving, two stereotypical automated driving conditions (road centre tendency, lane centre tendency), and two stereotypical manual driving conditions, which slowed down for curves and cut curves. In addition, four braking patterns for approaching a zebra crossing were tested: manual braking, stereotypical automated driving (fixed deceleration), and two variations of stereotypical manual driving (sudden stop, crawling forward). The AV was observed by 24 participants standing on the curb of the road in groups. After each passing of the AV, participants rated whether the car was driven manually or automatically, and the degree to which they liked the AV’s behaviour. Results showed that the playback manual trajectory was considered more manual than the other trajectory conditions. The stereotypical automated ‘road centre tendency’ and ‘lane centre tendency’ trajectories received similar likeability ratings as the playback manual driving. An analysis of written comments showed that curve cutting was a reason to believe the car is driving manually, whereas driving at a constant speed or in the centre was associated with automated driving. The sudden stop was the least likeable way to decelerate, but there was no consensus on whether this behaviour was manual or automated. It is concluded that AVs do not have to drive like a human in order to be liked.
  • How should external Human-Machine Interfaces behave? Examining the effects of colour, position, message, activation distance, vehicle yielding, and visual distraction among 1,434 participants
    Bazilinskyy, P., Kooijman, L., Dodou, D., De Winter, J. C. F.
    Applied Ergonomics, 95, 103450 (2021)

    External human-machine interfaces (eHMIs) may be useful for communicating the intention of an automated vehicle (AV) to a pedestrian, but it is unclear which eHMI design is most effective. In a crowdsourced experiment, we examined the effects of (1) colour (red, green, cyan), (2) position (roof, bumper, windshield), (3) message (WALK, DON'T WALK, WILL STOP, WON'T STOP, light bar), (4) activation distance (35 or 50 m from the pedestrian), and (5) the presence of visual distraction in the environment, on pedestrians' perceived safety of crossing the road in front of yielding and non-yielding AVs. Participants (N = 1434) had to press a key when they felt safe to cross while watching a random 40 out of 276 videos of an approaching AV with eHMI. Results showed that (1) green and cyan eHMIs led to higher perceived safety of crossing than red eHMIs; no significant difference was found between green and cyan, (2) eHMIs on the bumper and roof were more effective than eHMIs on the windshield, (3) for yielding AVs, perceived safety was higher for WALK compared to WILL STOP, followed by the light bar; for non-yielding AVs, a red bar yielded similar results to red text, (4) for yielding AVs, a red bar caused lower perceived safety when activated early compared to late, whereas green/cyan WALK led to higher perceived safety when activated late compared to early, and (5) distraction had no significant effect. We conclude that people adopt an egocentric perspective, that the windshield is an ineffective position, that the often-recommended colour cyan may have to be avoided, and that eHMI activation distance has intricate effects related to onset saliency.
  • Visual attention of pedestrians in traffic scenes: A crowdsourcing experiment
    Bazilinskyy, P., Kyriakidis, M., De Winter, J. C. F.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). New York, USA (2021)

    In a crowdsourced experiment, the effects of distance and type of the approaching vehicle, traffic density, and visual clutter on pedestrians’ attention distribution were explored. 965 participants viewed 107 images of diverse traffic scenes for durations between 100 and 4000 ms. Participants’ eye-gaze data were collected using the TurkEyes method. The method involved briefly showing codecharts after each image and asking the participants to type the code they saw last. The results indicate that automated vehicles were more often glanced at than manual vehicles. Measuring eye gaze without an eye tracker is promising.
  • How do pedestrians distribute their visual attention when walking through a parking garage? An eye-tracking study
    De Winter, J. C. F., Bazilinskyy, P., Wesdorp, D., De Vlam, V., Hopmans, B., Visscher, J., Dodou, D.
    Ergonomics, 64, 793–805 (2021)

    We examined what pedestrians look at when walking through a parking garage. Thirty-six participants walked a short route in a floor of a parking garage while their eye movements and head rotations were recorded with a Tobii Pro Glasses 2 eye-tracker. The participants’ fixations were then classified into 14 areas of interest. The results showed that pedestrians often looked at the back (20.0%), side (7.5%), and front (4.2%) of parked cars, and at approaching cars (8.8%). Much attention was also paid to the ground (20.1%). The wheels of cars (6.8%) and the driver in approaching cars (3.2%) received attention as well. In conclusion, this study showed that eye movements are largely functional in the sense that they appear to assist in safe navigation through the parking garage. Pedestrians look at a variety of sides and features of the car, suggesting that displays on future automated cars should be omnidirectionally visible.
  • Towards the detection of driver–pedestrian eye contact
    Onkhar, V., Bazilinskyy, P., Stapel, J. C. J., Dodou, D., Gavrila, D., De Winter, J. C. F.
    Pervasive and Mobile Computing, 76, 101455 (2021)

    Non-verbal communication, such as eye contact between drivers and pedestrians, has been regarded as one way to reduce accident risk. So far, studies have assumed rather than objectively measured the occurrence of eye contact. We address this research gap by developing an eye contact detection method and testing it in an indoor experiment with scripted driver-pedestrian interactions at a pedestrian crossing. Thirty participants acted as a pedestrian either standing on an imaginary curb or crossing an imaginary one-lane road in front of a stationary vehicle with an experimenter in the driver’s seat. In half of the trials, pedestrians were instructed to make eye contact with the driver; in the other half, they were prohibited from doing so. Both parties’ gaze was recorded using eye trackers. An in-vehicle stereo camera recorded the car’s point of view, a head-mounted camera recorded the pedestrian’s point of view, and the location of the driver’s and pedestrian’s eyes was estimated using image recognition. We demonstrate that eye contact can be detected by measuring the angles between the vector joining the estimated location of the driver’s and pedestrian’s eyes, and the pedestrian’s and driver’s instantaneous gaze directions, respectively, and identifying whether these angles fall below a threshold of 4°. We achieved 100% correct classification of the trials involving eye contact and those without eye contact, based on measured eye contact duration. The proposed eye contact detection method may be useful for future research into eye contact.
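    The angle-threshold criterion described above lends itself to a compact computation. The sketch below is a minimal illustration of the idea, assuming 3D eye positions and gaze direction vectors in a shared coordinate frame; the function names and the symmetric two-sided check are illustrative, not the authors' implementation:

    ```python
    import math

    def angle_deg(u, v):
        """Angle in degrees between two 3D vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        cos_a = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp for safety
        return math.degrees(math.acos(cos_a))

    def is_eye_contact(driver_eyes, pedestrian_eyes, driver_gaze, pedestrian_gaze,
                       threshold_deg=4.0):
        """Eye contact: both gaze directions point at the other party's eyes
        within the 4-degree threshold reported in the paper."""
        d_to_p = [p - d for d, p in zip(driver_eyes, pedestrian_eyes)]
        p_to_d = [-c for c in d_to_p]
        return (angle_deg(driver_gaze, d_to_p) < threshold_deg
                and angle_deg(pedestrian_gaze, p_to_d) < threshold_deg)

    # Driver at the origin looking along +x; pedestrian 10 m away looking back.
    print(is_eye_contact([0, 0, 0], [10, 0, 0], [1, 0, 0], [-1, 0, 0]))  # True
    ```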
  • Bio-inspired intent communication for automated vehicles
    Oudshoorn, M. P. J., De Winter, J. C. F., Bazilinskyy, P., Dodou, D.
    Transportation Research Part F: Traffic Psychology and Behaviour, 80, 127-140 (2021)

    Various external human-machine interfaces (eHMIs) have been proposed that communicate the intent of automated vehicles (AVs) to vulnerable road users. However, there is no consensus on which eHMI concept is most suitable for intent communication. In nature, animals have evolved the ability to communicate intent via visual signals. Inspired by intent communication in nature, this paper investigated three novel and potentially intuitive eHMI designs that rely on posture, gesture, and colouration, respectively. In an online crowdsourcing study, 1141 participants viewed videos featuring a yielding or non-yielding AV with one of the three bio-inspired eHMIs, as well as a green/red lightbar eHMI, a walk/don't walk text-based eHMI, and a baseline condition (i.e., no eHMI). Participants were asked to press and hold a key when they felt safe to cross and to answer rating questions. Together, these measures were used to determine the intuitiveness of the tested eHMIs. Results showed that the lightbar eHMI and text-based eHMI were more intuitive than the three bio-inspired eHMIs, which, in turn, were more intuitive than the baseline condition. An exception was the bio-inspired colouration eHMI, which produced a performance score that was equivalent to the text-based eHMI when communicating 'non-yielding'. Further research is necessary to examine whether these observations hold in more complex traffic situations. Additionally, we recommend combining features from different eHMIs, such as the full-body communication of the bio-inspired colouration eHMI with the colours of the lightbar eHMI.
  • Automated vehicles that communicate implicitly: Examining the use of lateral position within the lane
    Sripada, A., Bazilinskyy, P., De Winter, J. C. F.
    Ergonomics, 1–13 (2021)

    It may be necessary to introduce new modes of communication between automated vehicles (AVs) and pedestrians. This research proposes using the AV’s lateral deviation within the lane to communicate if the AV will yield to the pedestrian. In an online experiment, animated video clips depicting an approaching AV were shown to participants. Each of 1104 participants viewed 28 videos twice in random order. The videos differed in deviation magnitude, deviation onset, turn indicator usage, and deviation-yielding mapping. Participants had to press and hold a key as long as they felt safe to cross, and report the perceived intuitiveness of the AV’s behaviour after each trial. The results showed that the AV moving towards the pedestrian to indicate yielding and away to indicate continuing driving was more effective than the opposite combination. Furthermore, the turn indicator was regarded as intuitive for signalling that the AV will yield. Practitioner summary: Future automated vehicles (AVs) may have to communicate with vulnerable road users. Many researchers have explored explicit communication via text messages and LED strips on the outside of the AV. The present study examines the viability of implicit communication via the lateral movement of the AV.


  • Coupled simulator for research on the interaction between pedestrians and (automated) vehicles
    Bazilinskyy, P., Kooijman, L.*, De Winter, J. C. F.
    Proceedings of Driving Simulation Conference (DSC). Antibes, France (2020)

    Driving simulators are regarded as valuable tools for human factors research on automated driving and traffic safety. However, simulators that enable the study of human-human interactions are rare. In this study, we present an open-source coupled simulator developed in Unity. The simulator supports input from head-mounted displays, motion suits, and game controllers. It facilitates research on interactions between pedestrians and humans inside manual and automated vehicles. We present results of a demo experiment on the interaction between a passenger in an automated car equipped with an external human-machine interface, a driver of a manual car, and a pedestrian. We conclude that the newly developed open-source coupled simulator is a promising tool for future human factors research.
  • External Human-Machine Interfaces: Which of 729 colors is best for signaling ‘Please (do not) Cross’?
    Bazilinskyy, P., Dodou, D., De Winter, J. C. F.
    Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC). Toronto, Canada (2020)

    Future automated vehicles may be equipped with external human-machine interfaces (eHMIs) capable of signaling to pedestrians whether or not they can cross the road. There is currently no consensus on the correct colors for eHMIs. Industry and academia have already proposed a variety of eHMI colors, including red and green, as well as colors that are said to be neutral, such as cyan. A confusion that can arise with red and green is whether the color refers to the pedestrian (egocentric perspective) or the automated vehicle (allocentric perspective). We conducted two crowdsourcing experiments (N = 2000 each) with images depicting an automated vehicle equipped with an eHMI in the form of a rectangular display on the front bumper. The eHMI had one out of 729 colors from the RGB spectrum. In Experiment 1, participants rated the intuitiveness of a random subset of 100 of these eHMIs for signaling 'please cross the road', and in Experiment 2 for 'please do NOT cross the road'. The results showed that for 'please cross', bright green colors were considered the most intuitive. For 'please do NOT cross', red colors were rated as the most intuitive, but with high standard deviations among participants. In addition, some participants rated green colors as intuitive for 'please do NOT cross'. Results were consistent for men and women and for colorblind and non-colorblind persons. It is concluded that eHMIs should be green if the eHMI is intended to signal 'please cross', but green and red should be avoided if the eHMI is intended to signal 'please do NOT cross'. Various neutral colors can be used for that purpose, including cyan, yellow, and purple.
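    Note that 729 equals 9³, which is consistent with sampling each RGB channel at nine evenly spaced levels. That sampling scheme is an assumption about how the colour set was constructed, not something stated in the abstract; a hypothetical reconstruction:

    ```python
    # Hypothetical 9x9x9 RGB grid yielding 729 colours
    # (assumes nine evenly spaced values per channel, from 0 to 255).
    levels = [round(i * 255 / 8) for i in range(9)]
    palette = [(r, g, b) for r in levels for g in levels for b in levels]

    print(len(palette))   # 729
    print(palette[0])     # (0, 0, 0) - black
    print(palette[-1])    # (255, 255, 255) - white
    ```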
  • Risk perception: A study using dashcam videos and participants from different world regions
    Bazilinskyy, P., Eisma, Y. B., Dodou, D., De Winter, J. C. F.
    Traffic Injury Prevention, 21, 347–353 (2020)

    Objective: Research has shown that perceived risk is a vital variable in the understanding of road traffic safety. Having experience in a particular traffic environment can be expected to affect perceived risk. More specifically, drivers may readily recognize traffic hazards when driving in their own world region, resulting in high perceived risk (the expertise hypothesis). Conversely, drivers may be desensitized to traffic hazards that are common in their own world region, resulting in low perceived risk (the desensitization hypothesis). This study investigated whether participants experienced higher or lower perceived risk for traffic situations from their region compared to traffic situations from other regions. Methods: In a crowdsourcing experiment, participants viewed dashcam videos from four regions: India, Venezuela, United States, and Western Europe. Participants had to press a key when they felt the situation was risky. Results: Data were obtained from 800 participants, with 52 participants from India, 75 from Venezuela, 79 from the United States, 32 from Western Europe, and 562 from other countries. The results provide support for the desensitization hypothesis. For example, participants from India perceived low risk for hazards (e.g., a stationary car on the highway) that were perceived as risky by participants from other regions. At the same time, support for the expertise hypothesis was obtained, as participants in some cases detected hazards that were specific to their own region (e.g., participants from Venezuela detected inconspicuous roadworks in a Venezuelan city better than did participants from other regions). Conclusion: We found support for the desensitization hypothesis and the expertise hypothesis. These findings have implications for cross-cultural hazard perception research.


  • Blind driving by means of the track angle error
    Bazilinskyy, P., Bijker, L., Dielissen, T., French, S., Mooijman, T., Peters, L., De Winter, J. C. F.
    Proceedings of International Congress on Sound and Vibration (ICSV). Montreal, Canada (2019)

    This study is the third iteration in a series of studies aimed at developing a system that allows driving blindfolded. We used a sonification approach, where the predicted angular error of the car 2 seconds into the future was translated into spatialized beeping sounds. In a driving simulator experiment, we tested with 20 participants whether a surround-sound feedback system that uses four speakers yields better lane-keeping performance than binary directional feedback produced by two speakers. We also examined whether adding a corner support system to the binary system improves lane-keeping performance. Compared to the two previous iterations, this study presents a more realistic experimental setting, as participants were unfamiliar with the feedback system and received the feedback without headphones. The results show that participants had poor lane-keeping performance. Furthermore, the driving task was perceived as demanding, especially in the case of the additional corner support. Our findings from the blind driving projects suggest that drivers benefit from simple auditory feedback; additional auditory stimuli (e.g., corner support) add workload without improving performance.
  • Continuous auditory feedback on the status of adaptive cruise control, lane deviation, and time headway: An acceptable support for truck drivers?
    Bazilinskyy, P., Larsson, P., Johansson, E., De Winter, J. C. F.
    Acoustical Science and Technology, 40, 382–390 (2019)

    The number of trucks that are equipped with driver assistance systems is increasing. These driver assistance systems typically offer binary auditory warnings or notifications upon lane departure, close headway, or automation (de)activation. Such binary sounds may annoy the driver if presented frequently. Truck drivers are well accustomed to the sound of the engine and wind in the cabin. Based on the premise that continuous sounds are more natural than binary warnings, we propose continuous auditory feedback on the status of adaptive cruise control, lane offset, and headway, which blends with the engine and wind sounds that are already present in the cabin. An on-road study with 23 truck drivers was performed, where participants were presented with the additional sounds in isolation from each other and in combination. Results showed that the sounds were easy to understand and that the lane-offset sound was regarded as somewhat useful. Systems with feedback on the status of adaptive cruise control and headway were seen as not useful. Participants overall preferred a silent cabin and expressed displeasure with the idea of being presented with extra sounds on a continuous basis. Suggestions are provided for designing less intrusive continuous auditory feedback.
  • Survey on eHMI concepts: The effect of text, color, and perspective
    Bazilinskyy, P., Dodou, D., De Winter, J. C. F.
    Transportation Research Part F: Traffic Psychology and Behaviour, 67, 175-194 (2019)

    The automotive industry has presented a variety of external human-machine interfaces (eHMIs) for automated vehicles (AVs). However, there appears to be no consensus on which types of eHMIs are clear to vulnerable road users. Here, we present the results of two large crowdsourcing surveys on this topic. In the first survey, we asked respondents about the clarity of 28 images, videos, and patent drawings of eHMI concepts presented by the automotive industry. Results showed that textual eHMIs were generally regarded as the clearest. Among the non-textual eHMIs, a projected zebra crossing was regarded as clear, whereas light-based eHMIs were seen as relatively unclear. A considerable proportion of the respondents mistook non-textual eHMIs for a sensor. In the second survey, we examined the effect of perspective of the textual message (egocentric from the pedestrian's point of view: 'Walk', 'Don't walk' vs. allocentric: 'Will stop', 'Won't stop') and color (green, red, white) on whether respondents felt safe to cross in front of the AV. The results showed that textual eHMIs were more persuasive than color-only eHMIs, which is in line with the results from the first survey. The eHMI that received the highest percentage of 'Yes' responses was the message 'Walk' in green font, which points towards an egocentric perspective taken by the pedestrian. We conclude that textual egocentric eHMIs are regarded as clearest, which poses a dilemma because textual instructions are associated with practical issues of liability, legibility, and technical feasibility.
  • When will most cars drive fully automatically? An analysis of international surveys
    Bazilinskyy, P., Kyriakidis, M., De Winter, J. C. F.
    Transportation Research Part F: Traffic Psychology and Behaviour, 64, 184-195 (2019)

    When fully automated cars will be widespread is a question that has attracted considerable attention from futurists, car manufacturers, and academics. This paper aims to poll the public’s expectations regarding the deployment of fully automated cars. In 15 crowdsourcing surveys conducted between June 2014 and January 2019, we obtained answers from 18,970 people in 128 countries regarding when they think that most cars will be able to drive fully automatically in their country of residence. The median reported year was 2030. The later the survey date, the smaller the percentage of respondents who reported that most cars would be able to drive fully automatically by 2020, with 15–22% of the respondents providing this estimate in the surveys conducted between 2014 and 2016 versus 3–5% in the 2018 surveys. Respondents who completed multiple surveys were more likely to revise their estimate upward (39.4%) than downward (35.3%). Correlational analyses showed that people from more affluent countries and people who have heard of the Google Driverless Car (Waymo) or the Tesla Autopilot reported a significantly earlier year. Finally, we made a comparison between the crowdsourced respondents and respondents from a technical university who answered the same question; the median year reported by the latter group was 2040. We conclude that over the course of 4.5 years the public has moderated its expectations regarding the penetration of fully automated cars but remains optimistic compared to what experts currently believe.


  • An auditory dataset of passing vehicles recorded with a smartphone
    Bazilinskyy, P., Van der Aa, A., Schoustra, M., Spruit, J., Staats, L., Van der Vlist, K. J., De Winter, J. C. F.
    Proceedings of Tools and Methods of Competitive Engineering (TMCE). Las Palmas de Gran Canaria, Spain (2018)

    The increasing use of smartphones over the past decade has contributed to distraction in traffic. However, smartphones could potentially be turned into an advantage by being able to detect whether a motorized vehicle is passing the smartphone user (e.g., a pedestrian or cyclist). Herein, we present a dataset of audio recordings of passing vehicles, made with a smartphone. Recordings were made of a passing passenger car and a scooter in various conditions (windy weather vs. calm weather, approaching from the front vs. from behind, 1 m, 2 m, and 3 m distance between smartphone and vehicle, vehicle driving at 30 vs. 50 km/h, and smartphone being stationary vs. moving with the cyclist). Data from an 8-microphone array, video recordings, and GPS data of vehicle position and speed are provided as well. Our present dataset may prove useful in the development of mobile apps that detect a passing motorized vehicle, or for transportation research.
  • Auditory interface for automated driving
    Bazilinskyy, P.
    PhD thesis (2018)

    Automated driving may be a key to solving a number of problems that humanity faces today: large numbers of fatalities in traffic, traffic congestion, and increased gas emissions. However, unless the car drives itself fully automatically (such a car would need neither a steering wheel nor accelerator and brake pedals), the driver needs to receive information from the vehicle. Such information can be delivered by sound, visual displays, vibrotactile feedback, or a combination of two or three kinds of signals. Sound may be a particularly promising feedback modality, as sound can attract a driver’s attention irrespective of his/her momentary visual attention. Although ample research exists on warning systems and other types of auditory displays, what is less well known is how to design warning systems for automated driving specifically. Taking over control from an automated car is a spatially demanding task that may involve a high level of urgency, and warning signals (also called ‘take-over requests’, TORs) need to be designed so that the driver reacts as quickly and safely as possible. Furthermore, little knowledge is available on how to support the situation awareness and mode awareness of drivers of automated cars. The goal of this thesis is to discover how the auditory modality should be used during automated driving and to contribute towards the development of design guidelines. First, this thesis describes the state of the art (Chapter 2) by examining and improving the current sound design process in the industry, and by examining the requirements of the future users of automated cars, the public (Chapter 2). Next, the thesis focuses on the design of discrete warnings/TORs (Chapter 3), the use of sound for supporting situation awareness (Chapter 4), and mode awareness (Chapter 5). Finally, Chapters 6 and 7 provide a future outlook, conclusions, and recommendations. The content of the thesis is described in more detail below.
Chapter 2 describes the state of the art in the use of sound in the automotive industry. Section 2.1 presents a new sound design process for the automotive industry, developed with Continental AG, consisting of three stages: description, design/creation, and verification. An evaluation of the process showed that it supports more efficient creation of auditory assets than the unstructured process previously employed in the company. Designing good feedback is not enough; it also needs to be appreciated by users. To this end, Section 2.2 describes a crowdsourced online survey that gathered 1,205 responses from 91 countries on opinions about auditory interfaces in modern cars and readiness to have auditory feedback in automated vehicles. The study was continued in another crowdsourced online survey, described in Section 2.3, in which 1,692 people were surveyed on auditory, visual, and vibrotactile TORs in scenarios of varying levels of urgency. Based on the results, multimodal TORs were the most preferred option in scenarios associated with high urgency. Sound-based TORs were the most favored choice in scenarios with low urgency. Auditory feedback was also preferred for confirmation that the system is ready to switch from manual to automated mode. Speech-based feedback was more accepted than artificial sounds, and the female voice was preferred over the male voice as a take-over request. To better understand how sound may be used during fully automated driving, it is crucial to acknowledge the opinions of potential end users of such vehicles. Section 2.4 investigates anonymous textual comments concerning fully automated driving using data from three Internet-based surveys (including the surveys described in Sections 2.2 and 2.3) with 8,862 respondents from 112 countries.
The opinion was split: 39% of the comments were positive towards automated driving and 23% expressed a negative attitude towards automated driving. Chapter 3 focuses on the use of the auditory modality to support TORs. Section 3.1 describes a crowdsourcing experiment on reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA). 1,823 participants each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels. The results replicated past research, with a V-shape of mean reaction time as a function of SOA. The study underlines the power of crowdsourced research, and shows that auditory and visual warnings need to be provided at exactly the same moment in order to generate optimally fast response times. The results also indicate large individual differences in reaction times to different SOA levels, a finding which implies that multimodal feedback has important advantages as compared to unimodal feedback. Section 3.2 then focused on speech-based TORs. In a crowdsourced study, 2,669 participants from 95 countries listened to a random 10 out of 140 TORs, and rated each TOR on ease of understanding, pleasantness, urgency, and commandingness. Increased speech rate resulted in increased perceived urgency and commandingness. With a high level of background noise, the female voice was preferred over the male voice, which contradicts the literature. Furthermore, a take-over request spoken by a person with an Indian accent was easier to understand for participants from India compared to participants from other countries. The results of the studies in Chapter 2 and Sections 3.1 and 3.2 were used to design a simulator-based study presented in Section 3.3. 24 participants took part in three sessions in a highly automated car (different TOR modality in each session: auditory, vibrotactile, and auditory-vibrotactile).
TORs were played from the right, from the left, and from both left and right. The auditory TOR yielded comparatively low ratings of usefulness and satisfaction. Regardless of the directionality of the TOR, almost all drivers overtook the stationary vehicle on the left. Section 3.4 summarizes results from survey research (Sections 2.2, 2.3, 3.1, 3.2) and driving simulator experiments (including Section 3.3) on TORs executed with one or more of the three modalities. Results showed that vibrotactile TORs in the driver’s seat yielded relatively high ratings of self-reported usefulness and satisfaction. Auditory TORs in the form of beeps were regarded as useful but not satisfactory, and it was found that an increase of beep rate yields an increase of self-reported urgency. Visual-only feedback in the form of LEDs was seen by participants as neither useful nor satisfactory. Chapter 4 draws attention to the use of auditory feedback for situation awareness during manual and automated driving. Section 4.1 investigates how to represent distance information by means of sound. Three sonification approaches were tested: Beep Repetition Rate, Sound Intensity, and Sound Fundamental Frequency. The three proposed methods produced a similar mean absolute distance error. These results were used in three simulator-based experiments (Sections 4.2–4.4) to examine whether it is possible to drive a car blindfolded with the use of continuous auditory feedback only. Different types of sonification (e.g., volume-based, beep-frequency-based) were used, and the auditory feedback was provided when deviating more than 0.5 m from the lane center. In all experiments, people drove on a track with sharp 90-degree corners while speed control was automated.
Results showed no clear effects of sonification method on lane-keeping performance, but it was found that it is vital to provide feedback based not on the current lateral position, but on where the car will be about 2 seconds into the future. The predictor algorithm should consider the velocity vector of the car as well as the momentary steering wheel angle. Results showed that, with extensive practice and knowledge of the system, it is possible to drive on a track for 5 minutes without leaving the road. Drivers benefit from simple auditory feedback, and additional stimuli add workload without improving performance. Chapter 5 examines the use of sound for mode awareness during highly automated driving. An on-road experiment in a heavy truck equipped with low-level automation is described. I used continuous auditory feedback on the status of adaptive cruise control (ACC), lane offset, and headway, which blends with the engine and wind sounds that are already present in the cabin. 23 truck drivers were presented with the additional sounds in isolation and in combination. Results showed that the sounds were easy to understand and that the lane-offset sound was regarded as somewhat useful. However, participants overall preferred a silent cabin and expressed displeasure with the idea of being presented with extra sounds on a continuous basis. Chapter 6 provides an outlook on when fully automated driving may become a reality. In 12 crowdsourcing studies conducted between 2014 and 2017 (including the studies described in Sections 2.2, 2.3, 3.1, 3.2), 17,360 people from 129 countries were asked when they think that most cars will be able to drive fully automatically in their country of residence. The median reported year was 2030. Over the course of three years, respondents have moderated their expectations regarding the penetration of fully automated cars. The respondents appear to be more optimistic than experts.
Chapter 7 presents a discussion and conclusions derived from all chapters in the thesis:
  • The most preferred way to support a TOR is an auditory instruction in the form of a female voice.
  • The preferences of people depend on the urgency of the situation.
  • Reaction times are fastest when an auditory and a visual stimulus are presented at the same moment rather than with a temporal asynchrony.
  • An increase of beep rate yields an increase of self-reported urgency.
  • An increase in the speech rate results in an increase of perceived urgency and commandingness.
  • If the goal is for drivers to react as quickly as possible, multimodal feedback should be used.
  • It is important to use a preview controller (look-ahead time) for supporting drivers’ situation awareness in a lane keeping task.
  • Truck drivers are not favorable towards adding additional continuous feedback to the cabin, even though the feedback is easy to understand.
In summary, in this thesis I evaluated the use of sound as discrete warnings, but also as a means of continuous/spatial support for situation/mode awareness.
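The preview idea from the blind-driving work (feedback on the predicted lateral position about 2 seconds ahead, combined with a 0.5 m deadband around the lane centre) can be sketched as follows. The simple kinematic extrapolation, the steering-to-acceleration gain, and the function names are illustrative assumptions, not the controller used in the thesis:

```python
def predicted_lateral_error(lateral_pos, lateral_vel, steering_angle,
                            steering_gain=0.5, preview_time=2.0):
    """Extrapolate the car's lateral position `preview_time` seconds ahead,
    using its lateral velocity and a steering-proportional acceleration term."""
    lateral_acc = steering_gain * steering_angle  # crude steering-to-acceleration map
    return (lateral_pos
            + lateral_vel * preview_time
            + 0.5 * lateral_acc * preview_time ** 2)

def should_beep(predicted_error, deadband=0.5):
    """Sound feedback only when the predicted deviation exceeds 0.5 m from lane centre."""
    return abs(predicted_error) > deadband

# Car currently centred but drifting right at 0.4 m/s with neutral steering:
err = predicted_lateral_error(0.0, 0.4, 0.0)
print(err, should_beep(err))  # 0.8 True
```

The point of the preview is visible in the example: the car is exactly on the lane centre now, so position-based feedback would stay silent, yet the 2-second prediction already exceeds the deadband.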
  • Crowdsourced measurement of reaction times to audiovisual stimuli with various degrees of asynchrony
    Bazilinskyy, P., De Winter, J. C. F.
    Human Factors, 60, 1192–1206 (2018)

    Objective: This study aimed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA), using a large sample of crowdsourcing respondents. Background: Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. Method: 1,823 participants each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of USD 0.20 per participant. Results were verified with a local web-in-lab study (N = 34). Results: The results replicated past research, with a V-shape of mean reaction time as a function of SOA, the V-shape being stronger for lower intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% were hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the web-in-lab study. Conclusion: Crowdsourcing is a promising medium for reaction-time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism, and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. Application: The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
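The independent-channels explanation mentioned in the conclusion can be illustrated with a small race-model simulation: each modality is processed in its own channel, and the response is triggered by whichever channel finishes first. The distributions and parameters below are invented for illustration:

```python
# A minimal simulation of an independent-channels (race) account of the
# V-shaped RT curve over SOA. Channel finishing-time distributions are
# illustrative placeholders, not fitted to the study's data.
import random

def simulate_mean_rt(soa_ms, n=5000, seed=0):
    """Mean RT for audiovisual stimuli with the visual stimulus delayed
    by soa_ms (negative = auditory delayed), under a race model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        rt_a = rng.gauss(230, 40)   # auditory channel finishing time (ms)
        rt_v = rng.gauss(260, 50)   # visual channel finishing time (ms)
        # RT is measured from the onset of the first stimulus.
        onset_a = max(-soa_ms, 0)
        onset_v = max(soa_ms, 0)
        total += min(onset_a + rt_a, onset_v + rt_v)
    return total / n

# Mean RT rises as the two stimuli are pulled apart in either direction,
# reproducing the V-shape around SOA = 0 reported above.
curve = {soa: simulate_mean_rt(soa) for soa in (-300, -100, 0, 100, 300)}
```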
  • Eye movements while cycling in GTA V
    Bazilinskyy, P., Heisterkamp, N., Luik, P., Klevering, S., Haddou, A., Zult, M., Dialynas, G., Dodou, D., De Winter, J. C. F.
    Proceedings of Tools and Methods of Competitive Engineering (TMCE). Las Palmas de Gran Canaria, Spain (2018)

    A common limitation in human factors research is that vehicle simulators often lack perceptual fidelity. Video games, on the other hand, are becoming increasingly realistic and may be a promising tool for simulator-based human factors research. In this work, we explored whether an off-the-shelf video game is suitable for research purposes. We used Grand Theft Auto (GTA) V combined with a Smart Eye DR120 eye tracker to measure eye movements of participants cycling in hazardous traffic situations. Twenty-seven participants encountered various situations that are representative of urban cycling, such as intersection crossings, a car leaving a parking spot in front of the cyclist, and the opening of a car door in front of the cyclist. Data of participants' gaze on the computer monitor as recorded by the eye tracker were translated into 3D coordinates in the virtual world, as well as into semantic information regarding the objects that the participant was focusing on. We conclude that GTA V allows for the collection of useful data for human factors research.
  • Sound design process for automotive industry
    Bazilinskyy, P., Cieler, S., De Winter, J. C. F.
    Preprint (2018)

    The automotive industry is recognised as a challenging arena for sound design, because the presented information needs to not only comply with safety regulations but also match subjective expectations. By means of a structured interview with ten employees (software developers, quality analysts, sound designers, engineers, managers) of the company Continental, we collected requirements for the sound design process in an automotive industry setting. We present a sound design process consisting of three stages: description, design/creation, and verification. We developed a prototype of a web-based application to support the process.
  • Take-over requests in highly automated driving: A crowdsourcing survey on auditory, vibrotactile, and visual displays
    Bazilinskyy, P., Petermeijer, S. M., Petrovych, V., Dodou, D., De Winter, J. C. F.
    Transportation Research Part F: Traffic Psychology and Behaviour, 56, 82–98 (2018)

    An important research question in the domain of highly automated driving is how to aid drivers in transitions between manual and automated control. Until highly automated cars are available, knowledge on this topic has to be obtained via simulators and self-report questionnaires. Using crowdsourcing, we surveyed 1692 people on auditory, visual, and vibrotactile take-over requests (TORs) in highly automated driving. The survey presented recordings of auditory messages and illustrations of visual and vibrational messages in traffic scenarios of various urgency levels. Multimodal TORs were the most preferred option in high-urgency scenarios. Auditory TORs were the most preferred option in low-urgency scenarios and as a confirmation message that the system is ready to switch from manual to automated mode. For low-urgency scenarios, visual-only TORs were preferred over vibration-only TORs. Beeps with shorter interpulse intervals were perceived as more urgent, with Stevens’ power law yielding an accurate fit to the data. Spoken messages were more accepted than abstract sounds, and the female voice was preferred over the male voice. Preferences and perceived urgency ratings were similar in middle- and high-income countries. In summary, this international survey showed that people’s preferences for TOR types in highly automated driving depend on the urgency of the situation.
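The Stevens' power law fit mentioned above can be reproduced in miniature by regressing log(rating) on log(interpulse interval). The ratings below are made up for illustration; only the fitting procedure reflects the kind of analysis described:

```python
# Fit Stevens' power law, magnitude = k * stimulus ** a, via linear
# regression in log-log space. The data points are illustrative only.
import math

def fit_power_law(stimulus, magnitude):
    """Return (k, a) such that magnitude is approximately k * stimulus ** a."""
    xs = [math.log(s) for s in stimulus]
    ys = [math.log(m) for m in magnitude]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least squares slope and intercept in log-log coordinates.
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - a * mx)
    return k, a

# Shorter interpulse intervals -> higher perceived urgency, so the
# exponent comes out negative for these invented ratings.
intervals_ms = [50, 100, 200, 400, 800]
urgency = [9.1, 7.4, 6.0, 4.9, 4.0]
k, a = fit_power_law(intervals_ms, urgency)
```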


  • Analyzing crowdsourced ratings of speech-based take-over requests for automated driving
    Bazilinskyy, P., De Winter, J. C. F.
    Applied Ergonomics, 64, 56–64 (2017)

    Take-over requests in automated driving should fit the urgency of the traffic situation. The robustness of various published research findings on the valuations of speech-based warning messages is unclear. This research aimed to establish how people value speech-based take-over requests as a function of speech rate, background noise, spoken phrase, and speaker’s gender and emotional tone. By means of crowdsourcing, 2,669 participants from 95 countries listened to a random 10 out of 140 take-over requests, and rated each take-over request on urgency, commandingness, pleasantness, and ease of understanding. Our results replicate several published findings, in particular that an increase in speech rate results in a monotonic increase of perceived urgency and commandingness. The female voice was preferred over a male voice when there was a high level of background noise, a finding that contradicts the literature. Moreover, a take-over request spoken with Indian accent was found to be easier to understand by participants from India compared to participants from other countries. Our results replicate effects in the literature regarding speech-based warnings, and shed new light on effects regarding background noise, gender, and nationality. The results may have implications for the selection of appropriate take-over requests in automated driving. Additionally, our study demonstrates the promise of crowdsourcing for testing human factors and ergonomics theories with large sample sizes.
  • Blind driving by means of a steering-based predictor algorithm
    Bazilinskyy, P., Beaumont, C. J. A. M., Van der Geest, X. O. S., De Jonge, R. F., Van der Kroft, K., De Winter, J. C. F.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE) Los Angeles, USA (2017)

    The aim of this work was to develop and empirically test different algorithms of a lane-keeping assistance system that supports drivers by means of a tone when the car is about to deviate from its lane. These auditory assistance systems were tested in a driving simulator with its screens shut down, so that the participants used auditory feedback only. Five participants drove with a previously published algorithm that predicted the future position of the car based on the current velocity vector, and three new algorithms that predicted the future position based on the momentary speed and steering angle. Results of a total of 5 hours of driving across participants showed that, with extensive practice and knowledge of the system, it is possible to drive on a track with sharp curves for 5 minutes without leaving the road. Future research should aim to improve the intuitiveness of the auditory feedback.
  • Usefulness and satisfaction of take-over requests for highly automated driving
    Bazilinskyy, P., Eriksson, A., Petermeijer, S. M., De Winter, J. C. F.
    Proceedings of Road Safety and Simulation (RSS). The Hague, The Netherlands (2017)

    This paper summarizes our results from survey research and driving simulator experiments on auditory, vibrotactile, and visual take-over requests in highly automated driving. Our review shows that vibrotactile take-over requests in the driver’s seat yielded relatively high ratings of self-reported usefulness and satisfaction. Auditory take-over requests in the form of a beeping sound were regarded as useful but not satisfactory, and it was found that the beep rate corresponds to perceived urgency. Visual-only feedback (LEDs) was regarded by participants as neither useful nor satisfactory. Augmented visual feedback was found to support effective steering and braking actions, and may be a useful complement to vibrotactile take-over requests. The present findings may be used in the design of take-over requests.
  • Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop
    Petermeijer, S. M., Bazilinskyy, P.*, Bengler, K., De Winter, J. C. F.
    Applied Ergonomics, 62, 204–215 (2017)

    When a highly automated car reaches its operational limits, it needs to provide a takeover request (TOR) in order for the driver to resume control. The aim of this simulator-based study was to investigate the effects of TOR modality and left/right directionality on drivers' steering behaviour when facing a head-on collision without having received specific instructions regarding the directional nature of the TORs. Twenty-four participants drove three sessions in a highly automated car, each session with a different TOR modality (auditory, vibrotactile, and auditory-vibrotactile). Six TORs were provided per session, warning the participants about a stationary vehicle that had to be avoided by changing lane left or right. Two TORs were issued from the left, two from the right, and two from both the left and the right (i.e., nondirectional). The auditory stimuli were presented via speakers in the simulator (left, right, or both), and the vibrotactile stimuli via a tactile seat (with tactors activated at the left side, right side, or both). The results showed that the multimodal TORs yielded statistically significantly faster steer-touch times than the unimodal vibrotactile TOR, while no statistically significant differences were observed for brake times and lane change times. The unimodal auditory TOR yielded relatively low self-reported usefulness and satisfaction ratings. Almost all drivers overtook the stationary vehicle on the left regardless of the directionality of the TOR, and a post-experiment questionnaire revealed that most participants had not realized that some of the TORs were directional. We conclude that between the three TOR modalities tested, the multimodal approach is preferred. Moreover, our results show that directional auditory and vibrotactile stimuli do not evoke a directional response in uninstructed drivers. 
    More salient and semantically congruent cues, as well as explicit instructions, may be needed to guide a driver into a specific direction during a takeover scenario.


  • Blind driving by means of auditory feedback
    Bazilinskyy, P., Van der Geest, L., Van Leeuwen, S., Numan, B., Pijnacker, J., De Winter, J. C. F.
    Proceedings of IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems. Kyoto, Japan (2016)

    Driving is a safety-critical task that predominantly relies on vision. However, visual information from the environment is sometimes degraded or absent. In other cases, visual information is available, but the driver fails to use it due to distraction or impairment. Providing drivers with real-time auditory feedback about the state of the vehicle in relation to the environment may be an appropriate means of support when visual information is compromised. In this study, we explored whether driving can be performed solely by means of artificial auditory feedback. We focused on lane keeping, a task that is vital for safe driving. Three auditory parameter sets were tested: (1) predictor time, where the volume of a continuous tone was a linear function of the predicted lateral error from the lane centre 0 s, 1 s, 2 s, or 3 s into the future; (2) feedback mode (volume feedback vs. beep-frequency feedback) and mapping (linear vs. exponential relationship between predicted error and volume/beep frequency); and (3) corner support, in which in addition to volume feedback, a beep was offered upon entering/leaving a corner, or alternatively when crossing the lane centre while driving in a corner. A dead-zone was used, whereby the volume/beep-frequency feedback was provided only when the vehicle deviated more than 0.5 m from the centre of the lane. An experiment was conducted in which participants (N = 2) steered along a track with sharp 90-degree corners in a simulator with the visual projection shut down. Results showed that without predictor feedback (i.e., 0 s prediction), participants were more likely to depart the road than with predictor feedback. Moreover, volume feedback resulted in fewer road departures than beep-frequency feedback. The results of this study may be used in the design of in-vehicle auditory displays. Specifically, we recommend that feedback be based on anticipated error rather than current error.
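The volume/beep-frequency mappings with a dead-zone described above might look as follows in code; the maximum-error range and the exponential normalisation are illustrative assumptions, not the values from the experiment:

```python
# Sketch of the feedback mappings: silence within 0.5 m of the lane
# centre (the dead-zone), then a linear or exponential mapping of the
# predicted lateral error onto a normalised feedback level.
import math

DEAD_ZONE_M = 0.5

def feedback_level(predicted_error_m, mapping="linear", max_error_m=3.0):
    """Map predicted lateral error to a feedback level in [0, 1]
    (interpreted as tone volume or a normalised beep frequency)."""
    err = abs(predicted_error_m)
    if err <= DEAD_ZONE_M:
        return 0.0                              # inside the dead-zone: silence
    x = min((err - DEAD_ZONE_M) / (max_error_m - DEAD_ZONE_M), 1.0)
    if mapping == "linear":
        return x
    if mapping == "exponential":
        return (math.exp(x) - 1) / (math.e - 1)  # normalised to [0, 1]
    raise ValueError(mapping)
```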
  • Object-alignment performance in a head-mounted display versus a monitor
    Bazilinskyy, P., Kovácsová, N., Al Jawahiri, A., Kapel, P., Mulckhuyse, J., Wagenaar, S., De Winter, J. C. F.
    Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC). Budapest, Hungary (2016)

    Head-mounted displays (HMDs) offer immersion and binocular disparity. This study investigated whether an HMD yields better object-alignment performance than a conventional monitor in virtual environments that are rich in pictorial depth cues. To determine the effects of immersion and disparity separately, three hardware setups were compared: (1) a conventional computer monitor, yielding low immersion, (2) an HMD with binocular-vision settings (HMD stereo), and (3) an HMD with the same image presented to both eyes (HMD mono). Two virtual environments were used: a street environment in which two cars had to be aligned (target distance of about 15 m) and an office environment in which two books had to be aligned (target distance of about 0.7 m, at which binocular depth cues were expected to be important). Twenty males (mean age = 21.2, SD age = 1.6) each completed 10 object-alignment trials for each of the six conditions. The results revealed no statistically significant differences in object-alignment performance between the three hardware setups. A self-report questionnaire showed that participants felt more involved in the virtual environment and experienced more oculomotor discomfort with the HMD than with the monitor.
  • Sonifying the location of an object: A comparison of three methods
    Bazilinskyy, P., Van Haarlem, W., Quraishi, H., Berssenbrugge, C., Binda, J., De Winter, J. C. F.
    Proceedings of IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems. Kyoto, Japan (2016)

    Auditory displays are promising for informing operators about hazards or objects in the environment. However, it remains to be investigated how to map distance information to a sound dimension. In this research, three sonification approaches were tested: Beep Repetition Rate (BRR) in which beep time and inter-beep time were a linear function of distance, Sound Intensity (SI) in which the digital sound volume was a linear function of distance, and Sound Fundamental Frequency (SFF) in which the sound frequency was a linear function of distance. Participants (N = 29) were presented with a sound by means of headphones and subsequently clicked on the screen to estimate the distance to the object with respect to the bottom of the screen (Experiment 1), or the distance and azimuth angle to the object (Experiment 2). The azimuth angle in Experiment 2 was sonified by the volume difference between the left and right ears. In an additional Experiment 3, reaction times to directional audio-visual feedback were compared with directional visual feedback. Participants performed three sessions (BRR, SI, SFF) in Experiments 1 and 2 and two sessions (visual, audio-visual) in Experiment 3, 10 trials per session. After each trial, participants received knowledge-of-results feedback. The results showed that the three proposed methods yielded an overall similar mean absolute distance error, but in Experiment 2 the error for BRR was significantly smaller than for SI. The mean absolute distance errors were significantly greater in Experiment 2 than in Experiment 1. In Experiment 3, there was no statistically significant difference in reaction time between the visual and audio-visual conditions. The results are interpreted in light of the Weber-Fechner law, and suggest that humans have the ability to accurately interpret artificial sounds on an artificial distance scale.
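The three distance-to-sound mappings (BRR, SI, SFF) and the left/right volume cue can be sketched as simple linear functions; all ranges below (beep timing, volume, frequency) are assumed for illustration, not the values used in the experiments:

```python
# Illustrative sketches of the three sonification mappings described
# above, each a linear function of distance, plus a stereo panning cue
# for azimuth based on the volume difference between the ears.

def lerp(d, d_max, lo, hi):
    """Linearly map distance d in [0, d_max] onto [lo, hi]."""
    frac = max(0.0, min(d / d_max, 1.0))
    return lo + frac * (hi - lo)

def brr_inter_beep_s(d, d_max=10.0):
    """Beep Repetition Rate: farther objects -> longer inter-beep time."""
    return lerp(d, d_max, 0.05, 1.0)

def si_volume(d, d_max=10.0):
    """Sound Intensity: farther objects -> quieter (volume in [0, 1])."""
    return lerp(d, d_max, 1.0, 0.1)

def sff_frequency_hz(d, d_max=10.0):
    """Sound Fundamental Frequency: farther objects -> lower pitch."""
    return lerp(d, d_max, 1200.0, 200.0)

def stereo_pan(azimuth_deg):
    """Azimuth cue from Experiment 2: left/right gain difference."""
    pan = max(-1.0, min(azimuth_deg / 90.0, 1.0))  # -1 = left, 1 = right
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0    # (left, right) gains
```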


  • Auditory interfaces in automated driving: an international survey
    Bazilinskyy, P., De Winter, J. C. F.
    PeerJ Computer Science, 1, e13 (2015)

    This study investigated people’s opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated the willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents’ country was English speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.
  • An international crowdsourcing study into people’s statements on fully automated driving
    Bazilinskyy, P., Kyriakidis, M., De Winter, J. C. F.
    Proceedings of International Conference on Applied Human Factors and Ergonomics (AHFE). Las Vegas, USA (2015)

    Fully automated driving can potentially provide enormous benefits to society. However, it has been unclear whether people will appreciate such far-reaching technology. This study investigated anonymous textual comments regarding fully automated driving, based on data extracted from three online surveys with 8,862 respondents from 112 countries. Initial filtering of comments with fewer than 15 characters resulted in 1,952 comments. The sample consisted primarily of males (74%) and had a mean age of 32.6 years. Next, we launched a crowdsourcing job and asked 69 workers to assign each of the 1,952 comments to at least one of 12 predefined categories, which included positive and negative attitudes to automated driving, enjoyment in manual driving, concerns about trust, reliability of software, and readiness of road infrastructure. 46% of the comments were classified into the category ‘no meaningful information about automated driving’, leaving 792 comments for further analysis. 39% of the comments were classified as ‘positive attitude towards automated driving’ and 23% were classified as ‘negative attitude towards automated driving’. In conclusion, public opinion appears to be split, with a substantial number of respondents being positive and a sizeable number being negative towards fully automated driving.
  • Report on the in-vehicle auditory interactions workshop: taxonomy, challenges, and approaches
    Jeon, M., Bazilinskyy, P., Hammerschmidt, J., Hermann, T., Landry, S., Wolf, K. E.
    Proceedings of AutomotiveUI. Graz, Austria (2015)

    As driving is mainly a visual task, auditory displays play a critical role in in-vehicle interactions. To advance in-vehicle auditory interactions, auditory display researchers and automotive user interface researchers came together to discuss this timely topic at an in-vehicle auditory interactions workshop at the International Conference on Auditory Display (ICAD). The present paper reports discussion outcomes from the workshop for further discussion at the AutomotiveUI conference.


  • Impact of cache on data-sharing in multi-threaded programmes
    Bazilinskyy, P.
    Thesis for: Erasmus Mundus Double MSc in Dependable Software Systems (2014)

    This thesis answers the question of whether a scheduler needs to take into account where communicating threads in multi-threaded applications are executed. The impact of cache on data-sharing in multi-threaded environments is measured. This work investigates a common base-case scenario in the telecommunication industry, where a programme has one thread that writes data and one thread that reads data. A taxonomy of inter-thread communication is defined. Furthermore, a mathematical model that describes inter-thread communication is presented. Two cycle-level experiments were designed to measure the latency of CPU registers, cache, and main memory. These results were utilised to quantify the model. Three application-level experiments were used to verify the model by comparing its predictions with data obtained in a real-life setting. The model broadens the applicability of the experimental results, and it describes the three types of communication outlined in the taxonomy. Storing communicating data across all levels of cache does have an impact on the speed of data-intensive multi-threaded applications. Scheduling threads in a sender-receiver scenario to different dies in a multi-chip processor decreases the speed of execution of such programmes by up to 37%. Pinning such threads to different cores in the same chip results in up to a 5% decrease in speed of execution. The findings of this study show how threads need to be scheduled by a cache-aware scheduler. This project extends the author’s previous work, which investigated cache interference.
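The flavour of the latency model can be conveyed with a toy version: approximate the cost of passing data between two threads as the number of cache lines transferred times the latency of the memory level shared by their cores. The latencies below are illustrative placeholders, not the values measured in the thesis:

```python
# Toy inter-thread communication cost model. Cost = cache lines moved
# times the per-line latency of the closest memory level shared by the
# writer and reader cores. All numbers are hypothetical.

CACHE_LINE_BYTES = 64

# Hypothetical per-line transfer latencies (ns), by thread placement.
LATENCY_NS = {
    "same_core": 4,     # communication stays in L1/L2
    "same_chip": 40,    # through the shared last-level cache
    "cross_die": 120,   # different dies in a multi-chip processor
}

def communication_cost_ns(payload_bytes, placement):
    """Predicted time to move payload_bytes from writer to reader."""
    lines = -(-payload_bytes // CACHE_LINE_BYTES)   # ceiling division
    return lines * LATENCY_NS[placement]
```

This reproduces the qualitative finding above: scheduling the sender and receiver onto different dies is far more expensive than keeping them on the same chip or core.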


  • Multi-core Insense
    Bazilinskyy, P.
    Thesis for: Erasmus Mundus Double MSc in Dependable Software Systems (2013)

    This project set out to investigate the benefits of using private heaps for memory management and static thread placement for optimising performance and cache usage. The study used Insense, a component-based programming language developed at the University of St Andrews that abstracts over the complications of memory management, concurrency control, and synchronisation (Dearle et al. 2008). Two memory management schemes were investigated: a single shared heap and multiple private heaps. Further, three thread placement schemes were designed and implemented: 1) even distribution among cores; 2) placing all components on a single core; 3) locating Insense components based on the frequency of inter-component communication. Several findings are worth emphasising. When allocation and deallocation of memory took place in component instances running on different cores, using a private heap for each component resulted in a speedup by a factor of 16. Utilising private heaps also reduced the number of L1 cache misses by ~30%. Distributing components over cores according to communication pattern mostly performed similarly to letting the OS place threads dynamically according to load balance. However, in cases where no data was exchanged between components, static placement outperformed, as there was no computation whose load the OS could usefully balance dynamically; in this case, the static placement scheme was faster than dynamic balancing by a factor of 2.4.
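The third placement scheme (placement by communication frequency) could be implemented with a greedy heuristic like the sketch below; the thesis does not specify this exact algorithm, so the heuristic, names, and data shapes are all assumptions:

```python
# Greedy sketch of communication-aware component placement: pairs of
# components that exchange the most messages are co-located on a core.

def place_components(components, comm_freq, n_cores):
    """Assign components to cores, co-locating chatty pairs.

    components: list of component names
    comm_freq: dict mapping (a, b) pairs to message counts
    Returns a dict component -> core index.
    """
    placement = {}
    next_core = 0
    # Consider the most frequently communicating pairs first.
    for (a, b), _ in sorted(comm_freq.items(),
                            key=lambda kv: kv[1], reverse=True):
        if a not in placement and b not in placement:
            placement[a] = placement[b] = next_core % n_cores
            next_core += 1
        elif a in placement and b not in placement:
            placement[b] = placement[a]
        elif b in placement and a not in placement:
            placement[a] = placement[b]
    # Spread any non-communicating components over the remaining cores.
    for c in components:
        if c not in placement:
            placement[c] = next_core % n_cores
            next_core += 1
    return placement
```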


  • Customisable multitenant web form with JSF and MySQL
    Bazilinskyy, P.
    Thesis for: BEng in Information Technology (2012)

    There is a tendency in computing nowadays to move from single-user instances of applications to web-based programs. With improvements in information technology, it is now possible to conduct business operations over the Internet: thousands, or in some cases millions, of sheets of paper and man-hours of work can be replaced by a single web form connected to a database on a website. In recent years, a number of new technologies have been introduced to improve the usability of Internet applications, and it is now possible to create a multitenant piece of software that runs as one instance but serves different users. However, web forms created for commercial purposes are normally not customisable and lack the possibility to adjust the interface to suit the needs of a particular client. Making multitenant web forms customisable is therefore a highly prioritised task for a number of companies working in this field. The aim of this study was to investigate means of creating a fully functioning and customisable web form that runs on a server as a single instance. Through user-specific configurations, a test case was created that is able to serve a number of clients, giving each one a set of desired features. The work was guided by the research question: “How to develop the most optimised and the most versatile multitenant web form using JSF and MySQL?”. The study attempts to answer this question by first conducting theoretical research and then developing a working product that could be used on the market. The part of the study that focuses on the development of the test case application is included, and the difficulties and issues faced while working on multitenant cloud-enabled applications are outlined. Listings of programming code are given as examples where they are essential for understanding the technical aspects of the research. Additionally, different stages of testing are described to outline the strengths and weaknesses of the final product.

Download all papers as a BibTeX file here.

* Joint first author.