Directional light-based external human-machine interface with onboarding

Janssen, C., Alam, M. S., Bazilinskyy, P., Dou, F., Zhang, L.

Submitted for publication.
ABSTRACT

Existing eHMIs for automated vehicles (AVs) face a design tradeoff: text-based interfaces communicate explicitly but are language-dependent and difficult to scale, whereas light-based designs are easier to deploy yet often convey only coarse go/no-go information. We present an informative light-based windshield eHMI that adds richer yielding cues while maintaining a minimal visual footprint. In a virtual-reality study (N = 30), participants encountered a yielding AV with and without the eHMI; half received a short onboarding video, allowing us to compare trained and untrained pedestrians across repeated trials. The eHMI reduced crossing initiation time, with larger early benefits for trained participants. For perceived safety and trust, trained participants improved immediately, whereas untrained participants improved only after repeated exposure. These findings suggest that minimal directional eHMIs can reduce ambiguity in AV–pedestrian yielding interactions, and that brief onboarding can accelerate the learning of non-textual signals.