I am currently engaged in research on communication between multiple automated vehicles and multiple vulnerable road users. Most experiments in current research feature one, maybe two, human participants. But think about driving around a town in real life: traffic situations are far more complicated. Being able to run experiments with 3, 4, …, 16 participants is essential for understanding the mechanics of communicating situation awareness, collaborative decision making, and collaboration in both present-day and future traffic. To enable such research, I present an open-source simulator that supports a virtually unlimited number of human participants and is fine-tuned for high-precision data logging. It is aimed at, but not limited to, academic research.
Demo of the coupled simulator with three agents in the same traffic scene. One agent is wearing a motion suit and another a head-mounted display.
During my work at SD-Insights, I developed NEXTeye, a portable sensor that collects information on the state of the environment. It is based on the Mapbox Vision SDK and the NVIDIA Jetson Nano. The sensor is plug-and-play, retrieves vehicle dynamics data, and performs real-time scene segmentation and object detection. Because NEXTeye is portable, it can be used not only inside a car but also as a wearable by vulnerable road users. Multiple such sensors can be connected and synchronised.
My intrinsic motivation to do a PhD stemmed from the fact that automated vehicles have the potential to prevent virtually all road fatalities. To achieve that, automated vehicles must collaborate with humans both inside and outside the vehicle. During my PhD, I focused on auditory feedback for automated driving. Through on-road and driving simulator studies, I showed that multimodal feedback that takes the urgency of the traffic situation into account can effectively support AV–driver collaboration.