Making the right decision at the right time is one of the biggest challenges in self-driving technology. To integrate autonomous vehicles (AVs) into public transport systems, establishing a level of comfort and familiarity with their presence is just as important as prioritising safety. Their behaviour should be as human-like as possible, so that their actions do not surprise other road users and adhere to the rules, both official and unspoken, that govern societies and communities. In short, AVs should know how to read between the lines, picking up the cues and clues that support high-quality decision-making and allow them to respond in a natural, acceptable way.
To help our AVs simulate human behaviour, we have equipped them with enhanced perception and prediction capabilities that guide their interactions with drivers and pedestrians.
Behaviour Based on Perception and Prediction
From January to April 2021, our smarter-than-ever AVs were commercially deployed at Singapore Science Park 2 and on Jurong Island as a revenue service, augmenting the existing transport network.
The next evolution: AVs that make high-quality decisions
ST Engineering has successfully developed a new form of fleet learning that improves the AV’s decision-making capabilities with every trip on the road.
In the traditional model, the AV has to infer exactly what it did right or wrong, the theory being that with enough data it will figure this out by itself. Our approach instead focuses on curating high-quality driving data that trains our AVs on how a human would have handled the same situation. This is done through data-tagged affirmation or correction, which we call rich data; the two examples below illustrate the idea, followed by a simple sketch of what such a tagged record might look like.
Affirmation to reinforce behaviour: When our AV overtakes a cyclist cautiously, at the right distance and speed, we create a tag affirming that the AV did the right thing by slowing down and keeping a 2m distance. This is akin to telling a child “nice work!” and explaining explicitly what was done well.
Correction of undesirable behaviour: When our AV remained stuck behind a parked truck blocking its path, we tag that stopping and waiting was the wrong behaviour and that the AV should have overtaken the truck.
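To make the idea of rich data more concrete, here is a minimal sketch of how a single tagged driving event might be represented. This is purely illustrative: the class and field names (TaggedEvent, Label, and so on) are hypothetical and are not drawn from ST Engineering's actual fleet-learning system.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical labels for a tagged driving event: an affirmation reinforces
# correct behaviour, a correction flags behaviour that should change.
class Label(Enum):
    AFFIRMATION = "affirmation"
    CORRECTION = "correction"

@dataclass
class TaggedEvent:
    scenario: str          # the situation the AV faced
    av_action: str         # what the AV actually did
    label: Label           # affirmation or correction
    explanation: str       # why the behaviour was right or wrong
    preferred_action: str  # what a human driver would have done

# Affirmation: the AV overtook a cyclist cautiously, as described above.
cyclist_pass = TaggedEvent(
    scenario="Cyclist ahead in the same lane",
    av_action="Slowed down and overtook with a 2 m lateral gap",
    label=Label.AFFIRMATION,
    explanation="Correct: reduced speed and kept a safe 2 m distance",
    preferred_action="Same as the AV's action",
)

# Correction: the AV stayed stuck behind a parked truck.
stuck_truck = TaggedEvent(
    scenario="Parked truck blocking the lane",
    av_action="Stopped and waited behind the truck",
    label=Label.CORRECTION,
    explanation="Stopping and waiting indefinitely was the wrong behaviour",
    preferred_action="Check the opposite lane and overtake the truck when clear",
)

# Curated, human-tagged events like these form the rich data used for training.
training_batch = [cyclist_pass, stuck_truck]
```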
It is from these tagged affirmations and corrections that our AVs become increasingly smarter and display nuances in their decision-making that make them more human-like. They can give way to crossing pedestrians and, like a human driver, make safe judgements when turning without needing pedestrians to have exited the road entirely. They can slow down appropriately to check whether they can overtake another vehicle. These subtle nuances are evidence of a higher intelligence at work. Through this model of AV learning based on rich data, the AV's intelligence will keep increasing, simulating human behaviour as closely as possible.
As a leading technology provider of autonomous shared transport, ST Engineering has been setting standards and steering the advancement of the AV ecosystem in Singapore. Now, by developing a new form of fleet learning that revolutionises the AV's decision-making capability, simulating the nuances of human behaviour, our AVs are able to interact in a natural, human-like way with other drivers and pedestrians on the road. With this human-like presence, people are likely to be more receptive to and comfortable interacting with AVs, an essential element in integrating AVs into our future transport networks as Singapore makes the transition to a truly Smart City.