Why our autonomous vehicles are smarter than ever

Making the right decision at the right time is one of the biggest challenges in self-driving technology. To integrate autonomous vehicles (AVs) into public transport systems, establishing comfort and familiarity with their presence among people is just as important as prioritising safety. Their behaviour should be as human-like as possible, so that their actions do not surprise people and adhere to the rules, official and unspoken, that govern communities. In short, AVs should know how to read between the lines, picking up cues and clues that support high-quality decision-making, so they can respond in a natural, acceptable way.

To condition our AVs into simulating human behaviour, we have imbued them with enhanced perception and prediction capabilities to guide them in their interactions with drivers and pedestrians.

Behaviour based on Perception and Prediction

  • Perception: Our AV is able to detect and pay attention to every moving and static object within sight. This capability comes from a deep learning-powered computer vision system trained from millions of data points. It is augmented with real-time depth measurements and estimations from sensors on the AV, which are then fused into a single representation, resulting in a precise and accurate view of its surroundings.
  • Prediction: Before interacting with its surroundings, the AV needs to know in advance what moving objects will do next. Our AV has a prediction engine that combines advanced probabilistic models, behavioural models, and high-definition maps to anticipate events around it. For example, if a pedestrian is walking along the road ahead of the AV, the prediction engine first generates a range of possible scenarios for where that pedestrian could be a few moments from now. It then weighs these scenarios and selects the one it is most confident will occur.

  • Behaviour: Now that the AV knows what to expect from its surroundings, it makes a safe driving decision based on these predictions. It can choose to continue driving, slow down, give way, or pursue a more complex driving manoeuvre.
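The predict-then-decide loop above can be sketched in code. This is a minimal illustration, not ST Engineering's actual implementation: the class names, distance thresholds, and behaviour labels are all hypothetical assumptions made for the sake of the example.

```python
# Hypothetical sketch of the predict-then-decide loop: the prediction
# engine proposes weighted candidate futures for a pedestrian, the most
# confident one is selected, and a simple driving behaviour is chosen.
from dataclasses import dataclass


@dataclass
class CandidateScenario:
    """One possible future position for a tracked pedestrian."""
    future_position_m: tuple  # (x, y) relative to the AV, in metres
    probability: float        # confidence assigned by the prediction engine


def most_likely_scenario(scenarios):
    """Select the scenario the engine is most confident will occur."""
    return max(scenarios, key=lambda s: s.probability)


def choose_behaviour(scenario):
    """Map the predicted position to a driving decision (illustrative thresholds)."""
    x, y = scenario.future_position_m
    if x < 5.0 and abs(y) < 1.5:    # predicted close to the AV's path
        return "give_way"
    if x < 15.0 and abs(y) < 3.0:   # nearby, but not blocking
        return "slow_down"
    return "continue"


# Pedestrian walking along the road ahead: three candidate futures.
candidates = [
    CandidateScenario((12.0, 2.0), 0.6),  # stays on the shoulder
    CandidateScenario((10.0, 0.5), 0.3),  # drifts toward the AV's lane
    CandidateScenario((14.0, 4.0), 0.1),  # steps further away
]
best = most_likely_scenario(candidates)
print(choose_behaviour(best))  # → slow_down
```

A real engine would of course reason over full trajectories and many object classes at once; the point here is only the structure: enumerate weighted scenarios, pick the most confident, then act on it.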

Our smarter-than-ever AVs were commercially deployed at Singapore Science Park 2 and Jurong Island as a revenue service to augment the existing transport network from January to April 2021.

The next evolution: AVs that make high-quality decisions

ST Engineering has successfully developed a new form of fleet learning that improves the AV’s decision-making capabilities with every trip on the road.

In the traditional model, the AV has to infer on its own exactly what it did right or wrong, the theory being that with enough data, it will figure this out by itself. Our approach instead focuses on curating high-quality driving data that trains our AVs on how a human would have handled the situation. This is done through data-tagged affirmation or correction, which we call rich data.

Affirmation to reinforce behaviour: When our AV overtakes a cyclist cautiously, with the right distance and speed, we create a tag affirming that the AV did the right thing by slowing down and keeping a 2m distance. This is akin to telling a child "nice work!" and explaining explicitly what was done well.

Correction of undesirable behaviour: When our AV remains stuck behind a parked truck in its path, we tag that stopping and waiting was the wrong behaviour and that the AV should have overtaken the truck.
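The two kinds of tags can be pictured as records in a labelled dataset. The schema below is a hypothetical sketch of what such a "rich data" record might look like; the field names, episode IDs, and behaviour labels are illustrative assumptions, not the actual data format.

```python
# Hypothetical sketch of a "rich data" record: each driving episode is
# annotated with either an affirmation or a correction, pairing what the
# AV did with what a human driver would have done.
from dataclasses import dataclass


@dataclass
class RichDataTag:
    episode_id: str
    observed_behaviour: str   # what the AV actually did
    label: str                # "affirmation" or "correction"
    expected_behaviour: str   # what a human driver would have done
    notes: str = ""


# Affirmation: reinforce a cautious overtake of a cyclist.
affirm = RichDataTag(
    episode_id="trip-0412",
    observed_behaviour="overtake_cyclist",
    label="affirmation",
    expected_behaviour="overtake_cyclist",
    notes="Slowed down and kept a 2 m lateral distance",
)

# Correction: the AV stopped behind a parked truck instead of overtaking.
correct = RichDataTag(
    episode_id="trip-0587",
    observed_behaviour="stop_and_wait",
    label="correction",
    expected_behaviour="overtake_parked_vehicle",
)


def is_positive_example(tag):
    """Affirmed episodes serve as positive examples; corrections pair the
    observed behaviour with the human-preferred alternative."""
    return tag.label == "affirmation"


print(is_positive_example(affirm), is_positive_example(correct))  # → True False
```

The useful property of this structure is that a correction carries both sides of the lesson: the behaviour to discourage and the behaviour a human would have chosen instead.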

It is from these tagged affirmations and corrections that our AVs become increasingly smarter and display nuances in their decision-making that make them more human-like. They can give way to crossing pedestrians and, like a human driver, judge when it is safe to make a turn without waiting for pedestrians to exit the road entirely. They can slow down appropriately to check whether it is safe to overtake another vehicle. These subtle nuances are evidence of a higher intelligence at work. Through this model of AV learning based on rich data, the AV's intelligence will keep growing, simulating human behaviour as closely as possible.

As a leading technology provider of autonomous shared transport, ST Engineering has been setting the standards as well as steering the advancement of the AV ecosystem in Singapore. Now, by developing a new form of fleet learning that revolutionises the AV's decision-making capability – simulating the nuances of human behaviour – our AVs are able to interact in a natural, human-like way with other drivers and pedestrians on the road. With this human-like presence, people are likely to be more receptive to and comfortable interacting with AVs, an essential element in integrating AVs into our future transport networks as Singapore makes the transition to a truly Smart City.