Could Algolux’s Atlas prove helpful in preventing Tesla Autopilot crashes?


Tesla’s Autopilot once again failed to detect an object - this time driving straight into an overturned truck in Taiwan. For Montreal tech company Algolux, the accuracy of today’s perception systems remains a huge challenge. But the company believes its technology is fuelling better computer vision accuracy, which could ultimately help reduce incidents like Tesla’s.

It raises the question: with its cameras and radar, why did Tesla’s Autopilot not perceive an obstacle on the road as large as this? Forbes’ Brad Templeton even questioned whether Autopilot was actually on.

Algolux has developed a new machine learning approach that “automatically optimizes cameras to maximize computer vision accuracy.” The company says it can be applied to any vision task, such as object or free space detection, while reducing effort and time. The tech is now part of Algolux’s Atlas Camera Optimization Suite.

This allows original equipment manufacturers (OEMs) and Tier 1 suppliers to improve vision system effectiveness, directly addressing safety concerns.
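To make the idea concrete, here is a minimal, hypothetical sketch of task-driven camera tuning: instead of adjusting image-processing parameters by eye, search for the settings that score best on the downstream vision task. The function names, the toy accuracy metric, and the random search are illustrative assumptions, not Algolux’s actual method.

```python
import random

def simulate_isp(raw_image, params):
    """Stand-in for a camera image signal processor (ISP):
    applies a gain and a denoising strength to raw pixel values."""
    gain, denoise = params["gain"], params["denoise"]
    return [min(1.0, max(0.0, px * gain - denoise * 0.01)) for px in raw_image]

def detection_accuracy(processed_images):
    """Stand-in for a computer vision metric (e.g., object-detection
    mAP on a validation set). Here: a toy score that peaks when the
    processed frames have balanced mid-range brightness."""
    score = 0.0
    for img in processed_images:
        mean = sum(img) / len(img)
        score += 1.0 - abs(mean - 0.5)
    return score / len(processed_images)

def tune_camera(raw_images, iterations=200, seed=0):
    """Black-box search over ISP parameters, scored by the vision task
    rather than by human visual judgment."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        params = {"gain": rng.uniform(0.5, 3.0),
                  "denoise": rng.uniform(0.0, 5.0)}
        processed = [simulate_isp(img, params) for img in raw_images]
        score = detection_accuracy(processed)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

if __name__ == "__main__":
    rng = random.Random(42)
    # Simulated dim raw frames, standing in for captured sensor data.
    raw = [[rng.uniform(0.0, 0.4) for _ in range(64)] for _ in range(10)]
    params, score = tune_camera(raw)
    print(f"best params: {params}, task score: {score:.3f}")
```

In practice a system like Atlas would optimize a far larger parameter space against real detection metrics, but the principle is the same: camera settings are scored by task accuracy, not by how pleasing the image looks to a human tuner.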

What’s the big issue?

Cameras are the sensor of choice for system developers of safety-critical applications, says the company. However, camera development currently relies on expert imaging teams to manually tune camera architectures. This approach can take months, requires hard-to-find deep expertise and depends on visual subjectivity.

Algolux says this process does not ensure that the camera provides the optimal output for computer vision algorithms.

One of the issues in the Tesla crash, writes Templeton, could have been this: computer vision systems recognize things they have been trained on. Seeing the roof of a truck on the road is not a common event, and Tesla’s image classifiers probably have not trained extensively on trucks lying sideways on the road. They will identify the back of a truck and, by now, perhaps the side of an upright one.

How is Algolux helping to improve computer vision?

Algolux’s “performance and scalability breakthroughs” in its Atlas product mean deeper optimization of camera architectures for any computer vision task.

The company says Atlas “significantly improves computer vision results” in days vs. traditional approaches that may not deliver optimal results even after months. In turn, vision system teams can better scale their limited resources, reduce costs, and shrink development time – a game-changer for OEMs and Tier 1 suppliers.

Can this prevent crashes like the one in Taiwan and ultimately - if we get there - help avoid widespread collisions involving driver assist systems?
