Autonomous Vehicles

Adam Motaouakkil, April 7, 2026

Definition

Autonomous navigation is a field of robotics in which systems use sensor information to determine an optimal path for moving an object from point A to point B, then guide it along that trajectory autonomously. The choices behind the speed at which the object moves, the complexity of the map it uses, and the path it takes are all orchestrated by a combination of computer programs and human input.

Planetary rovers and apartment robot vacuums are popular examples of autonomous navigation systems. While they use different cameras, hardware, and software, they still emulate human vision and action and therefore attempt to solve similar engineering problems. The cameras and sensors map out an environment and situate the robot. Once the robot begins constructing its maps, it can use its wheels, arms, and other hardware to interact with its surroundings.

Another popular autonomous system is the autonomous car. With companies like Tesla and Waymo deploying their self-driving cars on the roads, it is important to understand this elusive technology as it enters urban spaces. The topic is incredibly complex and difficult to keep brief, so we will restrict ourselves to explaining the technology behind Tesla and Waymo as examples.

Mapping Architecture

Autonomous cars’ navigation systems can be designed in multiple ways. Tesla, for example, uses a system that is purely camera-based: multiple cameras on the car, pointed in different directions, map out its environment. Regular 2D camera feeds are combined to construct a 3D map, while other sensors measure the position, orientation, and movement of the vehicle (Lee & Barr, 2025). Neural networks are also used to estimate depth and detect lanes, combining them into a display of what the car recognizes as a legitimate path to follow (Tesla, 2021).
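The 2D-to-3D step can be sketched with the standard pinhole camera model: given a pixel location and an estimated depth, back-project into a 3D point in the camera's frame. The intrinsics and values below are invented for illustration; Tesla's actual calibration and network outputs are not public.

```python
# Minimal sketch, assuming a pinhole camera with made-up intrinsics
# (fx, fy: focal lengths in pixels; cx, cy: principal point).

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with estimated depth into a camera-frame (x, y, z) point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the image centre, 10 m away, maps to a point straight ahead of the camera.
point = backproject(u=960, v=540, depth=10.0, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
print(point)  # (0.0, 0.0, 10.0)
```

Repeating this for every pixel of every camera, frame after frame, is one way a purely camera-based system can accumulate a 3D picture of its surroundings.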

Waymo vehicles, on the other hand, rely not only on cameras but also on Light Detection and Ranging (LiDAR) sensors, which emit laser pulses to determine how far an obstacle is from the emitter. 3D maps are also used to represent the streets they will navigate (Waymo, 2020). Vehicle software then chooses the optimal driving path and outputs motor values to drive at adequate speeds. These processes are referred to, respectively, as mapping, path planning, and path following (Nahavandi et al., 2022).
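The ranging principle behind LiDAR is time of flight: a laser pulse travels to the obstacle and back, so the distance is half the round trip at the speed of light. Real sensors add per-beam calibration and noise filtering that this sketch omits.

```python
# Sketch of LiDAR time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds):
    """Distance to the obstacle, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after roughly 66.7 nanoseconds corresponds to an obstacle about 10 m away.
print(lidar_range(66.7e-9))
```

Sweeping such pulses across thousands of angles per second is what lets a LiDAR unit build the point clouds used in the mapping stage.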

All of these components of navigation systems require a dedicated team of technicians to mitigate issues that arise from electrical noise or measurement error. Because Tesla's and Waymo's architectures rely on machine learning models, the typical fix for errors is additional training data that exposes their systems to unfamiliar situations.

Navigation System

Waymo’s complex driving system is powered by a world model that outputs driving instructions, trajectories, and 3D locations as natural language (Hwang et al., 2023). World models are AI models that can interpret the image and sensor data they are fed and build realistic models of the world, complete with obstacles, pedestrians, and cars (Ha & Schmidhuber, 2018). Waymo’s model then outputs words as driving instructions: for example, if it detects a pedestrian while driving ahead, it will output “yield.” This requires training by annotators, who teach the AI how to extract useful visual information and arrive at the “yield” answer, and data annotators are still needed to correct errors in the model's decisions. Because Waymo vehicles are typically deployed in drier climates, they remain untested in wet, snowy cities (Hawkins, 2023).
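The idea of emitting decisions as words can be illustrated with a toy rule table. Waymo's real system is a large learned model, not hand-written rules; the object labels and commands below only mimic the output format described above.

```python
# Toy illustration: map detected objects to a human-readable driving command.
# The labels ("pedestrian", "red_light") and commands are invented for this sketch.

def decide(detections):
    """Return a word describing the driving decision for a set of detections."""
    if "pedestrian" in detections:
        return "yield"
    if "red_light" in detections:
        return "stop"
    return "proceed"

print(decide(["car", "pedestrian"]))  # yield
print(decide(["car"]))                # proceed
```

The appeal of this format is auditability: each decision is a word a human reviewer can read, question, and re-annotate.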

Tesla also relies on transformer models similar to Waymo’s, but its model outputs a driving command such as steer right by “x degrees” (Tesla, 2021). Waymo's advantage over this method is interpretability: in case of an accident, Waymo can trace the decision back to a word that people understand and fix the underlying data annotation. Tesla, on the other hand, has less to go on, because a steering command of any given angle can be caused by almost anything. And where Waymo has multiple ways to map its environment, from LiDAR to radar, Tesla's camera-only approach means that if the cameras fail, there is no backup for detecting obstacles.

Both companies use virtual worlds to generate environments in which their AI can train and learn to drive before being deployed. Their software essentially plays a “video game” that serves as its training grounds.
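A simulated training ground can be as simple as a loop that updates the world, applies the driver's command, and checks the result. The toy below simulates a 1-D “lane,” where a proportional controller steers a drifting vehicle back toward the lane centre; everything here is invented for illustration, and neither company's simulator looks like this.

```python
# Toy driving simulation: lateral offset from the lane centre (0.0) drifts each
# step, and the controller steers against the offset. Gain and drift are made up.

def simulate(steps, gain, drift=0.5, position=2.0):
    """Run the simulated lane for `steps` ticks and return the final offset."""
    for _ in range(steps):
        steering = -gain * position         # steer back toward the centre
        position += steering + drift * 0.1  # apply control, then wind/road drift
    return position

# The controller settles near a small steady-state offset rather than zero.
print(round(simulate(steps=100, gain=0.5), 3))
```

Running many such episodes with varied parameters, and scoring the outcomes, is the basic shape of simulator-based testing: failures are found in software before any real car moves.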

Drawbacks

Cameras and sensors constantly need to adapt to novel environments and changes in motion (Shortis, 2023). LiDAR performance degrades badly in rain and snow, and these hardware errors trickle down into the accuracy of the driving system (Dreissig et al., 2023). In short, the more dynamic an environment, the harder it is to architect an effective autonomous system. Autonomous cars also require a tremendous amount of data to drive safely in traffic, and this does not take into account the coordination required between Waymo cars (Yan et al., 2024), which further complicates matters.

As LLMs and world models are used to drive autonomous vehicles, autonomous cars feed the demand to build massive data centres to keep these AI models deployed (Arora, 2026). As for labour, autonomous taxi services like Waymo will put drivers out of jobs while relying on abusive data-annotation work from the Global South (Lee, 2018). For autonomous cars to become fully safe, roads must be designed around cars, which takes funds away from public transit and rail (Norton, 2026). Finally, public and private data from Canadian roads risks being exposed to U.S. security services, who are not beholden to Canadian privacy laws (Haskins, 2025).