August 20 - Tesla has just released software update 2022.16.3.10 (i.e., Full Self-Driving Beta, FSD Beta 10.69). @ACPixel has posted release notes for the new version.
A new "deep lane guidance" module has been added to the Vector Lanes neural network, which fuses features extracted from the video streams with coarse map data, i.e., lane counts and lane connectivity. Compared to previous models, this architecture reduces the error rate on lane topology by 44%, enabling smoother control before lanes and their connectivity become visually apparent. This aims to make every Autopilot drive as good as someone driving their own daily commute, while still generalizing to road changes.
By better modeling system and actuation delays in trajectory planning, the overall smoothness of driving is improved without sacrificing latency. The trajectory planner now independently accounts for the delay from the steering command to actual steering actuation, as well as the delay from acceleration and braking commands to their actuation. This yields a more accurate vehicle motion model for the planned trajectory, allowing better downstream controller tracking and smoothness, as well as more accurate responses during demanding maneuvers.
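The delay-compensation idea can be sketched as forward-simulating the vehicle state through the known actuation delays before planning, so the planner optimizes from the state at which its commands will actually take effect. The delay values and the simple state model below are invented for illustration; Tesla's actual planner is not public.

```python
from dataclasses import dataclass

# Illustrative latencies only, not Tesla's real figures.
STEER_DELAY_S = 0.10   # assumed steering command-to-actuation delay
ACCEL_DELAY_S = 0.30   # assumed accel/brake command-to-actuation delay

@dataclass
class VehicleState:
    x: float        # position along path (m)
    v: float        # speed (m/s)
    heading: float  # heading angle (rad)

def roll_forward(state: VehicleState, last_accel: float,
                 last_yaw_rate: float) -> VehicleState:
    """Predict the state at which pending commands will take effect.

    Commands already in flight keep acting during their delay windows,
    so the planner starts from this predicted state rather than the
    measured one.
    """
    # Longitudinal: the previously issued acceleration keeps acting
    # for the duration of the accel/brake delay.
    v = state.v + last_accel * ACCEL_DELAY_S
    x = state.x + state.v * ACCEL_DELAY_S + 0.5 * last_accel * ACCEL_DELAY_S ** 2
    # Lateral: the previously issued yaw rate keeps acting for the
    # duration of the steering delay.
    heading = state.heading + last_yaw_rate * STEER_DELAY_S
    return VehicleState(x=x, v=v, heading=heading)

s = roll_forward(VehicleState(x=0.0, v=10.0, heading=0.0),
                 last_accel=1.0, last_yaw_rate=0.1)
print(round(s.v, 3), round(s.x, 3), round(s.heading, 3))
```

The key point is that separating the steering delay from the accel/brake delay lets the lateral and longitudinal predictions drift apart by the right amounts, which is what the note means by considering them "independently".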
Improved unprotected left turns with a more appropriate speed profile ("Chuck Cook style" unprotected left turns) when approaching and exiting the median crossover region in high-speed cross traffic. This is achieved by allowing an optimizable initial jerk, mimicking the harsh pedal press a human applies when needing to cut in front of high-speed objects. The lateral profile approaching this safety region was also improved, allowing a posture that aligns well when exiting the region. Finally, interaction with objects entering or waiting in the median crossover region was improved to better model their future intentions.
Added control over arbitrary low-speed moving volumes from the occupancy network. This also allows finer control of more precise object shapes that are not easily represented by cuboid primitives, and it requires predicting the velocity of every 3D voxel. The vehicle can now slow down for slow-moving UFOs (unidentified objects).
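Predicting a velocity at every occupied voxel lets the planner separate static structure from slow-moving volumes, whatever their shape. A minimal sketch of that classification step, with invented thresholds and a toy dict-based grid (the real occupancy network is a dense learned representation, not this):

```python
# Illustrative thresholds only.
SLOW_MOVER_MAX = 2.0   # m/s: below this, treat as a slow-moving volume
STATIC_EPS = 0.1       # m/s: below this, treat as static structure

def classify_voxels(grid):
    """grid: dict mapping (i, j, k) -> (occupied: bool, speed: float m/s).

    Returns the occupied voxel indices split into static structure,
    slow movers, and fast movers.
    """
    static, slow_movers, fast = [], [], []
    for idx, (occupied, speed) in grid.items():
        if not occupied:
            continue
        if speed < STATIC_EPS:
            static.append(idx)        # walls, curbs, parked vehicles
        elif speed < SLOW_MOVER_MAX:
            slow_movers.append(idx)   # e.g. a slowly drifting trailer
        else:
            fast.append(idx)
    return static, slow_movers, fast

grid = {
    (0, 0, 0): (True, 0.0),   # static
    (1, 0, 0): (True, 0.8),   # slow mover
    (2, 0, 0): (True, 5.0),   # fast mover
    (3, 0, 0): (False, 0.0),  # free space
}
static, slow, fast = classify_voxels(grid)
print(static, slow, fast)
```

Because the speed lives on voxels rather than on fitted cuboids, an oddly shaped slow object is handled without ever being forced into a box primitive.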
Upgraded the occupancy network to use video rather than single-time-step images. This temporal context makes the network robust to temporary occlusions and enables it to predict occupancy flow. Ground truth was also improved with semantics-driven outlier rejection, hard-example mining, and a 2.4x increase in dataset size.
Upgraded to a new two-stage architecture for producing object kinematics (e.g., velocity, acceleration, yaw rate), in which network compute is allocated O(objects) rather than O(space). This improves velocity estimates for distant crossing vehicles by 20% while using one tenth of the compute.
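The O(objects)-vs-O(space) distinction can be illustrated with a toy two-stage layout (all names and the stand-in computations here are hypothetical, not Tesla's architecture): a cheap first pass localizes objects over the full spatial grid, and the expensive kinematics work then runs once per detected object instead of once per grid cell.

```python
def stage_one_detect(feature_grid):
    """Cheap pass over the whole grid: return centers of likely objects.

    This is the only stage whose cost scales with the spatial area.
    """
    return [(r, c)
            for r, row in enumerate(feature_grid)
            for c, v in enumerate(row)
            if v > 0.5]

def stage_two_kinematics(feature_grid, centers):
    """Expensive per-object pass; here just a stand-in computation.

    In a real system this would run a network head on features gathered
    around each center; its cost is O(len(centers)), not O(rows * cols),
    so distant sparse objects get full compute without paying for the
    empty space between them.
    """
    return {c: {"velocity": feature_grid[c[0]][c[1]] * 10.0}
            for c in centers}

grid = [[0.0, 0.9, 0.0],
        [0.0, 0.0, 0.0],
        [0.7, 0.0, 0.0]]
centers = stage_one_detect(grid)
kinematics = stage_two_kinematics(grid, centers)
print(sorted(centers), kinematics[(0, 1)]["velocity"])
```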
Improved the smoothness of protected right turns by better associating traffic signals and yield signs with the corresponding slip lane. This reduces false deceleration when no relevant object is present and improves the yielding position when one is.
Reduced false deceleration near crosswalks by improving the understanding of pedestrians' and cyclists' intentions based on their motion.
Reduced geometric errors for ego-relevant lanes by 34% and for crossing lanes by 21% with updates to the full Vector Lanes neural network. By increasing the size of each camera's feature extractor, the video module, and the internals of the autoregressive decoder, and by adding a hard attention mechanism, the fine-grained localization of lanes was greatly improved and information bottlenecks in the network architecture were removed.
Made the speed profile more comfortable when creeping forward, enabling smoother stops when protecting against potentially occluded objects.
Improved animal recall by 34% by doubling the size of the auto-labeled training set.
Enabled creeping for visibility at any intersection where an object may cross ego's path, with or without traffic controls.
Improved the accuracy of stopping positions in critical scenarios with crossing objects by allowing dynamic resolution in trajectory optimization, focusing more compute on regions where finer control is essential.
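One common way to realize "dynamic resolution" is a non-uniform discretization of the planning horizon: fine timesteps around the critical region (say, where a crossing object meets ego's path) and coarse steps elsewhere. The sketch below shows only that discretization idea, with invented step sizes; whether Tesla's optimizer works this way is not stated in the notes.

```python
def build_timesteps(horizon_s, critical_start, critical_end,
                    fine_dt=0.05, coarse_dt=0.5):
    """Sample times over [0, horizon_s], densely inside the critical
    window [critical_start, critical_end) and coarsely outside it.

    Times are rounded to keep the step sequence numerically clean.
    """
    times = [0.0]
    while times[-1] < horizon_s:
        t = times[-1]
        dt = fine_dt if critical_start <= t < critical_end else coarse_dt
        times.append(min(round(t + dt, 4), horizon_s))
    return times

# 6 s horizon, critical interaction expected between 2 s and 3 s.
ts = build_timesteps(6.0, critical_start=2.0, critical_end=3.0)
print(ts[:8])
```

The optimizer then spends most of its decision variables exactly where the stopping position matters, instead of distributing them evenly over the horizon.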
Improved recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
Reduced velocity error for pedestrians and cyclists by 17% by improving the onboard trajectory estimation used as input to the neural network, especially when ego is turning.
Improved object detection recall, eliminating 26% of missed detections of distant crossing vehicles, by tuning the loss function used during training and improving label quality.
Improved object future path prediction in high-yaw-rate scenarios by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or out of ego's lane, especially at intersections or in cut-in scenarios.
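Why yaw rate matters here can be seen with a standard constant-turn-rate rollout (the classic "CTRV"-style motion model, not Tesla's actual predictor): a vehicle with a nonzero yaw rate curves away from the straight line a constant-velocity model would extrapolate, which is exactly the case of a car turning across ego's lane.

```python
import math

def predict_path(x, y, heading, speed, yaw_rate, dt=0.5, steps=4):
    """Roll an object forward under constant speed and constant yaw rate.

    Returns the sequence of predicted (x, y) positions.
    """
    path = []
    for _ in range(steps):
        heading += yaw_rate * dt          # yaw rate bends the path
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((round(x, 2), round(y, 2)))
    return path

# Same initial state: one car going straight, one turning left.
straight = predict_path(0.0, 0.0, 0.0, speed=10.0, yaw_rate=0.0)
turning = predict_path(0.0, 0.0, 0.0, speed=10.0, yaw_rate=0.5)
print(straight[-1], turning[-1])
```

After just 2 seconds the two hypotheses diverge by many meters laterally, so scoring observed lateral motion against such rollouts quickly disambiguates "going straight" from "turning into ego's lane".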
Improved speed when entering a freeway by better handling upcoming map speed changes, which increases confidence in merging onto the freeway.
Reduced latency when starting from a stop by taking the lead vehicle's jerk into account.
Enabled faster identification of red-light runners by evaluating their current kinematic state against their expected braking profile.
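The underlying physics check is simple: from speed and distance to the stop line, the constant deceleration needed to stop is a = v² / (2d); if that exceeds what a driver intending to stop would plausibly apply, the vehicle is likely to run the light. The threshold below is an invented example, and this is only one plausible reading of "expected braking profile":

```python
# Assumed upper bound on comfortable braking for a driver who intends
# to stop; an illustrative value, not a figure from the release notes.
COMFORT_DECEL = 3.5  # m/s^2

def likely_runner(speed_mps, dist_to_stop_line_m):
    """Flag a vehicle as a likely red-light runner.

    Required constant deceleration to stop within the remaining
    distance is v^2 / (2 * d); if it exceeds the comfortable bound,
    assume the vehicle will not stop in time.
    """
    if dist_to_stop_line_m <= 0:
        # Already at or past the line: a runner if still moving.
        return speed_mps > 0.0
    required = speed_mps ** 2 / (2.0 * dist_to_stop_line_m)
    return required > COMFORT_DECEL

# 15 m/s at 20 m needs ~5.6 m/s^2 (likely runner);
# 5 m/s at 20 m needs ~0.6 m/s^2 (can stop comfortably).
print(likely_runner(15.0, 20.0), likely_runner(5.0, 20.0))
```

The "faster identification" in the note comes from the fact that this test fires as soon as the required deceleration crosses the bound, well before the vehicle actually reaches the intersection.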
IT House notes that Tesla's FSD Beta program currently has more than 100,000 public testers in North America.
During the most recent earnings call, Musk revealed that FSD Beta testers have driven more than 35 million miles, far more than any competitor. The number of FSD Beta testers could grow further by the end of the year, and Musk also said that FSD Beta for right-hand-drive vehicles will be released by the end of 2022.