Tesla recently launched a limited pilot program for its self-driving robotaxis in Austin, Texas. While some participants described the rides as smooth, early-access videos posted online quickly revealed notable incidents. Autonomous driving experts reviewing these public glimpses have raised significant concerns about the technology’s maturity, focusing in particular on Tesla’s unique “vision-only” approach and the persistent issue of “phantom braking.” The rollout has sparked debate between enthusiastic proponents and industry veterans calling for caution and extensive testing before widespread deployment.
The Austin trial, launched June 22, offered select influencers and investors early access within a tightly controlled geographic zone spanning roughly 5.5 to 6 miles east-west and 3.5 to 4 miles north-south. Riders hailed the vehicles through a dedicated app and paid a flat $4.20 fee. Despite the promotional buzz, videos showed various hiccups during the initial weeks.
Observed issues captured on camera included vehicles driving more than 10 miles per hour over the speed limit, veering into incorrect lanes, unexpected braking incidents, problems with the designated pullover function, and interventions from the required human safety monitor due to parking difficulties. One video reportedly showed a robotaxi briefly driving on the wrong side of a double yellow line before self-correcting. Another depicted a sudden stop as the car approached a tree’s shadow. NBC News reviewed videos and identified 13 apparent instances of the vehicles violating traffic laws or making clear mistakes.
Expert Scrutiny on Tesla’s Robotaxi Tech
Autonomous driving experts are closely watching the Tesla robotaxi program, and their analysis of the early videos points to fundamental concerns about the underlying technology and its current state of readiness. They generally agree that while the pilot is a start, the system appears immature and requires substantial further development.
The Vision-Only Strategy Debate
A primary point of contention among experts is Tesla’s reliance solely on camera data for its robotaxis, powered by an advanced version of its Full Self-Driving (FSD) software. Raj Rajkumar, an engineering professor at Carnegie Mellon University, and Steven Shladover, a lead researcher at UC Berkeley, both voiced strong reservations about this “vision-only” design. Unlike competitors like Waymo, which use a combination of sensors including lidar (laser sensing) and radar (radio waves) alongside cameras, Tesla believes cameras alone are sufficient.
Rajkumar highlighted phantom braking—where a vehicle suddenly brakes with no apparent obstacle—as a potential flaw stemming directly from the camera-only system. He explained that processing visual data heavily relies on AI and machine learning, and these systems can sometimes “hallucinate.” Once a hallucination occurs, it can trigger unintended actions like sudden braking. Rajkumar warned that this could be extremely dangerous, particularly at high speeds on freeways, if a following vehicle cannot stop in time.
Shladover concurred, stating that automated driving systems need diverse data streams from cameras, radar, and lidar. He added that precise localization using high-accuracy digital maps and adherence to local traffic rules and speed limits are also crucial components. He believes Tesla’s camera-only approach, without redundant sensing methods, increases the risk of passenger injury if a human monitor isn’t present to intervene.
Persistent Phantom Braking Issues
Phantom braking is not a new phenomenon for Tesla. It has been reported in other Tesla software systems, including the Autopilot driver assistance system and the supervised Full Self-Driving beta. The robotaxi system shares underlying technology with FSD.
Tesla currently faces a class-action lawsuit regarding alleged phantom braking in its Autopilot system. Furthermore, the National Highway Traffic Safety Administration (NHTSA) launched an investigation in 2022 after receiving over 750 complaints from drivers whose Model 3 and Model Y vehicles braked unexpectedly at high speeds while using Autopilot. That investigation remains open. A separate incident in December 2022 saw a Tesla operating supervised FSD involved in an eight-car collision attributed to sudden, unexpected braking. Tesla CEO Elon Musk had commented on phantom braking generally via Twitter in 2020, stating it “should be fixed,” but has not publicly addressed the recent issues reported in the robotaxi pilot.
The Necessity of Human Safety Drivers
Bryant Walker Smith, a professor specializing in engineering and law at the University of South Carolina, characterized Tesla’s robotaxi technology as “immature.” He emphasized the critical distinction between testing a system with a safety driver and deploying one commercially without human supervision. Smith likened Tesla’s current Austin demo, which relies on safety monitors, to climbing a sheer cliff with a harness and rope, implying that removing the monitor would be like attempting it without any safety equipment.
Elon Musk himself stated that a human safety monitor would accompany each robotaxi out of an abundance of caution during the Austin trial. Videos confirm the presence of these monitors in the passenger seat, equipped with an “In Lane Stop” button on the touchscreen to halt the vehicle if needed. One incident video showed a monitor using this button to prevent a potential collision with a reversing UPS truck during a parking maneuver.
Experts agree that requiring safety drivers indicates the technology is still in a developmental phase. This practice is common in the autonomous vehicle industry’s early stages; Waymo initially included safety drivers in its public pilot vehicles, and the UK’s Wayve self-driving cars also launch with monitors. Smith noted that while other companies have successfully launched some operations without safety drivers, Tesla’s continued reliance on them for the Austin demo underscores its current stage of development.
Scaling Challenges and Regulatory Landscape
Beyond the technical debates, the path to widespread robotaxi deployment faces significant hurdles, including regulatory complexities and the sheer difficulty of navigating unpredictable real-world scenarios.
Limited Scope and Future Scaling
The Austin pilot is confined to a very small, defined area—what experts call being “extremely geofenced.” While limiting operational areas is standard practice for early AV deployments, it highlights how far the technology is from handling the vast, complex networks of public roads. Rajkumar acknowledged the launch as “a good start” but stressed that Tesla robotaxis have a considerable way to go before they can independently manage the countless unfamiliar and unanticipated situations human drivers encounter daily. Humans possess intelligence and world knowledge allowing them to react on the fly to novel events, a capability AVs are still striving to replicate.
Industry analysts generally predict a much slower pace for scaling Tesla’s robotaxi service than Elon Musk’s past ambitious timelines (like millions of robotaxis by 2026). Estimates suggest significant scaling is unlikely before 2028 at the earliest, with some experts like Smith expressing even greater skepticism based on Tesla’s history of missed deadlines in this area.
Texas Regulation and Oversight
The choice of Austin, Texas, for this initial prototype rollout is partly influenced by the state’s regulatory environment, which has historically been minimal regarding autonomous vehicles compared to places like California, which requires extensive testing data to be made public. Operating in Texas currently means Tesla is not legally obligated to publicly report crucial safety data, such as total miles driven or the frequency of human interventions required to prevent accidents.
However, Texas is implementing changes. Governor Greg Abbott recently signed new legislation taking effect September 1, 2025. This law requires a state permit to operate driverless vehicles on public roads, empowering the Texas Department of Motor Vehicles (TxDMV) to grant or revoke permits for operators deemed a public danger. Companies must attest their vehicles meet Level 4 autonomy standards, operate safely and legally, and provide information for first responders. While this framework is less stringent than some other states, experts like Smith describe the Texas permit process as “easy to get and easy to lose.”
Local officials in Austin have also voiced concerns. City Council member Vanessa Fuentes described the documented incidents as “a lot of errors” and said the technology has “proven that… is unsafe for Austinites.” She criticized Tesla for not coordinating with the city, unlike other AV companies. Council member Zo Qadri echoed these worries, noting constituent complaints and drawing parallels to the issues faced by GM’s now-defunct Cruise robotaxi service in San Francisco.
Federal regulators are also monitoring the situation. NHTSA is aware of the incidents captured in online videos and has reportedly contacted Tesla for more information. State Representative Vikki Goodwin, who has herself experienced unexpected braking with Tesla’s driver-assistance software, expressed a desire for Tesla to proceed cautiously; she was part of a group of lawmakers who asked Tesla to delay the launch until the new state law took effect. Tesla did not heed that request but reportedly agreed to comply with the law once it takes effect.
Industry Context and The Path Forward
Incidents are not unique to Tesla’s robotaxi efforts; the autonomous vehicle industry as a whole has faced setbacks and safety events. Waymo, for instance, recalled vehicles twice in 2024, including after one struck a telephone pole. GM’s Cruise robotaxi service in San Francisco faced repeated problems, including causing traffic jams, and ultimately folded after one of its vehicles dragged a pedestrian in a serious 2023 accident.
While incidents occur across the industry, the specific nature of the issues observed in the Tesla pilot, combined with expert concerns about its vision-only architecture, place particular scrutiny on this rollout. Experts like Brad Templeton, a consultant who worked on Google’s self-driving project (Waymo’s predecessor), argue that true safety and success will only be measurable through long-term statistics based on millions of miles driven, rather than isolated demos or early impressions.
Tesla has remained largely silent on the specific issues highlighted by experts and videos during this early trial. Business Insider did not receive a response to a detailed request for comment. Musk’s initial rhetoric positioned the launch as a major milestone—the culmination of a decade’s work involving internal AI chip and software teams. He also framed it as crucial for Tesla’s future financial success, including enabling owners to potentially monetize their vehicles. Tesla has emphasized a “super paranoid” safety approach for the Austin pilot, including avoiding bad weather and difficult intersections and initially prohibiting passengers under 18.
Ultimately, experts view the Austin launch as a preliminary step. It provides valuable real-world data for Tesla to potentially improve its software and system. However, the observed glitches and the fundamental debate surrounding the vision-only approach underscore that Tesla’s robotaxi technology is still in development. The path to safe, scalable, and profitable autonomous taxi services remains challenging, requiring rigorous testing, data-driven improvements, and addressing expert concerns to build public trust and navigate the evolving regulatory landscape.
Frequently Asked Questions
What kinds of problems have experts observed with Tesla’s robotaxi?
Experts and online videos have documented several issues during Tesla’s early Austin robotaxi pilot. These include exceeding speed limits, driving into incorrect lanes, unexpected or “phantom” braking, difficulties with the pullover function, interventions by safety monitors during parking, briefly driving on the wrong side of the road, and dropping off a passenger in the middle of an intersection. NBC News reported identifying 13 instances of potential traffic law violations or mistakes in reviewed videos.
Why are some experts concerned about Tesla’s vision-only robotaxi design?
Autonomous driving experts like Raj Rajkumar and Steven Shladover are concerned because Tesla’s robotaxis rely solely on cameras and AI processing. They argue that complex real-world driving requires redundant sensor data from cameras, radar, and lidar, along with precise mapping. Rajkumar noted that AI processing of camera data can lead to “hallucinations” causing dangerous phantom braking. Experts believe a camera-only solution may not be sufficient for long-term safety, especially at high speeds.
How safe is Tesla’s robotaxi currently, and what are the next steps?
Experts view the current Tesla robotaxi system as immature. Its deployment in Austin is a limited prototype within a geofenced area and requires a human safety monitor who can intervene, indicating it is not yet ready for operation without supervision. The Texas pilot allows Tesla to gather data for improvement. The next steps involve extensive testing, refining the software based on real-world performance, addressing issues like phantom braking, and navigating regulatory requirements before any attempt to scale operations or remove safety drivers.