“@nursedanakay Pure AI vision auto park. Coming soon, your Tesla will take you to your destination and park automatically, unless you ask it otherwise.”
The tweet archive.
15 years of Elon, fully searchable. The production archive uses Supabase as its source of truth; in development, 94,952 indexed tweets serve as a full-archive fallback. A curated annotation layer adds context, theory, and notes on how major claims have aged.
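A minimal sketch of the development fallback described above: a case-insensitive substring search over a locally indexed tweet list, newest first. The field names (`id`, `date`, `text`) and the sample rows are illustrative assumptions, not the real schema.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    id: int
    date: str   # ISO date, so lexicographic sort matches chronological order
    text: str

def search(tweets: list[Tweet], query: str) -> list[Tweet]:
    """Case-insensitive substring match, newest first."""
    q = query.lower()
    hits = [t for t in tweets if q in t.text.lower()]
    return sorted(hits, key=lambda t: t.date, reverse=True)

# Two hypothetical rows standing in for the 94,952-tweet index.
archive = [
    Tweet(1, "2021-04-09", "Going with pure vision -- not even using radar."),
    Tweet(2, "2022-10-27", "Twitter is an accelerant to the original vision."),
]
print([t.id for t in search(archive, "radar")])   # -> [1]
print([t.id for t in search(archive, "vision")])  # -> [2, 1]
```

The production path would issue the equivalent query against Supabase (e.g. Postgres full-text search) rather than scanning in memory; this fallback only needs to behave the same way on the dev snapshot.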
“Grok-1.5 Vision”
“@SawyerMerritt Ethan is very talented, but “vision chief” would be overstating things. There are over 200 excellent engineers in the Tesla AI/Autonomy team. Tesla’s pace of progress with autonomy is accelerating. The talent war for AI is the craziest talent war I’ve ever seen!”
“@webflite Ineffective against vision-guided missiles, which is easy these days”
“@cb_doge @neuralink I should mention that the Blindsight implant is already working in monkeys. Resolution will be low at first, like early Nintendo graphics, but ultimately may exceed normal human vision. (Also, no monkey has died or been seriously injured by a Neuralink device!)”
“@farzyness Doesn’t feel like it is there yet. I tried Vision out, but it didn’t blow me away. iPhone 1 wasn’t great either imo. Lower utility than alternatives, all things considered, but by iPhone 3 it was unequivocally the best “smartphone”.”
“@BasedBeffJezos My vision for the future is vision”
“@alex_avoigt Imagine if you could see in all directions simultaneously and never get distracted or tired. How much better could you drive? That what Tesla’s Autopilot vision AI is becoming.”
“@Scobleizer Reminds me of the Predator movie where the alien has multispectral vision”
“@MarcusHouse This is not much consolation, but Neuralink is working on a vision chip, which will be ready in a few years. That is the next area after enabling phone/computer telepathy for those who have lost their mind-body connection. We waiting for regulatory approval for our first human.”
“@DillonLoomis22 Vision does require solving real world AI, which is hard, but vision is fundamentally superior to lidar”
“@GailAlfarATX The overarching vision is for X to be a maximally effective group mind for humanity”
“@WholeMarsBlog We need to predict what pedestrians will do based on their behavior, including limb angle & direction of sight. FSD currently sees all pedestrians as cuboids, so is overly cautious. Also, diffusion seems to be more compute-efficient than transformers for vision.”
“@ylecun Vision”
“@cb_doge Tesla vision AI could really crush these Google “not a bot” tests lol”
“@teslaownersSV @Teslarati @JohnnaCrider1 Many small things. We’re starting to make use of neural nets for vehicle navigation & control, not just vision.”
“@TEDchris Twitter is an accelerant to fulfilling the original https://t.co/bOUOek5Cvy vision http://X.com”
“@EvaFoxU Supercharger centers with solar & batteries are the long-term vision”
“@GasOff2 Yes, car will navigate to a pin location, even if in a complex surface parking lot or hotel entrance. When in covered or underground parking lots, car will have to navigate using only inertial measurement, wheel movement & vision, as GPS signal is no longer available.”
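Annotation: navigating a covered garage without GPS, as described here, is classically handled by dead reckoning. A minimal 2-D sketch, assuming clean wheel-odometry distance increments and an IMU-derived heading; a real stack would also fuse visual odometry and correct the drift these raw integrations accumulate.

```python
import math

def dead_reckon(pose, distance, heading_rad):
    """Advance a 2-D position (x, y) by a wheel-odometry distance
    increment along an IMU-derived heading. Returns the new (x, y)."""
    x, y = pose
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))

# Drive 10 m east (heading 0), then 5 m north (heading pi/2).
pose = (0.0, 0.0)
pose = dead_reckon(pose, 10.0, 0.0)
pose = dead_reckon(pose, 5.0, math.pi / 2)
print(pose)  # ~ (10.0, 5.0)
```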
“@jamesdouma @RadarMoron @JeffTutorials @karpathy Transformers are replacing C heuristics for post-processing of the vision NN’s “giant bag of points”. [Side note: I hate the bloated mess that is modern C++, but love simple C, as you know what it will compile to in terms of actual CPU operations.]”
“@BillyM2k @OfficialABQ Maybe Tesla should make an AI vision device that plugs into these legacy traffic lights. It could just look at traffic & automatically maximize throughput.”
“Whereas radar has trouble seeing small pedestrians, they’re obvious to Tesla vision”
“@cleantechnica Pure vision, especially when using explicit photon count, is much better than radar+vision, as the latter has too much ambiguity – when radar & vision disagree, it is not clear which one to believe”
“@tesla_addicted One of the improvements to FSD vision involves training with actual photon counts, so removing the filters used to make pictures pretty to the human eye”
“@WholeMarsBlog Vision became so good that radar actually reduced SNR, so radar was turned off. Humans drive with eyes & biological neural nets, so makes sense that cameras & silicon neural nets are only way to achieve generalized solution to self-driving.”
“@OstynHyss @nickwhoward Beta 10 or maybe 10.1. Going to pure vision set us back initially. Vision plus (coarse) radar had us trapped in a local maximum, like a level cap. Pure vision requires fairly advanced real-world AI, but that’s how our whole road system is designed to work: NN’s with vision.”
“@WholeMarsBlog There is always a lot of cleanup after a major code release. Beta 9.2 will be tight. Still some fundamentals to solve for Beta 10, but now that we’re pure vision, progress is much faster. Radar was holding us back.”
“@FrenchieEAP @karpathy FSD beta 9 is using the pure vision production code for highway driving. Beta 10 hopefully (Beta 11 definitely) will use one stack to rule them all – city streets, highway & complex parking lots.”
“@EZebroni @Tesla Expect rapid improvement with pure vision”
“@WholeMarsBlog One more production release of pure vision this week, then FSD beta 9 a week or two later. V9.0 FSD is also pure vision. Foundational improvements are immense.”
“@Teslarati @ResidentSponge Pure vision Autopilot is now rolling out in North America. There will be an update of this production release in 2 weeks, then FSD beta V9.0 (also pure vision) a week later. FSD subscription will be enabled around the same time.”
“@Dreamweaver2oh @garyblack00 I think we’re maybe a month or two away from wide beta. But these things are hard to predict accurately. The work we had to do for pure vision driving was needed for FSD, so much more progress has been made than it would seem.”
“@garyblack00 We had to focus on removing radar & confirming safety. That release goes out next week to US production. Then a week or two to polish pure vision FSD & v9 beta will release. Difference between v8 & v9 is gigantic.”
“@teslaownersSV @TeslaNY @Tesla Gating factor is achieving & proving higher safety with pure vision than with vision+radar. We are almost there. FSD Beta V9.0 will blow your mind.”
“@F9Block5 Major improvements are being made to the vision stack every week. Beta button hopefully next month. This is a “march of 9’s” trying to get probability of no injury above 99.999999% of miles for city driving. Production Autopilot is already above that for highway driving.”
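Annotation: the eight-9's figure implies an expected injury rate. Under the simplifying assumption that each mile independently carries the same injury probability, the arithmetic is:

```python
# Back-of-envelope reading of the "march of 9's": if each mile independently
# carries injury probability p, expected miles between injuries is 1/p.
p_injury = 1 - 0.99999999   # eight 9's -> about 1e-8 per mile
mean_miles = 1 / p_injury   # ~1e8 miles between injuries
print(round(mean_miles))
```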
“@WholeMarsBlog Sensors are a bitstream and cameras have several orders of magnitude more bits/sec than radar (or lidar). Radar must meaningfully increase signal/noise of bitstream to be worth complexity of integrating it. As vision processing gets better, it just leaves radar far behind.”
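Annotation: the bits-per-second gap can be sanity-checked with rough figures. Every number below is an assumption chosen for the sketch (camera count, resolution, bit depth, frame rate, radar point budget), not a Tesla specification.

```python
# Assumed figures: 8 cameras at 1280x960, 12 bits/pixel, 36 fps,
# versus a radar emitting 1,000 points of 64 bits at 20 Hz.
camera_bps = 8 * 1280 * 960 * 12 * 36   # ~4.2 Gbit/s
radar_bps = 1_000 * 64 * 20             # ~1.3 Mbit/s
print(camera_bps / radar_bps)           # ratio well over 1,000x
```

Even with generous radar assumptions, the camera bitstream stays several orders of magnitude larger, which is the claim being made in the tweet.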
“@WholeMarsBlog When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.”
“@WholeMarsBlog Almost ready with FSD Beta V9.0. Step change improvement is massive, especially for weird corner cases & bad weather. Pure vision, no radar.”
“@WholeMarsBlog Given significant architectural changes, including fundamental improvements to pure vision, there is limited value to testing 8.x. Hoping to upload V9.0 & button next month.”
“FSD Beta has now been expanded to ~2000 owners & we’ve also revoked beta where drivers did not pay sufficient attention to the road. No accidents to date. Next significant release will be in April. Going with pure vision — not even using radar. This is the way to real-world AI.”
“@EvaFoxU @PPathole @mirojurcevic @TashaARK @Space_Station It will be better than human vision by quite a margin”
“@Tesmanian_com Accurate distance calculation using only vision is fundamental. Other sensors can help, but are not fundamental.”
“@PPathole @Tesla Autopilot prime directive is: don’t crash. What seems fast to humans is slow to a computer. 360 degree low light vision & sonar, plus forward radar enable to be superhuman. Upcoming software upgrades will increasingly show the potential.”
“@cleantechnica Many talented engineers are working on FSD at Tesla. What matters is solving vision at high frame rate in our compute space with low latency between camera frame & actuation.”
“@DisruptResearch @vincent13031925 Don’t know enough about George’s company, but it sounds like he’s focused on a machine vision approach, which is the only general solution imo”
“@annerajb @justpaulinelol It *is* vision”
“@redmor11 @PlugInFUD @JoelSapp @rrosenbl @Tesla NVIDIA is a great company, but we needed (essentially) a dedicated ASIC for vision-based driving, whereas their solution needs to serve many different customer needs.”
“@JoelSapp @rrosenbl @Tesla Exactly, Tesla & Nvidia numbers described by Pete were max *usable* teraops running our vision net”
“@EESNY @teslaownersSV @S100Dfan Applying a hydrophobic coating to the radar (located just below nose of car) should help. Easy to do yourself or Tesla service can do it. We’re also working on vision-only driving.”
“@vaipier Long-term, the car will work purely on vision, with radar just a plus, but maybe worth adding a radar heater anyway”
