“Looking forward to ARC-AGI-3 [@arcprize] Grok 4 (Thinking) achieves new SOTA on ARC-AGI-2 with 15.9% This nearly doubles the previous commercial SOTA and tops the current Kaggle competition SOTA”
The tweet archive.
15 years of Elon, fully searchable. In production, the archive uses Supabase as the source of truth; in development, 94,952 indexed tweets are available as a full-archive fallback. A curated annotation layer adds context, theory, and notes on how major claims aged.
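The primary-store-plus-fallback pattern described above can be sketched as follows. This is a minimal illustration, not the archive's actual implementation: the record shape, the sample rows, and the `search_tweets`/`search_local` helpers are all hypothetical, and the real primary would be a Supabase client query rather than a plain callable.

```python
from typing import Callable, Optional

# Hypothetical local full-archive fallback: a small sample of tweet records.
# The real development fallback would hold all 94,952 indexed tweets.
LOCAL_ARCHIVE = [
    {"id": 1, "text": "It is baby AGI"},
    {"id": 2, "text": "AGI>DEI"},
    {"id": 3, "text": "Not as profound as AGI, but certainly profound"},
]

def search_local(query: str) -> list[dict]:
    """Case-insensitive substring scan over the local archive copy."""
    q = query.lower()
    return [t for t in LOCAL_ARCHIVE if q in t["text"].lower()]

def search_tweets(
    query: str,
    primary: Optional[Callable[[str], list[dict]]] = None,
) -> list[dict]:
    """Query the primary store (e.g. Supabase in production) first;
    fall back to the local full-archive copy when the primary is
    unconfigured (development mode) or raises an error."""
    if primary is not None:
        try:
            return primary(query)
        except Exception:
            pass  # primary unavailable: fall through to the local archive
    return search_local(query)

# With no primary configured, the local archive answers the query:
print(len(search_tweets("agi")))  # matches all three sample records
```

The annotation layer would sit on top of either store, keyed by tweet `id`, so the same context notes apply whether a record came from Supabase or from the local fallback.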
“Batteries are critical to smooth out the power fluctuations from AI training and grid voltage drops, as even a small change in power causes AI training to fail [@JessePeltan] You cannot train AGI without batteries. China knows this. Anybody who’s paying attention knows this. If we cannot make batteries — we lose.”
“🤨 [@ai_for_success] OpenAI how it started vs how it’s going. Can we really trust Sam Altman or OpenAI with AGI? 🤔 📹 Credit: u/katxwoods”
“Colossus 2 will be the first Gigawatt AI training supercluster [@teslaownersSV] 🚨 BREAKING: xAI just received 168 Tesla Megapacks to power Colossus 2 their second AI data center. That’s massive energy storage. Enough to run an entire neighborhood now fueling AGI. Tesla + xAI = The future of power-hungry intelligence.”
“The smartest & most influential people in the world interact on 𝕏! [@Yuchenj_UW] You know Google is back and probably will win the AGI race when its CEO starts doing customer support on X.”
“RT @cb_doge: AGI (Artificial Gork Intelligence) https://t.co/11VUdrM2To”
“AGI has arrived 😂 @gork https://t.co/v4TQ4OC8TY”
“RT @geoffreyhinton: AGI is the most important and potentially dangerous technology of our time. OpenAI was right that this technology merit…”
“RT @gfodor: @goth600 @elonmusk Extraterrestrials who seeded Earth with human bootloaders for a new AGI subspecies https://t.co/RTQgyhKnWc”
“Beautiful [@lordasado] I made this interactive generative art piece using @Grok 3, purely on my iPhone. I didn’t write a single line of code. Grok 3 pretty much nailed my idea without a single error. How is this not AGI? )”
“@powerbottomdad1 @netcapgirl Ok maybe AGI is further away than I thought 😂”
“@EdKrassen AGI will come first, but there are many levels of AGI. As @waitbutwhy’s charts suggest, we are at the very beginning of intelligence. Humans are the biological bootloader for digital superintelligence and we may serve as a backup plan for intelligence, given that humans are far…”
“@krassenstein It is baby AGI”
“@farzyness Governments will attempt to seize power over AGI, which may be worse. But it is increasingly coming down to a choice of what company do you trust most to develop AGI.”
“@eyeslasho AGI>DEI”
“@WholeMarsBlog It is increasingly clear that all roads lead to AGI. Tesla is building an extremely compute-efficient mini AGI for FSD.”
“@matanSF It has been a great week thanks to the amazing people with whom I have the honor to work. That said, I am worried about Microsoft having unfettered ownership of AGI.”
“@WholeMarsBlog We should dispense with the false idea that money is somehow relevant in an AGi future”
“@ESYudkowsky Btw, one of the xAI tests for AGI is whether it can write better Harry Potter fan fiction than yours, which I’m told is excellent”
“@WholeMarsBlog I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.”
“Timing seems like AGI>>Idiocracy”
“@stevenmarkryan It’s not AGI until it can solve at least one fundamental physics problem”
“@micsolana @absoluttig Not clear what money means even in a benign, post-scarcity AGI scenario”
“@Teslaconomics Not as profound as AGI, but certainly profound”
“@tunguz Maybe they will show up just in time to save us from AGI Armageddon”
“@ylecun Yeah, ok, but also AGI”
“@PeterDiamandis I don’t mean to suggest a headlong rush into AGI without considering the consequences”
“If AGI is almost here, why doesn’t autocorrekt work!?”
“@SmokeAwayyy Not *every* day is a day closer to AGI But close enough sigh”
“@TalulahRiley I’ve seen quite a few technologies develop, but none with this level of risk. AGI is significantly higher risk than nuclear weapons, in my opinion. Super smart humans have trouble imagining something vastly smarter than themselves.”
“The least bad solution to the AGI control problem that I can think of is to give every verified human a vote”
“@PeterDiamandis After AGI, yes”
“Don’t Look Up … but AGI instead of comet”
“@TheChiefNerd Leading AGI developers will not heed this warning, but at least it was said”
“But, all things considered with regard to AGI existential angst, I would prefer to be alive now to witness AGI than be alive in the past and not”
“@ESYudkowsky To be called AGI, it needs to invent amazing things or discover deeper physics – many humans have done so. I’m not seeing that potential yet.”
“@jack 2029 feels like a pivotal year. I’d be surprised if we don’t have AGI by then. Hopefully, people on Mars too.”
“@JasonDanheiser @WholeMarsBlog @28delayslater Self-driving cars & useful humanoid robots require a sophisticated understanding of reality. I am increasingly convinced that they are on the path to solving AGI. Should AGI be solved? I don’t know, but humanity is moving rapidly in this direction whether I like it or not.”
“Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus”
“@PPathole @BBCScienceNews This thing we call “money” is just a (slow, lossy & unsecure) database for labor allocation. Investing is meaningless without people, at least until AGI happens, which will obviate need for labor & necessitate UBI.”
“@flcnhvy @neuralink Right now, trajectory of neuro-silicon symbiosis doesn’t appear to intersect trajectory of AGI. Goal of Neuralink is to raise this probability above 0.0%.”
