“@aaronburnett Your analysis is excellent for an organization that doesn’t design, build and fly thousands of satellites. Within a few iterations, we can probably get AI satellites to <100kW/ton, inclusive of all components, especially if the GPU is designed to operate at ~370 Kelvin.”
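Annotation on the thermal claim: running the GPU hot matters in orbit because a satellite can only reject heat radiatively, and radiated flux scales as T⁴ (Stefan-Boltzmann). A minimal sketch of why ~370 K helps, assuming a single-sided radiator with emissivity 0.9 (both figures are illustrative, not from the tweet):

```python
# Stefan-Boltzmann: radiated flux per unit area = emissivity * sigma * T^4.
# Illustrative assumptions (not from the tweet): emissivity 0.9, single-sided
# radiator, negligible absorbed background flux.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
EPS = 0.9         # assumed radiator emissivity

def radiator_area_per_kw(temp_k: float) -> float:
    """Radiator area (m^2) needed to reject 1 kW at the given temperature."""
    return 1000.0 / (EPS * SIGMA * temp_k ** 4)

a_300 = radiator_area_per_kw(300.0)  # typical electronics temperature
a_370 = radiator_area_per_kw(370.0)  # the tweet's hot-GPU target
print(f"{a_300:.2f} m^2/kW at 300 K vs {a_370:.2f} m^2/kW at 370 K")
print(f"area ratio: {a_300 / a_370:.2f}x")  # (370/300)^4, about 2.3x
```

Under these assumptions, a GPU designed to run at ~370 K needs roughly 2.3× less radiator area per kW than one held at 300 K, which is mass saved toward the <100 kW/ton target.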
The tweet archive.
15 years of Elon, fully searchable. The production archive uses Supabase as the source of truth; in development, a local snapshot of 94,952 indexed tweets serves as a full-archive fallback, and a curated annotation layer supplies context, theory, and notes on how major claims aged.
“RT @tetsuoai: xAI raised $20B in a Series E that was targeting $15B. 600M monthly active users. 1M+ H100 GPU equivalents. Grok 5 already…”
“RT @teslaownersSV: Nvidia CEO Jensen “Elon Musk is the Ultimate GPU” https://t.co/jv4YILaUCU”
“Grok at country scale! [@xai] Grok goes Global with KSA: Announcing our landmark partnership with Saudi Arabia and @HUMAINAI—the first time a country adopts Grok at scale. xAI will build a new generation of hyperscale GPU data centers in the Kingdom, deploying Grok nationwide. https://x.ai/news/grok-goes-global-with-ksa”
“@YunTaTsai1 No legacy GPU at all too. That took up a lot of real estate.”
“RT @MalekiSaeed: every micro second you shave off from any gpu kernel at @xai has an immense impact. if you are good at it, you know what t…”
“Jensen delivering the first AI-optimized GPU to OpenAI in 2016 [@cb_doge] Elon Musk with Nvidia CEO Jensen Huang in 2016.”
“RT @DimaZeniuk: Elon Musk’s xAI has achieved full operational capability for Phase I of its Colossus supercomputer GPU cluster https://t.co…”
“RT @GavinSBaker: Post the release of R1: DRAM pricing: increasing. GPU rental pricing: increasing at GCP and Azure. GPU availability: de…”
“@ajtourville The existing factory space was already allocated to vehicle, battery and cell production. And this is not a matter of tucking a few computers into a corner. You need a kW of power and cooling for each GPU, which would mean 12MW of power and 12MW of cooling. The south extension…”
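Annotation: the tweet's sizing arithmetic checks out directly. At roughly 1 kW of power plus 1 kW of cooling per GPU, a 12 MW power budget (matched by 12 MW of cooling) implies a cluster of about 12,000 GPUs. A minimal sketch of that back-of-envelope, using the tweet's round numbers:

```python
# Back-of-envelope check of the tweet's figures: ~1 kW of power and ~1 kW of
# cooling per GPU, against 12 MW of each.
KW_PER_GPU = 1.0       # the tweet's round per-GPU figure (power; cooling matches)
SITE_POWER_MW = 12.0   # stated power budget

gpus = SITE_POWER_MW * 1000.0 / KW_PER_GPU
print(f"implied cluster size: {gpus:,.0f} GPUs")
```

That scale is why the compute could not be "tucked into a corner" of existing factory floor space.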
“Every time I hear about GPU FLOPS https://t.co/n06V3deEKA”
“@TrungTPhan A way to enter GPU mode”
“@shivon There is a physics argument that synapse activations take 1 to 2 orders of magnitude less energy than silicon transistors. That, of course, does not explain why a 10MW GPU cluster still cannot write a better novel than ~10W of brain power. My guess is that silicon…”
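Annotation: the gap the tweet points at is easy to quantify. A 10 MW cluster draws about six orders of magnitude more power than a ~10 W brain, while the cited synapse-vs-transistor energy advantage is only one to two orders of magnitude, so raw device energetics cannot close the gap on their own. A minimal sketch of that arithmetic, using only the tweet's figures:

```python
import math

# Figures from the tweet: a ~10 MW GPU cluster vs ~10 W of brain power, and a
# claimed 1-2 order-of-magnitude energy advantage for synapse activations.
cluster_w = 10e6
brain_w = 10.0

power_gap_orders = math.log10(cluster_w / brain_w)
synapse_advantage_orders = 2  # upper end of the tweet's 1-2 orders claim

print(f"power gap: {power_gap_orders:.0f} orders of magnitude")
print(f"left unexplained by device energetics: "
      f"{power_gap_orders - synapse_advantage_orders:.0f}+ orders")
```

Even granting synapses the full two-order advantage leaves roughly four orders of magnitude unexplained, which is the point of the tweet's trailing "My guess is that silicon…".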
“@goth600 GPU companies when they hear that https://t.co/cQC6tuyKGV”
“@Scobleizer Maybe we are LLMs, living on a GPU & don’t realize it 👀”
“@soumiksf @ID_AA_Carmack Dojo uses our own chips & a computer architecture optimized for neural net training, not a GPU cluster. Could be wrong, but I think it will be best in world.”
“@martinengwicht @JoelSapp @rrosenbl @Tesla 21 TOPS for Xavier. Pegasus is just Xavier with a power hog GPU on a separate board. Problem is you can’t transfer data fast enough between computer & GPU, so GPU usable TOPS is almost irrelevant.”
“@scottwww @ValueAnalyst1 @karpathy @Tesla @nvidia Exactly. Also, you can’t actually use computation from a separate GPU effectively, as you get choked on the bus, so most of the computation is irrelevant. High power, high cooling, but low true, usable TOPS. Worst of all worlds.”
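Annotation: the "choked on the bus" point in these two tweets is essentially the roofline argument — usable throughput is capped by the lesser of peak compute and what the interconnect can feed. A minimal sketch with illustrative numbers (none of them from the tweets):

```python
def effective_tops(peak_tops: float, bus_gb_s: float, ops_per_byte: float) -> float:
    """Roofline model: effective throughput is the minimum of raw compute
    and bandwidth-bound throughput (bus bandwidth * arithmetic intensity)."""
    bandwidth_bound_tops = bus_gb_s * ops_per_byte / 1000.0  # Gops/s -> Tops/s
    return min(peak_tops, bandwidth_bound_tops)

# Illustrative only: a discrete GPU with 100 peak TOPS starved by a 16 GB/s
# bus at 50 ops/byte delivers a tiny fraction of its nominal compute.
print(effective_tops(peak_tops=100.0, bus_gb_s=16.0, ops_per_byte=50.0))
```

When the bandwidth-bound term is far below peak compute, nominal TOPS is "almost irrelevant," which is the claim both tweets make about a discrete GPU behind a slow bus.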
“@ElectrekCo @FredericLambert To be clear, actual NN improvement is significantly overestimated in this article. V9.0 vs V8.1 is more like a ~400% increase in useful ops/sec due to enabling integrated GPU & better use of discrete GPU.”
“Validating a GPU driver fix and camera pitch angle health check for HW2”
