Elon Musk · Tweet Archive

The tweet archive.

15 years of Elon, fully searchable. In production, Supabase is the source of truth; in development, a full-archive fallback of 94,952 indexed tweets is available, alongside a curated annotation layer covering context, theory, and how major claims have aged.
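
The read path described above (Supabase as the primary store, a local full-archive dump as the development fallback) can be sketched roughly as follows. This is a minimal illustration, not the site's actual code: the function and variable names (`fetch_tweets`, `load_local_archive`, `supabase_page`) are hypothetical, and the real implementation would use an actual Supabase client query.

```python
# Sketch of a primary-store read with a development fallback.
# All names here are illustrative assumptions, not the archive's real code.

def fetch_tweets(primary, fallback, offset=0, limit=45):
    """Try the primary store; on any failure, page the fallback archive."""
    try:
        return primary(offset, limit)
    except Exception:
        rows = fallback()
        return rows[offset:offset + limit]

def load_local_archive():
    # In development this would load the full 94,952-tweet dump from disk;
    # a small synthetic list stands in for it here.
    return [{"id": i, "text": f"tweet {i}"} for i in range(100)]

def supabase_page(offset, limit):
    # Placeholder for a real Supabase query, e.g. something like
    # client.table("tweets").select("*").range(offset, offset + limit - 1)
    raise ConnectionError("no production credentials in development")

# With no production credentials, the call falls through to the local dump.
page = fetch_tweets(supabase_page, load_local_archive, offset=10, limit=5)
```

The try/except keeps the caller oblivious to which store served the page, which is what makes the fallback transparent in development.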

Showing 1,451-1,495 from the Supabase archive
Oct 8, 2019

Worth reading “Human Compatible” by Stuart Russell (he’s great!) about future AI risks & solutions https://t.co/ZCdvZrrcVf https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/

10.3K likes · 1.1K RT · 465 replies
Sep 26, 2019

If advanced AI (beyond basic bots) hasn’t been applied to manipulate social media, it won’t be long before it is

59.7K likes · 6.0K RT · 1.3K replies
Sep 18, 2019

Told you AI was dangerous!! 🔥💣💦

90.4K likes · 10.9K RT · 1.4K replies
Jun 17, 2019

@ShadowCatTrash Overall master plan should be explained. I think it still makes sense if we don’t consider AI 2b other.

1.5K likes · 40 RT · 46 replies
Apr 30, 2019

And yet people ask what could possibly go wrong with AI

27.3K likes · 2.6K RT · 757 replies
Mar 14, 2019

@hedweg Long-term, it’s about coupling the collective will of humanity to AI. Short-term, it’s about improving the lives of those with brain or spine problems.

1.0K likes · 72 RT · 63 replies
Nov 1, 2018

@wordcipher Maybe AI will make me follow it, laugh like a demon & say who’s the pet now …

840 likes · 68 RT · 44 replies
Jun 16, 2018

@Jack_Frodo Universal income will be necessary over time if AI takes over most human jobs

5.4K likes · 608 RT · 363 replies
Apr 6, 2018

Nothing will affect the future of humanity more than digital super-intelligence. Watch Chris Paine’s new AI movie for free until Sunday night at https://t.co/WehHcZX7Qe http://doyoutrustthiscomputer.org/watch

40.9K likes · 12.2K RT · 1.7K replies
Apr 6, 2018

Chris Paine AI movie premiering tonight https://t.co/kXM7USFi8D http://deadline.com/2018/04/do-you-trust-this-computer-trailer-chris-paine-artificial-intelligence-documentary-video-1202357639/

11.5K likes · 2.0K RT · 363 replies
Feb 27, 2018

@cdelancray @sapinker @WIRED Wow, if even Pinker doesn’t understand the difference between functional/narrow AI (eg. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble

14.5K likes · 1.6K RT · 398 replies
Dec 26, 2017

@SeanMahoneyAP Sorry for the delay. We have the most advanced AI neural net of any consumer product by far, so it’s going through exhaustive testing. The results are blowing me away though and I think you will have a similar experience.

3.2K likes · 164 RT · 59 replies
Dec 15, 2017

@WIRED This depressingly misleading & misanthropic article came from a very brief digression at an AI conf, not from an interview with Wired as is falsely implied. This is why I stopped following Wired long ago. There are way better tech pubs out there.

10.1K likes · 942 RT · 289 replies
Sep 4, 2017

@AndrewKemendo Govts don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary.

1.8K likes · 634 RT · 172 replies
Sep 4, 2017

@JakeBlueatSM May be initiated not by the country leaders, but one of the AI's, if it decides that a prepemptive strike is most probable path to victory

4.3K likes · 1.2K RT · 459 replies
Sep 4, 2017

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.

36.1K likes · 13.8K RT · 2.6K replies
Aug 31, 2017

@GigarothPack Not a bad idea. For sure, it makes sense to test AI for safety in some kind of open world / sandbox virtual world.

2.0K likes · 117 RT · 138 replies
Aug 29, 2017

Worth reading Life 3.0 by @Tegmark. AI will be the best or worst thing ever for humanity, so let’s get it right. https://t.co/lT0uMH3ujZ https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598

12.8K likes · 2.9K RT · 644 replies
Aug 28, 2017

@Excellion @boringcompany @OpenAI The AI could def find us there, but maybe it won't care

261 likes · 16 RT · 18 replies
Aug 12, 2017

@Reza_Zadeh Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can't imagine anyone doing what they can't

1.9K likes · 389 RT · 103 replies
Aug 12, 2017

@VeryVandy It is far too complex for that. Requires a team of the world's best AI researchers with massive computing resources.

1.5K likes · 140 RT · 120 replies
Aug 12, 2017

Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.

57.5K likes · 15.9K RT · 2.0K replies
Aug 12, 2017

If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. https://t.co/2z0tiid0lc

29.3K likes · 10.1K RT · 1.8K replies
Jul 25, 2017

Tim's piece on AI is excellent, but we actually face a double exponential rate of improvement. AI hardware & software are both exponential. https://t.co/hSfNU8zxDu https://x.com/john_blue9/status/889747464055513088

6.4K likes · 1.6K RT · 312 replies
Jun 10, 2017

@__svndee Disruption certainly. Deep AI is the real risk, though, not automation.

159 likes · 15 RT · 16 replies
Apr 23, 2017

@juancarlosrs @elpilot That is the aspiration: to avoid AI becoming other.

298 likes · 36 RT · 40 replies
Jan 31, 2017

Top AI researchers agree on principles for developing benefical AI https://t.co/CATbd4oidF https://futureoflife.org/ai-principles/

4.8K likes · 1.9K RT · 240 replies
Nov 19, 2016

Tesla self-driving AI with the Benny Hill option package https://t.co/gJAwzys7vV https://www.tesla.com/videos/autopilot-self-driving-hardware-neighborhood-short/?utm_campaign=GL_AP_111816&utm_source=Twitter&utm_medium=social

4.8K likes · 2.0K RT · 223 replies
Nov 3, 2016

Only a matter of time before advanced AI is used to do this. Internet is particularly susceptible to a gradient descent algo. https://t.co/a6AdF7o7AZ https://x.com/theeconomist/status/794196455405875200

5.0K likes · 2.6K RT · 278 replies
Aug 9, 2016

Would like to thank @nvidia and Jensen for donating the first DGX-1 AI supercomputer to @OpenAI in support of democratizing AI technology

11.1K likes · 1.6K RT · 493 replies
Jun 4, 2016

@lessteza control of super powerful AI by a small number of humans is the most proximate concern

244 likes · 61 RT · 29 replies
Apr 30, 2016

@VivekMGeorge Zuck doesn't (yet) have a deep tech understanding of AI. I spend hours every week being educated by world's best researchers.

351 likes · 67 RT · 20 replies
Mar 9, 2016

Congrats to DeepMind! Many experts in the field thought AI was 10 years away from achieving this. https://t.co/5gGZZkud3K https://x.com/demishassabis/status/707474683906674688

2.4K likes · 1.0K RT · 67 replies
Feb 24, 2016

Worth reposting the Wait But Why piece on AI. We are at the beginning of exponential growth in digital intelligence. https://t.co/1c30ZwrxQ1 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

3.9K likes · 2.1K RT · 194 replies
Dec 12, 2015

Announcing formation of @open_ai … https://t.co/Fcouhwh6MC https://openai.com/blog/introducing-openai/

4.0K likes · 3.2K RT · 227 replies
Jul 28, 2015

@ID_AA_Carmack Even if inevitable, we should at least attempt to postpone the advent of AI weaponry. Sooner isn't better.

340 likes · 117 RT · 83 replies
Jul 28, 2015

If you’re against a military AI arms race, please sign this open letter: http://t.co/yyF9rcm9jz http://tinyurl.com/awletter

2.5K likes · 2.6K RT · 284 replies
Apr 27, 2015

Worth watching @ExMachinaMovie. The AI would be in the network, not the robot, but otherwise good.

1.3K likes · 531 RT · 116 replies
Jan 23, 2015

Good primer on the exponential advancement of technology, particularly AI http://t.co/1c30ZwJ8Y5 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

1.1K likes · 903 RT · 116 replies
Jan 11, 2015

World's top artificial intelligence developers sign open letter calling for AI safety research: http://t.co/ShWc8F7Kyq http://futureoflife.org/misc/open_letter

1.4K likes · 1.3K RT · 158 replies
Dec 26, 2014

Reading The Culture series by Banks. Compelling picture of a grand, semi-utopian galactic future. Hopefully not too optimistic about AI.

867 likes · 224 RT · 133 replies
Aug 3, 2014

While on the subject of AI risk, Our Final Invention by @jrbarrat is also worth reading

466 likes · 141 RT · 92 replies
Aug 3, 2014

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

3.2K likes · 2.3K RT · 529 replies
Apr 3, 2012

Also dig Mass Effect. It's all fun & games until the AI decides people suck. Maybe we can be their limbic system.

77 likes · 31 RT · 18 replies
Mar 22, 2012

Interesting interview with Vinge about superhuman AI and optimistic apocalypses http://t.co/TVoqGpEG http://bit.ly/GEkktb

40 likes · 31 RT · 6 replies