The tweet archive.
15 years of Elon, fully searchable: 94,952 indexed tweets with a curated annotation layer for context, theory, and how major claims aged. The production archive uses Supabase as the source of truth; development builds fall back to the full local archive.
“Worth reading “Human Compatible” by Stuart Russell (he’s great!) about future AI risks & solutions https://t.co/ZCdvZrrcVf https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/”
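The two-tier lookup described above (remote source of truth in production, local full archive as the development fallback, plus an annotation layer) can be sketched roughly as follows. Everything here is illustrative: the `Tweet` schema, the `TWEETS` sample, and `search_archive` are hypothetical names, not the site's actual code; only the remote-then-local-fallback shape comes from the description.

```python
# Minimal sketch of the archive's two-tier search, assuming a hypothetical
# schema. The remote path stands in for the production Supabase query; the
# local path models the development full-archive fallback.

from dataclasses import dataclass

@dataclass
class Tweet:
    id: int
    text: str
    annotation: str = ""  # curated context layer; empty if unannotated

# Tiny stand-in for the 94,952-tweet local fallback archive.
TWEETS = [
    Tweet(1, "Worth reading Superintelligence by Bostrom."),
    Tweet(2, "Announcing formation of @open_ai", annotation="Dec 2015"),
]

def search_archive(query: str, remote=None) -> list[Tweet]:
    """Query the remote source of truth; fall back to the local archive."""
    if remote is not None:
        try:
            return remote(query)  # e.g. a wrapped Supabase full-text query
        except Exception:
            pass  # development environment: no production database
    q = query.lower()
    return [t for t in TWEETS if q in t.text.lower()]
```

In production, `remote` would wrap a real client call; keeping the fallback path pure means the sketch runs offline, which mirrors how a development build can serve the full archive without the production database.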
“If advanced AI (beyond basic bots) hasn’t been applied to manipulate social media, it won’t be long before it is”
“Told you AI was dangerous!! 🔥💣💦”
“@ShadowCatTrash Overall master plan should be explained. I think it still makes sense if we don’t consider AI 2b other.”
“And yet people ask what could possibly go wrong with AI”
“@hedweg Long-term, it’s about coupling the collective will of humanity to AI. Short-term, it’s about improving the lives of those with brain or spine problems.”
“@wordcipher Maybe AI will make me follow it, laugh like a demon & say who’s the pet now …”
“@Jack_Frodo Universal income will be necessary over time if AI takes over most human jobs”
“Nothing will affect the future of humanity more than digital super-intelligence. Watch Chris Paine’s new AI movie for free until Sunday night at https://t.co/WehHcZX7Qe http://doyoutrustthiscomputer.org/watch”
“Chris Paine AI movie premiering tonight https://t.co/kXM7USFi8D http://deadline.com/2018/04/do-you-trust-this-computer-trailer-chris-paine-artificial-intelligence-documentary-video-1202357639/”
“@cdelancray @sapinker @WIRED Wow, if even Pinker doesn’t understand the difference between functional/narrow AI (eg. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble”
“@SeanMahoneyAP Sorry for the delay. We have the most advanced AI neural net of any consumer product by far, so it’s going through exhaustive testing. The results are blowing me away though and I think you will have a similar experience.”
“@WIRED This depressingly misleading & misanthropic article came from a very brief digression at an AI conf, not from an interview with Wired as is falsely implied. This is why I stopped following Wired long ago. There are way better tech pubs out there.”
“@AndrewKemendo Govts don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary.”
“@JakeBlueatSM May be initiated not by the country leaders, but one of the AI's, if it decides that a prepemptive strike is most probable path to victory”
“China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.”
“@GigarothPack Not a bad idea. For sure, it makes sense to test AI for safety in some kind of open world / sandbox virtual world.”
“Worth reading Life 3.0 by @Tegmark. AI will be the best or worst thing ever for humanity, so let’s get it right. https://t.co/lT0uMH3ujZ https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598”
“@Excellion @boringcompany @OpenAI The AI could def find us there, but maybe it won't care”
“@Reza_Zadeh Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can't imagine anyone doing what they can't”
“@VeryVandy It is far too complex for that. Requires a team of the world's best AI researchers with massive computing resources.”
“Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.”
“If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. https://t.co/2z0tiid0lc”
“Tim's piece on AI is excellent, but we actually face a double exponential rate of improvement. AI hardware & software are both exponential. https://t.co/hSfNU8zxDu https://x.com/john_blue9/status/889747464055513088”
“@__svndee Disruption certainly. Deep AI is the real risk, though, not automation.”
“@juancarlosrs @elpilot That is the aspiration: to avoid AI becoming other.”
“Top AI researchers agree on principles for developing benefical AI https://t.co/CATbd4oidF https://futureoflife.org/ai-principles/”
“Tesla self-driving AI with the Benny Hill option package https://t.co/gJAwzys7vV https://www.tesla.com/videos/autopilot-self-driving-hardware-neighborhood-short/?utm_campaign=GL_AP_111816&utm_source=Twitter&utm_medium=social”
“Only a matter of time before advanced AI is used to do this. Internet is particularly susceptible to a gradient descent algo. https://t.co/a6AdF7o7AZ https://x.com/theeconomist/status/794196455405875200”
“Would like to thank @nvidia and Jensen for donating the first DGX-1 AI supercomputer to @OpenAI in support of democratizing AI technology”
“@lessteza control of super powerful AI by a small number of humans is the most proximate concern”
“@VivekMGeorge Zuck doesn't (yet) have a deep tech understanding of AI. I spend hours every week being educated by world's best researchers.”
“Congrats to DeepMind! Many experts in the field thought AI was 10 years away from achieving this. https://t.co/5gGZZkud3K https://x.com/demishassabis/status/707474683906674688”
“Worth reposting the Wait But Why piece on AI. We are at the beginning of exponential growth in digital intelligence. https://t.co/1c30ZwrxQ1 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html”
“Announcing formation of @open_ai … https://t.co/Fcouhwh6MC https://openai.com/blog/introducing-openai/”
“@ID_AA_Carmack Even if inevitable, we should at least attempt to postpone the advent of AI weaponry. Sooner isn't better.”
“If you’re against a military AI arms race, please sign this open letter: http://t.co/yyF9rcm9jz http://tinyurl.com/awletter”
“Worth watching @ExMachinaMovie. The AI would be in the network, not the robot, but otherwise good.”
“Good primer on the exponential advancement of technology, particularly AI http://t.co/1c30ZwJ8Y5 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html”
“World's top artificial intelligence developers sign open letter calling for AI safety research: http://t.co/ShWc8F7Kyq http://futureoflife.org/misc/open_letter”
“Reading The Culture series by Banks. Compelling picture of a grand, semi-utopian galactic future. Hopefully not too optimistic about AI.”
“While on the subject of AI risk, Our Final Invention by @jrbarrat is also worth reading”
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
“Also dig Mass Effect. It's all fun & games until the AI decides people suck. Maybe we can be their limbic system.”
“Interesting interview with Vinge about superhuman AI and optimistic apocalypses http://t.co/TVoqGpEG http://bit.ly/GEkktb”
