• Lovable Sidekick@lemmy.world
    6 hours ago

    I’m sarcastic because I’d assign an AI apocalypse about the same probability as a zombie apocalypse. At the nuts-and-bolts level I think they’re both technically flawed, Hollywood-fantasy scenarios.

    What does an AI apocalypse even look like to you? Computers launching nuclear missiles or what? Shutting down power grids?

    • jsomae@lemmy.ml
      5 hours ago

      Please assign probabilities to the following for the next 3 decades (a quick sketch of how they multiply follows the list):

      1. probability that an AI smarter than any human at any intellectual task a human can do comes to exist (superintelligence);
      2. given (1), probability that it decides to kill all humans to achieve its goals (misaligned);
      3. given (1) and (2), probability that it succeeds at killing all humans.

      bonus: given (1) and (2), probability that we don’t even notice it wants to kill us, e.g. because we don’t know how to interpret what it’s thinking.
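
      These multiply, so the overall risk is the product of the chain. A minimal sketch in Python, with placeholder numbers that are illustrative only and not anyone’s actual estimates:

      ```python
      # Placeholder probabilities -- illustrative only, not actual estimates.
      p_superintelligence = 0.5  # (1) superintelligence arrives within 3 decades
      p_misaligned = 0.1         # (2) given (1), it decides to kill all humans
      p_succeeds = 0.5           # (3) given (1) and (2), it actually succeeds

      # Chain rule: P(doom) = P(1) * P(2 | 1) * P(3 | 1 and 2)
      p_doom = p_superintelligence * p_misaligned * p_succeeds
      print(f"P(doom) over 3 decades = {p_doom:.3f}")  # 0.025 with these numbers
      ```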

      Since the AI is smarter than me, I only need to propose one plausible method by which it could exterminate all humans; it can come up with a method at least as good as mine, and most likely something much better.

      The typical answer here is that it bio-engineers a lethal virus that is initially harmless (to avoid detection) but responds to some trigger, like the introduction of a certain chemical or a strong radio signal. If it’s very smart and has a very good understanding of bioengineering, it should be able to produce a virus like this by paying a laboratory to e.g. perform some CRISPR operations on an existing bacterial strain (or even just mix some chemicals together, if Sagan turns out to be right about bioengineering) and mail a sample somewhere. It can wait until everyone is infected before triggering the strain.