John_von_Neumann
Virgin
- Joined
- May 18, 2022
- Posts
- 3,020
Also, AI risks, including existential risks, are taken seriously by AI ethicists. Not every AI scholar is what I would consider serious, but many of those who take AI existential risk seriously are themselves serious scholars.
If you want some nightmare fuel:
AI Drug Discovery Systems Might Be Repurposed to Make Chemical Weapons, Researchers Warn
https://www.scientificamerican.com/...ed-to-make-chemical-weapons-researchers-warn/

The team ran MegaSyn overnight and came up with 40,000 substances, including not only VX but other known chemical weapons, as well as many completely new potentially toxic substances. All it took was a bit of programming, open-source data, a 2015 Mac computer and less than six hours of machine time. "It just felt a little surreal," Urbina says, remarking on how the software's output was similar to the company's commercial drug-development process. "It wasn't any different from something we had done before—use these generative models to generate hopeful new drugs."