fresh bytes

How AI could ruin humanity, according to smart humans


For the past 24 hours, scientists have been lining up to sign this open letter. Put simply, the proposal urges humanity to dedicate a portion of its AI research to ensuring that AI systems are "aligned with human interests." In other words, let's try to avoid creating our own mechanized Horsemen of the Apocalypse.

While some scientists might roll their eyes at any mention of a Singularity, plenty of experts and technologists (Stephen Hawking and Elon Musk among them) have warned of the dangers AI could pose to our future. But while they urge us to pursue AI research with caution, they're a bit less clear on what, exactly, we're supposed to be cautious about. Thankfully, others have happily filled in those gaps. Here are five of the more menacing destruction-by-Singularity prophecies our brightest minds have offered.

Continue reading at Gizmodo

