A moderator of the "Family Medicine" subreddit has issued a stark warning about the platform's AI-powered "Reddit Answers" feature after it began automatically responding to posts, often dispensing dangerously inaccurate medical information. Despite the clear risks, moderators are reportedly unable to disable the feature, raising significant concerns about user safety.
In one particularly concerning example, the AI recommended high-dose kratom, a substance that is illegal in some regions, as a treatment for chronic pain. In another, it suggested heroin as a pain management alternative and even linked to a post seemingly glorifying its use. This is especially troubling given heroin's Schedule I classification in the US, which denotes a high potential for abuse and no accepted medical use.
The moderator and other healthcare workers fear that users may mistake these AI-generated responses for endorsements from the subreddits themselves. They are urging Reddit to disable the feature in medical and mental health subreddits, or at the very least to let moderators opt out, and to add stronger filtering to keep the AI from disseminating harmful health advice. While Reddit admins have acknowledged the feedback and say they are adjusting the feature's placement, the "Family Medicine" subreddit has issued an explicit warning to its users to disregard anything posted by "Reddit Answers".