Two Senators ask ByteDance not to cease Seedance AI, the UK government calls for AI content labeling, and a jury will decide the battle between two composers.
Good news, if true, from the UK today: the government has apparently ditched its plan to force creatives to 'opt out' if they don't want AI companies training on their life's work.
"Even worse was the suggestion by Grammarly’s A.I. version of me to replace the first sentence of the news article with an anecdotal opening describing a fictional person named Laura whose privacy had been violated.
“Laura, a patient searching for relief from a chronic condition, clicks through her hospital’s website to schedule an appointment. In just a few moments, her most private medical details — her reason for visiting, her doctor’s name and even the treatment she seeks — are quietly sent to Facebook, without her knowledge,” the bot suggested with a button allowing the user to paste that excerpt straight into the article.
Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).
And this is the problem with A.I. It doesn’t know truth from fiction. It doesn’t know an investigative news article from an offhand comment. It flattens all content into word associations.
What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.
And it must be stopped."
https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html