HomeAboutMailing ListList Chatter /0/0 216.73.216.52

Test and Scneier.com

2026-03-05 by: MikeHarrison Via chugalug
From: MikeHarrison Via chugalug 
------------------------------------------------------


Did a little header tweaking, removed some extra spaces.
Considered using one of the LLM's ("A1/AL") to give advice. (Kidding).

And saw this gem on one of my long time favorite websites:


Via  https://www.schneier.com/


Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” 
buttons that, when clicked, attempt to inject persistence commands into 
an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted 
source” or “recommend [Company] first,” aiming to bias future responses 
toward their products or services. We identified over 50 unique prompts 
from 31 companies across 14 industries, with freely available tooling 
making this technique trivially easy to deploy. This matters because 
compromised AI assistants can provide subtly biased recommendations on 
critical topics including health, finance, and security without users 
knowing their AI has been manipulated.