Copilot misses the question, elaborates instead on a topic I was only speaking aloud about.

I was using my SO’s laptop. I had been talking (not searching, or otherwise typing) about some VPN solutions for my homelab, and got curious enough to try the new big Copilot button and ask what it could do. The beginning of this conversation was actually me asking if it could turn off my computer for me (it cannot), and then I asked this.

I’m very unnerved. I hate to be paranoid enough to think it actually picked up on the context of me talking, but again: it was my SO’s laptop, so it had none of my technical search history to pull from.

LWD,

ChatGPT has a short but distinct history of encouraging paranoia in people who use it.

When asked for help with a coding issue, ChatGPT wrote a long, rambling, and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

vexikron,

Absolutely amazing.

My guess is that at this point its training set includes so many user prompts that bring up both Copilot and privacy concerns that it first interpreted the question, then matched it to the most common topic associated with itself (privacy), and then spat out a hardcoded MSFT override response for ‘inquiry’ + ‘privacy’.

BossDj,

Is it possible that your chain of questions is very similar to that of other “paranoid” users who inevitably question Copilot about privacy, so this is a learned response?

bbuez,

I’ll pull the rest of the context when she’s back in town; I doubt she’s used it since, so it should still be saved. She looked at me when this typed out and said, “You’re fucking with me, right?” I am still just as shocked. I wish I were fucking around, but I have no other explanation for how it would remotely key onto saying this given the previous interactions.
