bitwolf,

Dang, swearing was one of my strategies to get the bot to forward me to a representative

Cybrpwca,

I think I get what the article is saying, but all I can imagine is Siri calmly reading to me the vilest insults ever written.

AlexanderESmith,

This is fucked.

I worked in call centers for many years (technical support and sales). I need to hear the customer's tone: ecstatic, livid, and everything in between. I sit on the other end, shut my mouth, and listen to the whole rant, then calmly offer suggestions. Do they scream some more? Maybe. Do I need to take it personally? Of course not.

It drives me fucking crazy when some dipshit customer service rep hears one swear word (not even directed at them, like "I hate this fuckin' thing", not "you're a fuckin' dumbass") and starts in on the "if you keep swearing at me, I'll end the call". Grow up, you work in a service industry, and your company probably fucked up.

My favorite calls were the ones where someone called to cancel and tore up their voice yelling about all the reasons our product was garbage. Very, very roughly, about 15% of the time there was nothing I could do (even if I fixed the problem, they'd lost faith and would get their money back, or sue trying, so I just refunded and moved on). Another 25% was me fixing the problem and offering a credit because we fucked up. About half the time, it was something stupid and simple and they got their problem solved. The rest of the time was some absolutely crazy broken shit that had me working with someone two tiers above me for a few hours fixing it (for everyone, not just that caller), and then the customer was so happy they renewed everything for a year because they knew they were gonna get great support.

I loved those calls. They were the reason I kept showing up to work. I learned a ton in those jobs, and my favorite thing was hearing someone go from completely apoplectic to surprised and elated that everything was fixed.

Nath,
@Nath@aussie.zone avatar

The biggest problem I see with this is the scenario where calls are recorded. They’re recorded in case we hit a “he said, she said” scenario. If some issue were to be escalated as far as a courtroom, the value of the recording to the business is greatly diminished.

Even if the words the call agent gets are 100% verbatim, a lawyer can easily argue that a significant percentage of the message is in tone of voice. If that’s lost and the agent misses a nuance of the customer’s intent, they’ll have a solid case against the business.

sneezycat,
@sneezycat@sopuli.xyz avatar

I see no problem: they can record the original call and postprocess it with AI live for the operators. The recordings would be the original audio.

geissi,

Besides providing verbatim records of who said what, there is a second can of worms in forming any sort of binding agreement if the two sides of the agreement are having two different conversations.

I think this is what the part about the missed nuance means.

blindsight, (edited )

This seems like it might work really well. We’ve evolved to be social creatures, and internalizing the emotions of others is literally baked into our DNA (mirror neurons), so filtering out the emotional “noise” from customers seems, to me, like a brilliant way to improve the working conditions for call centre workers.

It’s not like you can’t also tell the emotional tone of the caller based on the words they’re saying, and the call centre employees will know that voices are being changed.

Also, I’m not so sure about reporting on anonymous Redditor comments as the basis for journalism. I know why it’s done, but I’d rather hear what a trained psychologist has to say about this, y’know?

perishthethought,

Am I crazy or is 10,000 samples nowhere near enough for training people’s voices?

eveninghere,

If you have a pre-trained model or a classical voice-matching algorithm as the basis, a few samples might suffice.

Kissaki,

I don’t think that’s too few samples for it to work.

What they train for is rather specific: identifying anger and hostility characteristics, and adjusting pitch and inflection.

Dunno if you meant it like that when you said “training people’s voices”, but they’re not replicating voices or interpreting meaning.

“…learned to recognize and modify the vocal characteristics associated with anger and hostility. When a customer speaks to a call center operator, the model processes the incoming audio and adjusts the pitch and inflection of the customer’s voice to make it sound calmer and less threatening.”
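
The basic idea in that quote (take the incoming audio, lower its pitch so it sounds calmer) can be sketched very crudely. This is nothing like the actual trained model — `naive_pitch_shift` is a hypothetical toy that resamples a signal with linear interpolation, which shifts pitch but also stretches duration; real systems use phase vocoders or neural models to keep timing intact:

```python
import numpy as np

def naive_pitch_shift(samples, factor):
    """Crudely shift pitch by resampling: factor > 1 raises pitch,
    factor < 1 lowers it. Duration changes too, unlike a real system."""
    n = len(samples)
    # Walk through the original signal at `factor` steps per output sample.
    idx = np.arange(0, n, factor)
    idx = idx[idx < n - 1]           # stay inside the array for interpolation
    lo = idx.astype(int)
    frac = idx - lo
    # Linear interpolation between neighbouring samples.
    return samples[lo] * (1 - frac) + samples[lo + 1] * frac

# One second of a 440 Hz sine at 16 kHz standing in for an angry voice;
# a factor of 0.8 deepens it to 352 Hz (a "calmer" register).
sr = 16000
t = np.arange(sr) / sr
voice = naive_pitch_shift.__defaults__ or np.sin(2 * np.pi * 440 * t)
voice = np.sin(2 * np.pi * 440 * t)
calmer = naive_pitch_shift(voice, 0.8)
```

Because the toy version trades duration for pitch, the output is longer and oscillates more slowly per sample — which is exactly why production systems do something smarter.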

Xirup,
@Xirup@yiffit.net avatar

In my country, 99% of the time you contact technical support, a poorly made bot responds (actually, it is a while loop) with ambiguous, pre-written answers, and the only way to talk to a human is to go to the place in question directly, so nothing to worry about here.

Kissaki,

So what you’re saying is that we need an AI interface in-store as well? /s

kibiz0r,

Interacting with people whose tone doesn’t match their words may induce anxiety as well.

Have they actually proven this is a good idea, or is this a “so preoccupied with whether or not they could” scenario?

sabreW4K3,
@sabreW4K3@lazysoci.al avatar

It’s probably the Jurassic Park effect
