Interacting with people whose tone doesn’t match their words may induce anxiety as well.
Have they actually proven this is a good idea, or is this a “so preoccupied with whether or not they could” scenario?
The biggest problem I see with this is the scenario where calls are recorded. They’re recorded in case we hit a “he said, she said” dispute. If an issue were ever escalated as far as a courtroom, the value of the recording to the business would be greatly diminished.
Even if the words the call agent gets are 100% verbatim, a lawyer can easily argue that a significant part of the message is in the tone of voice. If that’s lost and the agent misses a nuance of the customer’s intent, the customer will have a solid case against the business.
I see no problem: they can record the original call and process it live with AI for the operators. The recordings would keep the original audio.
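
To sketch what that could look like (purely hypothetical: tee_call_audio, archive, and neutralize_tone are names I’m making up, and the real-time transport is reduced to an iterator of byte chunks standing in for PCM audio):

```python
# Minimal sketch of "record the raw audio, let the agent hear the filtered copy".
# neutralize_tone() is a stand-in for whatever speech model would flatten the
# caller's intonation; nothing here is a real vendor API.

from typing import Callable, Iterable, Iterator


def tee_call_audio(
    caller_chunks: Iterable[bytes],
    archive: Callable[[bytes], None],          # writes the untouched audio to compliance storage
    neutralize_tone: Callable[[bytes], bytes],  # hypothetical AI pass that strips emotion for the agent
) -> Iterator[bytes]:
    """Yield the agent-facing (tone-filtered) stream while archiving the original."""
    for chunk in caller_chunks:
        archive(chunk)                 # the "he said, she said" record keeps the caller's real voice
        yield neutralize_tone(chunk)   # the agent only ever hears the processed copy


# Toy usage: archive into a list, "neutralize" is a no-op placeholder.
if __name__ == "__main__":
    recording: list[bytes] = []
    fake_call = (bytes([i]) * 4 for i in range(3))
    for chunk in tee_call_audio(fake_call, recording.append, lambda c: c):
        pass  # in reality this would be played to the operator
    assert b"".join(recording)  # the original audio is intact for any later dispute
```

The point of the split is just that the legal artifact and the operator experience don’t have to be the same stream.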
This is giving me Black Mirror vibes. Like when that lady’s consciousness got put into a teddy bear, and she only had two ways to express herself:
- Monkey wants a hug
- Monkey loves you
I get that you shouldn’t go off on customer service reps (the reason you’re angry is never their fault), but filtering out the emotion/intonation in your voice is a bridge too far.
Most of the time angry customers don’t even understand what they’re angry at. They’ll 180 in a heartbeat if the agent can identify the actual issue. I agree, this is unnecessary.
Based on my experience working in a call center, I wouldn’t call it unnecessary. People are fucked up.
I did phones in a different century, so I don’t know whether this would fly today. But, my go-to for someone like this was “ok, I think I see the problem here. Shall we go ahead and fix it or do you need to do more yelling first?”
I can’t remember that line ever not shutting them down instantly. I never took it personally; whatever they had going on, they were never angry at me specifically.
Then again, I do remember firing a couple of customers (“we don’t want your business any more etc”) after I later became a manager and people were abusive to staff. So you could be right, also.
I think I get what the article is saying, but all I can imagine is Siri calmly reading to me the vilest insults ever written.
If they’re going to do this, then customers can get support via text messaging, right? They’re not going to have to call in and talk to a computer that turns their voice into text for an agent, right?
This isn’t about asymmetrically wasting the time of the customer so they don’t call support at all, right?