According to António Pombeiro, Deputy Secretary-General of the Internal Administration, who spoke to journalists on 20 June in Porto, “if the pilot project goes well, we are prepared to start using the system to answer calls as of 2025.”
He cautioned that this is “a very recent technology” and that there is a “need to do many tests”, admitting that for now “we are very much in the unknown”, so the operation of the pilot project will be key.
“In certain situations we have waiting times due to the sheer volume of calls. This happens when there are incidents that attract a lot of publicity, with many people watching what is happening, and everyone takes the initiative to call 112”, said António Pombeiro, giving the example of urban fires.
I’m actually working on an LLM-augmented “AI” call center application right now, so I’ve got a bit of experience in this.
Before anyone starts doomsaying, keep in mind that when you narrow the goal and focus of a machine learning model, it gets dramatically better at the job. Way better at that narrow job than people.
ChatGPT on its own has a massive scope, and that flexibility means it’s going to do the bad things we know it to do. That’s why ChatGPT sucks.
But build an LLM focused on managing a call center that handles just one topic. That’s what’s going on virtually everywhere right now. This article gets that “based on ChatGPT” in for clicks and fearmongering.
What should I change the title to? “Based on LLM” or “Based on Machine Learning”?
Are these systems able to recognize when they don’t know something, or when to contact a human? All the focused training in the world won’t help if the machine still confabulates answers, and so far, AFAIK, that’s a flaw in all the models.
Absolutely, 100%. We aren’t just plugging in an LLM and letting it handle calls willy-nilly. We’re telling it, like a robot, exactly what to do, and the LLM only comes into play when it’s interpreting the intent of the person on the phone within the conversation they’re having.
So, for instance, as we develop this for our end users, we’re building out functionality in pieces. For each piece we know we can’t handle (yet), we “escalate” the call to a real person at the call center. As we develop more, these escalations get fewer; however, some cases will always escalate. For instance, if the user says “let me speak to a person” or something to that effect, we escalate right away. There’s a rough sketch of that routing rule below.
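To make that concrete, here’s a minimal sketch of the escalation rule, under the assumption that speech has already been classified into an intent label. All the intent names and sets are made up for illustration; none of this is from our actual product.

```python
# Made-up intent names; IMPLEMENTED_INTENTS grows as each piece is built out.
IMPLEMENTED_INTENTS = {"check_balance", "update_address"}

# Intents that are never auto-handled, no matter how much we build.
ALWAYS_ESCALATE = {"speak_to_human", "unknown"}

def route(intent: str) -> str:
    """Return 'handle' or 'escalate' for a classified intent."""
    if intent in ALWAYS_ESCALATE:
        return "escalate"   # e.g. the caller said "let me speak to a person"
    if intent not in IMPLEMENTED_INTENTS:
        return "escalate"   # feature not built yet; a human takes over
    return "handle"

print(route("speak_to_human"))  # -> escalate
print(route("check_balance"))   # -> handle
```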
As for the things the LLM can actually do against a user’s data, those are hard-coded actions we control; it didn’t come up with them, and it doesn’t decide to run them, we do. It isn’t Skynet, and it isn’t close, either.
The LLM’s actual functional use is limited to just understanding the intent of the user’s speech; that’s all. That’s how it’s being used all over, to great results. Roughly like the sketch below.
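Here’s a rough, self-contained sketch of that whole flow. `classify_with_llm` is a hypothetical stub standing in for the real model call, and every name here is made up, but the shape is the point: the model only picks a label from a closed list, and everything it can *do* is a handler we wrote by hand.

```python
from typing import Callable

# Closed set of labels the model is allowed to pick from. Anything else
# is coerced to "unknown", so the model can't invent new actions.
INTENT_LABELS = {"check_balance", "update_address", "speak_to_human"}

def classify_with_llm(utterance: str) -> str:
    """Hypothetical stand-in for the real model call. In practice the
    prompt tells the model to reply with exactly one label from
    INTENT_LABELS; here the reply is hard-wired so the sketch runs as-is."""
    reply = "speak_to_human"
    return reply if reply in INTENT_LABELS else "unknown"

def check_balance() -> str:
    # A hard-coded action we wrote; the model never generates this.
    return "Your balance is ..."

HANDLERS: dict[str, Callable[[], str]] = {
    "check_balance": check_balance,
    # update_address, etc. get added as each piece is built out
}

def answer(utterance: str) -> str:
    intent = classify_with_llm(utterance)
    handler = HANDLERS.get(intent)
    if handler is None:
        return "escalating to a human agent"  # covers speak_to_human and unknown
    return handler()

print(answer("let me speak to a person"))  # -> escalating to a human agent
```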
I’m not calling customer service unless I need a human, so the automated assistants are a huge waste of my time.
The biggest problem I have with these systems is when the companies using them force you to use them, especially on the phone. They can have big accessibility barriers, and it’s really frustrating when they don’t have a “let me talk to a human” option. More and more companies are using these things without offering that, and it’s a genuinely horrible experience for me, every time.