
AI at Simwood

Simon Woodhead

7th April 2025

I attended a conference recently where AI was everywhere. Seemingly every carrier and reseller there was working on AI yet, as the CEO of Gamma succinctly summed it up on a panel – “nobody knows what it is” – for which I respect him enormously. Someone less honest, by contrast, made up hysterical variants of STIR/SHAKEN on stage (apparently if you use them you can carry all your KYC documentation in the SIP packet to give your customers’ confidential data to your suppliers – genius!!) – the same cretin who invented a variant of the Simwood Potato (his still doesn’t exist) on stage at another event last year. Another did just enough to align themselves with the future of AI without providing any solutions to the problems they were presenting, i.e. a completely pointless presentation. It was a really good and valuable event though, with many educated and sensible speakers, although I do find myself increasingly wishing for a big button that somehow ejects some of them from the stage, violently in one case.

Anyway, now that we’ve come out about our Conversational AI, I wanted to share what else we’ve actually been doing here in the real world, for those customers and peers who might be genuinely mystified as to where to start despite recognising the inevitability and opportunity of AI. I suspect the Gamma CEO is bang on, and I hope sharing some of our learnings will help.

Like any buzzword, ‘AI’ conjures up images of a magic black box that can solve any problem. You don’t need to define the problem, or even know what you’re asking it to do, just deploy some AI and magic will happen. Of course, that is not the case! For most of the problems we need to solve we won’t be investing in training our own AI ‘model’; we’ll be using one off the shelf. Most likely this will be a Large Language Model (LLM) – trained to have conversations about knowledge it is given – something like ChatGPT perhaps. Personally, I use Grok 3 from xAI, which is an LLM with a few extra tricks up its sleeve and very, very smart.

If you’ve used an LLM you’ll have discovered they’re amazing for quick facts and often a replacement for a search engine (especially Grok 3, which pulls live information from X rather than being frozen in time at whenever it was created, like ChatGPT), but they can do so much more. My first eye-opener was when I was running a strategy session for a Board I chair and had all members do a SWOT analysis. Of course, they all did it in completely different formats, from bullet points through to 30-page verbose papers. Sitting in the car waiting for my daughter, I chucked the whole lot into an LLM (ChatGPT at that time), directing it to summarise the commonalities. It did it amazingly, which saved me hours of work. It even had knowledge of the organisation and the competitors, so was able to come up with its own SWOT analysis and propose initiatives – these were scarily close to the output of the day of human discussion which followed. This kind of use of an LLM is something anyone with a web browser can do, and I find it to be far, far more capable than anyone imagines, i.e. the limit is in knowing how powerful it is. I’ve subsequently done complicated investment and tax models, so much more easily than using a spreadsheet, with it doing the iteration to solve for a value!

In another example, I have a very complicated decision tree for my home automation – switching loads on or off according to the sun, buying or selling electricity, all depending on numerous factors such as what the sun and the power price are doing later. It works OK but isn’t perfect. As a test I pasted the code into an LLM, along with some of the ‘ah buts’ that are hard to capture in normal code, and instructed it to accept input in JSON format as well as give me JSON out. Boom, this one conversation became a magic box that could perform complex logic on multiple inputs and give output in a machine-readable form.
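The pattern is simple enough to sketch. Everything below is hypothetical – the prompt, the field names and the `ask_llm()` function are stand-ins for whichever model and client you actually use – but it shows the shape of turning a conversation into a machine-readable decision service:

```python
import json

# A minimal sketch of the JSON-in/JSON-out contract described above.
# The prompt pins the model to a strict schema so the conversation
# behaves like a decision service rather than free-form chat.
SYSTEM_PROMPT = """You control home energy loads.
Input is JSON: {"solar_w": int, "price_p_per_kwh": float, "battery_pct": int}.
Reply ONLY with JSON: {"action": "buy"|"sell"|"hold", "reason": str}."""

def decide(state: dict, ask_llm) -> dict:
    """Send the current state to the model and parse its JSON reply.
    `ask_llm` is whatever client you use (xAI, OpenAI, ...)."""
    reply = ask_llm(SYSTEM_PROMPT, json.dumps(state))
    return json.loads(reply)  # raises ValueError if the model broke the contract

# A stub standing in for the real model, so the plumbing can be exercised:
def fake_llm(system_prompt, user_message):
    state = json.loads(user_message)
    action = "sell" if state["price_p_per_kwh"] > 25 else "hold"
    return json.dumps({"action": action, "reason": "price threshold"})

result = decide({"solar_w": 3200, "price_p_per_kwh": 31.0, "battery_pct": 90}, fake_llm)
```

The important part is insisting on JSON both ways and treating a malformed reply as an error; the ‘ah buts’ live in the prompt rather than in branching code.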

There are also innumerable SaaS tools wrapping up this kind of capability, as well as other models such as image generation, in a web interface. Need a custom image or some McKinsey-style data visualisations? There are tools out there to help. They can equally clean up video or audio.

LLMs are also really good at coding and can easily replace a junior developer. You can literally create applications, websites, etc. in minutes. However, the chat-style interface doesn’t lend itself to anything complicated, which is why there’s a plethora of tools to handle multiple files and deep dependencies. While I found the market leader (Cursor) to be pretty crap, we’ve now moved over to an awesome one. At this stage just Charles our CTO and myself are working this way; we haven’t extended it to the wider dev team yet but will. Don’t think they do all the work for you – they’re not perfect, and while they can make a senior developer 100x more productive, it is a tiring workflow and needs the senior developer to supervise. This is pretty tiring, as 100x more productive means supervising 100x the work. They will not turn a non- or junior-dev into a senior developer unless the requirement is really simple, but they can be really helpful for spot challenges like fixing a bug or writing a SQL query. They often forget state, which several hours in can be tedious, and they will hallucinate and delete half your code, which is really frustrating. But the productivity gain is immense! There’s already a word for having AI write your code, fix its own bugs, all under human guidance – vibing.

When it comes to workflow, it is really tempting to use them in a decision-making capacity, like I mentioned above with my electricity at home. Use cases for us here include our work on porting automation but, in all honesty, we haven’t done so yet. When one considers the work involved in extracting, say, a date from an Excel sheet, with all the potential for it to be formatted differently or to simply contain garbage, an LLM is a natural fit but arguably also a sledge-hammer to crack a nut where the data is even vaguely structured. Answering a question like “is this emailer frustrated?” is far harder to achieve in code and better justifies the overhead of an LLM, where the application requires it.
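To illustrate the sledge-hammer point: where the data is even vaguely structured, a few lines of ordinary code often suffice. This is purely illustrative – the format list and function name are invented – but trying a handful of likely date formats and flagging anything else for a human is frequently all that’s needed:

```python
from datetime import datetime

# Plain code handles the "vaguely structured" case without an LLM:
# try the date formats people actually use, and flag the garbage.
FORMATS = ["%d/%m/%Y", "%d-%m-%Y", "%Y-%m-%d", "%d %b %Y", "%d %B %Y"]

def parse_port_date(cell: str):
    """Return a date parsed from a spreadsheet cell, or None for garbage."""
    cell = cell.strip()
    for fmt in FORMATS:
        try:
            return datetime.strptime(cell, fmt).date()
        except ValueError:
            continue
    return None  # garbage in -> flag for a human (or an LLM) to look at

parse_port_date("07/04/2025")    # a recognised format parses cleanly
parse_port_date("next Tuesday")  # anything else returns None
```

Only the residue that plain code rejects needs anything smarter, which keeps the expensive tool for the genuinely fuzzy cases.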

However, AI has been a huge part of some of our workflow work so far. Take our work on Nuisance Calls. Not only has the code to analyse them been written by AI, not only has internal tooling been written by AI, but AI has helped shape the algorithm. Faced with hundreds of data points for a given call, one might assume that we’d train an AI to classify calls. There’s expense and potential privacy concerns there, and we also come back to the ‘sledge-hammer to crack a nut’ scenario. Instead, what we did was pull off some sample data and use it to train the LLM in classification, before working through other examples to capture edge cases. Where it got it wrong, we were able to explain why, which refined its understanding. Rather than having it then output code to classify as we’d taught it, we got it to create a scoring matrix, giving us maximum flexibility and a very lightweight architecture we could represent in code (which it wrote). That conversation persists in time too, so when we come across a new edge case, we can throw it in there and have it refine the scoring matrix. Doing this by hand would have been a massive job, and it is pretty hard to maintain hundreds of examples and the merits of each in one’s head. Some might conclude this is why we’ve gone from a standing start to taking 4% of calls off the network in less than a month, and you wouldn’t be wrong!
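To give a flavour of what a scoring matrix means in practice – and to be clear, the signals, weights and threshold below are invented for illustration and are emphatically not our actual matrix – each observable about a call contributes a weight, and the total determines the classification:

```python
# A hypothetical, much-simplified scoring matrix of the kind described.
# Each signal observed on a call contributes a weight; the total is
# compared against a threshold. Refining the classifier is just a matter
# of adjusting weights, not rewriting logic.
SCORING_MATRIX = {
    "short_duration": 3,           # e.g. calls lasting only a few seconds
    "high_call_rate": 4,           # many calls per minute from one origin
    "sequential_destinations": 5,  # dialling numbers in numeric order
    "answered_conversation": -2,   # genuine conversations lower the score
}
NUISANCE_THRESHOLD = 6

def score_call(signals: set) -> int:
    """Sum the weights of every signal present on the call."""
    return sum(w for name, w in SCORING_MATRIX.items() if name in signals)

def is_nuisance(signals: set) -> bool:
    return score_call(signals) >= NUISANCE_THRESHOLD
```

The appeal of this shape is exactly what the paragraph above describes: when a new edge case turns up, the fix is a weight tweak the LLM can propose from the conversation history, and the runtime code stays trivially simple and fast.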

Now, with Conversational AI, we’re exploring what can be achieved by opening up the two-way voice channel, with two-way data and interaction with internal services. Customer ticket updates? Maybe processing porting activations for those porting partners who insist on phoning up? Maybe completing KYC paperwork? The potential to scale any internal operation – running it around the clock and more cost-effectively, supporting human colleagues – is immense.

We’re only just scratching the surface but I suspect, as usual, we are light years ahead of those who are talking about it! If we can help you on your journey, or you’re way ahead and we’re doing something dumb, do please reach out.
