When You Don’t Need AI - Just Maths & Statistics

Richard Brown

2 April 2025 - 8 min read

In the rush towards AI and machine learning, it’s easy to forget that many business problems can be solved – often more transparently and robustly – with traditional mathematical and statistical techniques. 

Organisations, particularly those with mature analytics teams, often find that “simpler is better” for a range of use cases. This article highlights examples where statistical models or mathematical techniques can provide appropriate solutions in place of complex AI.

Examples:

  • Time-series forecasting: 

Many organisations need to forecast things like sales, demand, or budgets. Classical statistical models (ARIMA, exponential smoothing, Holt-Winters) often perform as well as or better than machine learning models on these tasks when data is limited or seasonal patterns are strong. 

For example, in retail, a simple seasonal ARIMA model can predict weekly store sales, offering a fast alternative to an AI model that is also easier to explain to stakeholders and to update regularly.

In this instance, complex ML (like an LSTM neural network) might need far more data and could potentially still struggle with holiday effects that a human can manually adjust in a simpler model.
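To make this concrete, a Holt-Winters (exponential smoothing) forecast of the kind described above can be fitted in a few lines. The sketch below uses the statsmodels library on synthetic weekly sales data; the figures and the 52-week seasonal period are purely illustrative.

```python
# A minimal sketch of classical time-series forecasting with Holt-Winters
# (exponential smoothing). The sales figures here are synthetic, for illustration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Three years of weekly sales: trend + yearly seasonality + noise
rng = np.random.default_rng(0)
weeks = pd.date_range("2022-01-03", periods=156, freq="W-MON")
trend = np.linspace(1000, 1200, len(weeks))
season = 150 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52)
sales = pd.Series(trend + season + rng.normal(0, 30, len(weeks)), index=weeks)

# Additive Holt-Winters with a 52-week seasonal cycle
model = ExponentialSmoothing(
    sales, trend="add", seasonal="add", seasonal_periods=52
).fit()

# Forecast the next 12 weeks
forecast = model.forecast(12)
print(forecast.round(0))
```

A seasonal ARIMA model (for example, statsmodels' SARIMAX) could be swapped in with similar effort; either way, the fitted components are straightforward to explain to stakeholders.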

  • Fraud detection: 

While AI (such as deep learning) is used in fraud detection, many fraud rules in banking and insurance are essentially mathematical thresholds and if-else logic derived from statistical analysis.

For example, a UK bank might use a logistic regression (a statistical model) to weigh factors for credit card fraud – this might catch 90% of fraud cases with a straightforward formula. 

More complex ML might only marginally improve on that, and could introduce false positives that are harder to debug. One utility company executive noted, regarding anomaly detection on the grid: “You don’t need AI to get the information you need… It’s basic signal processing, control theory, statistics, nothing really crazy.” In other words, well-established statistical techniques (such as control charts or spectral analysis) can detect anomalies in sensor data effectively, without ML.
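As a sketch of what such a statistical scoring model looks like in code, the snippet below fits a logistic regression to a few illustrative transaction features. The features, thresholds, and synthetic data are assumptions for illustration, not a real fraud model.

```python
# A minimal sketch of a logistic-regression fraud score on illustrative features.
# Feature names and the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Illustrative features: transaction amount, distance from home, hour of day
X = np.column_stack([
    rng.exponential(50, n),    # amount (GBP)
    rng.exponential(10, n),    # distance from home address (miles)
    rng.integers(0, 24, n),    # hour of day
])
# Synthetic labels: fraud is more likely for large, distant, late-night transactions
risk = 0.01 * X[:, 0] + 0.05 * X[:, 1] + 0.1 * (X[:, 2] >= 22)
y = (risk + rng.normal(0, 0.5, n) > 2.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The coefficients give a transparent, explainable weighting of each factor
for name, coef in zip(["amount", "distance", "hour"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Score a new transaction: £400 spent 200 miles from home at 23:00
print("fraud probability:", model.predict_proba([[400, 200, 23]])[0, 1])
```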

  • Inventory and supply chain optimisation: 

These often rely on operations research (linear programming, optimisation techniques) and statistical demand distributions. 

For example, a manufacturing organisation might improve its supply chain by using a linear programming model to optimise production schedules and inventory – essentially a set of mathematical equations. Attempts to use ML to dynamically “learn” the best schedule can be less effective than an OR model that is grounded in known constraints and costs. Similarly, inventory decisions often use formulas derived from statistical safety stock theory (for example, demand variability multiplied by a service factor). These are not AI, but they work and are interpretable to planners.
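To illustrate, the sketch below solves a toy production-scheduling problem with scipy's linear programming routine and computes a classical safety stock figure. The products, costs, capacities, and demand figures are all invented for illustration.

```python
# A minimal linear-programming sketch: choose production quantities for two
# products to minimise cost subject to capacity and demand constraints.
# Costs, capacities, and demands are invented for illustration only.
import math
from scipy.optimize import linprog

# Decision variables: units of product A and product B to produce this week
cost = [4.0, 6.0]                 # unit production cost

# Machine-hours constraint: 2h per unit of A, 3h per unit of B, 600h available
A_ub = [[2.0, 3.0]]
b_ub = [600.0]

# Demand must be met: produce at least 120 of A and 80 of B
bounds = [(120, None), (80, None)]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal plan:", result.x, "total cost:", result.fun)

# Classical safety stock: service factor (z) times demand standard deviation
# over the lead time (values illustrative)
z, sigma_weekly, lead_time_weeks = 1.65, 40.0, 2
print("safety stock:", round(z * sigma_weekly * math.sqrt(lead_time_weeks)))
```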

  • Customer segmentation and marketing: 

Often a simple RFM (Recency, Frequency, Monetary) analysis – a statistical scoring of customers – can segment customers for targeting just as well as a complex clustering algorithm. 

For example, retail organisations might set out to use advanced clustering (k-means, etc.) on their customer base, but find that a few well-chosen features and thresholds give segments that marketing managers understand and can act on (a “high spend, lapsed six months” segment, for instance). Too much algorithmic complexity can yield segments that are hard to label or understand, which hurts adoption by the business.
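A minimal RFM scoring sketch might look like the following, using pandas quintile scores and a couple of hand-written segment rules. The customer data and segment definitions are illustrative assumptions.

```python
# A minimal RFM sketch: score customers 1-5 on recency, frequency and monetary
# value using quintiles, then label simple segments. Data is illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
customers = pd.DataFrame({
    "days_since_last_order": rng.integers(1, 365, 1000),
    "orders_last_year": rng.poisson(4, 1000) + 1,
    "total_spend": rng.gamma(2.0, 150.0, 1000),
})

# Quintile scores: recent purchases score high on R; high frequency/spend score high on F/M
customers["R"] = pd.qcut(customers["days_since_last_order"], 5, labels=[5, 4, 3, 2, 1]).astype(int)
customers["F"] = pd.qcut(customers["orders_last_year"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
customers["M"] = pd.qcut(customers["total_spend"], 5, labels=[1, 2, 3, 4, 5]).astype(int)

# Simple, explainable segments that marketing can act on
def segment(row):
    if row.R >= 4 and row.F >= 4 and row.M >= 4:
        return "best customers"
    if row.R <= 2 and row.M >= 4:
        return "high spend, lapsed"
    return "everyone else"

customers["segment"] = customers.apply(segment, axis=1)
print(customers["segment"].value_counts())
```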

  • Quality control: 

Basic statistical process control (SPC) charts, which date back decades, are still fundamental in factories to detect when a process is out of control. 

They rely on simple statistical rules (e.g., 3-sigma limits). While AI-based computer vision might inspect products for defects (advanced use case), the overall monitoring of process variation still heavily uses statistics.
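A basic 3-sigma control check is only a few lines of code. The sketch below computes control limits from an in-control baseline and flags new measurements that fall outside them; the measurements are synthetic.

```python
# A minimal sketch of a Shewhart-style control check: flag points outside
# 3-sigma limits computed from an in-control baseline. Data is illustrative.
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.normal(10.0, 0.2, 200)          # in-control measurements
centre, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

new_measurements = np.array([10.1, 9.9, 10.0, 10.9, 10.2])  # 10.9 drifts high
for i, x in enumerate(new_measurements):
    if x > ucl or x < lcl:
        print(f"sample {i}: {x} is out of control (limits {lcl:.2f}-{ucl:.2f})")
```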

Why simpler models often suffice or excel: 

  1. Data volume & quality

Many enterprise problems lack big data. A machine learning model often needs a lot of data to outperform simpler models. If you only have, say, three years of monthly data (36 points) to forecast something, a deep learning model will struggle to beat a well-tuned exponential smoothing model.

  2. Transparency and trust

Linear regression and other statistical models provide coefficients and clear relationships that stakeholders trust. In contrast, a black-box AI model might be met with scepticism by regulators or executives.

For example, financial services firms often prefer “explainable” logistic regression models for credit risk due to regulatory expectations, even if a black-box AI could result in a slightly better prediction. 

  3. Cost and speed

Developing, testing, and deploying a complex AI solution can be resource-intensive. If a simpler analytical approach can achieve the business objective, it is usually cheaper and faster to implement.

A multiple regression model does not need a full data science team to maintain, whereas a neural network might.

Example 1 - Retail: 

A supermarket chain was considering machine learning to forecast product demand in each store. After trials, the data science team found that a relatively basic method (seasonal decomposition and linear regression with events like holidays) predicted demand as accurately as a gradient boosted trees model, with the added advantage that store managers understood the factors (they could see “last year’s sales + trend + holiday uplift” etc.). They chose to implement the simpler model company-wide, and reserved AI efforts for other areas like optimising personalised offers. 
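A rough sketch of that kind of approach is shown below: decompose the series to extract a seasonal profile, then regress sales on trend, seasonality, and a holiday flag. The data, holiday definition, and uplift figures are invented for illustration and are not the retailer's actual model.

```python
# A minimal sketch of seasonal decomposition plus a linear regression with a
# holiday indicator. All figures are synthetic and illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(4)
weeks = pd.date_range("2022-01-03", periods=156, freq="W-MON")
holiday = ((np.arange(156) % 52) >= 50).astype(int)   # last two weeks of each year
sales = (
    1000 + 2 * np.arange(156)                         # trend
    + 100 * np.sin(2 * np.pi * np.arange(156) / 52)   # seasonality
    + 300 * holiday                                   # holiday uplift
    + rng.normal(0, 25, 156)
)
series = pd.Series(sales, index=weeks)

# Decompose to extract a reusable seasonal profile
decomp = seasonal_decompose(series, period=52)
seasonal = decomp.seasonal.to_numpy()

# Regress sales on trend, the seasonal component and the holiday flag
X = np.column_stack([np.arange(156), seasonal, holiday])
model = LinearRegression().fit(X, sales)
print("trend per week, seasonal weight, holiday uplift:", model.coef_.round(1))
```

Each coefficient maps directly to something a store manager recognises: last year's pattern, the underlying trend, and the holiday uplift.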

Example 2 - Energy:

An energy utility company implemented an AI system for predictive maintenance on turbines, but found that it was flagging too many false positives. They went back to a physics-based statistical model that utilised vibration sensor thresholds determined by engineers; while maybe slightly less “sensitive,” it produced alerts that field engineers trusted (because it correlated with known failure modes). The AI system was then repurposed to learn from the statistical model outputs, effectively working as a supplement rather than the primary driver.

Example 3 - Finance

Banks often layer approaches, using business rules and simple models as the first line (fast, interpretable), and then a secondary AI model for the cases that slip through or for additional scoring. For example, one bank’s fraud workflow first applies a set of rules (such as “transaction far from home and high amount” triggering a red flag) – those rules alone catch a majority of fraud.
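The layering described above might be structured along the lines of the sketch below, where cheap, interpretable rules run first and a model scores only what slips through. The rule thresholds and the placeholder scoring function are assumptions for illustration.

```python
# A minimal sketch of a layered decision flow: interpretable rules first,
# a model score second. Thresholds and the scoring stub are illustrative.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_gbp: float
    miles_from_home: float
    hour: int

def rule_based_flag(tx: Transaction) -> bool:
    """First line: simple, explainable red-flag rules."""
    if tx.amount_gbp > 1000 and tx.miles_from_home > 100:
        return True   # high amount, far from home
    if tx.hour < 5 and tx.amount_gbp > 500:
        return True   # large late-night transaction
    return False

def model_score(tx: Transaction) -> float:
    """Second line: placeholder for a trained model's fraud probability."""
    return 0.02  # in practice this would call e.g. a fitted logistic regression

def review(tx: Transaction) -> str:
    if rule_based_flag(tx):
        return "blocked by rules"
    return "flagged for review" if model_score(tx) > 0.5 else "approved"

print(review(Transaction(amount_gbp=1500, miles_from_home=250, hour=14)))
print(review(Transaction(amount_gbp=40, miles_from_home=2, hour=10)))
```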

The key takeaway is not to overlook the power of basic analytics. Approach incrementally: use basic methods first, prove value, then gradually layer on more complexity.

Teams can start by getting the fundamentals right with maths and statistics, getting people used to data-driven decision-making with interpretable methods, and then consider adding AI complexity where they see it adding value.

Knowing when you don’t need AI: 

Not every problem requires AI. Some questions to consider: 

  • Can a set of straightforward rules or a formula solve this problem to an acceptable level? 
  • Do we fully understand the domain (if yes, a model based on that understanding may suffice; AI is more useful when patterns are too complex to articulate)? 
  • Is the additional accuracy from an AI model worth the loss of interpretability or increased maintenance? 

Often, the marginal gain is debatable. For example, in marketing, a simple uplift model might identify target customers for a campaign with 80% accuracy. A complex ML model might push that to 82%, but if it’s costly and people don’t trust it, the simpler approach might yield better overall results (because it gets implemented properly and acted upon).

Statistics as the backbone of AI: 

It’s also worth noting that AI/ML is fundamentally built on statistical principles. A neural network is effectively doing sophisticated statistics (just non-linear). 
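A tiny example makes the point: a single “neuron” with a sigmoid activation computes exactly the same function as logistic regression.

```python
# A one-neuron "network" is logistic regression: sigmoid of a weighted sum.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.array([0.8, -1.2])   # the regression coefficients
bias = 0.5                        # the intercept
x = np.array([2.0, 1.0])          # a single observation

print(sigmoid(weights @ x + bias))  # the predicted probability
```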

Many solutions branded as AI might be solvable with simpler statistical models or even basic algebra. In some cases, organisations have realised that they could achieve the same outcomes with if-else logic or linear regression that they initially attempted with AI.

This isn't to dismiss AI – there are certainly problems where AI is necessary (image recognition, natural language processing, very high-dimensional patterns etc.). But in some cases, enterprise data is structured and aggregated, which can make it suitable for simpler methods.

For example, in supply chain optimisation, linear programming (LP) and mixed-integer optimisation are tried-and-true techniques. Many scheduling, routing, and allocation problems are solved with these (or heuristic algorithms) rather than ML. There is a trend of reinforcement learning being applied to some operations problems, but such approaches can sometimes struggle to beat well-tuned OR algorithms, especially when constraints are hard (e.g. production capacities, shift schedules), which OR handles efficiently.

The role of domain knowledge: 

Domain knowledge also plays a role. Often, an experienced analyst or engineer can craft a simple model leveraging deep domain knowledge that outperforms a generic ML model that doesn’t incorporate that knowledge. For example, an actuary might incorporate known mortality tables and trends to forecast insurance claims – a machine learning model starting from scratch would have to “rediscover” those well-known patterns from large amounts of data.

Conclusion: 

When looking to solve these problems, start small. By doing so, organisations can build fundamental analytical skills and understanding. Simpler solutions can also be easier to deploy within existing data infrastructure and often easier to integrate into decision processes (people trust what they understand). This doesn’t mean avoiding AI – it means applying AI where it truly adds value that simpler analytics cannot.


Richard Brown is the Technical Director at Audacia, where he is responsible for steering the technical direction of the company and maintaining standards across development and testing.