Applying AI across the Software Development Lifecycle (SDLC)

Adam Brookes

20 May 2025 - 9 min read

AI, in particular Generative AI, is becoming increasingly prevalent across the engineering process. From GitHub Copilot accelerating code delivery to AIOps predicting outages before they occur, software development teams are racing to embed AI across the SDLC. Yet alongside 20–30% productivity gains come new governance challenges, skills gaps and ethical questions. This article maps the opportunities and the risks of implementing AI across engineering delivery processes.

Summary

  • AI is reshaping every SDLC stage

Gartner predicts that by 2027, 70% of software engineering leaders will oversee Generative AI projects. Early adopters report 20–30% delivery-speed gains.

  • Roles are evolving

Demand is rising for prompt-engineers, MLOps specialists and AI governance leads; traditional coding shifts toward system design, data curation and AI oversight.

  • Automation stretches from Dev to Ops

AI-generated tests, self-healing infrastructure and predictive incident response (AIOps) can cut support toil by a third, says McKinsey.

  • Risk & compliance matter

UK/EU regulations, IP exposure and model bias require robust policies – private models, zero-trust data handling and ethics checkpoints.

  • A strategy is important

Pilot now, upskill teams, update hiring criteria, invest in data infrastructure and set AI-usage guardrails to outpace slower movers.

The State of AI in Software Engineering

AI – particularly Generative AI – is reshaping software engineering. AI copilots can generate code, tests and even architecture suggestions, lifting developer productivity to a new level.

Gartner predicts that by 2027, 70% of software engineering leader roles will explicitly require oversight of generative AI projects (up from <40% today). Organisations across industries are already experimenting with AI-powered development, using tools such as GitHub Copilot for coding, ChatGPT for documentation, and ML-driven analytics for decision-making.

IT leaders are now preparing their organisations to leverage these AI capabilities, which could significantly accelerate delivery (some studies show up to 20–30% productivity gains) while also managing new risks, such as AI-generated errors or security vulnerabilities.

The future will see greater demand for skills in prompt engineering to guide AI tools, data engineering to feed AI models, and AI governance to ensure ethical, compliant AI usage. Traditional coding may take less time, shifting the focus to higher-level design, integration and training AI components.

AI-driven automation will extend beyond coding to testing, operations and beyond. In QA, AI can create and execute tests; in operations, AIOps can predict and auto-resolve incidents by analysing logs. This points to a more autonomous systems lifecycle – e.g. self-healing infrastructure and intelligent assistants in deployment.

However, the rise of AI brings strategic considerations:

  • how to govern AI usage (to prevent intellectual property or security risks from code suggestions, for example),
  • how to handle AI ethics (ensuring the AI systems engineers build are fair and compliant), and
  • how to leverage AI for competitive advantage.

Through a compliance lens, the UK’s regulatory environment (and the EU AI Act, whose obligations are now being phased in) will influence enterprise AI adoption. CTOs should collaborate with HR and legal on policies for AI use in development (e.g. what data can and can’t be fed into public AI tools), and consider building internal AI capabilities (private models fine-tuned on company code) for safety and customisation.

AI-Augmented Development:

The software development lifecycle is being transformed by AI, especially generative models (such as GPT-4) that can produce human-like text, including code. Developers can now use AI pair programmers such as GitHub Copilot and Amazon CodeWhisperer, which suggest lines or blocks of code as they type.

Early research and anecdotal evidence indicate significant productivity improvements – a widely cited experiment by GitHub found Copilot users could complete tasks ~55% faster. Even if one takes conservative numbers, that’s a huge efficiency gain at scale.

For large development teams, if each developer becomes, say, 20% more efficient thanks to AI assistance, that could equate to millions saved or many more features delivered per year. But beyond efficiency, AI can also improve quality by catching errors or offering better solutions (trained on vast codebases).

However, AI can sometimes generate insecure or incorrect code, because it predicts likely patterns rather than guaranteeing correctness. Oversight therefore remains crucial. Many organisations adopt a simple policy: AI can suggest, but human developers must review and test. This requires developers to be able to review AI output critically – a new angle on code review processes.
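
To make that concrete, here is a hypothetical illustration (the function names and schema are invented for this example): an AI-style suggestion that builds a SQL query by string concatenation, followed by the version a human reviewer should insist on.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # Typical raw suggestion: it runs, but concatenating user input into
    # SQL leaves the query open to injection.
    cursor = conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cursor.fetchall()

def find_user_reviewed(conn: sqlite3.Connection, username: str) -> list:
    # After human review: a parameterised query removes the injection risk.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```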

We also see AI being integrated in IDEs (Integrated Development Environments) to do things like explain code (useful for onboarding), convert one language to another, or even generate whole boilerplate modules from requirements. Microsoft’s research on “AI copilots for software engineers” suggests future IDEs will be conversational – devs might ask “hey IDE, create a data access class for customer records with these fields” and get a stub ready.
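
As a sketch of what such a generated stub might look like – the Customer fields and repository methods here are hypothetical, standing in for whatever the developer specifies in the prompt:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    # Hypothetical fields, standing in for whatever the prompt specifies.
    id: int
    name: str
    email: str

class CustomerRepository:
    """The kind of data access stub an AI assistant might generate."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def get(self, customer_id: int) -> Optional[Customer]:
        row = self.conn.execute(
            "SELECT id, name, email FROM customers WHERE id = ?",
            (customer_id,),
        ).fetchone()
        return Customer(*row) if row else None

    def add(self, customer: Customer) -> None:
        self.conn.execute(
            "INSERT INTO customers (id, name, email) VALUES (?, ?, ?)",
            (customer.id, customer.name, customer.email),
        )
        self.conn.commit()
```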

AI in DevOps and Ops (AIOps):

Beyond coding, AI is also making inroads into operations:

Incident Management:

AI can sift through monitoring data to detect anomalies faster, helping in predicting issues or quickly pinpointing root causes by correlating logs. Tools like Dynatrace or Splunk incorporate AI to highlight unusual patterns. Over time, it’s possible that an AI could learn what infrastructure behaviours precede a failure and alert the team earlier.
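
A rolling z-score over a metric stream is the simplest version of the statistical baselining such tools apply at far larger scale. A minimal sketch, with the window and threshold values chosen arbitrarily for illustration:

```python
import statistics

def detect_anomalies(values: list[float], window: int = 30,
                     threshold: float = 3.0) -> list[int]:
    """Flag indices where a metric deviates sharply from its recent baseline."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
        if abs(values[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies
```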

Automated Remediation:

Systems are being developed that, upon detecting a known pattern (for example a memory leak), can trigger automated responses (e.g. restart a service, clear a cache). In very advanced cases, AI could propose a code fix for a known bug, generating pull requests to fix vulnerabilities or memory leaks.
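
A minimal sketch of that remediation pattern, assuming a systemd-managed Linux host and the third-party psutil library; the memory threshold and service name are placeholders:

```python
import subprocess
import psutil  # third-party: pip install psutil

MEMORY_LIMIT_MB = 512  # placeholder threshold for a known leaky service

def remediate_if_leaking(pid: int, service_name: str) -> bool:
    """Restart a service when its resident memory crosses a known-leak threshold."""
    rss_mb = psutil.Process(pid).memory_info().rss / (1024 * 1024)
    if rss_mb > MEMORY_LIMIT_MB:
        # Assumes a systemd-managed host; real platforms would gate this
        # behind richer signals than a single threshold.
        subprocess.run(["systemctl", "restart", service_name], check=True)
        return True
    return False
```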

Capacity Planning:

AI can analyse usage trends to forecast when more capacity is needed or where costs can be optimised – valuable for cloud management, for example in tuning auto-scaling policies guided by predictive models.
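
As a toy version of such forecasting, a linear trend fitted to historical usage can flag when demand will cross a capacity threshold; real models would add seasonality and uncertainty estimates:

```python
import numpy as np

def forecast_usage(daily_usage: list[float], days_ahead: int) -> float:
    """Fit a linear trend to historical usage and extrapolate it forward."""
    x = np.arange(len(daily_usage))
    slope, intercept = np.polyfit(x, daily_usage, deg=1)
    return float(slope * (len(daily_usage) - 1 + days_ahead) + intercept)

# e.g. if forecast_usage(cpu_history, 30) exceeds 80% of provisioned
# capacity, raise a scaling recommendation ahead of time.
```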

Service Desk Automation:

Externally, many user support queries can be handled by AI chatbots. Internally, AI can assist developers by answering questions about internal systems – for example, an AI trained on your internal docs and code could answer questions such as “how does the payment service validate transactions?”.
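
The retrieval half of such an assistant can be sketched with simple TF-IDF similarity (using scikit-learn here as an illustrative choice); the selected passages would then be passed to a language model as grounding context:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_context(question: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Return the internal documents most relevant to a developer's question."""
    vectoriser = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(
        vectoriser.transform([question]), vectoriser.transform(docs)
    )[0]
    best = scores.argsort()[::-1][:top_k]
    # These passages would be supplied to a language model as grounding
    # context, alongside the original question.
    return [docs[i] for i in best]
```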

Workforce and Role Changes:

AI doesn’t eliminate the need for human engineers – rather, it changes what they focus on. Routine coding tasks may increasingly be handled by AI, while engineers spend more time on high-level problem solving, integrating components and fine-tuning AI outputs.

Roles likely to grow:

  • ML Engineers and Data Scientists: Many products will incorporate AI features such as recommendation engines, personalisation and predictive analytics. Dev teams will need embedded ML expertise to build or integrate these models.
  • Prompt Engineers: Writing effective prompts to get the best output from AI models is a new skill. Teams may also employ specialised testers who craft adversarial prompts to probe AI systems for biases or failures (a minimal prompt sketch follows this list).
  • Ethics and Policy Specialists: Ensuring AI usage complies with policies and, where the company develops AI-driven software, that it is fair and transparent as required by regulators or corporate values. The EU AI Act will likely require certain documentation and risk assessments that teams need to produce and review.
  • AI Product Managers: People who understand AI’s capabilities and limitations and can shape product features around them – bridging technical and business.
  • Security re-focus: AI opens new threat vectors (e.g. prompt injection attacks, data poisoning). Security teams will adapt to monitor and defend against these in AI-augmented systems.
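
As promised above, a minimal sketch of the prompt-engineering discipline: a constrained system role, an explicit output contract and clearly delimited input. The message structure follows the common chat-completion convention; the wording is illustrative, not a vetted production prompt.

```python
def build_review_prompt(code_snippet: str, language: str) -> list[dict]:
    """Assemble a structured prompt for a hypothetical AI code-review assistant."""
    system = (
        "You are a senior code reviewer. Report only concrete defects "
        "(bugs, security issues, licence concerns). If there are none, "
        "reply exactly 'LGTM'. Do not speculate beyond the code shown."
    )
    # Delimiting the input clearly reduces the risk of prompt injection
    # from text embedded in the code under review.
    user = (
        f"Review the {language} code between the markers.\n"
        f"BEGIN CODE\n{code_snippet}\nEND CODE"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```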

Regulatory and Ethical Landscape:

UK organisations will operate under evolving guidelines for AI. The UK government has signalled a pro-innovation approach but will likely align with some principles of EU regulations.

Key issues:

Data Privacy:

Developers must ensure no personal data is inadvertently fed into AI tools (for example, code snippets or logs containing personal data). Policies might restrict use of cloud AI for sensitive projects.
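
One lightweight control is to redact obvious personal data before anything leaves the organisation. A minimal sketch – the regex patterns here are illustrative and far from exhaustive; a production scrubber would use a vetted PII-detection library:

```python
import re

# Illustrative patterns only; a production scrubber would cover many
# more identifier types (names, addresses, account numbers, ...).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Mask obvious personal data before text leaves the organisation."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}_REDACTED>", text)
    return text

# e.g. run redact(log_excerpt) before pasting the excerpt into a public
# AI assistant.
```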

IP and Licensing:

If an AI suggests code that appeared in its training data under a particular licence, could there be legal exposure? Microsoft/GitHub faced a lawsuit around Copilot’s outputs potentially regurgitating licensed code. Companies might mitigate this by using only in-house trained models or by scanning AI output for direct matches to open-source code.
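
A toy version of that scanning idea: fingerprint sliding windows of normalised lines and check AI output for overlap with an index built the same way over known open-source code. Commercial scanners use far more robust fingerprinting; this only shows the shape of the check:

```python
import hashlib

def fingerprint(code: str, window: int = 8) -> set[str]:
    """Hash sliding windows of normalised lines for overlap checking."""
    lines = [line.strip() for line in code.splitlines() if line.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + window]).encode()).hexdigest()
        for i in range(max(1, len(lines) - window + 1))
    }

def flags_known_code(ai_output: str, oss_index: set[str]) -> bool:
    """True if any window of the AI output matches an indexed OSS snippet."""
    return bool(fingerprint(ai_output) & oss_index)
```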

AI Ethics:

If engineering teams build AI features (for example, an algorithm deciding credit risk), they need to avoid biases. The future of engineering includes working with compliance and ethics officers to validate models and outputs.

Skills Modernisation:

The UK has initiatives to address the AI skills gap. Companies should support their engineers in AI training (through courses or internal programmes).

Embracing AI in Strategy:

Teams should start incorporating AI into their roadmaps. This could include budgeting for AI tools, encouraging “AI champions”, or even establishing R&D-style teams to pilot and disseminate AI solutions.

Within this, organisations should consider where AI could create new business value – for example in predictive maintenance, advanced analytics or personalised customer service – and ensure the engineering organisation is ready to deliver it. This ties into resourcing and technical requirements: MLOps capabilities, data pipelines and so on.

Human Factor and Change Management:

To help with adoption, leaders can frame AI as augmenting, not replacing – much like DevOps automation freed ops from repetitive tasks, AI will free devs from boilerplate and allow more focus on creative design and solving complex problems.

Human oversight remains critical: AI is a tool, much as calculators didn’t eliminate mathematicians but changed their focus.

Organisations with collaborative engineering cultures can approach this positively through initiatives such as hackathons for engineers to try AI tools, or dedicated time and sandboxes where teams can experiment with AI on real problems.

Getting Started:

  1. Pilot, measure, scale. Start small, e.g. use Copilot on a non-critical repo, capture hard metrics, then roll out with evidence-based confidence.
  2. Write (and teach) AI policies. Clarify what code or data may enter public models, mandate human review of AI output, and audit for licensing conflicts.
  3. Invest in data foundations. High-quality, well-governed data lakes and MLOps pipelines are prerequisites for AI success, and harder to retrofit later.
  4. Upskill the workforce continuously. Blend AI fundamentals, prompt-engineering and ethical AI modules into learning paths for developers, QA and ops.
  5. Create an AI Centre of Excellence. Appoint an AI engineering lead, share best practices, and monitor the fast-moving regulatory and tooling landscape.

Adam is Head of Consulting at Audacia, specialising in delivering advice and strategic roadmaps for the delivery of technology projects across engineering, data, AI and cloud.