About the position
Role summary
This is a hands-on engineering role focused on building real, deployable AI solutions rather than experimentation or isolated prototypes. The engineer operates across the full lifecycle of AI delivery, from data preparation and knowledge engineering to LLM orchestration and application development.
The role requires the ability to work across multiple platforms, including cloud-native environments with an AWS focus and some Azure exposure, and integrate AI capabilities into secure, scalable, and reusable systems. The engineer is expected to contribute to both rapid delivery and the establishment of repeatable engineering patterns.
Key responsibilities
Full stack AI solution development
- Design and develop end-to-end AI applications, including backend services, APIs, and user-facing components.
- Build production-grade solutions that expose AI capabilities through web applications, services, or agents/bots.
- Support authentication, integration, and deployment of AI-enabled applications into enterprise environments.
AI and LLM engineering
- Design and implement LLM-based workflows, including prompt engineering, tool calling, and structured outputs.
- Build and integrate AI agents with defined capabilities, tools, and execution logic.
- Contribute to multi-step reasoning flows and agent-based architectures where required.
- Identify patterns where similar use cases can be adopted in more than one area.
Retrieval and knowledge engineering
- Develop retrieval-based AI solutions, including vector search and knowledge grounding patterns.
- Transform structured and unstructured data into AI-ready formats, including embeddings and indexed datasets.
- Ensure AI outputs are grounded, explainable, and aligned to defined controls and quality standards.
Data engineering and integration
- Build data pipelines that enable AI systems to interact with enterprise data sources.
- Integrate AI capabilities into core systems using APIs and microservices.
- Support patterns such as text-to-SQL, curated data views, and AI-driven access to structured data.
Cloud and platform engineering
- Develop and deploy AI solutions on cloud platforms, with a focus on AWS (e.g. Lambda, S3, API Gateway, Bedrock, SageMaker) and exposure to Azure-based services.
- Integrate AI tooling and services into cloud-native architectures using secure and scalable design patterns.
- Apply modern DevOps practices, including CI/CD, environment management, and automated deployment pipelines.
Engineering standards and delivery
- Write high-quality, production-grade code in Python using object-oriented and modular design principles.
- Contribute to architecture discussions and support the evolution of reusable AI engineering patterns.
- Ensure solutions are maintainable, testable, and aligned to enterprise engineering standards.
Requirements
Minimum AI skill sets include:
- Production-grade Python engineering – ability to build and operate reliable backend services and orchestration layers, applying object-oriented programming and CI/CD pipelines.
- AI retrieval and grounding engineering – ability to design retrieval pipelines that control hallucination and ensure AI model outputs remain within CIB Model Risk frameworks and appetite, including explainability and auditability.
- LLM orchestration and tool integration – engineering prompt flows, function/tool calling, and structured outputs as part of systems, not just ad-hoc prompting. Experience building solutions that orchestrate multi-agent frameworks is a plus.
- Open-standard agent integration (MCP) – practical experience implementing Model Context Protocol or equivalent open standards to integrate AI safely with enterprise systems, including enterprise API and microservices design.
- Hands-on experience working with LLM platforms such as OpenAI, Claude, Gemini, or similar
- Experience with prompt design, orchestration patterns, and structured outputs
- Understanding of retrieval-augmented generation (RAG) and vector-based search approaches
Core engineering
- 4 to 5 years’ experience in software or full stack development
- Strong Python development capability, particularly for backend and AI-related use cases.
- Experience building APIs and working with microservices architectures.
- Familiarity with CI/CD pipelines and production deployment practices.
- Low-code development for simpler, well-defined automation use cases, leveraging platforms such as Copilot Studio and the Power Platform to enable faster delivery with reduced infrastructure and coding requirements.
- Pro-code development for more complex use cases, building production-grade AI solutions to deliver the highest business value.
Data and knowledge engineering
- Experience working with both structured and unstructured datasets
- Understanding of embeddings, vectorisation, and knowledge base design
- Ability to design data pipelines that support AI model consumption
Cloud platforms (AWS focus)
- Experience building and deploying solutions on AWS, including services such as compute, storage, APIs, and managed AI services
- Exposure to Azure environments and AI tooling is advantageous
- Understanding of cloud architecture principles for scalability, resilience, and security
Full stack development
- Experience developing front-end enabled solutions or integrating APIs into user-facing applications
- Understanding of authentication, access control, and secure application design
- Ability to work across backend and frontend layers to deliver complete solutions
Experience level
- 4 to 5 years' experience in knowledge, data, and agent engineering, with hands-on exposure to AI, AWS, and data platforms, preferably within financial services, investment banking, or similarly regulated industries.
Desired Skills:
- AI
- Data and Agent Engineering
- AWS
- CI/CD
- Azure
- Python
- LLM Platforms