We are a digital consultancy. We do client work. This means some weeks you will be deep in a single codebase, building something carefully. Other weeks you will be context-switching between three clients, answering questions that should have been answered six weeks ago, and producing a prototype in two days that will take a real team three months to productionise properly. We are not going to pretend this is a product company. It isn't. If the variety appeals to you — if working across industries, technology stacks, and problem types every few months sounds like a good way to grow — this is genuinely a strong place to do that. If you want depth on a single problem over years, you should probably work for a product company.

We have been building AI and ML solutions for clients since 2018. We have shipped LLM integrations for legal, logistics, financial services, and media clients. We are not the biggest consultancy and we are not trying to be. We are 45 people, we are profitable, and we choose clients whose problems we find interesting.

The Generative AI Developer we're hiring will work primarily on LLM integration and RAG projects for our current client pipeline, with a mix of implementation and advisory work.
Responsibilities
Build and deliver LLM integration and RAG solutions for clients across legal, logistics, and media verticals
Conduct discovery sessions with client stakeholders to scope AI use cases and define evaluation criteria
Write clear technical documentation that client engineering teams can use to extend and maintain what we deliver
Support the sales team with technical input on proposals for new AI engagements
Contribute to our internal AI practice knowledge base — patterns, templates, and lessons from completed projects
Requirements
3–5 years of software engineering with at least 2 years of LLM or AI integration experience in a client-facing or production context
Strong Python — clean, maintainable code you're comfortable delivering and walking a client through
LangChain or LlamaIndex for RAG and agentic workflow implementation — you know which one to reach for when
OpenAI API experience: function calling, structured output, streaming, and context window management
Prompt engineering as a systematic practice — you build evaluation sets and measure what changes
RAG pipeline design — chunking strategies, embedding model selection, retrieval evaluation, and re-ranking
Client communication skills — you will present work, take requirements, and manage expectations regularly
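To give a flavour of the RAG work the requirements above describe (chunking, retrieval, and evaluation against a measured baseline), here is a deliberately miniature sketch. It uses bag-of-words cosine similarity as a stand-in for a real embedding model, and the function names (`chunk`, `retrieve`, `recall_at_k`) are illustrative, not part of any client deliverable:

```python
from collections import Counter
import math

def chunk(text, size=40, overlap=10):
    """Split text into overlapping word windows (one simple chunking strategy)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def recall_at_k(eval_set, chunks, k=2):
    """Fraction of eval queries whose expected phrase appears in the top-k chunks."""
    hits = sum(
        any(expected in c for c in retrieve(query, chunks, k))
        for query, expected in eval_set
    )
    return hits / len(eval_set)
```

The point of the sketch is the shape of the work, not the code itself: candidates should be comfortable swapping the toy pieces for real ones (an embedding model, a vector store, a re-ranker) while keeping the evaluation loop, because `recall_at_k` over a fixed eval set is what tells you whether a chunking or model change actually helped.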
Benefits
Exposure to ML and AI problems across five or six different industries per year
Full remote with optional access to our London office
£65,000 – £80,000 base salary + annual discretionary bonus
£1,500 annual learning budget
Client travel is occasional and always covered — we don't require it, but some clients want it