We are an MGA (a Managing General Agent): we design insurance products, underwrite risk, and distribute through broker partners, but we don't hold the paper ourselves. Our capacity providers are Lloyd's syndicates and two Bermuda reinsurers. That matters for this role because the models we build have direct, immediate consequences for how risk is priced and for what our loss ratios look like at the end of each underwriting year. There is no A/B test buffer between a model going live and it affecting real pricing decisions.

Our data science team is currently four people. We have been profitable since year two, which is uncommon in our sector and a direct result of taking model accuracy seriously.

We're hiring a mid-level data scientist to work primarily on pricing model development and portfolio analysis for our cyber and tech E&O lines: two fast-moving risk classes where the data is imperfect, the claim patterns are non-stationary, and the underwriters need analytical support they can trust. The role requires someone comfortable working with imperfect data in a highly consequential environment, who understands why underwriter intuition and actuarial constraint matter alongside model output.
Responsibilities
Build and validate pricing models for our cyber and tech E&O portfolios in collaboration with underwriters and actuaries
Conduct portfolio performance analysis: loss emergence, segment profitability, and exposure monitoring
Design and implement data quality checks for policy and claims data feeding into pricing models
Prepare analytical deliverables for our capacity providers — Lloyd's syndicates require quarterly model performance reporting
Contribute to the team's model documentation and validation framework
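To give a flavour of the data-quality work above, here is a minimal sketch of the kind of integrity check a policy feed might run before it reaches a pricing model. The column names (policy_ref, gross_premium, inception_date, expiry_date) and the sample figures are invented for illustration; they are not our actual schema.

```python
import pandas as pd

def check_policy_feed(policies: pd.DataFrame) -> list[str]:
    """Basic integrity checks on a policy extract before it feeds pricing.

    Column names are illustrative, not a real schema.
    """
    issues = []
    # Written premium should be strictly positive.
    if (policies["gross_premium"] <= 0).any():
        issues.append("non-positive gross_premium found")
    # Policy periods should be ordered: expiry after inception.
    bad_dates = policies["expiry_date"] < policies["inception_date"]
    if bad_dates.any():
        issues.append(f"{int(bad_dates.sum())} policies expire before inception")
    # Duplicate policy references suggest a double-loaded feed.
    if policies["policy_ref"].duplicated().any():
        issues.append("duplicate policy_ref values")
    return issues

# Invented sample extract with one deliberate problem of each kind.
demo = pd.DataFrame({
    "policy_ref": ["CY-001", "CY-002", "CY-002"],
    "gross_premium": [12_000.0, -500.0, 8_400.0],
    "inception_date": pd.to_datetime(["2024-01-01", "2024-03-01", "2024-03-01"]),
    "expiry_date": pd.to_datetime(["2024-12-31", "2024-02-01", "2025-02-28"]),
})
print(check_policy_feed(demo))
```

In practice these checks would run automatically on each data load, with failures routed back to operations rather than silently dropped.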
Requirements
3–5 years of data science experience, including at least two years in financial services, insurance, or a similarly consequential domain
Strong Python: Pandas, NumPy, and Scikit-learn for data manipulation, feature engineering, and model development
Statistics at depth — GLMs, survival models, and Bayesian methods for pricing and loss modelling, not just classification metrics
SQL for extracting and analysing policy, claims, and exposure data from relational systems
Our actuarial partners work in R; the ability to read and translate their R analyses is a practical advantage
Familiarity with insurance concepts (loss ratio, combined ratio, frequency/severity decomposition) is strongly preferred
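For candidates less familiar with the insurance concepts listed above, a minimal worked sketch: frequency is claims per policy, severity is average cost per claim, their product is the per-policy burn cost, and the loss ratio compares incurred losses to earned premium. The segment names and all figures below are invented purely for illustration.

```python
import pandas as pd

# Toy portfolio segments; every number here is made up for illustration.
segment = pd.DataFrame({
    "policy_count":   [400, 250],
    "claim_count":    [12, 20],
    "incurred_loss":  [360_000.0, 900_000.0],
    "earned_premium": [1_200_000.0, 1_500_000.0],
}, index=["cyber_smb", "tech_eo_mid"])

# Frequency: expected number of claims per policy.
segment["frequency"] = segment["claim_count"] / segment["policy_count"]
# Severity: average incurred cost per claim.
segment["severity"] = segment["incurred_loss"] / segment["claim_count"]
# Burn cost per policy = frequency x severity.
segment["burn_cost"] = segment["frequency"] * segment["severity"]
# Loss ratio: incurred losses as a share of earned premium.
segment["loss_ratio"] = segment["incurred_loss"] / segment["earned_premium"]

print(segment[["frequency", "severity", "burn_cost", "loss_ratio"]])
```

In a real pricing model the frequency and severity components would each be modelled separately (for example with Poisson and Gamma GLMs) rather than read off historical averages, which is exactly the kind of work this role involves.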
Benefits
Work in a data environment where model quality is measured in loss ratio points, not Kaggle scores
Hybrid — two days in our London office, three days remote
£62,000 – £78,000 base salary + annual discretionary bonus
Exposure to Lloyd's of London underwriting cycle and reinsurance structures
£1,200 annual actuarial or data science learning budget