You've shipped models to production. You know the difference between a model that performs well in a notebook and one that holds up when real users and real data pressure-test it for six months. You're comfortable with the full cycle — hypothesis formation, exploratory analysis, feature engineering, model training, evaluation, and a clean handoff to the engineering team. You probably have a strong opinion about when not to use a neural network.

We're building predictive analytics for a B2B HR platform: churn prediction, hiring velocity forecasting, and early warning signals for account health. We have 200+ customers and three years of data. The job is real, the data is real, the stakes are real.

We're a team of two data scientists and three engineers. Everyone reviews everyone's code. We move fast, but we don't ship broken things.
Responsibilities
Own two to three predictive models from hypothesis to production deployment
Conduct exploratory data analysis and present findings to the product team
Collaborate with engineers on feature engineering and model integration
Monitor model performance in production and iterate based on drift signals
Document modelling decisions, experiments, and assumptions clearly
Requirements
3–5 years in data science with at least two models shipped to production
Strong Python — PyTorch or TensorFlow for deep learning, Pandas and Scikit-learn for everything else