Eight months ago, one of our data scientists deployed a model update by running a script on his laptop and copying the output file to a production server. Three days later, our recommendation engine was producing results nobody could explain. We rolled back — manually — and spent a week rebuilding what had been overwritten. That incident ended our cowboy deployment era. We're now building a proper MLOps stack: Docker-containerised model serving, MLflow for experiment tracking, GitHub Actions for CI/CD, and AWS for deployment. We need a junior engineer to grow alongside this infrastructure. You don't need to have built a full MLOps system before — our two senior engineers designed it and will work with you every day. You need to understand why these tools exist, have touched Docker and at least one cloud platform, and be excited about making ML deployment boring in the best possible sense.
Responsibilities
Maintain and improve our Docker-based model serving infrastructure
Help build and monitor CI/CD pipelines for model deployment
Assist with MLflow experiment tracking setup and maintenance
Monitor deployed models and flag performance degradation
Document infrastructure components and update runbooks
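To give a flavour of the monitoring work above, here is a minimal sketch of a rolling-window degradation check. This is an illustrative toy, not our actual tooling — the function name, window size, and 5% tolerance are all hypothetical choices:

```python
from collections import deque

def degradation_monitor(baseline_accuracy: float, window: int = 100,
                        tolerance: float = 0.05):
    """Return a callable that accepts per-prediction correctness and
    reports True when rolling accuracy drops below baseline - tolerance."""
    recent = deque(maxlen=window)

    def observe(correct: bool) -> bool:
        recent.append(1 if correct else 0)
        # Only alert once the window is full, to avoid noisy early readings.
        if len(recent) < window:
            return False
        rolling = sum(recent) / window
        return rolling < baseline_accuracy - tolerance

    return observe

# Example: a model that validated at 92% starts answering ~80% correctly,
# which is below the 87% alert floor, so the monitor should fire.
check = degradation_monitor(baseline_accuracy=0.92, window=100, tolerance=0.05)
degraded = False
for i in range(300):
    correct = (i % 5 != 0)  # simulate ~80% accuracy
    degraded = check(correct) or degraded
```

In practice a check like this would sit behind the model-serving endpoint and page someone instead of setting a flag, but the core logic is this simple.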
Requirements
Hands-on experience with Docker — you've built and run containers yourself
Basic CI/CD understanding — GitHub Actions or a similar pipeline tool
Familiarity with AWS (EC2, S3, or Lambda at minimum)
Python competency — you can read and modify ML training and serving scripts
Exposure to MLflow or a similar experiment tracking tool is a bonus
Curiosity about reliability and reproducibility in ML systems
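On the last point: reproducibility mostly means same inputs, same outputs, traceably. A stdlib-only toy sketch of the idea — seed runs from their config and fingerprint that config so results can be tied back to exact settings (names and formulas here are hypothetical, not MLflow's API):

```python
import hashlib
import json
import random

def run_experiment(config: dict) -> dict:
    """Deterministic toy 'training' run: same config in, same metrics out."""
    # Fingerprint the config so any result can be traced to exact settings.
    fingerprint = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:12]
    # Seed all randomness from the config so reruns are bit-identical.
    rng = random.Random(config["seed"])
    loss = round(1.0 / config["epochs"] + rng.uniform(0, 0.01), 6)
    return {"config_hash": fingerprint, "loss": loss}

a = run_experiment({"seed": 42, "epochs": 10, "lr": 0.001})
b = run_experiment({"seed": 42, "epochs": 10, "lr": 0.001})
assert a == b  # identical config, identical fingerprint and metrics
```

Tools like MLflow do the real version of this (logging params, metrics, and artifacts per run); the instinct that reruns should be explainable is what we're hiring for.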
Benefits
Real MLOps greenfield — not maintaining legacy spaghetti
Full remote, async-first working culture
$58,000 – $75,000 base salary
Direct mentorship from two senior infrastructure engineers