Our company was founded in Osaka in 1987 and has operated data infrastructure for the Japanese financial services sector for nearly four decades. Three years ago we established a US engineering office in Seattle to build the next generation of our data platform: a cloud-native architecture that supports our expanding work with international clients in asset management and securities processing.

We value stability. We value precision. We value engineers who stay long enough to develop deep expertise and institutional knowledge rather than cycling through roles every eighteen months. Our US team currently has eleven engineers. The average tenure in our Japan headquarters is nine years.

We are not the fastest-moving environment in Seattle, and we are transparent about that. What we offer in return is serious engineering problems, a platform that processes significant financial data volumes daily, and the kind of job security that is increasingly rare in this industry. The role requires an experienced data engineer who can lead the migration of our remaining on-premises batch processing to our cloud platform, and who has the patience to do it carefully in an environment where data correctness is non-negotiable.
Responsibilities
Lead the migration of remaining on-premises batch processing jobs to our Snowflake and Spark-based cloud platform
Design and implement data pipeline monitoring, alerting, and data quality validation frameworks (see the validation sketch after this list)
Define and document data engineering standards for the US team: pipeline structure, testing requirements, and deployment process
Collaborate with our Japan engineering team on shared infrastructure components and cross-regional data flows
Mentor two mid-level data engineers through code review and structured technical development
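To give candidates a concrete sense of the validation work, the sketch below shows the kind of batch-level data quality check this framework would formalise. It is illustrative only: the pandas-based approach and the column names (trade_id, gross_amount) are assumptions for the example, not our production code.

```python
import pandas as pd


def validate_settlement_batch(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []

    # Completeness: every row must carry a trade identifier.
    if df["trade_id"].isna().any():
        failures.append("null trade_id values present")

    # Uniqueness: duplicate trade_ids would double-count positions downstream.
    dupes = int(df["trade_id"].duplicated().sum())
    if dupes:
        failures.append(f"{dupes} duplicate trade_id rows")

    # Range: settlement amounts must be strictly positive.
    if (df["gross_amount"] <= 0).any():
        failures.append("non-positive gross_amount values")

    return failures


if __name__ == "__main__":
    # Toy batch with one duplicate id and one negative amount.
    batch = pd.DataFrame(
        {"trade_id": ["T1", "T2", "T2"], "gross_amount": [100.0, -5.0, 50.0]}
    )
    for failure in validate_settlement_batch(batch):
        print(failure)
```

In practice, checks like these run as a gating step in the pipeline: a non-empty failure list blocks the downstream load rather than merely logging a warning.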
Requirements
7+ years of data engineering with at least three years in a financial services or similarly regulated data environment
Python and SQL at a professional standard: clean, documented, version-controlled, and tested
Apache Spark for large-scale financial data processing: you have run production Spark jobs on real financial datasets (see the reconciliation sketch after this list)
Airflow for orchestrating complex multi-step batch pipelines with dependency management and alerting (see the DAG sketch after this list)
Snowflake for cloud data warehousing, including performance optimisation, cost governance, and access control design (see the governance sketch after this list)
Experience migrating on-premises batch systems to cloud environments without service disruption
Familiarity with financial data concepts (trade lifecycle, reconciliation, settlement) is an advantage
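By way of illustration, here is a minimal PySpark sketch of the kind of reconciliation work referenced above: comparing a migrated pipeline's output against the legacy on-premises extract for the same business date. The paths, column names, and the outer-join approach are assumptions for the example, not a description of our platform.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade_reconciliation").getOrCreate()

# Hypothetical inputs: the migrated pipeline's output and the legacy
# on-premises extract for the same business date.
cloud = spark.read.parquet("s3://example-bucket/trades/date=2024-01-15/")
legacy = spark.read.parquet("/mnt/legacy/trades/20240115/")

# A full outer join on trade_id surfaces breaks in both directions:
# rows missing from one side, and rows whose amounts disagree.
breaks = (
    cloud.alias("c")
    .join(legacy.alias("l"), on="trade_id", how="full_outer")
    .where(
        F.col("c.gross_amount").isNull()
        | F.col("l.gross_amount").isNull()
        | (F.col("c.gross_amount") != F.col("l.gross_amount"))
    )
    .select(
        "trade_id",
        F.col("c.gross_amount").alias("cloud_amount"),
        F.col("l.gross_amount").alias("legacy_amount"),
    )
)

# Persist the break report; an empty result is the success criterion.
breaks.write.mode("overwrite").parquet("s3://example-bucket/recon/breaks/2024-01-15/")

spark.stop()
```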
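The orchestration requirement, similarly sketched: a nightly Airflow 2.x DAG with explicit task dependencies, retries, and failure alerting. The DAG id, schedule, task commands, and alert address are placeholders for the example.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Alerting and retry policy shared by every task in the pipeline.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],  # placeholder address
}

with DAG(
    dag_id="eod_settlement_batch",       # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",       # nightly, after upstream feeds land
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(
        task_id="extract_trades",
        bash_command="python /opt/pipelines/extract_trades.py",   # placeholder
    )
    validate = BashOperator(
        task_id="validate_batch",
        bash_command="python /opt/pipelines/validate_batch.py",   # placeholder
    )
    load = BashOperator(
        task_id="load_snowflake",
        bash_command="python /opt/pipelines/load_snowflake.py",   # placeholder
    )

    # Dependency management: validation gates the warehouse load.
    extract >> validate >> load
```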
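And for Snowflake governance, a minimal sketch using snowflake-connector-python: a resource monitor to cap credit spend plus a role-scoped warehouse grant. All object names and the quota are placeholders; note that creating resource monitors typically requires the ACCOUNTADMIN role.

```python
import os

import snowflake.connector

# Credentials come from the environment; all object names are placeholders.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="ACCOUNTADMIN",  # required for resource monitors
)

statements = [
    # Cost governance: cap monthly credits, notify at 80%, suspend at 100%.
    """CREATE OR REPLACE RESOURCE MONITOR batch_monitor
       WITH CREDIT_QUOTA = 100
       TRIGGERS ON 80 PERCENT DO NOTIFY
                ON 100 PERCENT DO SUSPEND""",
    "ALTER WAREHOUSE batch_wh SET RESOURCE_MONITOR = batch_monitor",
    # Access control: the pipeline role gets warehouse usage, nothing broader.
    "GRANT USAGE ON WAREHOUSE batch_wh TO ROLE pipeline_role",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```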
Benefits
Exceptional job stability: no involuntary layoffs in the history of our US office
Hybrid model: three days in our Seattle office, two days remote
$125,000 – $148,000 base salary + annual bonus
Defined contribution retirement plan with a 5% employer match
25 days annual leave plus US and Japanese public holidays