VITech is looking for a talented Senior Software Engineer to join a new team, working from our offices or remotely, on a project in the healthcare domain.
About the client
Our client is one of the biggest players in the population health management sector and has led it for several years in a row, supporting healthcare organizations in achieving financial success in value-based care (recognized as Best in KLAS in Value-Based Care Managed Services). The company delivers enterprise-level, transformational healthcare outcomes.
About the project
The team specializes in developing data ingestion, transformation, and eligibility-filtered data brokerage services that ensure compliant, secure, cost-effective, and scalable delivery of healthcare datasets. The project offers architectural impact, opportunities for domain and technology growth, autonomy at work, and a high-trust, collaborative team culture.
Project technical stack
- Programming Languages: Java, Scala
- Data Processing & Transformation: Apache Spark, DBT (via dbt-spark), Apache NiFi
- Orchestration & Deployment: Kubernetes, Argo Workflows, Argo CD, Helm
- Build & CI/CD: Gradle, Maven, GitHub Actions
- Other Tools & Context: GitHub (code repository)
Responsibilities
- Contribute as a senior software engineer to the migration of legacy data connectors and ETL workflows to a modern architecture using Apache Spark and DBT.
- Design and implement scalable data pipelines for ingestion, transformation, and export.
- Work with structured and semi-structured data delivered via SFTP, REST APIs, and stored in AWS S3.
- Contribute to orchestration and deployment using Argo Workflows, Argo CD, and GitHub Actions.
- Collaborate with engineers and architects to analyze legacy code and propose improved data flow designs.
- Write clean, testable code and build reusable components for common ingestion and processing logic.
- Monitor and troubleshoot pipelines in staging and production environments.
- Participate in agile processes including daily standups, sprint planning, and code reviews.
- Stay up to date with industry best practices in data engineering, ETL modernization, and cloud-native data processing.
Qualification and skill set
- Experience:
  - 5+ years of professional experience in data engineering or backend development.
  - Proven experience designing and implementing ETL/ELT pipelines.
  - Hands-on expertise with Apache Spark.
  - Experience with Apache NiFi or comparable data pipeline and orchestration tools.
- Infrastructure & tools:
  - Practical experience deploying and running applications in Kubernetes-based environments.
  - Familiarity with DBT or similar tools for data modeling and transformation (e.g., SQL-based ELT workflows).
  - Understanding of CI/CD pipelines; experience with GitHub Actions, Gradle, or Maven is a plus.
  - Working knowledge of cloud storage and infrastructure (e.g., AWS S3, IAM roles, VPC networking) is a plus.
  - AWS certification (e.g., AWS Certified Data Engineer, Solutions Architect, or Developer) is a plus.
- Collaboration & communication:
  - Strong sense of ownership, autonomy, and ability to drive tasks to completion.
  - Comfortable contributing to technical designs and reviewing code.
  - Strong communication and collaboration skills; ability to work cross-functionally with DevOps, Data, and Backend teams.
  - Willingness to support and mentor less experienced engineers when needed.
Why join VITech?
- Work in an agile team with a high-quality codebase, using modern technologies to deliver software that provides real value.
- Interesting projects focused on the healthcare industry, where communication with clients is part of daily work.
- Professional growth opportunities with our corporate development programs.
- People-oriented corporate culture where your individuality is appreciated.
We thoughtfully create and adapt benefits to improve your life. Unlimited free treats and coffee are far from the main things we offer: we strive to simplify your life and take care of your mental and physical health.