Middle Software Developer

October 27, 2025

Lviv, Ivano-Frankivsk, Remote

Our client is one of the biggest players in the population health management sector and has led it for several years in a row, supporting healthcare organizations in achieving financial success in value-based care (recognized as Best in KLAS in Value-Based Care Managed Services). The company delivers enterprise-level transformational healthcare outcomes.

Project Overview

The project delivers a Database-as-a-Service (DaaS) platform designed to manage and expose population health data at scale. The system provides both data access APIs and analytical capabilities, enabling downstream applications and services to consume normalized, secure, and high-quality healthcare data.

Each month, the platform ingests and processes roughly 1 petabyte of data, performing normalization, enrichment, storage, and analytics generation.

Architecture and Responsibilities

Our engineering team owns the end-to-end data pipeline, from raw ingestion to analytical data delivery in Amazon Redshift. Key components include:

  • Data Ingestion & Transformation:
    • Ingest data from multiple source systems into the data lake (Amazon S3, AWS Glue).
    • Transform and model data using dbt (SQL-based transformations).
    • Employ Jinja templating (a Python-based engine) within dbt to build efficient, parameterized transformations (see the sketch after this list).
  • Orchestration & Workflow Management:
    • Define and maintain Directed Acyclic Graphs (DAGs) for data workflows.
    • Deploy and orchestrate these DAGs in Apache Airflow, ensuring reliable scheduling and execution (a minimal DAG sketch follows this list).
    • Integrate orchestration with Spring Boot job runners for custom workloads.
  • Application Layer:
    • Implement multiple Spring Boot microservices performing data validation, ingestion, and monitoring tasks, all orchestrated via Airflow.
  • Infrastructure & Deployment:
    • Use Terraform to provision and manage cloud infrastructure.
    • Deploy all components within AWS, leveraging IAM for authentication and role-based access, and Redshift for data warehousing and analytics.
    • Configure and maintain security and authorization policies across Redshift and other AWS services.
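
As a sketch of the Jinja-parameterized SQL mentioned above: in practice dbt compiles such templates itself, but the snippet below renders one with plain jinja2 for illustration. The table and metric names are hypothetical, not the project's actual schema.

```python
# A minimal sketch of a Jinja-parameterized SQL transformation, rendered
# with plain jinja2; dbt performs this compilation itself in a real project.
from jinja2 import Template

MODEL_SQL = Template("""
select
    patient_id,
    {% for metric in metrics %}
    sum({{ metric }}) as total_{{ metric }}{{ "," if not loop.last }}
    {% endfor %}
from {{ source_table }}
group by patient_id
""")

# Hypothetical staging table and metric columns, for illustration only.
print(MODEL_SQL.render(
    source_table="raw_claims",
    metrics=["allowed_amount", "paid_amount"],
))
```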
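
And a minimal sketch of how such a pipeline might be wired together as an Airflow DAG (Airflow 2.x). The DAG id, schedule, shell commands, and jar name are illustrative assumptions, not the project's actual workflow.

```python
# A minimal Airflow 2.x DAG chaining ingestion, dbt transformation, and a
# Spring Boot validation job; all names and commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="population_health_pipeline",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_files",
        bash_command="python ingest.py --target s3://example-lake/raw",  # placeholder
    )
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --select staging+",  # placeholder selector
    )
    validate = BashOperator(
        task_id="validate_with_spring_boot_job",
        bash_command="java -jar validation-service.jar",  # placeholder jar
    )

    # Run ingestion, then dbt transformations, then validation.
    ingest >> transform >> validate
```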

Technology Stack

  • Languages & Frameworks: Java (Spring Boot), SQL, Python (Jinja)
  • Data & Orchestration: dbt, Apache Airflow, Amazon Redshift, AWS Glue, S3
  • Infrastructure & Cloud: Terraform, AWS IAM

Responsibilities

  • Participate in the entire application lifecycle, focusing on coding and debugging;
  • Write clean code to develop functional services;
  • Troubleshoot and debug applications;
  • Address technical and design requirements;
  • Build reusable code and libraries for future use;
  • Collaborate with architects, developers, and system administrators to deliver new features;
  • Stay current with emerging technologies;
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technology;
  • Develop and maintain optimal data pipeline architecture — including development related to data acquisition and monitoring, data quality, integration, normalization, and analytics development.

Qualifications and skill set

  • At least 4 years’ experience in Java-based programming;
  • Significant coding skills in Java;
  • Exceptional problem-solving and analytical abilities;
  • Experience with CI/CD tooling (Jenkins, GitHub Actions, Terraform) and Docker containerization;
  • Experience with AWS cloud services (S3, Redshift, Apache Airflow for orchestration, AWS Glue, ECS);
  • Advanced SQL knowledge and experience working with relational SQL and NoSQL databases;
  • Experience developing microservices and a solid understanding of microservices architecture;
  • Ability to work with other developers and assist junior team members;
  • Good written and verbal English communication skills.

Would be a plus

  • AWS Certification (AWS Developer, AWS Data Engineer, etc.);
  • Experience with enterprise-class RDBMSs (SQL Server), cloud data warehouses (Snowflake/Redshift/BigQuery), and dbt (data build tool);
  • Experience with big data tools such as Hadoop, Spark, and Kafka;
  • Knowledge of Python;
  • Experience working in an Agile development environment;
  • Degree in computer science, software engineering, or a related field.

Why join VITech?

  • Work in an agile team with a high-quality code environment, using modern technologies to deliver software that provides value;
  • Interesting projects focused on the healthcare industry, where communication with clients is part of daily work;
  • Professional growth opportunities through our corporate development programs;
  • A people-oriented corporate culture where your individuality is appreciated.

We thoughtfully create and adapt benefits to improve your life. Unlimited free treats and coffee are not the main things we offer: we strive to simplify your life and take care of your mental and physical health.

Recruiter contact: Zoriana Shelest

Apply
