
  • Posted: Mar 17, 2026
    Deadline: Not specified
  • RPO Recruitment is a specialist recruitment agency focused on select market segments: Engineering, Finance, Supply Chain and Manufacturing. With over 12 years of experience in the recruitment industry, RPO Recruitment has access to over 80,000 candidates across various industries, all reachable through our highly trained and specialised recruiters. We also use the most popular job portals and recruitment sites to source candidates for these positions.

     

    Senior Data Engineer

    Job Description

    • A dynamic technology company is looking for an experienced Senior Data Engineer (Spark & Python Specialist) with deep expertise in building, optimizing, and maintaining high-performance data processing engines and modern data lakehouse architectures.
    • The ideal candidate will contribute to technical excellence, refactor legacy ETL logic, and support scalable, cloud-agnostic data solutions.

    Responsibilities:

    • Apply Spark best practices for memory management, shuffle tuning, and partitioning to optimize data pipelines.
    • Develop and maintain modular Python/PySpark data pipelines with Delta Lake/Parquet in a cloud-agnostic environment.
    • Refactor complex SQL-based ETL into maintainable Python libraries.
    • Build and optimize Medallion Architecture layers (Bronze/Silver/Gold) in the data lakehouse.
    • Support code-first orchestration using tools like Airflow or Dagster, reducing dependency on GUI-based orchestration.
    • Participate in code reviews, mentor junior engineers, and contribute to automated testing frameworks (Pytest).
    • Collaborate with data scientists, analysts, and business stakeholders to translate requirements into actionable data solutions.
    • Lead initiatives to modernize data workloads, introduce new technologies, and drive best practices.
    • Ensure compliance with data security, governance, and quality standards.
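    The Medallion Architecture named above (Bronze/Silver/Gold) can be sketched as a chain of small transformations. In production these layers would be Delta Lake tables processed with PySpark; the illustration below uses plain Python lists of dicts so it is dependency-free, and every name in it (`raw_events`, `to_silver`, `to_gold`) is hypothetical.

    ```python
    # Toy Medallion pipeline: Bronze (raw) -> Silver (cleaned) -> Gold (aggregated).
    # In a real lakehouse each layer would be a Delta Lake table written by PySpark;
    # lists of dicts stand in here so the sketch runs anywhere.

    def to_silver(bronze_rows):
        """Silver layer: drop malformed rows and normalise types."""
        silver = []
        for row in bronze_rows:
            if row.get("user_id") is None or row.get("amount") is None:
                continue  # a real pipeline would quarantine malformed records
            silver.append({"user_id": str(row["user_id"]),
                           "amount": float(row["amount"])})
        return silver

    def to_gold(silver_rows):
        """Gold layer: business-level aggregate (total amount per user)."""
        totals = {}
        for row in silver_rows:
            totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
        return totals

    raw_events = [  # Bronze: ingested as-is, including bad records
        {"user_id": 1, "amount": "10.5"},
        {"user_id": 1, "amount": "4.5"},
        {"user_id": None, "amount": "99"},  # malformed: dropped at Silver
        {"user_id": 2, "amount": "7.0"},
    ]

    gold = to_gold(to_silver(raw_events))
    print(gold)  # {'1': 15.0, '2': 7.0}
    ```

    The design point the role emphasises is that each layer is a pure, independently testable step, which is what makes code-first orchestration (Airflow, Dagster) and automated testing straightforward.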

    Requirements:

    • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
    • 6+ years of experience with Spark/PySpark, with the ability to optimize complex DAGs and diagnose performance issues.
    • Advanced Python skills, including building reusable libraries and implementing automated testing.
    • Strong SQL (T-SQL) skills for interpreting and migrating legacy logic.
    • Hands-on experience with data lakehouse technologies, including Delta Lake and Parquet.
    • Experience with Azure Synapse Analytics, Dedicated SQL Pools, and Data Factory for complex pipelines.
    • Familiarity with containerization (Docker) and open-source standards for cloud-agnostic workloads.
    • Proven collaborative experience within technical teams and mentoring junior engineers.
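    The requirement to interpret and migrate legacy T-SQL logic can be illustrated with a minimal sketch. The SQL rule and the function `net_amount` below are hypothetical examples, not part of this posting; the point is that once a `CASE` expression lives in a plain Python function, it can be covered by Pytest-style unit tests.

    ```python
    # Hypothetical legacy T-SQL rule being migrated:
    #   CASE WHEN amount > 100 THEN amount * 0.9 ELSE amount END
    # Refactored into a pure, unit-testable Python function.

    def net_amount(amount: float, discount_threshold: float = 100.0,
                   discount_rate: float = 0.10) -> float:
        """Apply a bulk discount above the threshold, mirroring the legacy CASE logic."""
        if amount > discount_threshold:
            return amount * (1.0 - discount_rate)
        return amount

    # Pytest-style checks (these would live in test_net_amount.py in a real library):
    assert net_amount(50.0) == 50.0    # below threshold: unchanged
    assert net_amount(200.0) == 180.0  # above threshold: 10% discount applied
    assert net_amount(100.0) == 100.0  # boundary: threshold itself not discounted
    ```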


    Method of Application

    Interested and qualified? Apply via RPO Recruitment at rporecruitment.simplify.hr.
