
  • Posted: Nov 4, 2020
    Deadline: Not specified
    At DotModus, we offer the best in industry-specific, bespoke big data solutions. View our Services and Products or contact us today to find out how big data analytics technology is transforming your industry. DotModus is a Google Cloud Partner in EMEA and specialises in helping our customers analyse their customers' data using Goog...


    Big Data Engineer

    The Role

    We see a data engineer as someone who works behind the scenes to obtain, process and supply data, via various methodologies and technologies, to various consumers, in ways and forms that make sense and add value. This definition is deliberately broad, because the field of data engineering is just as broad.

    You may be the type of data engineer who develops API endpoints for the consumption of data by end users or even by another data pipeline, or you may be the type who builds highly distributed, highly available data processing pipelines to satisfy the needs of ever-questioning data analysts and data scientists.

    For this role, we're looking for experienced Big Data Engineers who have built data warehouses and/or transactional data models within the Big Data environment.

    That means Spark / PySpark experience, experience with the Hadoop ecosystem, and coding experience in Python, Scala or Java. We need someone who can build data pipelines (ELT / ETL) within the Big Data environment.
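    To give a flavour of what we mean, a minimal PySpark ETL sketch might look like the example below. The paths, column names and aggregation are purely illustrative assumptions, not a description of our actual stack.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw order events from HDFS (hypothetical path).
    orders = spark.read.parquet("hdfs:///data/raw/orders")

    # Transform: basic cleansing plus a daily revenue aggregate.
    daily_revenue = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write the result back to HDFS, partitioned by date.
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
        "hdfs:///data/curated/daily_revenue"
    )

    spark.stop()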

    Some additional skills and experience required would be:

    • In-depth architectural knowledge of Spark and Hadoop
    • Expert in building ETL pipelines using Spark (PySpark)
    • Experience using Spark with HDFS
    • Experience writing data pipelines using functional programming (Python, Java, Scala)
    • Advanced ANSI SQL experience
    • Firm understanding of Big Data and traditional data processing, with the differences understood in enough depth to make informed design decisions
    • Firm understanding of data modelling: OLAP vs. OLTP vs. hybrid models
    • Firm understanding of dimensional modelling, e.g. Kimball (a brief example follows this list)
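
    As a rough illustration of the ANSI SQL and dimensional modelling points above, the sketch below queries a hypothetical Kimball-style star schema through Spark SQL. The fact and dimension tables (fact_sales, dim_date, dim_product) are assumed to be registered in the metastore and are named purely for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star-schema-example").getOrCreate()

    # Join a sales fact table to its date and product dimensions and
    # aggregate revenue by month and product category.
    monthly_sales = spark.sql("""
        SELECT d.year,
               d.month,
               p.category,
               SUM(f.sales_amount) AS total_sales
        FROM   fact_sales  f
        JOIN   dim_date    d ON f.date_key    = d.date_key
        JOIN   dim_product p ON f.product_key = p.product_key
        GROUP  BY d.year, d.month, p.category
    """)

    monthly_sales.show()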

    Skills that would be beneficial:

    • Experience using Spark with Cloud Storage and HiveServer2 LLAP (via the Hive Warehouse Connector, HWC)
    • Proven track record of developing DAGs using Apache Airflow (a brief sketch follows this list)
    • Exposure to Apache Ranger and Apache Atlas
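
    As a rough sketch of the Airflow item above, a daily DAG that submits a Spark job could look like the following. It assumes Airflow 2.x; the schedule, job path and task names are hypothetical.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A daily DAG that submits the (hypothetical) PySpark ETL job above.
    with DAG(
        dag_id="daily_revenue_etl",
        start_date=datetime(2020, 11, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_spark_etl",
            bash_command="spark-submit /opt/jobs/daily_revenue_etl.py",
        )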

    If you have experience in these areas, can solve problems, and are looking for a challenging project to work on, let us know.

    Method of Application

    Interested and qualified? Go to DotModus to apply.
