
  • Posted: May 7, 2020
    Deadline: Not specified

    At DotModus, we offer the best in industry-specific, bespoke big data solutions. View our Services and Products below or contact us today to find out how big data analytics technology is transforming your industry. DotModus is a Google Cloud Partner in EMEA and specialises in helping our customers analyse their customers' data using Goog...

     

    Big Data Developer

    Data Engineer 

    We value a data engineer as someone who works behind the scenes to obtain, process, and supply data, via various methodologies and technologies, to various consumers, in ways and forms that make sense and add value. This definition is deliberately broad, because the field of data engineering is just as broad.

    You may be the type of data engineer who develops API endpoints so that end users, or even another data pipeline, can consume data. Or you may be the type who develops highly distributed, high-availability data processing pipelines to satisfy the needs of ever-questioning data analysts and data scientists.

    For this role, we're looking for Big Data Engineers experienced in building data warehouses and/or transactional data models within the Big Data environment.

    That means Spark/PySpark experience, along with experience with the Hadoop ecosystem, which in turn implies Python, Scala, or Java coding experience. We need someone who can build data pipelines (ELT/ETL) within the Big Data environment.

    Additional skills and experience required:

    • In-depth architectural knowledge of Spark and Hadoop
    • Expertise in building ETL pipelines using Spark (PySpark)
    • Experience using Spark with HDFS
    • Experience writing data pipelines in a functional style (Python, Java, Scala)
    • Advanced ANSI SQL experience
    • A firm understanding of Big Data and traditional data processing, in enough depth to make informed design decisions
    • A firm understanding of data modelling: OLAP vs. OLTP vs. hybrid models
    • A firm understanding of dimensional modelling, e.g. Kimball
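    The "functional style" point above can be illustrated without a cluster. Below is a minimal sketch in plain Python (no Spark dependency; all record fields and function names are hypothetical) of an extract-transform-load flow built from pure functions:

```python
from functools import reduce

# Hypothetical raw "extracted" records, standing in for rows read from HDFS.
raw_rows = [
    {"order_id": 1, "amount": "250.00", "region": "EMEA"},
    {"order_id": 2, "amount": "bad-data", "region": "EMEA"},
    {"order_id": 3, "amount": "100.50", "region": "APAC"},
]

def parse_amount(row):
    """Transform step: cast the amount field, flagging unparseable rows."""
    try:
        return {**row, "amount": float(row["amount"]), "valid": True}
    except ValueError:
        return {**row, "valid": False}

def is_valid(row):
    """Filter step: keep only rows that parsed cleanly."""
    return row["valid"]

def add_to_total(totals, row):
    """Aggregate step: roll amounts up per region (a tiny fact-table rollup)."""
    totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

# The pipeline itself: map -> filter -> reduce, built entirely from the
# pure functions above, with no shared mutable state between stages.
totals_by_region = reduce(
    add_to_total,
    filter(is_valid, map(parse_amount, raw_rows)),
    {},
)
# totals_by_region -> {"EMEA": 250.0, "APAC": 100.5}
```

    In Spark the same shape appears as a `map(...).filter(...).reduceByKey(...)` chain, with the runtime distributing each stage across the cluster.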

    Skills that would be beneficial:

    • Experience using Spark with Cloud Storage and HiveServer2 LLAP (HWC)
    • Proven track record of developing DAGs using Apache Airflow
    • Exposure to Ranger and Atlas

    Method of Application

    Interested and qualified? Go to DotModus on dotmodus.bamboohr.com to apply


