
  • Posted: Oct 26, 2023
  • Deadline: Not specified

    Impact is transforming the way enterprises manage and optimize all types of partnerships. Our Partnership Cloud™ is an integrated end-to-end solution for managing an enterprise's partnerships across the entire partner lifecycle, activating rapid growth through the emerging Partnership Economy. Impact was founded in 2008 by a team of Internet marketing and ...

     

    Senior Analytics Engineer

    Your Role at Impact:

    The Senior Analytics Engineer is a technical data professional able to manage, process and analyse large datasets using big data technologies such as Apache Spark, SingleStore and BigQuery, and to visualise and report on those datasets. The ideal candidate is proficient in designing and implementing efficient data workflows that move, transform, aggregate and enrich data from various sources into a centralised data warehouse and purpose-built data marts, adhering to internal code management and data quality standards, while providing users with access to standard reports, rich visualisations and other analytical data assets.

    The position requires a strong analytical mindset, attention to detail, programming skills and experience with big data technologies. This is a highly collaborative role: the engineer engages with Subject Matter Experts to implement business logic, understand source data structures and ensure data outputs are accurate, fit for purpose, pass quality assurance and provide value to the business.

    What You'll Do:

    • Design, develop and maintain data models, data marts and analytical data stores
    • Work closely with Subject Matter Experts (SMEs), Business and Technical stakeholders to define and document business logic and transformation rules to be used in data load jobs and (materialised) analytical views
    • Build and maintain data load and transformation jobs to populate data lakes, data marts and data warehouses following the Extract-Load-Transform (ELT) and Extract-Transform-Load (ETL) paradigms as appropriate
    • Create and maintain reusable data assets ready for consumption by machine learning models, data visualisation tools and data analysts
    • Create and maintain entity-relationship diagrams (ERDs), data dictionaries and data flow diagrams
    • Create and maintain table and column metadata
    • Manage code releases, deployment cycles and the associated change management processes
    • Build and maintain standard reports for internal stakeholders
    • Contribute to the development and expansion of common utility libraries used by data teams
    • Maintain high standards of quality, integrity and accuracy in produced data assets
    • Troubleshoot and resolve any issues that arise relating to data assets in the production environment in a timely manner
    • Optimise total system performance related to ETL/ELT workloads and analytical queries, ensuring efficient use of compute resources and stability of data systems
    • Optimise code related to ELT/ETL workloads for simplicity, reusability and efficiency, in line with best practice
    • Conduct periodic integrity checks on data assets in production
    • Safeguard sensitive company data
    • Work with the data Quality Assurance (QA) function to extend and enhance programmatic validation of data assets in production
    • Stay up-to-date with the latest big data technologies and best practices
    • Automate manual data load, data transformation and data management processes
    • Review and sign off on code changes
    • Mentor and train junior colleagues
    • Actively participate in the hiring process and performance management of team members

    What You Have:

    • Bachelor's or Master's degree in Computer Science, Data Science or related field
    • 6+ years of experience in data pipeline development and data warehousing using big data technologies such as Apache Spark, Google Cloud Dataflow, SingleStore, Impala, Kudu and/or BigQuery
    • Proven track record in developing enterprise-level data marts
    • Experience with Databricks advantageous
    • Experience with dbt advantageous
    • Experience with Google Cloud Platform and BigQuery advantageous
    • Strong SQL development experience required
    • Strong Python programming skills required
    • Strong knowledge of relational database management systems
    • Strong data modelling and schema design experience
    • Experience with workflow management tools such as Airflow, Luigi or Oozie advantageous
    • Knowledge of data integration patterns, data load patterns and best practices required
    • Knowledge of software development best practices and version control tools
    • Strong analytical and problem-solving skills
    • Strong written and verbal communication skills
    • Leadership experience and workload management skills advantageous
    • Ability to work in a team environment and collaborate with internal stakeholders

    Method of Application

    Interested and qualified? Go to Impact on boards.greenhouse.io to apply.
