
  • Posted: Jun 2, 2022
    Deadline: Not specified

    On any given day, two billion people use Unilever products to look good, feel good and get more out of life. With more than 400 brands focused on health and wellbeing, no company touches so many people’s lives in so many different ways. Our portfolio ranges from nutritionally balanced foods to indulgent ice creams, affordable soaps, luxurious shampoos...

     

    Data Engineer

The Data Analytics team is composed of data analysts, data experts, business insights analysts and data scientists who can quickly understand the business context and problem, apply advanced mathematics and/or statistics to large data sets, and operate in cloud-based environments where data, models, and user interfaces reside on the same platform's core technologies.

    We are looking for a passionate Senior Data Engineer, who enjoys optimising data systems and building them from the ground up, to join the Data Analytics team.

    MAIN JOB PURPOSE:

The Senior Data Engineer will provide technical data leadership in our Data Analytics team. You will be responsible for creating, expanding, optimising and managing our data warehouse and data pipeline architecture, as well as optimising data flow and collection. This role will support our data initiatives and ensure that an optimal data delivery architecture is consistent across the organisation and ongoing projects.

    Main accountabilities:

• Be the expert in UL data, current and future
• Create and maintain an optimal data pipeline architecture that balances performance and cost
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, refactoring current architecture, and strengthening data governance, including lineage, cataloguing and quality
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, data warehousing, PySpark/Spark, Python and cloud technologies
• Apply software engineering best practices such as version control, continuous integration and test-driven development, using Azure DevOps and Git
• Work with MS Azure, including Databricks (PySpark and Delta Lake), Data Factory, DevOps, ADLS, Blob Storage and Power BI datasets
• Use advanced SQL skills and an understanding of query and storage optimisation
• Ensure that deliverables meet or exceed functional, technical and performance requirements
• Contribute to the development of your own and the team's technical acumen
• Continuously learn new tools and technologies from external sources that bring direct value to the business

    EXPERIENCE AND QUALIFICATIONS NEEDED:

Standards of Leadership Required in This Role

• Personal Mastery
• Agility
• Passion for High Performance
• Business Acumen

    Key Skills Required

Professional Skills

• Advanced working SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of databases; basic use of Python is desirable
    • Experience building and optimising data pipelines, architectures and data sets
    • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
• Strong analytical skills related to working with unstructured datasets
    • Experience with building processes supporting data transformation, data structures, dependency and workload management
    • A successful history of manipulating, processing and extracting value from large, disconnected datasets
    • Working knowledge of message queuing, stream processing, and highly scalable data stores
    • Experience supporting and working with cross-functional teams in a dynamic environment
    • Limitless curiosity and imagination to create novel business solutions

    RELEVANT EXPERIENCE:

• B.S. or M.S. in a relevant field (Business Analytics, Operations Research), or a PhD or M.S. in a relevant technical field (Operations Research, Computer Science, Statistics, Business Analytics, Econometrics, or Mathematics)
• 2-5+ years of overall experience preferred
    • Experience with Azure cloud services (Azure Databricks, Azure Data Factory, Azure DevOps, Azure Analysis Services and ADLS)
    • Experience with Power BI (including Dax and M)
    • Experience with relational SQL and NoSQL databases
• Proven familiarity with data governance processes and with managing data quality, access, lineage and profiling, using automated testing and continuous integration where relevant
• Preference for working knowledge of SQL, Python and Spark
    • Preference for working knowledge of architectures to support advanced analytics and data science

    Method of Application

Interested and qualified? Go to Unilever at careers.unilever.com to apply.



