
This job posting from the Companies & Intellectual Property Commission (CIPC) has expired.
  • Posted: Oct 22, 2025
    Deadline: Oct 31, 2025
    • The CIPC handles the registration of companies, co-operatives and intellectual property rights (trade marks, patents, designs and copyright), and the maintenance thereof.
    • Disclosure of Information on its business registers.
    • Promotion of education and awareness of Company and Intellectual Property Law.

       

      Senior Data Engineer

      Job Purpose:

      • The primary purpose of this role is to take a leading role in designing, implementing, and optimising CIPC's data infrastructure to support strategic goals. This position is central to building scalable, high-performance data pipelines, driving data engineering best practices, and ensuring the delivery of robust data solutions that empower data scientists and analysts to generate critical business insights.

      Required Minimum Education / Training

      Candidates must meet one of the following requirements:

      Formal Education Pathway (Preferred):

      • Education: Bachelor’s Degree / Advanced Diploma in Computer Science, Engineering, or a related technical field. Advanced degrees are a plus.
      • Added Advantage Certifications: Holding one or more of the following advanced professional certifications is a strong advantage:
      • Specialized Tools: Databricks Certified Data Engineer (Professional or Associate), Confluent Certified Developer for Apache Kafka, or Certified Kubernetes Administrator (CKA).
      • Cloud Platforms: Google Cloud Certified Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate, or AWS Certified Data Analytics – Specialty.
      • Data Management: Certified Data Management Professional (CDMP) (Practitioner or Professional level).

      Specialized Certification and Experience Pathway (Alternative):

      • Education: Senior Certificate (NQF 4) and relevant technical certifications

      Mandatory Certifications: The candidate must hold a combination of at least two advanced professional certifications, which must include one specialized tool certification and one cloud platform certification:

      Specialized Tool Certification (MUST include one):

      • Databricks Certified Data Engineer (Professional or Associate)
      • Confluent Certified Developer for Apache Kafka
      • Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD)

      Cloud Platform Certification (MUST include one):

      • Google Cloud Certified Professional Data Engineer
      • Microsoft Certified: Azure Data Engineer Associate
      • AWS Certified Data Analytics – Specialty

      Added Advantage Certification (Choose one more from any category):

      • DAMA Certified Data Management Professional (CDMP) (Practitioner or Professional level)
      • Any other certification listed above not yet counted.

      Required Minimum Experience

      • Formal Education Pathway: a minimum of 5 years (8+ preferred) of proven experience as a Data Engineer or in a similar technical role, with a strong track record of building scalable data solutions.
      • Specialized Certification and Experience Pathway: a minimum of 10 years of proven experience as a Data Engineer or in a similar technical role, with a strong track record of building scalable data solutions.

      Key performance areas

      Data Pipeline & System Development

      • Pipeline Design and Implementation: Lead the design, build, and maintenance of scalable and secure data pipelines (batch and real-time) and databases to process complex, high-volume structured and unstructured data.
      • ETL/ELT Development: Develop and optimize ETL (Extract, Transform, Load) processes to efficiently transform raw data into usable, high-quality formats for analysis and consumption.
      • Infrastructure Optimization: Manage and optimize data warehousing solutions (e.g., Databricks, Snowflake, Redshift, BigQuery, Synapse) and implement and maintain data storage solutions, including SQL and NoSQL databases.
      • Automation: Automate data processing tasks using frameworks like Apache Airflow and optimize deployment and orchestration to improve efficiency and reduce manual effort.
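At their core, the pipeline and automation duties above follow the extract-transform-load pattern. A minimal, stdlib-only Python sketch of that pattern is shown below; the table and field names are hypothetical, and a production pipeline would use the frameworks named above (e.g. Airflow for orchestration, Spark for distributed processing) rather than raw sqlite3:

```python
import sqlite3

# In-memory database standing in for both the source and the curated
# target store (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_filings (reg_no TEXT, name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO raw_filings VALUES (?, ?, ?)",
    [("2019/001", "  Acme Ltd ", "active"),
     ("2019/002", "Beta (Pty) Ltd", "ACTIVE"),
     ("2019/003", None, "deregistered")],  # malformed row: missing name
)

def extract(conn):
    """Extract: pull raw rows from the source table."""
    return conn.execute("SELECT reg_no, name, status FROM raw_filings").fetchall()

def transform(rows):
    """Transform: drop incomplete rows, trim whitespace, normalise case."""
    cleaned = []
    for reg_no, name, status in rows:
        if not name:
            continue  # data-quality rule: skip rows without a company name
        cleaned.append((reg_no, name.strip(), status.lower()))
    return cleaned

def load(conn, rows):
    """Load: write the cleaned rows into the curated target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS filings_clean "
                 "(reg_no TEXT PRIMARY KEY, name TEXT, status TEXT)")
    conn.executemany("INSERT OR REPLACE INTO filings_clean VALUES (?, ?, ?)", rows)
    conn.commit()

load(conn, transform(extract(conn)))
clean = conn.execute(
    "SELECT reg_no, name, status FROM filings_clean ORDER BY reg_no").fetchall()
print(clean)
```

The same extract/transform/load boundaries carry over directly when the steps become Airflow tasks or Spark jobs; only the execution engine changes.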

      Data Quality, Performance & Compliance

      • Performance Monitoring: Monitor data pipeline performance, troubleshoot issues promptly, and optimize data processing frameworks to handle increasing data volumes with low latency.
      • Quality and Standards: Develop and enforce standards and best practices for data quality, security, documentation, and compliance across all data systems and processes.
      • Architecture Contribution: Ensure data pipelines and databases are optimized for performance, security, availability, and scalability, and contribute actively to overall data architecture decisions.
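The "develop and enforce standards for data quality" responsibility above typically takes the shape of automated validation gates that run before a batch is loaded downstream. A small sketch, with hypothetical rules and field names (production systems would usually use a dedicated framework rather than hand-rolled checks):

```python
# Minimal data-quality gate: validate a batch of records against named
# rules and report every failure (record index, rule that failed).
records = [
    {"reg_no": "2019/001", "name": "Acme Ltd", "status": "active"},
    {"reg_no": "2019/002", "name": "", "status": "active"},
]

RULES = {
    "reg_no is present": lambda r: bool(r.get("reg_no")),
    "name is non-empty": lambda r: bool(r.get("name", "").strip()),
    "status is known":   lambda r: r.get("status") in {"active", "deregistered"},
}

def run_quality_checks(batch):
    """Return a list of (record_index, failed_rule) pairs for the batch."""
    failures = []
    for i, record in enumerate(batch):
        for rule_name, check in RULES.items():
            if not check(record):
                failures.append((i, rule_name))
    return failures

failures = run_quality_checks(records)
print(failures)
```

Keeping the rules in a named table like this makes the standard itself reviewable and documentable, which is the point of the responsibility: the checks, not just the code, are the enforced artefact.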

      Collaboration, Mentorship & Strategy

      • Stakeholder Collaboration: Work closely with data scientists, data analysts, and business teams to understand their data needs, ensure data accessibility, and deliver solutions tailored for their analysis requirements.
      • Technical Leadership: Serve as a technical leader, coach, and mentor for junior data engineers and adjacent data and engineering teams.
      • Project Leadership: Lead end-to-end data engineering projects, including requirements gathering, technical deliverable planning, output quality control, and stakeholder management.
      • Strategy Contribution: Contribute technical expertise to the development and evolution of the CIPC data strategy.

      Minimum Functional Requirements (Technical Skills & Knowledge)

      • Core Programming: Expertise in programming languages such as Python, Java, or Scala.
      • SQL Mastery: Advanced proficiency with SQL and deep experience in database optimization techniques for high-volume data.
      • Big Data & Distributed Systems: Strong hands-on experience with distributed systems and big data technologies, including Apache Spark, Hadoop, or Flink.
      • Cloud Data Platforms: Strong knowledge of cloud-based data platforms and their services across major providers (AWS, Azure, and GCP).
      • ETL/Orchestration Tools: Proven experience with ETL tools and orchestration frameworks (e.g., Apache Kafka, Apache Airflow, Apache Spark).
      • Architecture Design: Experience in designing and implementing data architectures that specifically support large-scale data processing and machine learning initiatives.
      • Soft Skills: Strong problem-solving and critical thinking skills, excellent interpersonal skills, and the ability to work effectively with cross-functional teams.
      • Mentorship: Proven experience leading and mentoring junior data engineers.
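The "database optimization techniques for high-volume data" requirement above is easiest to see in miniature: the single most common optimization is making a filtered query seek via an index instead of scanning the full table. A stdlib sketch using SQLite's query planner (illustrative only; warehouse engines such as Databricks or BigQuery have their own optimizers and plan-inspection tools):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filings (reg_no TEXT, year INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO filings VALUES (?, ?, ?)",
    [(f"R{i:06d}", 2000 + i % 25, "active") for i in range(10_000)],
)

# Without an index, filtering on `year` forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM filings WHERE year = 2024").fetchall()
print(plan_before)  # plan detail mentions a SCAN of the table

# After adding an index, the planner can seek directly to matching rows.
conn.execute("CREATE INDEX idx_filings_year ON filings(year)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM filings WHERE year = 2024").fetchall()
print(plan_after)   # plan detail now references idx_filings_year
```

Reading the planner's output before and after a change, rather than guessing, is the habit the requirement is really asking for; the same discipline applies to `EXPLAIN` in PostgreSQL or Spark's `.explain()`.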


      Method of Application


    • Send your application

