  • Posted: Nov 17, 2025
    Deadline: Dec 18, 2025
  • Kifiya is an AI-powered ecosystem technology company building intelligent infrastructures that expand access to finance and markets for underserved communities. For more than a decade, we have applied data, digital platforms, and financial innovation to solve market failures and enable economic participation for micro, small, and medium enterprises (MSMEs...

    Head of AI & Platform Engineering

    About the Role

    • The Head of AI & Platform Engineering will lead the design, development, and scalability of Kifiya’s AI and data platforms, ensuring seamless integration of AI/ML capabilities, production-grade systems, and automated infrastructure.
    • The role oversees two specialized teams: the Platform Engineering Team, responsible for infrastructure, automation, DevOps, and scalable data systems, and the AI/ML Engineering Team, responsible for AI model deployment, MLOps pipelines, and real-time intelligent systems. Its mission is to establish a high-performing, automated, and scalable AI platform ecosystem that drives business growth, operational resilience, and innovation across IDD and the wider enterprise.

    What You’ll Do

    • Define and execute the AI & Platform Engineering strategy aligned with IDD’s and CDO’s objectives.
    • Build and lead a high-performing dual-team structure, fostering collaboration between Platform Engineers and AI/ML Engineers.
    • Translate business goals into scalable technical architectures and actionable engineering roadmaps.
    • Serve as a bridge between Data Science, Data Engineering, and Credit Risk streams to ensure seamless operationalization of analytics and models.
    • Lead the development of cloud-native, containerized, and automated platforms (e.g., AWS, Kubernetes, EKS, Terraform, CI/CD pipelines).
    • Drive the modernization of data and compute infrastructure to support advanced analytics, ML workloads, and large-scale data pipelines.
    • Oversee platform reliability, performance, monitoring, and cost optimization.
    • Ensure security, compliance, and governance are embedded into platform design and operations.
    • Oversee the end-to-end AI/ML engineering lifecycle, from model packaging and deployment to monitoring, retraining, and scaling.
    • Implement robust MLOps frameworks for model versioning, reproducibility, and real-time inference (an illustrative sketch follows this list).
    • Collaborate with Data Science teams to transition prototypes into production-grade intelligent systems.
    • Drive automation of model retraining, performance tracking, and A/B testing (Champion–Challenger frameworks).
    • Partner with Solutions Architecture and Data Engineering teams to ensure seamless interoperability between systems and tools.
    • Design modular, API-driven architectures for model serving, feature stores, and AI services.
    • Evaluate emerging tools and technologies to continuously evolve the AI and data platform stack.
    • Define engineering standards, policies, and documentation practices for AI and platform functions.
    • Promote DevSecOps, MLOps, and DataOps best practices across the IDD ecosystem.
    • Ensure systems comply with enterprise data governance, security, and privacy frameworks.
    • Work closely with the CDO, Chief of IDD, and departmental leads to align infrastructure capabilities with business needs.
    • Provide technical advisory support to Data Science, Analytics, and Risk teams for scalable solution design.
    • Drive collaboration with IT, InfoSec, and Cloud Infrastructure teams to ensure alignment on enterprise standards.
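
    The MLOps responsibilities above (model versioning, reproducibility, automated deployment) are commonly backed by a model registry such as MLflow, which the requirements below list as relevant hands-on experience. The following is a minimal illustrative sketch only, not Kifiya's actual setup; the tracking URI, experiment name, and registered model name are hypothetical placeholders, and it assumes a recent MLflow 2.x installation.

```python
# Minimal sketch: registry-backed model versioning with MLflow.
# Tracking URI, experiment, and model names are hypothetical placeholders.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical
mlflow.set_experiment("credit-scoring")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1_000).fit(X, y)
    mlflow.log_param("max_iter", 1_000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model creates a new immutable version in the registry,
    # which downstream deployment pipelines can promote and serve.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="credit_scoring_model",  # hypothetical name
    )
```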

    What You’ll Bring

    • Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI/ML, or related field.
    • 10+ years of experience in software engineering, data platform management, or AI/ML engineering roles, with at least 5 years in leadership.
    • Proven experience building AI platforms, MLOps environments, or cloud-based data ecosystems.
    • Hands-on experience with Kubernetes (EKS/GKE), CI/CD, Spark, MLFlow, Airflow, Kafka, or equivalent tools.
    • Deep expertise in cloud platforms (AWS, Azure, GCP), Kubernetes, Docker, and infrastructure as code (Terraform, CloudFormation).
    • Advanced understanding of AI/ML systems, model deployment pipelines, feature stores, and real-time APIs.
    • Excellent understanding of DevOps, MLOps, and automation frameworks.
    • Strong architectural mindset with the ability to balance innovation and operational excellence.
    • Exceptional communication and stakeholder management skills.
    • Familiarity with modern data stacks (e.g., Snowflake, Databricks, StarRocks, Presto, ClickHouse) is a strong advantage.
    • Experience in financial services or fintech environments preferred.

    Deadline for submission: 18 December 2025.


    Head of Data Engineering

    What You’ll Do

    • Define and execute the Data Engineering strategy aligned with IDD and enterprise data goals.
    • Lead and mentor high-performing teams of Data Engineers and BI Developers to deliver enterprise-grade solutions.
    • Collaborate closely with the CDO, Chief of IDD, and department leads to ensure data infrastructure supports business and analytical requirements.
    • Champion the vision of data as a product, enabling reusable, governed, and high-quality data assets.
    • Design and oversee scalable, secure, and automated data architectures supporting both batch and real-time processing.
    • Manage and optimize data pipelines, data lakes, warehouses, and streaming systems (e.g., Presto, StarRocks, Snowflake, ClickHouse).
    • Partner with Platform Engineering and AI teams to integrate data infrastructure with MLOps, API services, and advanced analytics.
    • Ensure data lineage, versioning, and cataloguing through integration with governance and metadata systems.
    • Lead the development of data ingestion frameworks from multiple internal and external sources (APIs, databases, third-party feeds).
    • Implement ETL/ELT pipelines that are efficient, reliable, and easily scalable (an illustrative orchestration sketch follows this list).
    • Work with the Data Governance team to ensure data quality, integrity, and standardization across all domains.
    • Introduce automation and testing practices to minimize manual data handling and improve consistency.
    • Oversee BI and data visualization development, ensuring timely and accurate delivery of dashboards, reports, and insights.
    • Partner with business and risk stakeholders to design data models and metrics frameworks aligned to KPIs.
    • Support self-service analytics capabilities through governed data layers and tools.
    • Drive continuous improvement of BI performance, usability, and scalability.
    • Collaborate with the CDO and Head of Data Governance to ensure compliance with data security, privacy, and protection regulations (e.g., POPIA, GDPR).
    • Enforce access control, data retention, and audit standards.
    • Ensure all data processes are aligned with enterprise governance and audit requirements.
    • Partner with AI & Platform Engineering, Credit Risk, Data Science, and Research & Analytics teams to meet data availability and performance requirements.
    • Act as a key contributor in architecture forums, technology steering committees, and innovation sessions.
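
    Ingestion and ELT responsibilities of this kind are frequently orchestrated with Airflow, which the requirements below list as expected expertise. The sketch below is illustrative only and assumes a recent Airflow 2.x environment; the DAG id, task bodies, and schedule are hypothetical placeholders rather than an actual Kifiya pipeline.

```python
# Minimal Airflow DAG sketching a daily ELT flow: extract from a source,
# land raw data, then transform into a curated/reporting layer.
# All ids, paths, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull records from a source API or database.
    print("extracting source records")


def load_raw(**context):
    # Placeholder: write raw records to the landing zone (e.g., a data lake).
    print("loading raw data")


def transform(**context):
    # Placeholder: build curated/reporting tables from the raw layer.
    print("transforming into curated layer")


with DAG(
    dag_id="daily_elt_example",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load_raw", python_callable=load_raw)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_load >> t_transform
```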

    What You’ll Bring

    • Bachelor’s or Master’s degree in Computer Science, Information Systems, or Data Engineering.
    • Minimum 8–10 years of experience in data engineering or analytics, with at least 5 years in a leadership capacity.
    • Proven leadership in managing data engineering and BI teams within a complex, data-driven organization.
    • Advanced knowledge of cloud data architectures (AWS, Azure, or GCP).
    • Expertise in SQL, Python, Spark, Airflow, Kafka, and modern data tools such as Snowflake, Databricks, or StarRocks.
    • Strong understanding of data warehousing, lakehouse architectures, and ELT/ETL patterns.
    • Experience in BI and visualization tools (Power BI, Tableau, Metabase, Looker).
    • Excellent communication, project management, and stakeholder engagement skills.
    • Solid understanding of data governance, metadata, and data quality frameworks.
    • Exposure to financial services, fintech, or regulated environments is highly advantageous.


    Head of Data Governance

    About the Role

    • The Head of Data Governance will lead the establishment, implementation, and continuous improvement of Kifiya’s data governance framework to ensure that data across all business functions is accurate, consistent, secure, and compliant. This role balances strategic oversight with operational enablement and plays a pivotal part in enabling intelligent, data-driven decisioning by driving policies, standards, and controls that promote trusted data across the enterprise.

    What You’ll Do

    • Develop, implement, and maintain the enterprise-wide Data Governance Framework, aligned with organizational strategy and regulatory requirements.
    • Define and enforce data ownership, stewardship, and accountability structures across all data domains.
    • Establish clear data policies, standards, and operating procedures for data quality, metadata, lineage, and classification.
    • Oversee data quality programs to ensure accuracy, completeness, timeliness, and consistency of data across systems.
    • Work closely with compliance, risk, and legal teams to ensure adherence to data protection and privacy regulations (e.g., POPIA, GDPR).
    • Drive remediation initiatives for data-related risks and ensure consistent application of governance standards.
    • Lead the adoption of tools and practices for metadata management, data cataloguing, and data lineage tracking.
    • Partner with data engineering and architecture teams to integrate governance into data pipelines and lifecycle management.
    • Support the development and management of Master and Reference Data to ensure consistency across business domains.
    • Establish and lead a community of Data Stewards across business and technical units.
    • Define roles and responsibilities for data owners, custodians, and users to ensure accountability.
    • Provide guidance and training to embed governance practices into everyday operations.
    • Partner with data engineering, data science, analytics, and credit risk teams to ensure governance principles are integrated throughout the IDD value chain.
    • Serve as the primary governance liaison between IDD and other business units to ensure alignment on data priorities and standards.
    • Report regularly to the CDO on governance maturity, data risks, and improvement initiatives.
    • Define and track Key Data Governance Metrics (DG KPIs) such as data quality scores, policy adoption rates, and issue resolution trends (an illustrative sketch follows this list).
    • Maintain a governance dashboard to support management and regulatory reporting.
    • Support audit and assurance processes related to data management.
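
    Data quality scores of the kind referenced in the KPI bullet above are often computed as simple per-column completeness and validity ratios feeding a governance dashboard. The following is a generic, illustrative pandas sketch; the column names, sample values, and the single business rule are hypothetical and not drawn from the posting.

```python
# Minimal sketch: per-column data quality KPIs (completeness and validity)
# of the sort a governance dashboard might track. All data is hypothetical.
import pandas as pd

df = pd.DataFrame(
    {
        "customer_id": [1, 2, 3, 4, None],
        "loan_amount": [500.0, 250.0, -10.0, 1200.0, 300.0],
        "country_code": ["ET", "KE", "ET", None, "ET"],
    }
)

completeness = df.notna().mean()  # share of non-null values per column
validity = {"loan_amount": (df["loan_amount"] > 0).mean()}  # example business rule

report = pd.DataFrame({"completeness": completeness}).assign(
    validity=pd.Series(validity)
)
print(report.round(2))
```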

    What You’ll Bring

    • Bachelor’s degree in Information Management, Data Science, Computer Science, or a related field (Master’s preferred).
    • Minimum of 8–10 years of experience in data governance, data management, or information architecture roles.
    • Proven experience in establishing or scaling data governance frameworks within complex organizations.
    • Familiarity with data governance tools (e.g., Collibra, Alation, Apache Atlas, Informatica).
    • Experience working within financial services, fintech, or regulated industries is highly advantageous.
    • Strong understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks).
    • In-depth knowledge of data quality management, metadata management, and data privacy regulations.
    • Excellent stakeholder management, communication, and influencing skills.
    • Ability to work cross-functionally with technical and business stakeholders.
    • Strong analytical and strategic thinking capabilities with a focus on operational excellence.


    AI Engineering Manager

    About the Role

    • The AI Engineering Manager leads the engineering team responsible for designing, building, and operationalizing scalable AI/ML systems within the Intelligent Data Decisioning (IDD) ecosystem. Combining hands-on machine learning systems expertise with technical leadership, the role ensures that models developed by Data Science teams are production-ready, governed, monitored, and delivering measurable business value.
    • The incumbent owns the MLOps architecture, automation frameworks, and AI service pipelines that enable real-time intelligent decisioning across credit, analytics, and risk domains, working closely with Platform Engineering, Data Engineering, and Solutions Architecture teams to ensure seamless model deployment, performance optimization, and compliance across the full AI lifecycle.

    What You’ll Do

    • Lead, mentor, and develop a team of AI/ML Engineers and MLOps specialists.
    • Translate IDD’s AI strategy into executable engineering roadmaps and measurable outcomes.
    • Manage sprint planning, performance tracking, and delivery of production-grade AI components.
    • Foster a culture of innovation, collaboration, and technical excellence.
    • Partner with Data Science leadership to align modeling initiatives with infrastructure capabilities.
    • Oversee the end-to-end AI engineering lifecycle: model packaging, deployment, monitoring, and retraining.
    • Design and implement MLOps pipelines for reproducibility, version control, model registry, and CI/CD integration.
    • Enable real-time inference and low-latency model serving via APIs and streaming services.
    • Implement Champion–Challenger frameworks, A/B testing, and automated model retraining workflows (an illustrative sketch follows this list).
    • Collaborate with Data Scientists to transition prototypes into reliable, scalable production systems.
    • Architect modular, containerized AI microservices integrated with the broader IDD data platform.
    • Ensure seamless interoperability between AI systems, feature stores, and cloud data services (AWS S3, Databricks, EMR, Aurora).
    • Partner with Platform Engineering to ensure compute, storage, and networking configurations support high-throughput model workloads.
    • Evaluate emerging AI frameworks, serving technologies, and vector databases to enhance the stack.
    • Establish model observability frameworks for drift detection, bias monitoring, and performance degradation alerts.
    • Implement governance standards for explainability, traceability, and ethical AI practices.
    • Maintain documentation, lineage tracking, and audit readiness for all production models.
    • Ensure compliance with enterprise data privacy and regulatory requirements.
    • Work with Platform, Data, and Credit Risk teams to integrate model outcomes into decision engines and operational systems.
    • Partner with Business and Analytics teams to ensure AI systems deliver measurable value and insights.
    • Liaise with InfoSec to align AI workloads with data protection and infrastructure policies.
    • Contribute to the evolution of the IDD AI platform roadmap in alignment with enterprise strategy.
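
    Champion–Challenger testing, referenced in the responsibilities above, is typically implemented as deterministic traffic splitting so that a small share of scoring requests is served by the challenger while outcomes are logged for comparison. The sketch below is illustrative only; the split ratio and the stub models are hypothetical, standing in for registry-loaded model versions.

```python
# Minimal sketch of deterministic Champion-Challenger traffic splitting.
# Hashing the request key keeps variant assignment stable across retries.
import hashlib

CHALLENGER_SHARE = 0.10  # hypothetical: 10% of traffic goes to the challenger

# Stub models standing in for registry-loaded champion/challenger versions.
champion_model = lambda features: 0.42
challenger_model = lambda features: 0.40


def assign_variant(request_id: str) -> str:
    """Deterministically map a request to 'champion' or 'challenger'."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "challenger" if bucket < CHALLENGER_SHARE else "champion"


def score(request_id: str, features: dict) -> dict:
    variant = assign_variant(request_id)
    model = challenger_model if variant == "challenger" else champion_model
    # In production the variant and outcome would be logged for later comparison.
    return {"variant": variant, "prediction": model(features)}


print(score("req-12345", {"income": 1000}))
```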

    What You’ll Bring

    • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or Artificial Intelligence.
    • 7–10+ years of experience in AI/ML engineering, data platform development, or related roles.
    • 2–4 years in a team lead or managerial position overseeing AI/ML deployment.
    • Proven experience implementing MLOps and production-grade AI pipelines at scale.
    • Hands-on experience with machine learning systems engineering and AI platform design.
    • Experience with MLOps frameworks such as MLflow, Kubeflow, or SageMaker.
    • Proficiency in Python, SQL, and ML libraries (TensorFlow, PyTorch, Scikit-Learn, XGBoost).
    • Strong knowledge of data pipelines and workflow orchestration (Airflow, Prefect, or similar).
    • Experience deploying models on AWS cloud environments (EKS, Lambda, API Gateway, ECS).
    • Familiarity with feature stores, model registries, and model-as-a-service APIs.
    • Strong architectural mindset with experience in scaling real-time and batch inference systems.
    • Excellent communication, stakeholder engagement, and leadership skills.
    • Experience in fintech, banking, or other data-driven environments is preferred.
    • Familiarity with credit risk modeling, decision engines, or intelligent automation is a plus.


    Platform Engineering Manager

    About the Role

    • The Platform Engineering Manager leads the design, delivery, and reliability of Kifiya’s data and AI platform infrastructure, overseeing the engineering team responsible for automating, scaling, and securing the hybrid data platform that underpins analytics, AI, and intelligent decisioning capabilities.
    • Acting as a hands-on technical manager, the role guides engineers in cloud infrastructure, DevOps, and platform reliability while ensuring strategic alignment with business and data initiatives. It works closely with Data Engineering, AI/ML, and Solutions Architecture teams to enable scalable, resilient, and compliant platform operations.

    What You’ll Do

    • Lead, mentor, and grow a high-performing team of platform and DevOps engineers.
    • Translate IDD’s platform strategy into clear technical roadmaps and execution plans.
    • Manage workload planning, sprint priorities, and performance outcomes for the team.
    • Collaborate with Data Science, AI/ML, and Engineering leads to align platform capabilities with business objectives.
    • Drive a culture of engineering excellence, automation, and continuous improvement.
    • Oversee the design and implementation of scalable, resilient cloud and on-prem infrastructure (AWS, Kubernetes, EKS, Terraform).
    • Ensure high availability, reliability, and security across compute, storage, and data systems.
    • Define standards for Infrastructure-as-Code (IaC), container orchestration, and deployment pipelines.
    • Lead modernization initiatives, including cloud migration, containerization, and data platform upgrades.
    • Manage capacity planning, resource utilization, and cost optimization initiatives.
    • Implement and maintain robust CI/CD frameworks supporting data and ML product delivery.
    • Streamline infrastructure provisioning and deployment workflows across environments.
    • Oversee monitoring, observability, and alerting to ensure platform health and SLA compliance (an illustrative sketch follows this list).
    • Champion DevSecOps practices to embed security, reliability, and compliance into the delivery lifecycle.
    • Ensure adherence to enterprise cloud security and compliance standards (IAM, VPC, encryption, audit).
    • Partner with InfoSec and Data Governance teams to maintain regulatory and operational compliance.
    • Manage platform access, data encryption, and identity control policies.
    • Oversee risk assessments and implement mitigation plans for critical systems.
    • Partner with Data Engineering, Data Science, and AI/ML teams to operationalize workloads at scale.
    • Work with the PMO and Solutions Architect to align platform initiatives to project timelines and budgets.
    • Act as a technical liaison to IT infrastructure and security functions.
    • Provide status updates, reports, and recommendations to senior leadership.
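
    Observability and SLA monitoring of the kind described above are commonly exposed as service metrics scraped by Prometheus and visualized in Grafana, both listed in the requirements below. The sketch is illustrative only; the metric names and port are hypothetical placeholders, and the workload is simulated.

```python
# Minimal sketch: exposing request latency and status counters for Prometheus.
# Metric names and the port are hypothetical placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("platform_requests_total", "Total requests", ["status"])
LATENCY = Histogram("platform_request_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(9100)  # metrics available at http://localhost:9100/metrics
    while True:
        handle_request()
```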

    What You’ll Bring

    • Bachelor’s or Master’s in Computer Science, Software Engineering, or related field.
    • 7–10+ years of experience in Platform, Cloud, or DevOps engineering, including 2–4 years in a leadership or managerial role.
    • Proven experience building and managing scalable platform infrastructure in AWS or hybrid environments.
    • Demonstrated ability to lead engineering teams in high-availability, data-intensive environments.
    • Deep expertise in cloud computing (AWS preferred), including EC2, EKS, S3, Aurora, IAM, and CloudWatch.
    • Proficiency in Infrastructure-as-Code (Terraform, CloudFormation) and Kubernetes orchestration.
    • Solid understanding of DevOps, CI/CD, and automation frameworks (GitHub Actions, Jenkins, ArgoCD).
    • Familiarity with data and AI platform components (Databricks, EMR, Airflow, Kafka, Delta Lake).
    • Experience with observability and monitoring tools (Prometheus, Grafana, ELK).
    • Excellent communication and stakeholder management across business and technical teams.
    • Strategic thinker with a balance of execution focus and long-term architectural vision.
    • Experience working in fintech, banking, or data-driven organizations preferred.


    Senior AI Engineer

    About the Role

    • The Senior AI Engineer is responsible for designing, developing, and operationalizing machine learning models and intelligent systems that power IDD’s decisioning capabilities.
    • This role bridges the gap between Data Science experimentation and production-grade AI systems, ensuring models are deployed, monitored, and scaled effectively within IDD’s cloud and on-prem environments.
    • The engineer will focus on building automation pipelines (MLOps), model APIs, and real-time decisioning frameworks that directly support business-critical use cases in risk, credit, and analytics. The role suits someone who thrives at the intersection of AI, software engineering, and data infrastructure and has a passion for building robust and scalable systems.

    What You’ll Do

    • Build and deploy machine learning models into production using modern frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost).
    • Collaborate with Data Scientists to transform prototypes into efficient, maintainable, and scalable applications.
    • Develop APIs and microservices for model inference and decision automation (an illustrative sketch follows this list).
    • Optimize model performance and resource utilization for low-latency inference.
    • Implement versioning, packaging, and containerization standards for ML models.
    • Develop and maintain CI/CD pipelines for model deployment, retraining, and monitoring.
    • Use MLOps tools such as MLflow, Kubeflow, SageMaker, or Airflow for automation.
    • Implement model tracking, performance dashboards, and automated drift detection.
    • Build and maintain feature stores and model registries integrated with the IDD platform.
    • Integrate AI models into data pipelines, APIs, and decision engines (batch and real-time).
    • Collaborate with Platform and Data Engineering teams on infrastructure design (AWS, Databricks, EMR, Aurora, S3).
    • Develop robust data ingestion, preprocessing, and transformation pipelines for ML workloads.
    • Support event-driven and streaming model architectures using Kafka or Kinesis.
    • Build observability into all AI components; monitor drift, bias, and performance degradation.
    • Ensure compliance with governance, explainability, and data privacy standards.
    • Maintain documentation, model lineage, and reproducibility for all deployed systems.
    • Work closely with Data Scientists, Engineers, and Product teams to operationalize AI solutions.
    • Contribute to reusable components, internal libraries, and best-practice templates.
    • Participate in code reviews, design sessions, and architecture discussions.
    • Mentor junior AI Engineers and contribute to a culture of technical excellence.
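
    Model inference APIs of the kind described above are often built as lightweight FastAPI microservices wrapped around a model artifact; FastAPI is listed among the expected skills below. The sketch is illustrative only: the service name, endpoint path, feature schema, and scoring logic are hypothetical placeholders, with a stub standing in for a registry-loaded model.

```python
# Minimal sketch of a model-serving microservice with FastAPI.
# The "model" is a stub; in practice it would be loaded from a model registry.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")  # hypothetical service name


class Features(BaseModel):
    income: float
    loan_amount: float
    tenure_months: int


def predict_stub(features: Features) -> float:
    # Placeholder scoring logic standing in for a real trained model.
    return min(1.0, features.loan_amount / max(features.income, 1.0) * 0.1)


@app.post("/v1/score")
def score(features: Features) -> dict:
    return {"probability_of_default": predict_stub(features)}

# Run locally with: uvicorn scoring_service:app --reload  (hypothetical filename)
```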

    What You’ll Bring

    • Bachelor’s or Master’s in Computer Science, Artificial Intelligence, or a related field.
    • 5–8+ years of experience in AI/ML engineering, software engineering, or data science.
    • Proven experience deploying and maintaining ML models in production environments.
    • Demonstrated understanding of end-to-end model lifecycle (training → serving → monitoring → retraining).
    • Strong proficiency in Python and ML libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost).
    • Practical experience with MLOps tools (MLflow, SageMaker, Kubeflow, Airflow).
    • Knowledge of CI/CD, Docker, Kubernetes, and API development (FastAPI, Flask).
    • Experience working with AWS cloud (EKS, Lambda, S3, Aurora, CloudWatch).
    • Familiarity with data engineering principles and ETL workflows.
    • Solid understanding of model observability, monitoring, and drift detection.
    • Analytical, detail-oriented, and capable of balancing research with production delivery.
    • Experience working in cross-functional engineering teams using Agile/Scrum methodologies.
    • Experience with data-driven fintech, credit risk, or analytics-based decision systems is advantageous.


    Senior Data Scientist

    About the Role

    • The Senior Data Scientist is a key member of the Data Science department, responsible for developing advanced models, driving innovation in predictive analytics, and delivering insights that shape business strategy.
    • This role requires a strong mix of technical expertise, business acumen, and leadership to design and implement high-impact data science solutions in areas such as credit risk, customer analytics, fraud detection, and portfolio optimization.
    • The Senior Data Scientist will mentor junior team members, collaborate with cross-functional teams (Data Engineering, Credit Risk, AI & Platform Engineering, and Research & Analytics), and ensure that data science initiatives are production-ready, scalable, and aligned with business objectives.

    What You’ll Do

    • Design, build, and validate predictive models (classification, regression, clustering, etc.) to support credit risk, customer analytics, and decisioning (an illustrative sketch follows this list).
    • Partner with Data Engineering to design and refine feature-engineered datasets for scalable model development.
    • Work with AI & Platform Engineering to deploy, monitor, and maintain models in production environments.
    • Lead deep-dive analyses to identify trends, improve decisioning strategies, and unlock business opportunities.
    • Collaborate with the Credit Risk team to ensure models meet regulatory, compliance, and governance standards.
    • Guide junior data scientists and analysts, fostering best practices in model development, testing, and documentation.
    • Explore new machine learning methods, AI frameworks, and analytical techniques to enhance departmental capabilities.
    • Translate complex data-driven insights into clear, actionable recommendations for senior leadership and business stakeholders.
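
    Predictive modelling of the kind described above typically starts from a scikit-learn workflow with a held-out validation split and an AUC-style metric. The sketch below is illustrative only, run on synthetic data; the algorithm choice, class balance, and split are hypothetical examples rather than anything specified in the posting.

```python
# Minimal sketch of a binary default-risk classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a feature-engineered credit dataset (imbalanced classes).
X, y = make_classification(n_samples=5_000, n_features=20, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"validation AUC: {roc_auc_score(y_test, scores):.3f}")
```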

    What You’ll Bring

    • Master’s or PhD in Data Science, Statistics, Mathematics, Computer Science, or related field.
    • 5+ years of experience in data science, with a proven track record of delivering models into production.
    • Strong programming skills in Python (preferred) or R, with expertise in libraries such as scikit-learn, TensorFlow, or PyTorch.
    • Proficiency in SQL and experience working with large datasets.
    • Hands-on experience with model lifecycle management, MLOps, or deployment frameworks.
    • Strong background in statistical modeling, machine learning, and data visualization.
    • Excellent communication skills, with the ability to present complex concepts to non-technical audiences.
    • Experience with Agile delivery and tools such as Jira and Confluence.


    Senior Platform Engineer

    About the Role

    • The Senior Platform Engineer is responsible for building and maintaining the foundational data and AI infrastructure that powers IDD’s intelligent decisioning systems. This role ensures the scalability, reliability, and security of the platform, enabling seamless deployment of analytics, tools, AI models, and data pipelines across the enterprise.
    • Working closely with Data Engineering, Data Science, and Solutions Architecture teams, the Senior Platform Engineer drives automation, cloud integration, and performance optimization across hybrid environments (AWS and on-prem).
    • The role combines hands-on technical depth with architectural thinking to operationalize the platform for speed, governance, and innovation.

    What You’ll Do

    • Design, deploy, and optimize scalable cloud-native and on-premise platform components (AWS, Kubernetes, EKS, Terraform, CI/CD).
    • Automate environment provisioning, configuration, and infrastructure updates using Infrastructure-as-Code (IaC) principles.
    • Manage platform performance, cost efficiency, reliability, and fault tolerance.
    • Support integration of data pipelines, APIs, and AI workloads into the enterprise platform.
    • Implement strong observability standards — logging, monitoring, and alerting — to ensure system health and uptime.
    • Collaborate with Solutions Architecture to define modular, API-first architectures that integrate data services, ML pipelines, and decisioning systems.
    • Contribute to the evolution of the IDD technical stack (Databricks, Aurora, EMR, S3, Delta Lake, Kafka, Airflow, etc.).
    • Develop and maintain CI/CD pipelines for data and ML systems (GitHub Actions, Jenkins, or GitLab CI).
    • Implement configuration management and container orchestration (Docker, Helm, Kubernetes).
    • Enforce cloud security standards, IAM policies, and encryption protocols (an illustrative audit sketch follows this list).
    • Ensure compliance with enterprise data governance and privacy frameworks.
    • Partner with Data Engineers, Scientists, and Architects to operationalize models and data pipelines.
    • Provide mentorship to junior engineers on platform automation and reliability engineering best practices.
    • Engage cross-functionally with IT, Cloud, and Risk teams to align platform capabilities to business priorities.
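
    Enforcing cloud security standards such as at-rest encryption (see the bullet above) is often backed by small audit scripts run in CI or on a schedule; boto3 is a common choice given the Python scripting and AWS skills listed below. The sketch is illustrative only: it reads whatever buckets the caller's AWS credentials expose and contains nothing specific to Kifiya's accounts.

```python
# Minimal sketch: flag S3 buckets without a default server-side encryption
# configuration. Requires AWS credentials in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption configured")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: MISSING default encryption")
        else:
            raise
```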

    What You’ll Bring

    • Bachelor’s or Master’s in Computer Science, Software Engineering, or related field.
    • 5–8+ years of experience in Platform, Cloud, or DevOps Engineering (preferably in fintech, banking, or large-scale data ecosystems).
    • Proven track record of designing and managing scalable platform environments in AWS or hybrid settings.
    • Strong expertise in AWS (S3, EC2, EKS, Aurora, CloudWatch, IAM), Kubernetes, Docker, Terraform, and CI/CD pipelines.
    • Solid understanding of data lakehouse and analytics tools — Spark, Databricks, EMR, Airflow.
    • Scripting ability in Python, Bash, or Go.
    • Working knowledge of networking, security, and cost optimization in hybrid cloud environments.
    • Familiarity with MLOps and DataOps principles.
    • Excellent collaboration and problem-solving skills.


    Technical Product Owner

    What You’ll Do

    • Define, prioritize, and maintain the product backlog for IDD teams to maximize value delivery.
    • Translate business goals and technical needs into clear user stories, acceptance criteria, and deliverables.
    • Work with Scrum/Kanban teams to ensure sprint goals are achieved and dependencies are managed.
    • Collaborate with stakeholders across business, data science, engineering, and risk.
    • Ensure solutions align with data architecture, AI/ML standards, and platform best practices.
    • Measure and communicate the value and impact of delivered features.
    • Support retrospectives and drive continuous improvement in Agile practices.

    What You’ll Bring

    • Bachelor’s degree in Computer Science, Engineering, Business, or related field.
    • 3–5 years of experience as a Product Owner, Business Analyst, or similar technical role.
    • Strong knowledge of Agile methodologies (Scrum/Kanban) and backlog management.
    • Experience with cross-functional technical teams (data engineering, data science, AI/ML, or platform engineering).
    • Ability to write clear user stories, define acceptance criteria, and manage trade-offs.
    • Familiarity with Agile tools (Jira, Confluence, Azure DevOps).
    • Excellent communication and stakeholder management skills.
    • Experience in fintech, financial services, or credit risk.
    • Knowledge of data infrastructure, APIs, cloud platforms, or ML/AI lifecycle tools.
    • Agile certifications (CSPO, SAFe PO/PM, or equivalent).


    Data Scientist

    What You’ll Do

    • Model Development: Build, test, and validate predictive models for credit risk, customer analytics, and efficiency.
    • Data Preparation: Collaborate with Data Engineering to clean, structure, and transform raw data into usable datasets.
    • Feature Engineering: Design new features to improve model accuracy and business impact (an illustrative sketch follows this list).
    • Analysis & Insights: Conduct exploratory data analysis to uncover risks, opportunities, and trends.
    • Collaboration: Work with cross-functional teams to integrate models into production and decision-making.
    • Documentation: Maintain records of methodologies, experiments, and results for transparency and reproducibility.
    • Continuous Learning: Stay current with new ML methods, tools, and best practices.
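
    Feature engineering work of the kind listed above often starts from simple, documented transformations of raw transactional data; pandas is listed among the expected libraries below. The sketch is illustrative only, and the column names, sample records, and derived features are hypothetical examples.

```python
# Minimal sketch: deriving per-customer features from raw repayment records.
# Column names, values, and derived features are hypothetical examples.
import pandas as pd

repayments = pd.DataFrame(
    {
        "customer_id": [1, 1, 2, 2, 2],
        "amount_due": [100, 100, 250, 250, 250],
        "amount_paid": [100, 80, 250, 250, 0],
        "days_late": [0, 5, 0, 2, 30],
    }
)

features = repayments.groupby("customer_id").agg(
    amount_paid=("amount_paid", "sum"),
    total_due=("amount_due", "sum"),
    max_days_late=("days_late", "max"),
    late_payment_rate=("days_late", lambda s: (s > 0).mean()),
)
features["repayment_ratio"] = features["amount_paid"] / features["total_due"]
print(features.drop(columns=["amount_paid", "total_due"]))
```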

    What You’ll Bring

    • Bachelor’s or Master’s in Data Science, Statistics, Mathematics, Computer Science, or related field.
    • 2–4 years’ experience in data science or advanced analytics.
    • Proficiency in Python or R, with libraries such as pandas, scikit-learn, TensorFlow.
    • SQL expertise and experience working with structured/unstructured data.
    • Solid grounding in statistical modeling, hypothesis testing, and machine learning.
    • Strong communication skills for both technical and non-technical stakeholders.
    • Familiarity with Agile methods and tools (Jira, Confluence).

    Preferred:

    • Experience in financial services, fintech, or credit risk modeling.
    • Knowledge of MLOps frameworks and model deployment practices.
    • Exposure to cloud platforms (AWS, Azure, GCP) and big data tools (Spark, Databricks, Hadoop).
    • Experience with visualization tools (Power BI, Tableau, etc.).


    Product Manager

    What You’ll Do

    • Define, prioritize, and maintain the product backlog for IDD teams to maximize value delivery.
    • Translate business goals and technical needs into clear user stories, acceptance criteria, and deliverables.
    • Work with Scrum/Kanban teams to ensure sprint goals are achieved and dependencies are managed.
    • Collaborate with stakeholders across business, data science, engineering, and risk.
    • Ensure solutions align with data architecture, AI/ML standards, and platform best practices.
    • Measure and communicate the value and impact of delivered features.
    • Support retrospectives and drive continuous improvement in Agile practices.

    What You’ll Bring

    • Bachelor’s degree in Computer Science, Engineering, Business, or related field.
    • 3–5 years of experience as a Product Owner, Business Analyst, or similar technical role.
    • Strong knowledge of Agile methodologies (Scrum/Kanban) and backlog management.
    • Experience with cross-functional technical teams (data engineering, data science, AI/ML, or platform engineering).
    • Ability to write clear user stories, define acceptance criteria, and manage trade-offs.
    • Familiarity with Agile tools (Jira, Confluence, Azure DevOps).
    • Excellent communication and stakeholder management skills.

    Preferred Skills

    • Experience in fintech, financial services, or credit risk.
    • Knowledge of data infrastructure, APIs, cloud platforms, or ML/AI lifecycle tools.
    • Agile certifications (CSPO, SAFe PO/PM, or equivalent).


    AI & Platform Engineer

    What You’ll Do

    • Design, implement, and optimize AI/ML infrastructure (model deployment pipelines, feature stores, monitoring).
    • Build and maintain cloud-native, containerized, distributed systems for high-volume data and AI workloads.
    • Develop APIs, microservices, and integrations to embed AI-driven decisioning into core systems.
    • Ensure robust monitoring, logging, reliability, security, and scalability of models in production (an illustrative drift-check sketch follows this list).
    • Partner with Data Science, Data Engineering, and Credit Risk to accelerate AI solution delivery.
    • Evaluate emerging AI tools, frameworks, and platforms to strengthen the tech stack.
    • Implement best practices for compliance, explainability, and responsible AI.
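
    Monitoring models in production, mentioned above, frequently includes drift checks such as the Population Stability Index (PSI) comparing live score distributions against a training baseline. The sketch below is illustrative only; the synthetic distributions are placeholders, and the 0.1/0.25 thresholds are conventional rules of thumb rather than values from the posting.

```python
# Minimal sketch: Population Stability Index (PSI) for score-drift monitoring.
import numpy as np


def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live one."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid division by / log of zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 10_000)
live_scores = rng.beta(2.5, 5, 10_000)  # mildly shifted distribution

value = psi(baseline_scores, live_scores)
print(f"PSI = {value:.3f}  (>0.25 is commonly treated as significant drift)")
```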

    What You’ll Bring

    • Bachelor’s or Master’s in Computer Science, Engineering, or related field.
    • 3–5 years in AI/ML engineering, platform engineering, or software engineering with data-intensive systems.
    • Strong programming in Python, Java, or Scala; proficiency with TensorFlow, PyTorch, or Scikit-learn.
    • Hands-on with ML lifecycle tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
    • Expertise in cloud (AWS/Azure/GCP), Docker/Kubernetes, and CI/CD pipelines.
    • Experience with data infrastructure (Spark, Kafka, Databricks, or equivalent).
    • Knowledge of MLOps/DevOps and modern API design.
    • Strong problem-solving and cross-functional collaboration skills.
    • Experience in fintech, financial services, or credit risk.
    • Exposure to XAI, model monitoring, and regulatory compliance frameworks.
    • Familiarity with OLAP engines, data warehouses, and real-time decisioning platforms.

    Method of Application


  • Send your application

