
  • Posted: May 8, 2026
    Deadline: May 8, 2026
  • MTN Group Limited entered the telecommunications scene at the dawn of South Africa’s democracy, in 1994. In 1998, we began our expansion by acquiring licences in Rwanda, Uganda and Swaziland. Since then, we have continued to grow, with a view to bringing world-class telecommunications and digital services to markets across Africa and the Middle East. Through ou...

    Senior Manager - Artificial Intelligence Information Security

    Responsibilities

    The Senior Manager, Artificial Intelligence Information Security (AI COE), is responsible for the following key performance areas:

    AI COE (Security Pillar) & Governance

    • Establish the AI COE security mandate, operating model, and RACI across Group functions and OPCOs; embed BRAIN policy requirements into standards, procedures, and product gates.
    • Support the activities of the AI Steerco, ISF, AWC and TSGC; integrate with Group Risk, Legal, Privacy, and Procurement for policy, third‑party, and contract controls.
    • Define enterprise AI control objectives, assurance plans, and attestation mechanisms aligned to BRAIN and Group policies (in particular GISP).

    Enterprise AI Security Strategy & Architecture

    • Define and implement MTN’s AI security strategy aligned to Group cyber reference architecture and recognized industry principles (e.g., Zero Trust, cloud security frameworks, secure SDLC, IAM, and detection & response). 
    • Publish secure architecture standards for AI platforms, MLOps stacks, and model-serving patterns; embed security-by-design across the AI lifecycle (data, training, evaluation, deployment, operations) once approved by the AWC and TSGC.

    Generative AI & LLM Security (BRAIN Policy Enablement)

    • Operationalize BRAIN policy via guardrails: prompt security, input/output filtering, data loss prevention for prompts/outputs, access control & usage monitoring, and secure LLM architecture patterns for hosted and API models. 
    • Define approval workflows for model access, use cases, and sensitive data handling; implement usage analytics and model egress controls to prevent leakage. 

    Adversarial Machine Learning Defence & AI Red Teaming

    • Build MTN’s AI threat modelling methodology, in collaboration with GIS; institute adversarial robustness testing (poisoning, evasion, prompt injection, API exploitation) and AI red teaming exercises for models and applications, in collaboration with GIS Cyber-defense.
    • Define secure model deployment controls and post‑deployment behavioural drift monitoring for manipulation detection in collaboration with the CCOE.

    Secure MLOps & Data Security

    • Set Secure MLOps standards: CI/CD for models, integrated security testing, hardened model registry & artifact stores, signed/attested models, and pipeline authN/authZ, in collaboration with the S2 COE and main relevant suppliers.
    • Protect training/validation datasets; enforce patterns for secure data ingestion, sensitive data minimization, and prevention of training data leakage. 

    AI Security Monitoring & Incident Response

    • Integrate AI platforms, data pipelines, and model-serving endpoints with enterprise SIEM/SOC for continuous monitoring and anomaly detection. 
    • Extend cyber incident response playbooks to AI scenarios, in collaboration with Cyber-defense: containment of compromised models, forensic acquisition of model artifacts, and post‑incident model integrity verification. 

    Risk, Compliance, and Policy Alignment

    • Map AI security controls to BRAIN and Group policy; align with POPIA, GDPR, ISO/IEC 27001/27701, ISO/IEC 42001 (AI), NIST AI RMF, and applicable sectoral obligations in telco and fintech.
    • Establish Model Risk Management (MRM) with 1st/2nd line—including risk taxonomy, criticality tiers, control baselines, testing cadence, and assurance reporting to Group risk committees.
    • Oversee third‑party & cloud AI risk, in collaboration with Legal, Procurement and GIS-GRC: vendor due diligence, contract clauses (data, IP, security SLAs), and ongoing assurance.

    Platform & Product Enablement

    • Partner with Connectivity, Fintech (MoMo), and Infraco product lines—and with the GenAI adoption stream—to embed design‑time controls, privacy-by-design, and production guardrails into AI‑enabled services.
    • Define reference architectures and “secure patterns” for common use cases (RAG, copilots, fraud analytics, network optimization), in collaboration with GIS.

    People Leadership & Capability Uplift

    • Build and lead a high‑performing AI Security team within the AI COE; develop community of practice with GIS, OPCO CISOs and DPOs.
    • Launch training & awareness for engineers, data scientists, and product teams on BRAIN policy, secure GenAI usage, and adversarial ML, in collaboration with GIS OPCO Operations.

    Performance, KPIs & Reporting

    • Define and track KPIs: % AI use cases cleared by BRAIN gates, time-to-approve models, adversarial test coverage, model drift MTTR, policy exceptions, third‑party assurance completion, and reduction in AI security incidents.
    • Provide executive dashboards and Board/Exco reporting on AI risk posture and maturity. These KPIs will be reviewed periodically at the Steerco.

    Budget, Tooling & Vendor Management

    • Own budget for AI security tooling (e.g., secrets scanning for ML, model provenance/attestation, content safety, AI Security Posture Management), and manage vendors/partners via Group procurement.

    Qualifications

    Minimum Qualifications

    • Bachelor’s degree in Computer Science/Engineering/Mathematics
    • Honours degree advantageous
    • Relevant security certifications (e.g., CISSP, CCSP), plus data/AI credentials (e.g., cloud AI specialties, ML engineering) are advantageous.

    Experience

    • 5–10+ years across cybersecurity/platform security, with 3–5+ years securing AI/ML platforms, GenAI/LLM ecosystems, or data-intensive analytics at enterprise scale.
    • Demonstrable track record establishing security operating models/COEs and driving group-wide policy adoption in complex, multi‑country organizations (preferably telco/fintech).
    • Hands‑on exposure to cloud-native AI stacks (in particular Azure), MLOps toolchains, and embedding controls in agile product delivery.
    • Experience integrating AI systems into SOC/SIEM, designing AI incident response, and conducting AI red teaming and robustness testing. 

    Core Competencies and Skills

    • Security architecture for AI/ML
    • Threat modelling and adversarial ML
    • Secure MLOps
    • Data security & privacy
    • GenAI/LLM guardrails
    • Risk & compliance
    • Leadership and stakeholder management

    Method of Application

    Interested and qualified? Go to MTN on ehle.fa.em2.oraclecloud.com to apply.
