Qualifications
Job Requirements (Education, Experience and Competencies)
Education:
- Minimum of 3-year tertiary degree / diploma (specialisation in Communication/Commerce/Management/Human Resources/Behavioural Sciences/Digital Marketing/as appropriate)
- Relevant certification / accreditation / membership with professional body for Internal Business Communication, design and branding, employee engagement, etc. (advantageous)
Experience:
- Minimum of 5 years’ experience working in the communication, marketing or digital media environment
- Experience in digital content creation and management within an internal communication, marketing or digital environment
- Exposure to enterprise social network platforms and digital publishing tools
- Experience working with multimedia content and campaigns (advantageous)
- Worked across diverse cultures and geographies (advantageous)
- Experience working in a small to medium organisation
- Telecommunications, digital services, or technology industry experience
- Project management experience
Competencies:
- Functional Knowledge:
- Project Management
- Engagement Programme Design
- Branding & Communication
- Digital content development and copywriting
- Working knowledge of content management systems (CMS)
- Familiarity with digital platforms, enterprise social networks and analytics
- Strong editorial, storytelling and content planning skills
- Ability to manage multiple content streams and priorities without compromising quality
- AI literacy, including a working understanding of how AI tools can be applied ethically and effectively in internal communication and content development
- Practical experience in AI prompting to support content creation, summarisation, ideation and adaptation across channels
- Data literacy, with the ability to interpret communication, engagement and platform analytics to inform decisions and improve outcomes
- Experience analysing content performance, engagement trends and audience data to refine communication strategies
- Knowledge of change management principles and their application in internal communication and employee engagement initiatives
- Ability to design and deliver communication that supports behavioural change, adoption and mindset shifts
- Experience supporting communication for digital transformation, process change or organisational change initiatives
- Ability to adapt messaging for different platforms, audiences and stages of change
- Comfort working in data-driven, agile and technology-enabled environments
- Design management
- Media Platforms
- Global Working and Collaboration
- Organisational considerations
Skills
- Conceptual Thinking
- Problem Solving
- Improvement Driver
- Culture and Change Champion
- People Manager
- Relationship Manager
- Results Achiever
- Operationally Astute
- Research
- Information Processing
- Dealing with ambiguity and complexity
- Presentation Skills
- Communication Skills
- Judgement
- Conflict Management
- Project Management
- Risk Management
Behavioural Qualities
- Accountable
- Adaptable
- Agile
- Culturally aware
- Gets work done
- Innovative
- Inquisitive
Responsibilities
The Senior Manager Information Security AI COE is responsible for the following key performance areas:
AI COE (Security Pillar) & Governance
- Establish the AI COE security mandate, operating model, and RACI across Group functions and OPCOs; embed BRAIN policy requirements into standards, procedures, and product gates.
- Support the activities of the AI Steerco, ISF, AWC and TSGC; integrate with Group Risk, Legal, Privacy, and Procurement for policy, third-party, and contract controls.
- Define enterprise AI control objectives, assurance plans, and attestation mechanisms aligned to BRAIN and Group policies (in particular GISP).
Enterprise AI Security Strategy & Architecture
- Define and implement MTN’s AI security strategy aligned to Group cyber reference architecture and recognized industry principles (e.g., Zero Trust, cloud security frameworks, secure SDLC, IAM, and detection & response).
- Publish secure architecture standards for AI platforms, MLOps stacks, and model-serving patterns; embed security-by-design across the AI lifecycle (data, training, evaluation, deployment, operations) once approved by AWC and TSGC.
Generative AI & LLM Security (BRAIN Policy Enablement)
- Operationalize BRAIN policy via guardrails: prompt security, input/output filtering, data loss prevention for prompts/outputs, access control & usage monitoring, and secure LLM architecture patterns for hosted and API models.
- Define approval workflows for model access, use cases, and sensitive data handling; implement usage analytics and model egress controls to prevent leakage.
Adversarial Machine Learning Defence & AI Red Teaming
- Build MTN’s AI threat modelling methodology in collaboration with GIS; institute adversarial robustness testing (poisoning, evasion, prompt injection, API exploitation) and AI red teaming exercises for models and applications, in collaboration with GIS Cyber-defense.
- Define secure model deployment controls and post‑deployment behavioural drift monitoring for manipulation detection in collaboration with the CCOE.
Secure MLOps & Data Security
- Set secure MLOps standards: CI/CD for models, integrated security testing, hardened model registry & artifact stores, signed/attested models, and pipeline authN/authZ, in collaboration with the S2 COE and main relevant suppliers.
- Protect training/validation datasets; enforce patterns for secure data ingestion, sensitive data minimization, and prevention of training data leakage.
AI Security Monitoring & Incident Response
- Integrate AI platforms, data pipelines, and model-serving endpoints with enterprise SIEM/SOC for continuous monitoring and anomaly detection.
- Extend cyber incident response playbooks to AI scenarios, in collaboration with Cyber-defense: containment of compromised models, forensic acquisition of model artifacts, and post‑incident model integrity verification.
Risk, Compliance, and Policy Alignment
- Map AI security controls to BRAIN and Group policy; align with POPIA, GDPR, ISO/IEC 27001/27701, ISO/IEC 42001 (AI), NIST AI RMF, and applicable sectoral obligations in telco and fintech.
- Establish Model Risk Management (MRM) with 1st/2nd line—including risk taxonomy, criticality tiers, control baselines, testing cadence, and assurance reporting to Group risk committees.
- Oversee third‑party & cloud AI risk, in collaboration with Legal, Procurement and GIS-GRC: vendor due diligence, contract clauses (data, IP, security SLAs), and ongoing assurance.
Platform & Product Enablement
- Partner with Connectivity, Fintech (MoMo), and Infraco product lines—and with the GenAI adoption stream—to embed design‑time controls, privacy-by-design, and production guardrails into AI‑enabled services.
- Define reference architectures and “secure patterns” for common use cases (RAG, copilots, fraud analytics, network optimization), in collaboration with GIS.
People Leadership & Capability Uplift
- Build and lead a high‑performing AI Security team within the AI COE; develop community of practice with GIS, OPCO CISOs and DPOs.
- Launch training & awareness for engineers, data scientists, and product teams on BRAIN policy, secure GenAI usage, and adversarial ML, in collaboration with GIS OPCO Operations.
Performance, KPIs & Reporting
- Define and track KPIs: % AI use cases cleared by BRAIN gates, time-to-approve models, adversarial test coverage, model drift MTTR, policy exceptions, third‑party assurance completion, and reduction in AI security incidents.
- Provide executive dashboards and Board/Exco reporting on AI risk posture and maturity. These KPIs will be reviewed periodically at the Steerco.
Budget, Tooling & Vendor Management
- Own budget for AI security tooling (e.g., secrets scanning for ML, model provenance/attestation, content safety, AI Security Posture Management), and manage vendors/partners via Group procurement.
Qualifications
Minimum Qualifications
- Bachelor’s degree in Computer Science/Engineering/Mathematics
- Honours degree advantageous
- Relevant security certifications (e.g., CISSP, CCSP), plus data/AI credentials (e.g., cloud AI specialties, ML engineering) are advantageous.
Experience
- 5–10+ years across cybersecurity/platform security, with 3–5+ years securing AI/ML platforms, GenAI/LLM ecosystems, or data-intensive analytics at enterprise scale.
- Demonstrable track record establishing security operating models/COEs and driving group-wide policy adoption in complex, multi‑country organizations (preferably telco/fintech).
- Hands‑on exposure to cloud-native AI stacks (in particular Azure), MLOps toolchains, and embedding controls in agile product delivery.
- Experience integrating AI systems into SOC/SIEM, designing AI incident response, and conducting AI red teaming and robustness testing.
Core Competencies and Skills
- Security architecture for AI/ML;
- Threat modelling and adversarial ML;
- Secure MLOps; data security & privacy;
- GenAI/LLM guardrails;
- Risk & compliance;
- Leadership and stakeholder management.