
  • Posted: Mar 31, 2021
    Deadline: Not specified

    Silicon Overdrive is a leading information technology company providing business-critical services such as bespoke software development, enterprise-level business strategies for IT, as well as strategies for leading, maintaining and managing IT. Founded in 1995 and based in Cape Town, South Africa, we have service ports in Johannesburg, Pretoria, and Durban...

     

    MLOps Engineer (DevOps)

    About the Job

    • We are hiring a Machine Learning Operations Engineer (MLOps). MLOps is the discipline of integrating ML workloads into release management, CI/CD, and operations; it requires the integration of software development, operations, data engineering, and data science.

    What we do:

    • Our vision is to be the first-choice MLOps team for our clients. Our mission is to help businesses productionise their machine learning and HPC workloads in the cloud using industry best practices.

    Why work here:

    • You’ll be given the opportunity to develop your skills as an MLOps engineer while working on the most innovative solutions that our clients are developing.
    • We are looking to employ the brightest scientific and engineering talent with a passion for solving complex problems. If you join our team, you will form an essential part of the solution, working across multiple industry sectors.

    What an MLOps Engineer does here:

    • As an MLOps Engineer (DevOps Engineer) at Silicon Overdrive, you will be responsible for a range of duties. Core responsibilities include, but are not limited to, ML proposals, client liaison, configuring client environments, ETL workflows, pipelines, data lakes, HPC workloads, serverless workflows and ML development environments.

    Typical day-to-day tasks:

    • Assist with ML proposals, including defining solutions architectures for ML/HPC/Data Lake workloads based on best practices
    • Estimate cloud usage costs and identify operational cost control mechanisms per project
    • Migrate on-premises workloads to edge and cloud for clients
    • Configure client environments, including pipelines (e.g. CI/CD, data, inference) using IaC (Infrastructure as Code) tools (e.g. Terraform) and programming languages (e.g. Python/Go/Bash).
    • Configure data lakes for storing data (e.g. raw and transformed data, training/validation/test sets, and inference results)
    • Develop Glue ETL workflows using Python scripts for data transformations
    • Develop Lambda/Step Functions as part of serverless workflows for triggering pipelines and transforming datasets using NodeJS/Python (a minimal Python sketch follows this list).
    • Configure ML development environments (e.g. SageMaker Studio) for processing, building, training and testing models.
    • Configure ML inference endpoints for production workloads.
    • Configure HPC environments within the cloud using ParallelCluster/NICE DCV
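
    To make the serverless-workflow bullet above concrete, here is a minimal, hypothetical sketch of a Python Lambda handler that starts a Step Functions data pipeline whenever a new object lands in S3. The environment variable name, event wiring and payload shape are illustrative assumptions, not part of the role description.

        import json
        import os

        import boto3

        # Hypothetical state machine ARN, injected via the Lambda environment.
        STATE_MACHINE_ARN = os.environ["PIPELINE_STATE_MACHINE_ARN"]

        sfn = boto3.client("stepfunctions")


        def handler(event, context):
            """Triggered by S3 ObjectCreated events; starts the data pipeline
            (a Step Functions state machine) once per newly landed object."""
            executions = []
            for record in event.get("Records", []):
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                response = sfn.start_execution(
                    stateMachineArn=STATE_MACHINE_ARN,
                    input=json.dumps({"bucket": bucket, "key": key}),
                )
                executions.append(response["executionArn"])
            return {"statusCode": 200, "body": json.dumps({"executions": executions})}

    In practice the same trigger could just as easily start a Glue ETL job or a SageMaker pipeline; Step Functions is used here only because the bullet above names it.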

    Knowledge/Experience:

    • Knowledge of Linux operating systems (Ubuntu/AWS Linux CIS), including SSH connections and tunnelling, and Bash scripting
    • Proficient in a scripting language (such as Python, NodeJS, Golang, Bash etc)
    • Basic understanding of CI/CD pipelines
    • Basic understanding of IaC (e.g. CloudFormation, Terraform)
    • Proficient in version control systems (e.g. GitHub, BitBucket)
    • Proficient in Jupyter Notebooks and JupyterLab
    • Basic understanding of ML algorithms and when to use each.
    • Proficient in Python packages such as Pandas/NumPy (a short data-wrangling sketch follows this list)
    • Basic understanding of Docker and deploying containers.
    • Basic understanding of RDBMS, and NoSQL databases
    • Understanding of the basics of making requests to RESTful APIs and interpreting their response codes.
    • Ability to translate architectural requirements based on architectural diagrams from Solution Architects.
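
    As an illustration of the Pandas proficiency and dataset-splitting work referred to above, the following is a small, hypothetical wrangling sketch; the file path, label column and split ratios are placeholders, not details from the role.

        import pandas as pd

        # Hypothetical transformed dataset pulled from a data lake
        # (reading s3:// paths with pandas requires the s3fs package).
        df = pd.read_csv("s3://example-data-lake/transformed/churn.csv")

        # Basic cleaning: drop duplicates and rows with a missing label.
        df = df.drop_duplicates().dropna(subset=["label"])

        # Reproducible 70/15/15 train/validation/test split.
        shuffled = df.sample(frac=1.0, random_state=42).reset_index(drop=True)
        n = len(shuffled)
        train = shuffled.iloc[: int(0.70 * n)]
        validation = shuffled.iloc[int(0.70 * n) : int(0.85 * n)]
        test = shuffled.iloc[int(0.85 * n) :]

        print(len(train), len(validation), len(test))

    Each split would then typically be written back to its own prefix in the data lake (raw, transformed, training/validation/test sets, inference results), as described in the day-to-day tasks above.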

    What we are looking for:

    • A Bachelor’s degree in Engineering, Computer Science, Information Technology or closely related field, or equivalent experience.
    • 2+ years DevOps/MLOps experience.
    • 1+ years of cloud (AWS) experience.
    • Basic understanding of ML algorithms and workflows.
    • Proficient in Python, and data wrangling packages such as Pandas and NumPy.
    • Strong knowledge of Linux (AWS Linux/Ubuntu/RHEL) operating systems.
    • Someone who is proficient in designing efficient and robust ETL workflows.
    • Someone that can work and integrate with multiple teams.
    • Someone who enjoys solving complex problems.
    • Experience with Agile software development.
    • Excellent written and verbal communication skills.
    • Proven ability to set goals, develop and execute strategies, and track and measure results.
    • Proven skills to work effectively across internal functional areas in ambiguous situations.

    Nice to have:

    • Experience working in the AI industry.
    • Understanding of virtualization technology, including containers (Docker) and orchestration tools (Kubernetes).

    Method of Application

    Interested and qualified? Go to Silicon Overdrive on www.linkedin.com to apply
