
  • Posted: Dec 1, 2022
  • Deadline: Not specified

    Impact is transforming the way enterprises manage and optimize all types of partnerships. Our Partnership Cloud™ is an integrated, end-to-end solution for managing an enterprise's partnerships across the entire partner lifecycle to activate rapid growth through the emerging Partnership Economy. Impact was founded in 2008 by a team of Internet marketing and ...

    Big Data Engineer II

    Why this role is exciting:

    As a Big Data Engineer II, your focus will be on delivering stories for the squad, monitoring production environments and managing deployments to production. This role assumes that you are able to utilize the latest features of a language and can effectively select and implement the right design pattern to solve problems independently.

    You will have experience implementing integration tests, be comfortable working with CI, and confidently reuse existing frameworks.

    At this level, you are expected to understand the business requirements of all stories in the sprint, implement stories on existing cloud infrastructure and services, and independently implement the agreed design to spec. You should feel comfortable escalating issues appropriately.

    You are also expected to help team members with implementation and troubleshooting.

    Responsibilities:

    • Collaborate with the team to fulfill the department's quarterly objectives
    • Design and implement features, and write tests, on the Impact Data Platform, leveraging our Big Data tech stack
    • Perform releases, maintain the continuous integration pipeline, and merge code
    • Apply intermediate knowledge of Hadoop, Spark, SQL, NoSQL, and streaming technologies
    • Be on call for monitoring and alerting, and communicate to the team / company as needed
    • Analyze job failures, log tickets in JIRA, and deliver analysis, code fixes and, where needed, data fixes
    • Communicate cross-squad via Slack, email, JIRA, and Zoom
    • Create and maintain proper documentation
    • Approve and merge pull requests
    • Tune the performance of systems, pipeline flows, applications, and datastores
    • Assist the systems group with database and other infrastructure upgrades (sometimes off-hours/weekends)
    • Gain and maintain enough understanding of the business to deliver effective solutions
    • Perform data quality analysis and introduce monitors with proper alerting for the team
    • Be part of the team conducting interviews of new candidates
    • Regularly share technical approaches with the team
    • Mentor Associate Engineers as well as knowledge share within the Team and broader Engineering Department
    • Identify potential new technologies in our stack

    Does this sound like you?

    • Initiative
    • Adaptability
    • Personal Development
    • Completed B.S. in Computer Science or a related field, or equivalent professional experience
    • Any open source contributions are strongly desired
    • Desire to work with Big Data and surrounding technologies
    • 3+ years' experience with a range of ETL / streaming pipeline technologies
    • 4+ years' software development experience
    • Experience with Agile / iterative processes (Kanban / Scrum)
    • Experience working in a Start-up or Internet business is valuable
    • Customer Focus
    • Service-Oriented
    • Experience working with large data volumes (terabytes to petabytes) required
    • Experience working with Big Data technologies: Spark, Kafka, Google Pub/Sub, HBase
    • Exposure and experience in any Google Cloud technology highly desired
    • Knowledge of Digital Marketing or Web Analytics is a big plus
    • Experience with Continuous Integration / Delivery methods, tooling, and integrations
    • Ability to implement the core principles of Ralph Kimball dimensional modeling (star schemas, facts, dimensions, etc.)
    • Ability to tune many types of systems and applications in a data pipeline
    • Experience with relational databases, table design, and SQL
    • Exposure to and knowledge of scheduling frameworks; Azkaban a plus
    • Experience writing enterprise-level application code in a JVM Language (Scala preferred)
    • Experience writing enterprise-level application code in Python highly advantageous

    Benefits/Perks:

    • Casual work environment, including working from home
    • Flexible work hours

    Unlimited PTO policy

    • Take the time off that you need. We are truly committed to a positive work-life balance, recognising that it is important to be happy and fulfilled in both your work and your personal life
    • 6-month paternity/maternity leave

    Training & Development

    • Learning the advanced partnership automation products

    Medical Aid and Provident Fund 

    • Group schemes with Discovery & Bonitas for medical aid
    • Group scheme with Momentum for provident fund

    Restricted Stock Units

    • 3-year vesting schedule pending Board approval

    Additional Perks

    • Internet Allowance
    • Fitness club fee reimbursements

    Method of Application

    Interested and qualified? Go to Impact on boards.greenhouse.io to apply
