
  • Posted: Mar 2, 2021
  • Deadline: Not specified

    Amazon strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon continues to grow and evolve as a world-class e-commerce platform.



    Senior Data Engineer - EC2 Capacity Data Analytics

    This position involves on-call responsibilities, typically one week every two months. Our team is dedicated to supporting new team members. We care about your career growth and try to assign projects and tasks based on what will help each team member develop into a more well-rounded engineer and take on more complex tasks in the future.

    Our team values work-life balance and we are flexible when people occasionally need to work from home.

    Job Duties

    • Lead the team to develop and maintain automated ETL pipelines for big data using technologies such as Scala, Spark, and SQL, and AWS services such as S3, Glue, Lambda, SNS, SQS, and KMS. Example: ETL jobs that process a continuous flow of JSON source files and output the data in a business-friendly Parquet format that can be efficiently queried via Redshift Spectrum using SQL to answer business questions.
    • Design and implement data architecture to support analytics use cases that yield significant data quality, availability and/or business value.
    • Develop and maintain automated ETL monitoring and alarming solutions using Java/Python/Scala, Spark, SQL, and AWS services such as CloudWatch and Lambda.
    • Implement and support reporting and analytics infrastructure for internal business customers using AWS services such as Athena, Redshift Spectrum, EMR, and QuickSight.
    • Develop and maintain data security and permissions solutions for enterprise scale data warehouse and data lake implementations including data encryption and database user access controls and logging.
    • Develop and maintain data warehouse and data lake metadata, data catalog, and user documentation for internal business customers.
    • Develop, test, and deploy code using internal software development toolsets. This includes the code for deploying infrastructure and solutions for secure data storage, ETL pipelines, data catalog, and data query.
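    The first duty above describes a concrete job shape: a continuous flow of JSON source files transformed into columnar Parquet for efficient SQL querying. As a rough illustration of the row-to-columnar reshaping such a job performs, here is a standard-library sketch (no Spark or Parquet libraries involved, and the field names are invented for illustration, not taken from the role):

    ```python
    import json

    def rows_to_columns(json_lines):
        """Reshape newline-delimited JSON records (row-oriented) into a
        column-oriented dict of lists -- the layout a columnar format
        like Parquet stores on disk."""
        records = [json.loads(line) for line in json_lines]
        # First pass: collect the union of all field names.
        columns = {}
        for rec in records:
            for key in rec:
                columns.setdefault(key, [])
        # Second pass: fill each column; fields missing from a record become None (nulls).
        for rec in records:
            for key, values in columns.items():
                values.append(rec.get(key))
        return columns

    # Hypothetical EC2-capacity-style records for demonstration only.
    source = [
        '{"instance_type": "m5.large", "region": "us-east-1", "count": 12}',
        '{"instance_type": "c5.xlarge", "count": 7}',
    ]
    print(rows_to_columns(source))
    # {'instance_type': ['m5.large', 'c5.xlarge'], 'region': ['us-east-1', None], 'count': [12, 7]}
    ```

    A real pipeline of the kind described would do this reshaping with Spark's DataFrame API and write partitioned Parquet to S3; the sketch only shows why the columnar layout lets a query engine read one field across all records without parsing whole JSON documents.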

    Basic Qualifications

    • Bachelor’s degree in Computer Science or related technical field, or equivalent work experience.
    • 7+ years of overall work experience in Software Engineering, Data Engineering, Database Engineering, Business Intelligence, Big Data Engineering, or Data Science.
    • Experience with the AWS technology stack, including Lambda, Glue, Redshift, RDS, S3, and EMR, or a similar big data stack.

    Preferred Qualifications

    • Demonstrated efficiency in handling data - tracking data lineage, ensuring data quality, and improving discoverability of data.
    • Demonstrable proficiency in distributed systems and data architecture; design and implementation of batch and stream data processing pipelines; ability to optimize the distribution, partitioning, and MPP execution of high-level data structures.

    Method of Application

    Interested and qualified? Go to Amazon at www.amazon.jobs to apply.


