LexisNexis Legal & Professional is a leading global provider of legal, regulatory and business information and analytics that help customers increase productivity, improve decision-making and outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® ...
As a Senior Data Engineer at LexisNexis Intellectual Property (LNIP), you will play a key role in delivering high-quality, scalable data solutions that power our Strategic Data Platform — the backbone for products like PatentSight+ and other analytics offerings.
In this role, you will design data pipelines, mentor team members, promote engineering best practices, and help deliver customer datasets using Databricks, APIs, and event-driven systems. You will manage complex tasks, influence architecture decisions, and work closely with stakeholders. This position suits those who thrive on large-scale data challenges, team development, and platform growth.
Responsibilities:
Designing, implementing, and optimising scalable data pipelines using Python, PySpark, and Databricks (see the sketch after this list for a minimal illustration).
Driving the delivery of customer-facing data products through APIs, Databricks-based sharing, and event-driven mechanisms (e.g., Kafka or similar).
Taking ownership of end-to-end features — from scoping and development to deployment and monitoring.
Leading and participating in technical design discussions, contributing to architectural improvements and long-term data strategy.
Actively mentoring and coaching junior engineers and supporting their technical and professional development.
Collaborating with cross-functional teams to understand business requirements and translate them into technical solutions.
Contributing to the team’s agile delivery process and providing input during planning and retrospectives.
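For illustration only, a minimal sketch of the kind of PySpark pipeline work described above. The table names (raw.raw_patents, curated.patent_metrics) and transformations are hypothetical placeholders, not part of the actual platform:

# Minimal PySpark pipeline sketch; names and logic are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("patent-metrics").getOrCreate()

# Read raw records from a hypothetical landing-zone table.
raw = spark.read.table("raw.raw_patents")

# Basic cleansing and enrichment: drop incomplete rows, derive a
# filing year, and aggregate filings per applicant and year.
metrics = (
    raw.dropna(subset=["applicant", "filing_date"])
       .withColumn("filing_year", F.year("filing_date"))
       .groupBy("applicant", "filing_year")
       .agg(F.count("*").alias("filings"))
)

# Publish the curated output as a Delta table for downstream consumers.
(metrics.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("curated.patent_metrics"))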
Requirements:
Have extensive experience in data engineering or backend software engineering with a data focus.
Be proficient in Python, PySpark, and working within Databricks environments.
Have hands-on experience designing and delivering data products via REST APIs, event-driven systems, or data sharing platforms like Databricks Delta Sharing (see the sketch after this list).
Have a solid understanding of distributed data processing, ETL/ELT workflows, and data lake/lakehouse architectures.
Have experience with cloud platforms (e.g., AWS, Azure, GCP) and modern data tooling.
Have a proven track record of mentoring or coaching other engineers.
Have effective communication and collaboration skills, including the ability to explain complex concepts to non-technical stakeholders.
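As a small illustration of the Delta Sharing delivery model mentioned above, the consumer-side sketch below reads a shared table with the delta-sharing Python client. The profile file and share/schema/table names are hypothetical:

# Illustrative only: reading a shared dataset via Delta Sharing.
import delta_sharing

# Format: <profile-file>#<share>.<schema>.<table>; all names hypothetical.
table_url = "config.share#patent_share.curated.patent_metrics"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())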