Job offer
Senior Data Engineer
As a Senior Data Engineer, you will implement methods and solutions that strengthen data reliability and quality, and develop data models and pipelines for research, reporting, and machine learning. This is a 12-month contract position, with an external agency acting as the payroll provider.
Job description
Tasks
- Implementation of methods and solutions to strengthen data reliability and quality
- Collaboration with data scientists and data analysts to develop data models and pipelines for research, reporting, and machine learning
- Application of software engineering expertise to implement state-of-the-art platforms for execution, sales, and trading
- Ownership and maintenance of the data platform to ensure stability, performance, and continuous improvement
Main tasks and responsibilities
- Agile program planning and execution
- Active participation in Program Increment (PI) planning processes within GPS
- Environment and configuration management of Murex applications/FICC modules and Java applications
- Change management
- Development, implementation, and maintenance of scalable data pipelines and robust backend services to support real-time decision-making, reporting, data ingestion, and related functions
- Collaboration on gathering data requirements and contribution to conceptual and logical data modeling to tailor solutions to business needs
- Implementation of efficient data management strategies, including transformation, integration, and orchestration across diverse sources and systems
- Delivery of high-quality, secure, tested code that complies with software development best practices and regulatory standards
- Development and improvement of software solutions that strengthen data governance frameworks, ensuring compliance, traceability, and accountability
- Creation and maintenance of automated processes to enforce data security protocols and adhere to strict data quality standards throughout the organization
- Regulatory responsibilities and/or risk management
- Demonstration of appropriate values and behaviors, including but not limited to standards of honesty and integrity, due care and diligence, fair treatment of customers, management of conflicts of interest, competence and continuous development, and appropriate risk management
Requirements
- Degree in a technical field, or equivalent education and experience
- 5+ years of experience integrating data processing and workflow management tools into pipeline design
- 5+ years of experience in ETL/ELT, data warehousing, and/or business intelligence development
- 5+ years of experience in building and maintaining end-to-end data systems and supporting services in Python, Scala, or similar
- 5+ years of experience using SQL for data analysis, investigating data issues, and providing solutions
- 2+ years of experience with cloud data technologies
- Experience with structured, semi-structured, and unstructured data
- Experience with various databases, including data warehouses, RDBMS, in-memory caches, and searchable document databases
- Experience with large data sets in SQL (Databricks/PySpark)
- Experience with Spark Streaming and Delta (Live) Tables
- Experience with storage systems such as Azure Storage/Data Lakes
- Experience with data modeling
- Strong design, implementation, and testing skills
- Experience with continuous integration and automated deployments
- Experience with cloud platforms (preferably Azure) in a continuous delivery environment
Preferred qualifications
- Experience with microservices platforms (Kubernetes, Docker, Helm Charts)
- Experience with event-driven streaming systems (Kafka, Event Hub, Event Grid, Apache Flink)
- Knowledge and use of OBT
- Experience with data validation
- Experience with business intelligence (BI) tools