We are looking for a Data Engineering Team Lead to join our in-house infrastructure group and play a pivotal role in the company's ongoing success. Come join us!
What you will do:
First, you will design a data pipeline architecture for obtaining, ingesting, storing, and routing large volumes of data from various sources worldwide, both proprietary and commercial, in a distributed manner.
You will handle vast, real-time data feeds that must be accurately timestamped and recorded without congesting the network.
You will then lead a team of engineers to implement, maintain, and continually enhance the data pipeline; evaluate data providers on accuracy, cost, and richness; track changes and monitor the quality of data sources; and suggest new ways to optimize storage and reduce latency.
- Conceiving innovative and novel ideas in all stages of the ETL pipeline.
- End-to-end ownership – POC, design, development cycles, deployment, and support.
- Creative thinking using a state-of-the-art tech stack.
What you will bring:
- Solid programming foundation (e.g. data structures and algorithms, performance, programming paradigms, revision control, CI/CD, testing).
- Proven experience in designing and building large-scale data pipelines.
- BSc in Computer Science or equivalent software engineering fundamentals (e.g. self-taught or military experience).
- Can-do mentality, intellectual curiosity, self-motivation, and ability to communicate within and across teams.
- 5+ years' experience with modern C++ (C++11/14/17).
- Experience with Linux and scripting (Python and/or Bash).
- Knowledge and understanding of communication protocols at different layers (e.g. TCP, HTTPS).
- A passion for capital markets.
- Experience with Spark, Hadoop, Snowflake, or similar.