Aspire Global is a leader in the iGaming space, offering solutions spanning game aggregation, sportsbook and managed services, all delivered through an industry-leading core platform.
The Data & Analytics team owns the group's core data stack, spanning Data Engineering, Business Intelligence and Analytics & Insights, and leads the group's data-driven modernisation both internally and for its partners.
The Data Engineer will play a vital role in a cross-functional squad, developing the data pipelines and data lake that ingest, structure and expose data from a central location for integration, reporting, analytics and robotic process automation.
The chosen candidate should be passionate about building scalable data models and architecture for use across the organisation, making it easy for BI, Analytics, Product and other data consumers to build data-driven solutions, features and insights.
Responsibilities:
Create data pipelines to ingest data from disparate sources, with attention to performance, reliability and monitoring (see the sketch after this list).
Serve data models as a product to the entire organisation, owning their implementation and debugging.
Carry out research and development, working on PoCs to trial and adopt new technologies.
Collaborate with other teams to address data sourcing and provisioning requirements.
Coordinate with the Product & Technology teams to ensure all platforms collect and provide appropriate data.
Liaise with the other Data & Analytics teams to ensure reporting and analytics needs can be addressed by the central data lake.
Support the Data Quality and Security initiatives by building the necessary data access, integrity and accuracy controls into the architecture.
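As a rough illustration of the pipeline work described above, here is a minimal ingestion sketch using Apache Airflow (named in the requirements below); the DAG id, schedule and landing logic are hypothetical placeholders, not an actual Aspire Global pipeline:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest_source(**context):
        # Hypothetical step: pull raw records from a source system and
        # land them in the data lake, partitioned by the run date.
        run_date = context["ds"]
        print(f"Landing raw data for {run_date} into the lake")

    with DAG(
        dag_id="ingest_example_source",  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
        default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        PythonOperator(task_id="ingest", python_callable=ingest_source)

The retry settings reflect the reliability and monitoring concerns noted above; a real pipeline would replace the print with writes to lake storage and alerting hooks.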
Requirements:
3+ years of experience in Data Engineering
Degree in Computer Science, Software Development or Engineering
Proficient in Python. Past exposure to Java will be considered an asset
Understanding of RDBMS, columnar and NoSQL engines and their performance characteristics
Experience with cloud architecture and tools: Microsoft Azure, AWS or GCP
Strong background in stream-processing technologies such as NiFi, Kinesis and Kafka (see the sketch after this list)
Experience with orchestration and transformation tools such as Apache Airflow and dbt
Prior exposure to the Snowflake ecosystem will be considered an asset
Familiarity with DevOps methodologies and concepts
Understanding of distributed logging platforms - ideally the ELK stack
Fluency in spoken and written English is essential
Passionate about data and on the lookout for opportunities to optimise
Passionate about technology and eager to recommend new platforms
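For illustration only, here is a minimal streaming-consumption sketch using the kafka-python client; the broker address, topic name and consumer group are hypothetical placeholders, not values from this posting:

    import json

    from kafka import KafkaConsumer  # kafka-python client

    # Hypothetical broker, topic and group; real values would come from config.
    consumer = KafkaConsumer(
        "player-events",
        bootstrap_servers=["localhost:9092"],
        group_id="data-lake-ingest",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        # Each record would be validated and landed in the central lake here.
        print(message.topic, message.partition, message.offset, message.value)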