Spark (nice to have)
Terraform (nice to have)
IaC (nice to have)
Microsoft Azure (nice to have)
Amazon Web Services (nice to have)
Data Engineering (regular)
We are a team of nerds dedicated to collecting other nerds who happen to be the best in their respective fields across the broadly understood IT services and analytics space.
Together with them, we go out of our way to create IT and Analytics Hubs that bring together the best experts from the IT world to help our partners become truly data-driven organizations.
We are currently looking for a Regular / Senior Python Developer, as we are supporting our partner in developing a Global Analytics unit: a global, centralized team with the ambition to strengthen data-driven decision-making and to develop smart data products for day-to-day operations.
The team is meant to be a nucleus radiating a data-driven, entrepreneurial culture by acting as an incubator that realizes ideas by simply doing them: creating smart data products in all areas of the business, be it sales, marketing, purchasing, logistics or any other part of the company.
The team has a lot of freedom to shape this, especially in the use of tools and technology, but also by introducing new concepts, solutions and ways of working.
The first project you would participate in is focused on the implementation of IoT solutions supporting almost all operations, processes and functions run at production sites.
If you want to:
take part in the development and implementation of a complex system of smart data solutions used as a core part of IoT venture
have the opportunity to work on bleeding-edge projects
have a chance to see how your visions come true
be a member of an international and diverse team of ground-breaking data scientists, data engineers, BI developers, UX designers and analytics translators
carry out projects which address real business challenges
have a real impact on the projects you work on and the environment you work in
have a chance to propose innovative solutions and initiatives
have the opportunity and tools to grow, develop and drive your career forward,
it’s probably a good match.
Moreover, if you like:
flexible working hours
casual working environment and no corporate bureaucracy
having access to benefits such as Multisport and Luxmed
working in a modern office in the centre of Warsaw with good transport links, or working remotely as much as you want
a relaxed atmosphere at work where your passions and commitment are appreciated
vast opportunities for self-development (e.g. online courses and library, experience exchange with colleagues around the world, partial grant of certification),
it’s certainly a good match!
If you join us, your responsibilities will include:
structuring end-to-end processes of data extraction, transformation and storage, using serverless AWS services to deploy models and analytical solutions
writing and maintaining ETL processes in Python
implementing, maintaining and further developing the functionality of Python packages for ETL processes, data lineage and operator inputs, including building the underlying logic
designing and implementing the company’s data standard as database models
designing and implementing data flows
writing unit and integration tests for Python modules
participating in mission-critical data pipeline processes
technical support in understanding business problems and designing smart data products
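To give a flavour of the day-to-day work, the sketch below shows a minimal extract-transform-load step in plain Python. It is purely illustrative and not part of the actual codebase: the sensor data, table name and temperature conversion are hypothetical assumptions inspired by the IoT context of the first project.

```python
import csv
import io
import sqlite3

# Hypothetical sample data: sensor readings from a production site.
RAW_CSV = """sensor_id,temp_c
s1,21.5
s2,19.0
s1,22.5
"""

def transform(rows):
    """Pure, unit-testable transform: convert Celsius to Fahrenheit."""
    return [(r["sensor_id"], float(r["temp_c"]) * 9 / 5 + 32) for r in rows]

def run_etl(conn, raw_csv):
    """Extract rows from CSV text, transform them, load them into SQLite."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))            # extract
    cleaned = transform(rows)                                    # transform
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, temp_f REAL)"
    )
    conn.executemany("INSERT INTO readings VALUES (?, ?)", cleaned)  # load
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = run_etl(conn, RAW_CSV)
print(loaded)  # 3 rows loaded
```

Keeping the transform a pure function, as above, is what makes the unit and integration testing mentioned in the responsibilities straightforward.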
We expect:
significant commercial experience in a similar position
fluency in extracting information from databases
excellent SQL skills
strong software engineering skills in Python (including unit testing, integration testing, OOP)
ability to write clean, efficient, documented and scalable code
ability to take a responsibility for the development of the whole user story
experience working in organizations with an agile culture
fluent English as you will communicate in English almost all the time
Nice to have:
good working knowledge of Amazon Web Services and/or Microsoft Azure
experience building and releasing Infrastructure as Code, with working knowledge of tools such as Terraform
experience working with large datasets using Spark and RDBMS
If you are interested, please let us get to know you by sending your CV using the "Apply" button.
Please add the following clause to your CV:
I hereby agree to the processing of my personal data included in my job offer by hubQuest spółka z ograniczoną odpowiedzialnością located in Warsaw for the purpose of the current recruitment process.
If you want to be considered in future recruitment processes, please also add the following statement:
I also agree to the processing of my personal data for the purpose of future recruitment processes.