MS SQL Server (advanced)
Apache Airflow (advanced)
Paramount Tech in Warsaw plays a crucial role in Paramount's global engineering organization. Through our projects, we make sure that millions of users worldwide can enjoy Paramount content through web, mobile, and TV applications.
Put the volume up and see more here: https://bit.ly/3mwjlbg
What we do:
Our job as a team is to deliver the best possible data solutions to our customers within Paramount. This includes the latest efforts to migrate and consolidate our data pipelines and warehouses into the cloud (GCP), as well as delivering cutting-edge, highly scalable data pipelines in GCP.
You will:
Design and develop highly scalable, reliable data pipelines and data warehouse solutions, including SQL code, pipelines (Python / Airflow, SSIS), and data structures, in both on-premises (SQL Server) and cloud (BigQuery / Snowflake) databases (see the DAG sketch after this list),
Write complex SQL queries for an efficient transformation layer,
Provide functional data analysis, problem identification, research and troubleshooting in support of clients,
Migrate data solutions from on-premise to cloud environments (GCP, AWS, Snowflake),
Partner with the internal product and business intelligence teams to determine the best approach to data ingestion, structure, and storage, and work with the team to ensure these are implemented correctly,
Operate effectively both as part of a larger team and when managing your own work.
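To give a feel for the day-to-day work, here is a minimal sketch of the kind of pipeline this role involves, assuming Airflow 2.x with the Google provider on Cloud Composer; the DAG, bucket, dataset, and table names are all hypothetical:

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical daily pipeline: land raw events from GCS in BigQuery,
# then build a summary table with SQL.
with DAG(
    dag_id="daily_viewership_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Stage raw JSON events from a GCS bucket into a landing table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_events",
        bucket="example-raw-events",  # hypothetical bucket
        source_objects=["events/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.raw_events",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform the landing table into a reporting table.
    build_summary = BigQueryInsertJobOperator(
        task_id="build_daily_summary",
        configuration={
            "query": {
                "query": (
                    "SELECT user_id, COUNT(*) AS views "
                    "FROM `analytics.raw_events` "
                    "WHERE DATE(event_ts) = '{{ ds }}' "
                    "GROUP BY user_id"
                ),
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "analytics",
                    "tableId": "daily_summary",
                },
                "writeDisposition": "WRITE_APPEND",
            }
        },
    )

    load_raw >> build_summary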
Tech stack:
Python,
GCP (BigQuery / Composer / GKE) - see the query sketch after this list,
Apache Airflow (Google Cloud Composer),
Kubernetes / Docker,
MS SQL Server (SSMS, SSIS),
AWS (S3, Data Pipelines),
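For a flavour of how these pieces fit together, here is a small sketch of querying BigQuery from Python with the official google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

query = """
    SELECT title, COUNT(*) AS plays
    FROM `example-project.analytics.playback_events`
    WHERE event_date = @day
    GROUP BY title
    ORDER BY plays DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")]
)

# Run the query and iterate over the result rows.
for row in client.query(query, job_config=job_config).result():
    print(row.title, row.plays)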
How we work:
You will work with various engineers and professionals from our Warsaw office and from other Paramount locations around the world (we have Backend, Mobile, DevOps, Test Automation, and System Engineers, Product Owners, Scrum Masters, Agile Coaches, and other business professionals on board),
Our products reach millions of users globally,
The majority of our business clients are located in the US,
Our teams own their decisions - we are autonomous in our architectural choices, technologies, and approach to delivering high-quality solutions,
We focus on test automation and code quality and we do that by automating whatever is possible!
We offer:
Hybrid / Remote (home office if applicable),
Multisport card + private health plan,
A well-located, modern office with lots of amenities: adjustable desks, an electronics toolkit, a 3D printer ready for you to use, a pool table, a console, table tennis, and a massage chair,
We participate in various conferences,
Access to e-learning and self-development platforms (LinkedIn Learning, Pluralsight),
In-house activities: tech talks and hackathons,
You can use 10% of your working time to pursue your personal development and side projects,
Active global inclusion groups (various Employee Resource Groups).
We are looking for engineers who:
Will provide support for less experienced engineers - knowledge sharing and mentoring are important to us,
Keep up to date with modern technologies,
Are curious, with a desire to learn and the ambition to quickly become a self-reliant, top-notch engineer,
Want to actively participate in decision making,
Have strong abstract-thinking skills: the ability to generalize narrow solutions into reusable modules or processes, and a drive to abstract common cases and patterns,
Have the ability to think critically about a problem, visualize various scenarios for an outcome, and communicate any risks,
Describe themselves as creative thinkers who can devise solutions to complex problems - challenging the status quo and suggesting improvements are required!
Can and want to independently lead projects,
Enjoy working in a team,
Easily communicate in English.
And on the technical side, we need:
5 years of experience in data engineering environments,
Expert SQL knowledge (preferably T-SQL),
Experience in ETL process design,
GCP or AWS experience,
Snowflake or BigQuery,
Workflow orchestration (Airflow / Google Cloud Composer),
Kubernetes, Docker, Github,
Knowledge of data warehousing techniques such as those described by Kimball & Ross (a minimal star-schema sketch follows this list),
Database programming and an understanding of the principles of database design.
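As a rough illustration of the Kimball & Ross style mentioned above, here is a minimal star-schema sketch expressed as BigQuery DDL run from Python; all names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# Facts hold additive measures keyed to dimensions; descriptive
# attributes live in the dimension tables (Kimball-style star schema).
ddl = """
CREATE TABLE IF NOT EXISTS analytics.dim_content (
    content_key INT64,
    title STRING,
    genre STRING
);
CREATE TABLE IF NOT EXISTS analytics.fact_playback (
    date_key INT64,      -- FK to a date dimension
    content_key INT64,   -- FK to analytics.dim_content
    user_key INT64,      -- FK to a user dimension
    watch_seconds INT64  -- additive measure
);
"""
client.query(ddl).result()  # BigQuery runs this as a multi-statement script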