WE ARE
Successfully cooperating with our client to help people move forward with credit, providing products that responsibly meet their needs.
Whether that means offering new ways of accessing credit with a leading retailer or providing tools that make it easier for customers to manage their accounts, our client continuously strives to help customers make the most of their credit responsibly.
We offer an approach to financial services that is in touch with people and their lives. It is an approach grounded in customer knowledge and differentiated by our passion to deliver the products, services, tools and expertise that best meet our customers’ needs.
Currently we are looking for a Middle Big Data Engineer to make our team even stronger.
YOU ARE
Proficient in Big Data integration technologies such as: Spark, Scala, AWS Data Pipeline for orchestration, Dremio, Glue, AWS Athena, AWS S3, EMR
Skilled in API and library design
Proficient with traditional SQL database technologies (Oracle, SQL Server, DB2)
Experienced in integrating data from multiple data sources
Able to write high-performance, reliable and maintainable code
Knowledgeable about NoSQL database structures, theories, principles, and practices
Confident in CI/CD best practices, multi-threading, and concurrency concepts
Accustomed to cloud deployment
Familiar with the fundamentals of Linux shell scripting
Nice-to-have skills:
Python
Building applications for cloud environments
YOU WANT TO WORK WITH
Modeling, manipulation, transformation, and loading of the Data Lake technology layers
Designing and coding of data transformation and loading processes through the various layers of the Data Lake using Big Data open-source technologies such as Kafka, Spark, Scala, and associated AWS ETL tooling
Owning all of the data content, manipulation, business rules, and associated processes within the Data Lake
Championing quality and simplicity in the system code and leading the enforcement of quality standards within the data landscape
Us, according to the Agile approach
Monitoring and profiling performance, and suggesting/implementing solutions where appropriate
Recommending and testing new data structures, and physical data layouts, to improve throughput and performance
Championing the new Data Lake technology across the organization to address a broad set of use cases across data science and data warehousing
Researching, testing, and recommending new layers or products in the Data Lake stack as these fast-moving technologies develop, keeping Client at the forefront and able to attract the best talent
Educating and training other team members in the organization whenever needed
TOGETHER WE WILL
Model, manipulate, transform, and load Data Lake technology layers. Build knowledge of all data resources within Client and prototype new data sources both internal and external to Client
Build knowledge of all existing data manipulation within both the data warehouse systems and the business marts
Design and code data transformation and load processes through the various layers of the Data Lake
Get plenty of learning and development opportunities, along with our structured career path
Work on dynamic projects while still having a stable place of work
Take part in internal and external events where you can build and promote your personal brand
Work with experienced specialists willing to share their knowledge
Care about your individual initiatives: we are open to them, so just come and share your ideas