Python (regular)
Data Visualization (advanced)
Data Analysis (advanced)
Statistics (advanced)
About us:
edrone is an AI-fueled SaaS platform providing Customer Experience solutions for eCommerce. It is used by hundreds of SME online retailers across CEE, Southern Europe, and LatAm.
edrone has won multiple awards, including Computer World’s "Best in Cloud" in 2017 and 2018, TECH5 by TNW, and 2nd place in G2’s 2022 worldwide commerce software competition.
Responsibilities:
Close collaboration with the Product Team
Preparing reports and analyses
Drawing conclusions and building recommendations from your findings
Automating the updating and delivery of reports
Working within the Data Science Team
Skills - required:
Min. 2 years of experience in the field of Data Analytics
Ability to present results clearly and transparently
Focus on delivering business value
Knowledge of A/B tests (a minimal readout sketch follows this list)
Analytical skills
Data modelling
Experience with relational databases, including the ability to write complex SELECT queries (preferably MySQL and Presto)
Practical knowledge of BI Tools (AWS QuickSight is preferred)
Experience in Python
Basic knowledge of Git
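To give a flavor of the A/B testing work above, here is a minimal sketch of a test readout using a standard two-proportion z-test; it uses only the Python standard library, and the conversion counts are made up for illustration:

# Minimal A/B test readout: two-proportion z-test (illustrative numbers).
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    # Pooled standard error under the null hypothesis of equal rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=151, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # variant beats control if p < alpha (e.g. 0.05)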
Skills - nice to have:
AWS (Athena, S3, Lambda, RDS, DynamoDB, QuickSight)
Google Analytics
Basic knowledge of data processing pipelines (a minimal sketch follows this list)
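As an illustration of what a simple pipeline step can look like on this AWS stack, a minimal sketch assuming boto3 and pandas (reading from S3 via pandas requires s3fs); the database, table, and bucket names are hypothetical placeholders:

# Hypothetical pipeline step: run an Athena query, poll, load the result CSV.
import time

import boto3
import pandas as pd

athena = boto3.client("athena")
OUTPUT = "s3://example-athena-results/"  # hypothetical results bucket

execution = athena.start_query_execution(
    QueryString="SELECT store_id, COUNT(*) AS orders FROM events GROUP BY store_id",
    QueryExecutionContext={"Database": "analytics"},  # hypothetical database
    ResultConfiguration={"OutputLocation": OUTPUT},
)
query_id = execution["QueryExecutionId"]

# Athena runs queries asynchronously; poll until a terminal state is reached.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    # Athena writes the result set as a CSV under the output location.
    df = pd.read_csv(f"{OUTPUT}{query_id}.csv")
    print(df.head())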
How we work:
DevOps - you build it, you run it
Small, tightly-knit groups of very skilled people
Code Reviews
Directly Responsible Individual
Pair programming
Paying back technical debt whenever you can
1:1s
Blameless postmortems
What we value:
Seeking mastery. We read books, attend conferences and meetups, and keep a company library. We study alone and in groups, and the company has a budget to support us. We do this because it is our passion.
Curiosity. If we use something, we want to know exactly how it works: what its constraints are, and when it fails.
Direct, honest and timely feedback. This is how we improve.
Autonomy. We support each other and we actively avoid micromanagement.
Being a good human. We don’t tolerate jerks, no matter how brilliant they are.
Interesting technical challenges:
Scalability - we ingest tons of data that must be processed in near-real time. Traffic patterns change constantly, so we have to adapt dynamically. We rely heavily on horizontal partitioning and auto-scaling (a toy sketch follows this section).
Reliability - uptime, latencies, queue-processing delays: we live and breathe these metrics. We assume machines, disks, networks, and software will fail; our approach is resilience engineering and automation.
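To make the horizontal-partitioning idea concrete, a toy sketch in Python (the hash scheme, shard count, and record shape are assumptions for illustration, not our production setup): hash each record's key to pick a shard, so the same customer always routes to the same place while ingestion scales out across workers.

# Toy horizontal partitioning: route each record to a shard by hashing its key.
from hashlib import md5

NUM_SHARDS = 8  # assumed shard count, for illustration only

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    # Stable hash, so the same customer always lands on the same shard.
    digest = md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

for event in ({"customer": "c-42", "type": "purchase"},
              {"customer": "c-7", "type": "view"}):
    print(event["type"], "-> shard", shard_for(event["customer"]))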