Data Engineer

FairMoney is hiring!


FairMoney is building the leading mobile bank for emerging markets.

We started with a digital microloans application on Android, and we are currently rolling out additional financial services (current account, savings, debit card) while expanding the product to West Africa and South-East Asia.
We have disbursed close to 500,000 microloans and grant more than 5,000 loans daily. We are backed by top-notch US/EU investors and have raised over EUR 15m in venture capital to support our growth.

We are a team of 60 across Paris and Lagos, spanning Engineering (Android, Data, Backend), Product, Marketing, Customer Service and Operations.

Our Values

  • Start with the client, never assume πŸ“Š
  • Churn 9-5 🎯
  • Care for Impact πŸ₯°
  • A - Players only πŸ’ͺ
  • Fly High - Fly Low πŸ™Œ
  • Young & Hungry πŸ‹οΈ
  • Progress starts where comfort ends πŸ‘ŠπŸΎ
  • 70/30 ⚑
  • Direct Feedback 🀝
  • Logic and Data wins over β€œI think” πŸ’₯

More info here.

Job Description

You will work as our Data Engineer in our Tech Division, which currently counts 11 people. You will join our Data Team alongside Jadd and Loic, our Data Scientists, and you will be our first Data Engineer!

Your mission will be to manage our data pipelines and optimise their use for our data science, finance and analytics teams. For example, your work will allow us to iterate more quickly on the features used in our credit scoring algorithm, keeping us competitive in the market and supporting our rapid growth.

We expect you to work on:

  • Improving our data infrastructure (data warehousing);
  • Giving risk managers (data scientists/finance) access to an easy-to-use catalog of features, regardless of the volume of raw data;
  • Establishing good data practices to ensure our pipelines are replicable.

πŸ”§ Our technical Stack

  • Our customer product is an Android application;
  • It communicates with a Ruby on Rails backend (Sidekiq, RSpec);
  • Our ML pipeline is a Flask API (Python);
  • We are hosted on AWS: we use Elastic Beanstalk, an RDS Aurora MySQL database, S3, and Kinesis (Data Streams, Firehose);
  • Prototype of a data warehouse on GCP (GCS, BigQuery, Cloud Composer);
  • We have automated tests that run on Travis CI.

Preferred Experience

  • 2-5 years of experience in data engineering and data pipelines (ingestion, ETL, data serving) / data warehousing / online and offline data flows,
  • You have an engineering or university degree (we are also open to self-taught candidates with 5 years of experience in data engineering),
  • You work with cloud providers (AWS / GCP),
  • You are able to code in Python,
  • You pay attention to details,
  • You are able to go for the most efficient solution.

🎁 What do we offer 🎁

  • 50% health insurance coverage provided by Alan (quick & easy);
  • Free snacks for the team provided by Totem;
  • 50% of your Navigo pass;
  • EUR 1,500 per year for your training and personal development;
  • Gymlib (partly paid by the company);
  • 1 month of fully paid leave if you're the second parent, because your children are babies only once!

Recruitment Process

  • 30-minute phone screening with Maud, our Talent Manager,
  • Technical Test,
  • Face-to-Face interview,
  • Meeting with the team.

Additional Information

  • Contract Type: Full-Time
  • Location: Paris, France (75010)
  • Education Level: Master's Degree
  • Experience: > 2 years
  • Possible full remote