Data Engineer at Front
San Francisco, CA, US
At Front, we’re redefining work communications and collaboration with our shared inbox for teams. Front brings all of your communication channels (email, Facebook, Twitter, Twilio SMS, live chat, etc.) into one place, so you can triage and assign messages, have internal conversations around them, and even sync work across the other apps you use, from Salesforce to GitHub, without ever leaving your inbox. Today, over 3,000 companies rely on Front to power their communications, and we’re just getting started!
As the first Data Engineer at Front, your core responsibility will be to maintain and expand our analytics infrastructure, scaling it as we grow at a very rapid pace. This is a high-impact role in which you will drive initiatives affecting teams and decisions across the company. You’ll be a great fit if you thrive when given ownership: you will be the key decision-maker for architecture and implementation.

Responsibilities:

    • Architect systems and end-to-end solutions that provide fast, efficient, and reliable interfaces for internal clients.
    • Model and create data sets that meet our business requirements. Transform existing manual processes with automation and create self-service data consumption.
    • Own the quality of our analytics data. Implement a robust monitoring and logging framework that guarantees traceability when incidents inevitably occur.
    • Evaluate whether the best solution for each problem at hand is to build, buy, or contract the work.
    • Interface with data scientists, analysts, product managers, and all other customers of the analytics infrastructure to understand their needs and expand the infrastructure as we grow.

Requirements:

    • BS/BA in a technical field such as Computer Science or Mathematics.
    • Ability to manage data warehouse plans and communicate them to internal clients.
    • At least 4 years of experience as a Data Engineer, or as a Backend Engineer looking to move into Data Engineering.
    • Strong overall programming skills and the ability to write modular, maintainable code.
    • Strong SQL proficiency.
    • Experience with R or Python required.
    • Experience with either MapReduce or MPP technology, ideally both.
    • Direct experience with one of HDFS, S3, Redshift, Spark, EMR or Presto is a plus.
    • You are proactive and have a positive, “can-do”, service-oriented mentality.
    • Ability to juggle multiple projects and tasks with multiple stakeholders.