- Create and maintain optimal data pipeline architecture; assemble large, complex datasets to meet functional and non-functional business requirements.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to deliver greater functionality in our data systems.
- Support cross-functional, cross-BU data integration tasks.
- 3+ years of data and software engineering experience, including at least two completed project life cycles.
- 3+ years of experience with data modeling, data warehousing, and building ETL pipelines.
- Programming experience manipulating and analyzing data (e.g. Python, Shell, SQL).
- Experience with at least one SQL or NoSQL database (e.g. MySQL, PostgreSQL, MongoDB).
- Experience with workflow management tools (e.g. Airflow, Dagster, NiFi).
- Experience with backend API development (e.g. Django, Flask, FastAPI).
- Experience with containerized application development (e.g. Docker, Kubernetes).
- Experience with Google Cloud Platform services (e.g. BigQuery, Cloud SQL, Pub/Sub).
- Strong knowledge of message queues (e.g. Kafka, RabbitMQ).
- Basic knowledge of the Linux operating system.
- Basic knowledge of machine learning algorithms.
- Experience deploying and scaling machine learning models.
- Experience with Golang development and GraphQL implementation.
- Experience with first-line or second-line system operation and maintenance.
To apply for this job, email your details to firstname.lastname@example.org