Website: iKala
【Responsibilities】
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Support cross-functional, cross-BU data integration tasks.
【Requirements】
- Relevant computer science knowledge or a degree in Computer Science, IT, or a similar field; a Master's degree is a plus.
- At least 2 years of experience with Python 3.8+, spanning 1-2 completed project life cycles.
- Experience building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP 'big data' technologies, e.g. Airflow, Dagster.
- Experience with backend API development and deployment, e.g. FastAPI, Django, Flask.
- Experience with Kubernetes and Docker, e.g. building Docker images.
- Experience with GNU/Linux systems, including deploying and debugging in that environment.
- Experience with Google Cloud Platform services, e.g. GCS, BigQuery, Cloud SQL, Dataproc.
【Pluses】
- Basic knowledge of machine learning algorithms.
- Experience with complete project life cycles.
- Experience with first-line or second-line system operations and maintenance.
To apply for this job, email your details to amy.chen@ikala.tv.