What you will do:
- Build data-driven systems for risk control, fraud detection, recommendations, customer segmentation, adaptive pricing, and more.
- Build, validate, test, and deploy models and algorithms.
- Work with backend engineers to architect data storage and processing pipelines.
- Work with product managers to develop new product features based on insights from data.
What you should have:
- Bachelor's degree in Computer Science, Mathematics, Statistics, or a related field such as data mining preferred.
- Experience in big data processing with Python, R, or Scala.
- Experience building data pipelines with distributed processing frameworks (e.g., Spark, Hadoop, Kafka) and MPP databases (e.g., BigQuery).
- Operational experience taking models to production: containerization (e.g., Docker), orchestration (e.g., Kubernetes), and cloud platforms (e.g., GCP).
- Knowledge of supervised/unsupervised learning, classification/clustering algorithms, and feature engineering/optimization is a plus.
- Outstanding analytical and problem-solving skills.
- Self-motivated, innovative, and proactive; willing to learn new skills and explore unfamiliar domains.