Zupee is India’s fastest-growing technology-backed behavioral science company. We are innovating skill-based gaming with a mission to become the most trusted and responsible entertainment company in the world. We focus constantly on innovating indigenous games to entertain the masses.
Our strategy is to invest in our people and user experience to drive profitable growth and become the market leader in our space. We have experienced phenomenal growth since inception and have been profitable at the EBT level since Q3 2020. We closed our Series B funding at $102 million, at a valuation of $600 million.
The company also announced a partnership with Reliance Jio Platforms, through which Zupee will distribute its games to all customers using Jio phones. The partnership gives Zupee the biggest reach of any gaming company in India, transforming it from a fast-growing startup into a firm contender for the biggest gaming studio in India.
About the Job
Here's what you will do:
● Lead and drive the deployment, lifecycle management, and monitoring of Machine Learning and Deep Learning models through all stages leading to production.
● Provision and configure secure, high-performance computing environments (clusters, storage, API gateways, etc.) to support hosting of ML/DL models at scale.
● Collaborate with Data Scientists, Data Engineers, and application engineers to create and implement policies and governance for the ML model lifecycle.
● Run tests, interpret test results, and handle performance tuning and scaling.
What we are looking for:
● 3+ years of experience with ML infrastructure and ML DevOps
● 3+ years of overall engineering experience in distributed systems and data infrastructure
● 3+ years of experience coding in Python (preferred) or other languages such as Java, Node.js, etc.
● Experience working with ML engineers to build tooling and automation to support the entire Machine Learning engineering lifecycle, from experimentation to production operations.
● Experience with Kafka, Docker, Kubernetes and ML CI/CD workflows.
● 3+ years of experience with AWS or other public cloud platforms (GCP, Azure, etc.)
● Excellent verbal and written communication skills.