Site Reliability Engineer

Position Title: Site Reliability Engineer (SRE)
Location: India
Role Responsibilities:
• Implement and maintain the Big Data tools and frameworks required to provide requested capabilities
• Work with stakeholders, including the Product Owner and the Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs
• Support customers in implementing ETL processes
• Monitor performance and advise on any necessary infrastructure changes
• Take responsibility for the implementation and ongoing administration of Hadoop infrastructure
• Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop and to expand existing environments
• Maintain clusters, including the creation and removal of nodes, using tools such as Grafana and Cloudera Manager Enterprise
• Tune the performance of Hadoop clusters and of Hadoop MapReduce and Spark routines
• Monitor Hadoop cluster connectivity and security
• Manage and review Hadoop log files
• Manage and monitor file systems
• Act as the subject matter expert for the Alluxio product

Education and Experience:
• Bachelor's degree in Computer Engineering/Science, Information Technology, or equivalent
• Minimum 5 years of experience as an SRE performing L2/L3 production support, or in IT operations at a financial institution
• Well versed in SRE practices
• Strong programming and scripting skills - Java, shell scripts, Python, etc.
• Proficient understanding of distributed computing principles
• Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
• Experience with integration of data from multiple data sources
• Experience with web-services (REST, SOAP) and/or experience in Microservices
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Experience with Cloudera/MapR/Hortonworks
• Knowledge of Hadoop, Spark, Hive, S3
• Knowledge of cloud-native applications on AWS, Azure, or GCP
• Knowledge of stream-processing systems, using solutions such as Flink or Spark-Streaming
• Knowledge of Big Data querying tools, such as Presto, Hive, and Druid
• Knowledge of Kubernetes
• Knowledge of various ETL techniques and frameworks
• Knowledge of databases (MySQL, PostgreSQL, Hive), DDL, and DML
• Knowledge of file systems and file types (good to have)
• Good understanding of Lambda Architecture, along with its advantages and drawbacks
• Understanding of Storage - SQL, MariaDB, Apache HBase
• Understanding of CI-CD - Maven, Git, Jenkins
• Understanding of Monitoring - ELK, Grafana, Prometheus
• Understanding of Java 8 - Cloud Native Application Development - Microservices
• Understanding of Spring Boot and Spring Cloud - REST APIs

Location: Hyderabad
Nationality: Indian
Working Mode: Onsite - Hyderabad
Working Hours: Monday - Friday, 9am to 6pm
Salary Range: Rs. 15,00,000 - 30,00,000 per annum

Hyderabad / Secunderabad, TG, IN