
Software Engineer_Hadoop_3B 1 (34635)


Company Overview
Incedo is a US-based consulting, data science, and technology services firm with over 2,500 people helping clients from our six offices across the US and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, financial services, product engineering, and life science & healthcare industries.
Working at Incedo will provide you with an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities, starting with a structured onboarding program and continuing throughout the various stages of your career. A variety of fun activities are also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect, or a domain expert based on your skills and interests.
Software Developer_Hadoop_3B
- Working in shifts - 24.5
- Location: Chennai and Hyderabad
- 10-15% diversity
- Timeline: immediate
- Night shift: work from home
- Otherwise, work from office
Here are the JDs.
Hadoop/Atlas experience (3-4 openings, 4 to 5 years of experience)
Description:
JOB DUTIES:
- Responsible for implementation and ongoing administration of Atlas data governance and IBM Cloud Pak infrastructure initiatives.
- Responsible for implementation and ongoing administration of Atlas and Hadoop infrastructure initiatives.
- Support our Kubernetes-based projects to resolve critical and complex technical issues.
- Experience in CoreOS and Linux OS installations.
- Experience setting up NFS shared drives on Unix servers.
- Experience setting up Kafka, Solr, Ranger, and Cassandra.
- Experience setting up Apache Atlas.
- Experience setting up PostgreSQL databases.
- Working with application and operations teams on the production implementation of the Orchestration platform, including setting up Linux users and Kerberos principals and installing Tomcat, Spring Boot, and other tools on the platform.
- Set up the Atlas platform together with Kafka, Solr, and Ranger.
- Strong knowledge of the Unix platform.
- Strong knowledge of shell scripting.
- Sound knowledge of Ranger, NiFi, Kafka, Atlas, Hive, Storm, Pig, Spark, Elasticsearch, Splunk, Solr, Kyvos, HBase, and other big data tools.
- Monitor Hadoop cluster connectivity and security.
- Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
- Implement automation tools and frameworks (CI/CD pipelines); knowledge of Ansible, Jenkins, Jira, Artifactory, Git, etc.
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company's engineering tools, systems, procedures, and data security.
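For candidates unfamiliar with how the Atlas/Kafka/Solr/Ranger pieces above fit together: Apache Atlas is typically wired to Solr (search index), Kafka (notification hooks), Ranger (authorization), and Kerberos (authentication) through `atlas-application.properties`. A minimal illustrative fragment follows; all hostnames and ports are placeholders, not values from this posting:

```properties
# Illustrative atlas-application.properties fragment -- hosts/ports are placeholders.
# Graph search index backed by Solr (SolrCloud mode, discovered via ZooKeeper)
atlas.graph.index.search.backend=solr
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=zk1.example.com:2181,zk2.example.com:2181

# Metadata notification hooks flow through an external Kafka cluster
atlas.notification.embedded=false
atlas.kafka.bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
atlas.kafka.zookeeper.connect=zk1.example.com:2181

# Delegate authorization decisions to Apache Ranger
atlas.authorizer.impl=ranger

# Kerberos authentication, as referenced by the Kerberos-principal setup duty above
atlas.authentication.method.kerberos=true
```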
JD for OpenShift Kubernetes/DevOps (1-2 openings)
- Responsible for implementation and ongoing administration of OpenShift Kubernetes and IBM Cloud Pak infrastructure initiatives.
- Experience creating and managing production-scale Red Hat OpenShift Kubernetes clusters; deep understanding of Kubernetes networking.
- Responsible for implementation and ongoing administration of Atlas and Hadoop infrastructure initiatives.
- Support our OpenShift Kubernetes-based projects to resolve critical and complex technical issues.
- Experience in CoreOS and Linux OS installations.
- Performing application deployments on Kubernetes clusters.
- Securely managing Kubernetes clusters on at least one of the cloud providers.
- Kubernetes core concepts: Deployment, ReplicaSet, DaemonSet, StatefulSet, Job.
- Ingress controllers (NGINX, Istio, etc.) and cloud-native load balancers.
- Managing Kubernetes storage (PV, PVC, StorageClasses, provisioners).
- Kubernetes networking (Services, Endpoints, DNS, LoadBalancers).
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
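As a sketch of how the core objects listed above relate in practice, the following manifest combines a Deployment (backed by a ReplicaSet), a PersistentVolumeClaim bound through a StorageClass, and a Service providing DNS and endpoints. All names, images, and sizes are illustrative placeholders, not taken from this posting:

```yaml
# Illustrative manifest: Deployment mounting a PVC, exposed by a Service.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard          # provisioner-backed StorageClass (placeholder name)
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2                         # managed via a ReplicaSet under the hood
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
---
apiVersion: v1
kind: Service                         # stable DNS name + endpoints for the pods above
metadata:
  name: app
spec:
  selector: { app: app }
  ports:
    - port: 80
      targetPort: 80
```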
Company Value
We are an Equal Opportunity Employer. We value diversity at Incedo. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Company: Incedoinc
Posted: 07/24/2022
Location: Hyderabad / Secunderabad, TG, IN