Looking for candidates with strong experience in software development, especially in Big Data technologies, along with Python/PySpark or Java programming expertise.
• BE/B.Tech/MCA/MS-IT/CS/B.Sc/BCA or any other degree in a related field.
• Experience working with a Hadoop distribution (CDH/HDP/MapR).
• Hands-on experience with MapReduce, Hive 2.x, Spark 2.x.
• Conceptual knowledge of Data Structures & Algorithms
• In-depth knowledge of various Design Patterns (Java/Big Data or Python/Big Data) and Data Processing Patterns (batch/NRT/RT processing), with the ability to provide design & architecture for typical business problems
• Knowledge of and experience with NoSQL databases (Cassandra/HBase/MongoDB/CouchDB/Neo4j) and SQL databases (MySQL/Oracle).
• Experience with Kafka, Redis, distributed message queues, and distributed caching
• Proficient understanding of build tools (Ant/Maven) and code versioning tools (Git), with Continuous Integration experience
• Exposure to the complete PDLC/SDLC, along with experience working on projects using the Agile Scrum methodology
• Excellent communication, problem-solving & analytical skills, with the ability to thrive in a fast-paced, dynamic environment & operate under stringent deadlines
• Confident, highly motivated and passionate about delivery and customer satisfaction
• Strong technical development experience, writing performant code that follows best coding practices
• An out-of-the-box thinker, not limited to the work done in existing assignment(s)
Good to have:
• Knowledge of/experience with search platforms (Solr/Elasticsearch), as well as designing and implementing RESTful APIs
• Experience with cloud environments (AWS/GCP/Azure) and exposure to containers & container management platforms (Docker/Kubernetes)
• Understanding of Data Lake vs. Data Warehouse concepts, the ability to perform comparative analysis of data stores, and knowledge of/experience with their creation & maintenance
• Experience with Big Data ML toolkits (Spark ML/Mahout)
• Knowledge of Data Privacy, Data Governance, Data Compliance & Security
• Programming experience with Python/Scala
• Experience building & maintaining optimal, reliable data pipelines that deliver solutions quickly
• Experience working on open-source products
Roles and Responsibilities:
• Design and implement solutions for problems arising out of large-scale data processing
• Provide the team with technical direction(s)/approach(es) and guide them in resolving queries/issues, etc.
• Attend/drive various architectural, design and status calls with multiple stakeholders
• Take end-to-end ownership of all assigned tasks
• Design, build & maintain efficient, reusable & reliable code
www.impetus.com | 2020
• Test implementation, troubleshoot & correct problems
• Capable of working both as an individual contributor and within a team
• Ensure high-quality software development with complete documentation and traceability
• Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)
• Conduct technical trainings/sessions and write whitepapers/case studies/blogs, etc.
Python/Java programming expertise, along with Big Data experience in Spark/Hadoop/Hive.
Experience: 5–12 years
Degree: Graduate/Postgraduate in CSE/IT or a related field
Office Locations: Bengaluru/Noida/Indore/Gurgaon/Pune/Hyderabad
Remote Locations: Ahmedabad/Jaipur/Kochi/Chandigarh/Chennai