Sr Software Engineer
DUTIES: Develop high-performance and distributed computing tasks using big data technologies such as Hadoop, Apache Spark, Snowflake, NoSQL, text mining and other distributed environment technologies. Develop dynamically scalable, highly available, fault-tolerant big data processing systems on the AWS cloud platform to deploy and support applications for data storage and processing. Utilize expertise in JVM-based functional languages including Scala and Java, as well as Python; Hadoop query languages including Pig and Hive; and alternative HDFS-based computing frameworks such as Spark. Design, develop, install, test and maintain data integrations from a variety of formats, including files, database extracts and external APIs, into data stores (including Snowflake, Elastic MapReduce and S3) using ETL tools, techniques and programming languages such as Python, Spark and SQL. Utilize big data programming languages and technology, write code, complete programming and documentation, and perform testing and debugging of applications. Analyze, design, program, debug and modify software enhancements and/or new products used in distributed, large-scale analytics and visualization solutions. Interact with data scientists and industry experts to understand how data needs to be converted, loaded and presented. Investigate and resolve problems as required, including working with various internal teams and vendors. Proactively monitor data flows with a focus on continued performance improvements. Develop proofs of concept (POCs) and best practices for application development. Own the application/data end-to-end, from requirements to post-production, working closely with other teams. Provide engineering leadership by actively advocating best practices and standards for software engineering. Share knowledge and guide junior engineers to level up the whole team. Work in a highly agile environment. Serve as Subject Matter Expert (SME) within own discipline/specialty area.
REQUIREMENTS: Requires a Master's degree, or foreign equivalent degree, in Electrical Engineering or Computer Science and three (3) years of experience in the job offered, or three (3) years of experience developing high-performance and distributed computing tasks using big data technologies such as Hadoop, NoSQL, text mining and other distributed environment technologies; utilizing expertise in JVM-based functional languages including Scala and Java, as well as Python, Hadoop query languages including Pig and Hive, and alternative HDFS-based computing frameworks such as Spark; analyzing, designing, programming, debugging and modifying software enhancements and/or new products used in distributed, large-scale analytics and visualization solutions; utilizing the AWS Cloud environment; using Spark, Scala or Python, and SQL; working with various database methodologies such as relational, columnar and NoSQL; and developing and maintaining production data pipelines. Experience therein to include one (1) year working on Snowflake Cloud Data Warehouse on AWS and experience with Terraform, Docker and Kubernetes.
AT&T is an Affirmative Action/Equal Opportunity Employer, and we are committed to hiring a diverse and talented workforce. EOE/AA/M/F/D/V