
Data Engineer

Airtel X Labs
Gurgaon
2 to 7 years
Java
Scala
Hadoop
Spark
Big data
HIRING EVENT

Address: Bharti Airtel Limited, Airtel Center, Sector 18, Gurugram, Haryana, India

About the opportunity

About Airtel Data Engineering Team

Airtel presents one of India’s most complex and vast data science challenges. Airtel generates hundreds of trillions of records from customer purchases, phone calls, network logs, apps, IoT devices, location data, and more.
The Intelligence team combines data science, data intelligence, and data infrastructure. The digital intelligence team works across the company to process high volumes of data, create micro-targeted marketing campaigns, improve personalization and optimization, assure revenue, and generate reporting and analytics. The platform derives insights from the activities of 400+ million customers.

What we have in store for you

As a Data Engineer, you are involved in all aspects of software development, including technical design, implementation, functional analysis, and release for mid-to-large-sized projects. We expect our data engineers to build durable data pipelines that scale elegantly as data volumes grow. The code you write will enable our users to get data in a timely manner. You’ll work on a variety of tools and systems, most of which are data-platform components (e.g., data pipelines, processing, visualization, reporting). In this role, you will work in an Agile development model, delivering quick wins without compromising the quality of the design, development, and operation of the BI components you build.

What Will You Do?

  • Design and implement large-scale real-time and batch data pipelines on the AWS platform.
  • Prototype creative solutions quickly by developing minimum viable products, and work with seniors and peers to craft and implement the team’s technical vision.
  • Communicate and work effectively with geographically distributed, cross-functional teams.
  • Participate in code reviews to assess overall code quality and flexibility.
  • Resolve problems and roadblocks as they occur with peers, and help unblock junior members of the team. Follow through on details and drive issues to closure.
  • Define, develop, and maintain artifacts such as technical designs and partner documentation. Drive continuous improvement in software and development processes within an agile development team.
  • Design, architect, implement, and support key datasets that provide structured, timely access to actionable business information, with the needs of the end customer always in view.
  • Gather and process raw data at scale (including writing scripts, scraping the web, calling APIs, and writing SQL queries).
  • Develop a deep understanding of vast data sources (existing on the cloud) and know exactly how, when, and which data to use to solve particular business problems.
  • Load data from disparate, complex data sets.
  • Design, build, install, configure, and support Hadoop.
  • Translate complex functional and technical requirements into detailed designs.
  • Perform analysis of vast data stores and uncover insights.
  • Maintain security and data privacy.
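The pipeline responsibilities above centre on the map/shuffle/reduce model that underlies Hadoop MapReduce and Spark. As a rough illustration only (plain Python for brevity, not the actual Airtel stack, and the sample records are invented), a word count makes the three phases concrete:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, 1) pairs — here, one pair per word."""
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group values by key, as the framework does between stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values — here, a simple sum."""
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical log lines standing in for real network records.
records = [
    "call dropped in sector 18",
    "call completed",
    "dropped packet in sector 18",
]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts["call"])    # 2
print(counts["sector"])  # 2
```

In a real cluster the shuffle is distributed across machines and the reduce runs per partition, but the key-grouping logic is the same.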

You’ll fit this role if you have

  • Degree in software engineering, computer science, informatics, or a similar field.
  • 2-7 years of relevant work experience in Big Data or distributed computing projects.
  • Experience designing and implementing Big Data/ML applications (data ingestion, real-time data processing, and batch analytics) using Spark Streaming, Kafka, and Hadoop.
  • Solid server-side programming skills (Scala, Node.js, or Java), and hands-on experience with OOAD, design patterns, and SQL.
  • Experience with microservice architecture and design.
  • Experience with Hadoop-ecosystem technologies, in particular MapReduce, Spark, Hive, and YARN.
  • Experience working with at least one distributed database system such as Hadoop (Hive/HDFS), Qubole, Teradata, Redshift, or DB2.
  • Solid experience building REST APIs, Java services, or Docker microservices.
  • Good knowledge of database structures, theories, principles, and practices.
  • Familiarity with data-loading tools such as Flume and Sqoop.
  • Knowledge of workflow schedulers such as Oozie.
  • Analytical and problem-solving skills, applied to the Big Data domain.
  • Proven understanding of Hadoop, Hive, Pig, and HBase.
  • Good aptitude in multi-threading and concurrency concepts.
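The last point, on multi-threading and concurrency, can be sketched with a minimal example (again in Python for brevity, not drawn from the job itself): a shared counter incremented by several threads, where a lock makes the read-modify-write step atomic so no updates are lost.

```python
import threading

class SafeCounter:
    """A counter that many threads can increment without losing updates."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            # The lock makes the read-modify-write atomic across threads;
            # without it, concurrent "+= 1" operations can interleave and
            # overwrite each other's results.
            with self._lock:
                self._value += 1

    @property
    def value(self):
        return self._value

counter = SafeCounter()
threads = [threading.Thread(target=counter.increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 40000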
