Introduction about Zeta
Zeta intends to replace many legacy systems that banks use for processing payments. Banks need to leapfrog into an era of connected devices and omnipresent commerce. Banking should become an integral part of commerce and enterprise systems, enabling seamless consumer and business transactions. We expect the average number of interactions an account holder has with a bank to grow from today's 3-4 transactions per month to 12-15 per day in the near term, and to an unimaginably large number in the future. Most banking systems aren't designed for this scale. In many ways, these systems are limiting the imagination of the key stakeholders at banks. We want to change this fundamentally, in an interoperable and regulation-compliant manner.
We build large-scale transaction processing systems that can work with many current and future payment networks. We build applications that help banks realize the value of this new approach early. We also help banks to rapidly deliver the value of these applications to their customers.
Zeta’s platform accelerates digital-native neobanking, the unbundling and rebundling of banking, and invisible payments. It enables new-age financial institutions to go to market faster, and gives existing financial institutions the capabilities they need to partner with FinTechs and offer a suite of modern banking experiences.
Zeta was founded by Bhavin Turakhia and Ramki Gaddipati. Bhavin has also founded or co-founded Directi, Radix, Ringo, Codechef, and Flock. Zeta’s products were recognized as the Best Prepaid Card in 2018, and Zeta has been named Fastest Growing Company of the Year, received the Fin-tech Rising Star Award, and been listed among India’s Most Innovative Top 50 Product Companies by various trade journals. Zeta is PCI DSS, ISO 27001, and SOC 2 compliant.
Zeta was valued at $300 million in a recent Series C fundraise. Zeta’s platform is used by 3 million users and by financial institutions across 4 countries.
As a Director of Engineering, you will be playing a pivotal role in enabling Zeta to deliver to its ambitions. You will work with an amazing peer group that fuels this ambition.
Where is this role?
This role is part of the Data Engineering division of Zeta. The Data Engineering division is responsible for providing products and services for all the business and product teams at Zeta. The Data Platform is a sub-division of the Data Engineering division and is responsible for providing infrastructure to support various data processing workloads, including but not limited to big data analytics, streaming, ETL jobs, business intelligence, and reporting. The Data Platform teams also provide and manage business intelligence tools and build BI components pluggable into various Zeta products.
As Director of the Data Platform, you will be responsible for building and managing the engineering teams that build and run the data processing infrastructure and deliver BI components. Your direct and indirect span of control will be 15-20 top-notch engineers, BI developers, and managers.
What will the sub-division do?
- Architect the complete data processing pipelines and the storage and processing platform for various reporting, analytical, monitoring, and machine learning use cases, for datasets growing at hundreds of terabytes per year.
- Build and deploy the pipeline components, from ingestion and cleansing through transformation to archival
- Deploy and manage the data processing frameworks and tools for reporting, analytical and machine learning workloads
- Build, deploy and continuously enhance the data visualization, dashboarding and distribution applications
- Build components that can be embedded into various customer facing products to provide data centric features like reports and analytics
- Build the reports and dashboards for product and business teams required for day to day business analysis and for executive decision making
- Build applications that monitor and audit the data sanity and consistency across various storages used in the transactional and analytical workloads
- Define, document, and ensure adherence to the data lifecycle management practices required for extremely privacy-sensitive datasets
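The ingestion-to-archival flow in the list above can be sketched as a minimal staged pipeline. This is an illustrative toy, not Zeta's actual implementation; the record fields (`account`, `amount`) and the cleansing rule are assumptions made for the example:

```python
# Minimal sketch of an ingest -> cleanse -> transform -> archive pipeline.
# Field names and rules are illustrative assumptions, not Zeta's schema.

def ingest(raw_events):
    """Yield raw event dicts from an upstream source (here, a plain list)."""
    yield from raw_events

def cleanse(events):
    """Drop malformed records and normalize fields."""
    for e in events:
        if e.get("amount") is not None:
            yield {"account": e.get("account", "unknown"),
                   "amount": float(e["amount"])}

def transform(events):
    """Aggregate amounts per account, e.g. for a reporting rollup."""
    totals = {}
    for e in events:
        totals[e["account"]] = totals.get(e["account"], 0.0) + e["amount"]
    return totals

def archive(totals):
    """Return records in a sorted, archival-friendly form."""
    return sorted(totals.items())

raw = [{"account": "a1", "amount": "10.5"},
       {"account": "a2", "amount": None},   # malformed; dropped by cleanse
       {"account": "a1", "amount": "4.5"}]
print(archive(transform(cleanse(ingest(raw)))))  # [('a1', 15.0)]
```

In production, each stage would typically be a separate component (e.g. a stream consumer, a Spark job, an archival writer) rather than in-process functions, but the staged shape is the same.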
The current infrastructure is composed of the following toolset:
- Amazon DMS, Bucardo, Debezium
- Kinesis, Kafka, Spark
- Redshift, PostgreSQL, Elasticsearch, Metabase, Superset
- Hadoop, HBase
- The team will make judicious choices of the required infrastructure components, considering the desired performance characteristics and the total cost of operating the infrastructure
- The team will ensure that the platform components and the supported use cases are adequately documented, and will help the relying engineering and product teams adopt the data platform services
- The team operates the infrastructure in production and is fully responsible for the associated SLAs. The team will be supported by the production operations team who will provision and provide support for the lower level compute, network and storage resources.
What are your responsibilities?
- You will conceive and deliver the projects required for building and supporting the data processing infrastructure.
- You will work with the product teams of various BUs to understand the reporting and analytics requirements of their products and define, build and enhance the corresponding application components to meet the changing requirements.
- You will own all the projects of the Data platform sub-division and are accountable for end-to-end execution.
- You will define the scope and ensure continuous adherence to the scope of projects at each phase (initiation to sustenance/maintenance phase).
- You will be responsible for definition, adoption and adherence of the data lifecycle management practices.
- You will be responsible for guiding the teams accountable for delivering the reports and dashboards for various business teams.
- You will review and audit the documentation of the services offered by the data platform
In addition to the above specific responsibilities, as Director in the Engineering division, you will be responsible for:
- Hiring decisions, hiring process definition, and continuous improvements.
- Organization planning for this division: assess skill gaps and plan upskilling exercises.
- Liaising with all external and internal stakeholders for the team.
- Mentoring managers and leads.
- Skip level quarterly 1:1s.
- Salary revisions for all ICs.
- Engineering score-card definition and reviews.
- Responsible for all People-Must-Grow initiatives for your span.
- Own and run at least 2 BU wide initiatives.
- Publish team’s work internally and externally. Campaign for the team and team members’ recognition.
- Explore and make available opportunities for community recognition for the team’s work.
What are you accountable for?
- Continuous improvement of the reliability and usability of the data and the data processing infrastructure
- Cost optimization of the data platform
- Velocity and quality of the projects delivered by the team
- Production SLAs of the data platform
- Hiring, Retention, and Engagement metrics of the team
What are you expected to be good at?
To be successful in this role, the following are the areas of expertise classified by their importance:
- Experience in building and operating data processing infrastructure run across multiple data centers, supporting mission-critical workloads on 100+ TB datasets.
- A thorough understanding of various data processing workloads and the open-source toolset available for big data processing
- Lambda and Kappa architectures
- Cost/benefit analysis of various processing architectures and tools
- Expert-level understanding of data lifecycle management
- Mastery in building and managing high caliber teams
- In-depth understanding of production operations on public cloud infrastructure
- Excellent written and oral communication and penchant for technical documentation
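Of the architectures listed above, Lambda serves queries by merging a periodically recomputed batch view with a speed (streaming) view of recent data, while Kappa drops the batch layer and reprocesses everything as a stream. A toy sketch of a Lambda-style serving layer, with in-memory dicts standing in for the two views (the keys and values are illustrative, not a real deployment):

```python
# Toy sketch of a Lambda-architecture serving layer: query results merge
# a precomputed batch view with a recent speed (streaming) view.
# The views and keys here are illustrative assumptions.

batch_view = {"a1": 100, "a2": 40}   # periodically recomputed over all data
speed_view = {"a1": 5}               # incremental updates since the last batch run

def serve(key):
    """Merge the batch and speed views at query time."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(serve("a1"))  # 105  (batch 100 + recent 5)
print(serve("a2"))  # 40
```

The cost/benefit trade-off mentioned above is visible even here: Lambda keeps two codepaths (batch and streaming) that must agree, which is the complexity Kappa eliminates by making the stream the single source of truth.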
Good understanding of the following:
- Processing and storage components like Kafka, Spark, Hadoop, PostgreSQL, Redshift, Cassandra, Elasticsearch, and S3
- Data privacy, security, and governance
- Software SDLC
- Agile development practices
- CI/CD environment
- Writing operations manuals and documentation for complex software
- Performance, stress, and chaos testing models
- Application SLAs and Incident management practices
Nice to have:
- Hands-on experience in using machine learning models
- Hands-on experience in using the AWS suite of analytics, storage, and machine learning services
- Exposure to Terraform, Kubernetes, Docker, Jenkins X
- Exposure to open source contribution and managing open source projects
- Banking and Payments domain understanding
Experience and academic background
- 5+ years of experience in managing high-caliber engineering teams at a large-scale internet or SaaS company
- 5+ years of experience in handling big data workloads and Business intelligence systems
- 10+ years of overall experience in software application development, data engineering, or analytics at medium- to large-sized product companies.
- BE, B.Tech, M.Tech or ME in Computer Science or equivalent