Update the high-level Infrastructure design documents.
Install, upgrade, maintain, and monitor BDA, Hadoop (CDH), Kafka, NiFi, and ODI clusters.
Set up and troubleshoot user access in the Hadoop cluster.
Create ACL roles for Hive databases for different access groups (see the Hive role sketch below).
Support data ingestion using Oracle GoldenGate, Confluent Kafka, and Hadoop (big data).
Gather business requirements to set up data transfers between multiple sources.
Replicate data from one cluster to other cluster environments (Prod to DR, DR to UAT, etc.; see the replication sketch below).
Provide migration support between multiple environments.
Tune code to improve performance of long-running MapReduce/Spark jobs using Hive, Spark, Anaconda, etc.
Add nodes and their respective services to the Hadoop/Kafka clusters.
Coordinate with multiple teams and vendors to stabilize the big data environments.
Integrate ETL applications (ODI/NiFi/UC4) with big data technologies for data processing and scheduling.
Renew server certificates for the Hadoop and Kafka clusters (see the certificate-expiry sketch below).
Generate encryption and decryption keys for communication between internal and external servers.
Write universal scripts to monitor server health for Hadoop, Kafka, NiFi, ODI, and other big data applications (see the health-check sketch below).
Fix network issues between the clusters.
Participate in daily sync-up meetings with the project team.
Enhance the existing framework for further deployment into production and maintenance.
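A minimal sketch of the Hive role/grant pattern mentioned above, assuming HiveServer2 with SQL-standard-based authorization; the host, database, role, and group names are placeholders, and the PyHive client is just one of several ways to run these statements.

```python
# Minimal sketch: create a read-only Hive role and grant it to an access group.
# Assumes HiveServer2 with SQL-standard-based authorization; all names are placeholders.
from pyhive import hive  # pip install 'pyhive[hive]'

conn = hive.connect(host="hiveserver2.example.com", port=10000, username="hive_admin")
cursor = conn.cursor()

statements = [
    "CREATE ROLE analyst_ro",                                # read-only role
    "GRANT SELECT ON DATABASE sales_db TO ROLE analyst_ro",  # database-level privilege
    "GRANT ROLE analyst_ro TO GROUP sales_analysts",         # map the role to a group
]
for stmt in statements:
    cursor.execute(stmt)

cursor.close()
conn.close()
```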
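For cluster-to-cluster replication (Prod to DR, DR to UAT), one common approach, not specified in this posting, is Hadoop's DistCp. The sketch below simply shells out to it; the cluster URIs and paths are placeholders.

```python
# Minimal sketch: HDFS-to-HDFS copy between clusters with DistCp.
# -update copies only changed files; -delete removes target files missing from the source.
import subprocess

src = "hdfs://prod-nn.example.com:8020/data/warehouse"  # placeholder source cluster path
dst = "hdfs://dr-nn.example.com:8020/data/warehouse"    # placeholder DR cluster path

subprocess.run(
    ["hadoop", "distcp", "-update", "-delete", src, dst],
    check=True,
)
```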
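To plan certificate renewals, a small script can report how many days remain before each endpoint's TLS certificate expires. The hostnames and ports below are illustrative examples, not values from this posting.

```python
# Minimal sketch: report days until TLS certificate expiry for cluster endpoints,
# so renewals can be scheduled ahead of time. Hostnames/ports are placeholders.
import socket
import ssl
import time

ENDPOINTS = [
    ("namenode01.example.com", 9871),  # HDFS NameNode HTTPS endpoint (example)
    ("kafka01.example.com", 9093),     # Kafka TLS listener (example)
]

def days_until_expiry(host: str, port: int) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_at - time.time()) // 86400)

for host, port in ENDPOINTS:
    print(f"{host}:{port} expires in {days_until_expiry(host, port)} days")
```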
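A "universal" health-check script can start as simple as a TCP port probe across the services in scope. The service names and ports below are illustrative defaults; in practice they would come from a config file per cluster.

```python
# Minimal sketch: port-level health probe for common cluster services.
# Service endpoints are illustrative placeholders.
import socket

SERVICES = {
    "HDFS NameNode RPC": ("namenode01.example.com", 8020),
    "YARN ResourceManager": ("rm01.example.com", 8032),
    "Kafka broker": ("kafka01.example.com", 9092),
    "NiFi UI": ("nifi01.example.com", 8443),
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "UP" if is_reachable(host, port) else "DOWN"
        print(f"{name:22s} {host}:{port}  {status}")
```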
Required Education: Bachelor's degree in Computer Science, Computer Information Systems, Information Technology, or a closely related technical field, or a combination of education and experience equating to the U.S. equivalent of a bachelor's degree in one of the aforementioned subjects.
Job Category: Bigdata