
PRAVEEN PRAKASAN

Dubai

Summary

Solutions-focused, meticulous, and results-oriented professional with over 15 years of a successful career distinguished by commended performance and proven results. Extensive expertise in SRE, DevOps, and DevSecOps [CI/CD/CM, Git, Docker, Service Mesh, Istio, Nginx, Kubernetes, Jenkins, ArgoCD, Ansible, Grafana, Nexus, HashiCorp toolkit (Nomad, Consul, Vault, Terraform), Broccoli], RPA (UiPath), HDInsight, and Cloud (AWS, Azure, Oracle OCI), as well as Big Data: Cloudera Hadoop (CM, CDH, CDP), YARN, Spark, HBase, Kafka, Kudu, Flink, Unix (RedHat, CentOS, Ubuntu), Hadoop security (Kerberos, TLS, Sentry, Encryption, Ranger), etc. Adept at working in high-pressure environments with strict deadlines and multiple deliverables to implement best practices that consistently deliver outstanding results. Excellent interpersonal, communication, and organisational skills with proven abilities in team management and planning.

Overview

15 years of professional experience
1 Certification

Work History

Senior Cloud Operation Manager

Halian [Orange Business Service - TONOMUS NEOM]
10.2022 - Current

Lead DevOps/Cloud Engineer

Allianz
06.2016 - 10.2022

Lead Bigdata Engineer

Cognub Decision Solutions
06.2014 - 06.2016

Big Data Engineer

Mirox Cyber Security & Technology Pvt Ltd
11.2009 - 05.2014

Education

BACHELOR OF SCIENCE - Computer Science

UIT Kerala University
TVM, Kerala, India
01.2009

Skills

  • Cloud Platforms: AWS, Azure, Edarat, Google Cloud Platform (GCP), Oracle OCI
  • CI/CD: ArgoCD, AWS DevOps, Azure DevOps, GitLab CI/CD, Jenkins, Oracle DevOps
  • Container Orchestration: AKS, AWS ECS/EKS, Docker Swarm, GKE, Kubernetes, Nomad, OpenShift, Oracle OKE, Rancher
  • Monitoring & Logging: AWS CloudWatch, Azure Monitor, DataDog, EFK and ELK, Google Cloud
    Operations Suite, Grafana, Nagios, OCI Monitoring Service, Prometheus, Splunk and Hunk
  • DevSecOps: Compliance And Policy Enforcement, Container Security, DAST, IAST, SAST, SIEM, Vulnerability Management
  • Database Management: AWS RDS, Azure SQL Database, Google Cloud SQL, MongoDB Atlas, Oracle Database
  • Tools & Frameworks: AWS API Gateway, BMC Discovery, BMC Remedy ITSM, CloudFlare, CloudVane, Databricks, Postman, Swagger UI, TIBCO
  • Collaboration Tools: Confluence, JIRA, Slack
  • SDLC Methodologies: Agile, Waterfall
  • Incident Management Tools: Remedy, ServiceNow
  • SRE Principles: High Availability, Incident Handling, On-Call Responsibilities
  • Kubernetes: Cluster Management, RBAC, OPA Agent, Helm, Kustomize, Networking, Monitoring, Logging, Policy Implementation, Federation, Optimization
  • Cloud Security: Amazon GuardDuty, AWS Shield, Azure Active Directory, Azure Key Vault, DDoS (AWS, Azure), IAM (AWS, Oracle, Keycloak), KMS (OCI, AWS), OCI Cloud Guard, WAF (AWS, OCI)
  • Programming Languages: Bash, Java, Python, Ruby
  • Version Control: AWS CodeCommit, Azure Repo, Bitbucket, Git, GitHub, GitLab, Oracle Coderepo
  • Infrastructure As Code (IaC): AWS CloudFormation, Google Cloud Deployment Manager, Terraform
  • Configuration Management: Ansible, Puppet

Certification

  • ITIL FOUNDATION
  • AWS CERTIFIED SOLUTIONS ARCHITECT (PROFESSIONAL)
  • CERTIFIED KUBERNETES ADMINISTRATOR (CKA)
  • GOOGLE PROFESSIONAL CLOUD ARCHITECT
  • MICROSOFT CERTIFIED: AZURE SOLUTIONS ARCHITECT EXPERT
  • IBM BIGINSIGHTS CERTIFIED HADOOP
  • CCNA (CISCO CERTIFIED NETWORK ASSOCIATE) & CCNP (R)

Projects

Telematics (Allianz)

Role: Team Lead

Tools: Cloudera, Terraform, Puppet, GitHub, GitLab, Spark, YARN, Hive, Impala, HBase, Zeppelin, QlikView, Datadog, Dynatrace, R Studio.

Telematics technologies represent advanced, self-orientating open network architectures comprised of variable programmable intelligent beacons. These technologies are designed to enhance the development of intelligent vehicles, aiming to integrate warning information with nearby vehicles, within the vehicle itself, and with surrounding infrastructure. Specifically, emergency warning systems for vehicle telematics are developed to promote international harmonization and standardization. This standardization focuses on vehicle-to-vehicle, infrastructure-to-vehicle, and vehicle-to-infrastructure real-time communication using Dedicated Short-Range Communication (DSRC) systems.

Real Estate (Propmix.io)

Role: Hadoop Admin /TL

Tools: Python, R, Scala, Spark, Hive, Flume, Kafka, Docker, Hadoop, Jenkins, GitHub, Terraform, Ansible, AWS Cloud, IAM, KMS, Vault.

Real estate decisions are often very subjective, uncertain and difficult to predict accurately. Some of these decisions include timing of the sale, price, interest rates, and the agent/property/locality that is the best fit, based on real-time data and insights. They also take into account past history and a variety of leading, lagging and coincident indicators that power a platoon of dynamic models. The platform uses price premium determinants such as seller profile, seller strategy, bidder strategy, and other big data components from MLS, Appraiser, Assessor and Foreclosure databases to build models that optimize the cognitive efforts of sellers, buyers and appraisers.

IAAS Platform (Allianz)

Role: Architect / Team Lead

Tools: Kubernetes (Docker), NiFi, Ambari, Hive, R, Python, Spark, Hadoop, Flume, Kafka, H2O, etc., Azure DevOps, Azure Cloud, Git, Ansible, Terraform.

Implemented a Docker-based IaaS (Infrastructure as a Service) platform as a pilot project for the R&D team. It was built with Kubernetes as the frontend and NiFi as the data integration tool, with the Hortonworks platform serving as the Hadoop data processing system. Kubernetes managed the Docker containers, while Ambari from Hortonworks managed the Hadoop cluster components, most of which were HA enabled. A web application with NiFi integration is associated with the platform for data import: clients can import data into the platform and use the tools in the application frontend to process it and generate their reports. Docker auto-scaling was enabled via Kubernetes to provide automatic scalability of containers.
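As a rough illustration of the auto-scaling piece, the sketch below creates a HorizontalPodAutoscaler with the official Kubernetes Python client; the deployment name, namespace, and CPU threshold are illustrative assumptions, not the project's actual settings.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (in-cluster config also works).
    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    # Scale the (hypothetical) "report-worker" Deployment between 2 and 10 pods,
    # targeting 70% average CPU utilisation.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="report-worker-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="report-worker"),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)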

Hadoop Cluster DB Migration

Role: Hadoop Admin /TL

Tool: Cloudera Manager

Hadoop has many sub-components such as Hive, Sentry, and Hue, each of which uses a separate database to store its metadata. When a new cluster is implemented, these databases also have to be migrated, since they are not covered by the HDFS data migration between the two clusters. Each database has to be backed up from the existing cluster and restored in the new cluster. Successfully migrated these databases to the new cluster.
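A minimal sketch of one such backup-and-restore step, assuming a MySQL-backed Hive metastore; the hosts, user, and database names below are placeholders, not the real cluster values.

    import subprocess

    # Placeholder endpoints for the old and new cluster metadata databases.
    SOURCE = {"host": "old-cm-db.example.com", "user": "hive", "db": "hive_metastore"}
    TARGET = {"host": "new-cm-db.example.com", "user": "hive", "db": "hive_metastore"}
    DUMP_FILE = "hive_metastore.sql"

    def backup(src, dump_file):
        """Dump the metadata database from the existing cluster (prompts for password)."""
        with open(dump_file, "w") as out:
            subprocess.run(
                ["mysqldump", "-h", src["host"], "-u", src["user"], "-p", src["db"]],
                stdout=out, check=True)

    def restore(dst, dump_file):
        """Restore the dump into the matching database on the new cluster."""
        with open(dump_file) as dump:
            subprocess.run(
                ["mysql", "-h", dst["host"], "-u", dst["user"], "-p", dst["db"]],
                stdin=dump, check=True)

    if __name__ == "__main__":
        backup(SOURCE, DUMP_FILE)
        restore(TARGET, DUMP_FILE)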

Server Attack Log Analyzer

Role : Consultant

Tools: Pig, Hive, Sqoop, Oozie, MySQL, Java, Splunk, GitHub, Jenkins

Analyzes server log files (web application, hosted server, any secured server, or DMZ data centre logs) using the Hadoop and MapReduce framework. It detects any type of Local File Inclusion (LFI) or Remote File Inclusion (RFI) attempt, any DoS (Denial of Service) or DDoS (Distributed Denial of Service) attack, and any virus or worm infection.
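A simplified sketch of the detection idea, written as a Hadoop Streaming style mapper in Python; the LFI/RFI signatures are illustrative assumptions, not the analyzer's actual rule set.

    import re
    import sys

    # Illustrative signatures only; real rules would be far more extensive.
    PATTERNS = {
        "LFI": re.compile(r"\.\./|/etc/passwd", re.IGNORECASE),
        "RFI": re.compile(r"=(https?|ftp)://", re.IGNORECASE),
    }

    def classify(line):
        """Yield (attack_type, 1) pairs for one access-log line."""
        for attack, pattern in PATTERNS.items():
            if pattern.search(line):
                yield attack, 1

    if __name__ == "__main__":
        # Usable as the map step of a Hadoop Streaming job; a reducer sums the counts.
        for line in sys.stdin:
            for attack, count in classify(line):
                print(f"{attack}\t{count}")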

Recommendation Engine

Role: Consultant

Tool: Hadoop (MapReduce, Hive, Pig), Solr, MongoDB, MySQL

Analyzed data from Twitter before and after the campaign to understand how well people were able to relate to the campaign and how strongly they associated the campaign with the products. This pilot will be extended to other SM (Social Media) channels depending on the successful implementation of this project and model, and used to correlate the effectiveness of SM with other marketing channels.

Real Time Stock Analysis

Role: Team Lead

Tools: Solr, Hadoop, Lucidworks (Banana) (Cloudera Manager)

Stock Analysis is used to predict stock market tickers in real time with big data. Data is indexed in a streaming fashion and queried in real time: as the data is being indexed, we query it and display the results in a graph.
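A minimal sketch of the index-while-querying flow using the pysolr client; the collection name and document fields are assumptions for illustration.

    import time
    import pysolr

    # Assumes a Solr collection named "stocks" with the fields used below.
    solr = pysolr.Solr("http://localhost:8983/solr/stocks", always_commit=True)

    def index_tick(tick):
        """Index a quote as soon as it arrives (streaming fashion)."""
        solr.add([{
            "id": f"{tick['symbol']}-{tick['ts']}",
            "symbol": tick["symbol"],
            "price": tick["price"],
            "ts": tick["ts"],
        }])

    def latest(symbol, rows=100):
        """Query the freshly indexed data in near real time for charting."""
        return solr.search(f"symbol:{symbol}", sort="ts desc", rows=rows)

    if __name__ == "__main__":
        index_tick({"symbol": "ACME", "price": 101.5, "ts": int(time.time())})
        for doc in latest("ACME"):
            print(doc["ts"], doc["price"])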

Image Search of 50 Million Datasets (Alamy)

Role: Team Lead

Tools: Solr, MySQL, Spark, Tomcat, Hadoop and Java

Near-real-time images uploaded by Alamy customers are fetched, and the search engine finds images that match the search terms entered, applying certain filters (age, size, license, bespoke, etc.), a relevancy check for customers, and a diversity algorithm. In order to return the most relevant images to customers, the search engine weights each keyword field in descending significance, with the Essential keyword field having the greatest relevance.
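A rough sketch of field-boosted relevance using Solr's edismax parser via pysolr; the field names, boost values, and filter fields are illustrative assumptions, not Alamy's actual schema.

    import pysolr

    # Assumes an "images" collection with tiered keyword fields.
    solr = pysolr.Solr("http://localhost:8983/solr/images")

    def search_images(terms, licence=None, min_width=None, rows=50):
        """Boost the Essential keyword field highest, then the less significant tiers."""
        filters = []
        if licence:
            filters.append(f"licence:{licence}")
        if min_width:
            filters.append(f"width:[{min_width} TO *]")
        return solr.search(terms, **{
            "defType": "edismax",
            "qf": "essential_keywords^5 main_keywords^2 comprehensive_keywords^1",
            "fq": filters,
            "rows": rows,
        })

    for doc in search_images("lighthouse sunset", licence="RM", min_width=3000):
        print(doc["id"])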

Reactivate Sleeping customers

Role: Team Lead

Tools: MSBI (SSIS, SSRS), SQL Server, HDFS, HBase (Cloudera Manager)

Customer segmentation and customer profiling using RFM analysis, finding patterns to reactivate luxury sleeping customers by analyzing their trade history.
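A minimal sketch of the RFM scoring step in pandas, assuming a flat trade-history extract; the column names, quartile scoring, and segment thresholds are illustrative, not the production rules.

    import pandas as pd

    # Assumed extract with customer_id, trade_date, and amount columns.
    trades = pd.read_csv("trade_history.csv", parse_dates=["trade_date"])
    now = trades["trade_date"].max()

    rfm = trades.groupby("customer_id").agg(
        recency=("trade_date", lambda d: (now - d.max()).days),
        frequency=("trade_date", "count"),
        monetary=("amount", "sum"),
    )

    # Score each dimension 1-4 by quartile (4 is best).
    rfm["r"] = pd.qcut(rfm["recency"], 4, labels=[4, 3, 2, 1]).astype(int)
    rfm["f"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
    rfm["m"] = pd.qcut(rfm["monetary"], 4, labels=[1, 2, 3, 4]).astype(int)

    # "Sleeping luxury" segment: high past spend, low recency and frequency.
    sleeping_luxury = rfm[(rfm["m"] == 4) & (rfm["r"] <= 2) & (rfm["f"] <= 2)]
    print(sleeping_luxury.head())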

Packet Data Analyzer

Role: Team Lead

Tools: HDFS, Spark, Hive, Zeppelin, Scala, Flume, Oozie, NoSQL (Cloudera Manager)

Packet Data Analyser produces the list of unauthorized users trying to access the server. It receives the log file from the server, extracts the data, and stores it in a database, then verifies the data to identify the unauthorized users and presents the details in graphical and tabular form.
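A simplified PySpark sketch of the extract-and-verify step; the log path, the "Failed password" pattern, and the output table are assumptions for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("packet-data-analyzer").getOrCreate()

    # Assumed location of the raw server logs on HDFS.
    logs = spark.read.text("hdfs:///logs/auth/*.log")

    # Keep failed authentication lines and pull out the source IP address.
    failed = (logs.filter(F.col("value").contains("Failed password"))
                  .withColumn("src_ip",
                              F.regexp_extract("value", r"from (\d+\.\d+\.\d+\.\d+)", 1)))

    # Count failed attempts per source IP; high counts flag unauthorized users.
    suspects = failed.groupBy("src_ip").count().orderBy(F.desc("count"))

    # Persist for the graphical/tabular reporting layer (assumes a Hive-enabled session).
    suspects.write.mode("overwrite").saveAsTable("security.unauthorized_access")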
