vns1311
Shyam Kumar V N
Bengaluru, India

A big data and data science enthusiast, continually developing my skills and focused on building a career in Data Science.

CodersRank Score

Score: 286.1
Rank: Top 3% (Top 50 Python Developers, Bengaluru)

Work Experience
Societe Generale Global Solution Centre
4 years 10 months
Bangalore Urban, India
Lead Software Engineer - Big Data
Apr 2021 - May 2022 (1 year 1 month)
- Lead the design and impact analysis of all new features and enhancements.
- Lead all aspects of the Software Development Lifecycle (SDLC) in line with Agile and IT craftsmanship principles.
- Identify methods to sustain and improve various craftsmanship areas.
- Lead the timely delivery of assigned artifacts with defined quality parameters.
- Perform code reviews and continuously ensure code quality.
- Automate manual tasks of application maintenance and support to improve efficiency.
- Lead initiatives of application modernization.
- Ensure constant review and update of Agile and DevOps practices.
- Constantly learn new/emerging technologies and mentor teams.
- Monitor the overall production processes such as daily checks, open tickets and aging of issues.
- Collaborate with customers, partners, development teams, chapter and feature teams.
- Coordinate with the development team to implement audit recommendations.
- Lead DevOps chapters and Guilds.
- Complete the assigned learning path and contribute to daily meetings.
- Guide the team on data processing solutions and building data pipelines.
- Identify new areas of technology and use cases for data validation and implementation.
Skills: Spark, Scala, Machine Learning, Python, Node.js, Ansible, Jenkins
Data Engineer
Apr 2019 - Mar 2021 (1 year 11 months)
Ingest and manage data from various traditional systems, store it in HDFS/Hive, and analyze/process batch and streaming data using Spark to produce meaningful insights for the business team.
Responsibilities:
- Write bug-free code that is scalable and modular (easy to maintain)
- Document code cleanly for easy review
- Deliver reliable, scalable, modular, quality software that fully meets client expectations
- Conduct code reviews
- Work effectively with geographically distributed teams while remaining independently accountable for results
Skills: Apache Spark, NiFi, Hive, PostgreSQL, Shell
Data Analyst
May 2017 - Mar 2019 (1 year 10 months)
Skills: Machine Learning, Data Visualization, Apache Spark, pandas, Python, scikit-learn
Hewlett Packard Enterprise
Aug 2014 - Apr 2017 (2 years 8 months)
Bengaluru Area, India
Data Engineer
Skills: Teradata, DataStage, Unix, Control-M
HireCraft Software Pvt Ltd
Aug 2012 - Sep 2013 (1 year 1 month)
Bengaluru Area, India
Software Intern
Skills: C#, Visual Studio
Projects
Data Warehouse Migration to Hadoop Ecosystem
Oct 2016 - Mar 2017
Worked on migration of a Tax Data Warehouse from Informatica/SAP to Hadoop Ecosystem.
As part of the work, designed a Data Migration Assistant (DMA) that handles the various data quality and integrity issues that arise when migrating data to the Hadoop Ecosystem.

Responsibilities:
• Ingest data into an archive zone directly from files
• Develop shell scripts to ease/automate the above
• Convert existing Informatica mappings to Hive/Pig scripts
• Design and maintain the code repository in GitHub
• Contribute to key design and data modeling decisions
• Identify and fix various data quality issues
• Find ad hoc solutions/workarounds to issues that arise at any step of the migration
Skills: Hive, MySQL, Hadoop
Deep History Analysis and Temperature Profiling
Jan 2016 - Jun 2016
• Developed a scripting solution that assesses the history accessed in the Data Warehouse, helping decide the archiving and history management strategy
• Analyzed the history reliance of existing ETL batches in terms of depth, by parsing logs and tracing information flow through the ETL
• Profiled the temperature of accessed data using the data sets available through the Teradata infrastructure
• Created temperature profiles for the data accessed, based on frequency, depth, and business criticality
Skills: Scripting, Teradata
Enterprise Data Warehouse Production Support
Nov 2014 - Dec 2015
Worked in the EDW Production Support team of a Large Telecom Data Warehouse.

The Enterprise Data Warehouse provides a strategic reporting and analytical platform for
systems managing customers, products and billing. The EDW Production Support team manages the publishing of information through ETL Batch processes that run on source feeds. Further, addressing user concerns pertaining to data availability, currency and quality of data is also a Production Support Function.

Responsibilities:
• Debugging ETL failures and proposing/implementing suitable fixes
• Fixing data quality issues and supporting user/ad hoc requests
• Developing/modifying ETL jobs based on new business requirements
• Optimizing inefficiently designed ETL jobs/BTEQs that adversely affect the system
• Frequently reviewing the performance of jobs/SQL and taking corrective measures
• Change management support: impact analysis, documentation, and review of changes
• Automating routine tasks by developing scripts
• Developing file transfer scripts that generate daily feed files from the Data Warehouse and deliver them to various downstream systems
Skills: Teradata, DataStage, Data Warehousing, Control-M
Education
upGrad.com
UpGrad & IIITB PG Diploma in Data Analytics, Data Analytics
Jan 2016 - Jan 2017
The UpGrad & IIITB PG Diploma in Data Analytics is an 11-month online program designed for working professionals to develop practical knowledge and skills, build a professional network, and accelerate entry into data analytics careers.
SJB Institute of Technology
Bachelor of Engineering (B.E.), Computer Science
Jan 2010 - Jan 2014
International Institute of Information Technology Bangalore
Masters By Research, Data Sciences
Aug 2018 - Present
Working on NLP and Network Science as part of the Web Science Lab.
upGrad rise - Deep Learning with PyTorch
Apr 2022
upGrad Rise ML Ops: Continuous Delivery & Automation Pipeline
Jan 2022
upGrad Rise - Big Data & Cloud
Feb 2021
