aziddddd
Azid Harun
Kuala Lumpur, Malaysia

A pure physicist who turns data into stories and pipelines into factories.

CodersRank Score: 305.1

CodersRank Rank: Top 5%
Top 5 Jupyter Notebook Developer in Malaysia
Top 5 Python Developer in Malaysia
Top 5 R Developer in Malaysia

Pandas: 147 exp.
NumPy: 47.4 exp.
Scikit-Learn: 37.4 exp.
Scipy: 26.1 exp.
Matplotlib: 17.2 exp.
Keras: 12.7 exp.
Pytest: 9.9 exp.
Streamlit: 7.1 exp.
SQLAlchemy: 4.1 exp.
Flask: 2.3 exp.
Sympy: 0.3 exp.
Work Experience
Maxis
May 2021 - Apr 2022 (11 months)
Kuala Lumpur, Malaysia
MLOps Engineer
Program and Processes
• Designed and led an MLOps Standardization Program for the entire MLOps team to streamline how AI projects are productionized.
• Designed and led the build of a centralized performance dashboard covering business, model and pipeline health for all AI projects. Tools involved: Data Studio, Google BigQuery, Cloud Logging, Pub/Sub, Cloud Functions, Bitbucket.
• Revolutionised the team's MLOps ecosystem by building a Maxis-specific, standardized machine learning pipeline fully integrated with CI/CD. Tools involved: Bitbucket, Docker, Pytest, Google Vertex AI Pipelines, Secret Manager, BigQuery, Cloud Functions, Cloud Scheduler, Pub/Sub, Cloud Storage.
• Built a Git repository as an easy-to-use toolkit for commonly used DS processes, with unit testing. Tools involved: Bitbucket, Docker, Pytest.
• Acted as a panelist in the Pakar STEM session of Maxis's flagship community program, Maxis eKelas, to encourage and support younger students considering a STEM career path.
• Acted as an assessor for the Maxis Scholarship Program 2021, supporting the Maxis Young Talent team in assessing potential scholarship candidates.

Productionization
• Built a churn prediction pipeline for prepaid users. Tools involved: Refer to standardized pipeline tools.
• Built a churn prediction pipeline for postpaid users. Tools involved: Refer to standardized pipeline tools.
• Built an AWS recommender system pipeline that recommends the best plans in the Hotlink app based on user behaviour and personalisation. Tools involved: AWS SageMaker, AWS Lambda, AWS S3, AWS Glue, AWS EC2, AWS Athena, AWS QuickSight, AWS CloudWatch, AWS EventBridge, AWS SSM.

Innovation
• Drove and contributed to building a resume search app that optimizes the HR recruitment process. Tools involved: Elasticsearch, Google Cloud Storage, App Engine Flex, Streamlit.
DKSH
Oct 2020 - Apr 2021 (6 months)
Kuala Lumpur, Malaysia
Data Scientist
• Automated a customer segmentation pipeline using Azure resources. Tools involved: Azure Synapse, Azure Blob Storage, Azure Databricks, Azure Data Factory, Azure SQL.
• Built a fraud detection pipeline for detecting anomaly transactions & claims for Finance teams. Tools involved: t-SNE, Autoencoder with PyTorch Lightning, Azure Databricks, Azure Blob Storage, Power BI.
• Built a general forecasting pipeline for univariate forecasting. Tools involved: DARTS, Azure Databricks, Azure Blob Storage, Power BI.
• Built a sentiment analysis pipeline to turn project feedback into actionable insights. Tools involved: HuggingFace, Azure Databricks, Azure Blob Storage, Power BI.
• Built a Git repository as an easy-to-use toolkit for commonly used DS processes, with unit testing. Tools involved: GitHub, Docker, Pytest.
Omnilytics
1 year 6 months
Kuala Lumpur, Malaysia
Data Scientist
Jul 2019 - Oct 2020 (1 year 3 months)
• Built custom NLP text classification models for fashion product labels such as size, gender and brand name.
• Built the ETL for sizes, gender and brand names, from raw data to live dashboard.
• Increased the dashboard accuracy of sizes from 69% to 92%.
• Increased the dashboard accuracy of gender from 29% to 99.8%.
• Maintained the dashboard accuracy of brand names above 99.51%.
• Supported other data scientists in improving the dashboard accuracy of product sub-categories.
• Built modern deep learning models (CNN, Xception) for fashion sub-category image classification.
• Performed R&D on state-of-the-art ML architectures and techniques to improve fashion sub-category image classification, e.g. AWS SageMaker, AWS BlazingText, EfficientNet, AutoAugment.
• Created algorithms to find the best model hyperparameters using grid search and random search (see the sketch after this list).
• Heavily involved in building dynamic Airflow DAG creation for both backfill and modern data pipelines, applicable to all existing major DS processes.
• Mob-programmed Flask apps, such as ML Dataset Maker and DS Radiator, to speed up DS team production.
• Built and maintained Streamlit pipelines for rapidly spinning up internal dashboards and for optimising the monitoring, analysing, benchmarking and bug-fixing of the above fashion labels.
• Built auto-generated MkDocs documentation for the DS codebase, updated daily with Airflow, to speed up codebase onboarding for the team.
• Enhanced DS codebase by reinforcing the codebase unit tests.
• Collaborated with the HR team to recruit DS interns.
• Evaluated incoming DS talent by reviewing coding challenge submissions and conducting technical interviews.
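A rough illustration of the hyperparameter search mentioned above (a minimal scikit-learn sketch on synthetic data; the estimator and parameter grid are placeholders, not the models actually used at Omnilytics):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Synthetic stand-in for the real training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 30]}

# Grid search tries every combination in the grid.
grid = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
grid.fit(X, y)

# Random search samples a fixed number of combinations instead.
rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_grid,
    n_iter=5,
    cv=5,
    random_state=0,
)
rand.fit(X, y)

print(grid.best_params_, rand.best_params_)
```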
Data Science Intern
Jun 2018 - Sep 2018 (3 months)
• Created external Python scripts to automatically identify size data problems.
• Reduced the percentage of uncleaned size data from 28.0% to 3.8%.
• Updated datasets and functions for size unit testing.
• Deployed data monitoring for size and colour processing.
• Created an attribute mapping process.
Projects
Phase Diagram of a Lennard-Jones System (Argon) by Molecular Dynamics Simulations in Python
Feb 2018 - Mar 2018
This project visualizes the phases of argon particles. The evolution of the system, whose particles interact through the Lennard-Jones pair potential, was simulated using the velocity Verlet time-integration algorithm, with the minimum image convention and periodic boundary conditions applied so that the system could be visualized correctly within the simulation box. Particle positions and velocities were initialized by a separate Python file containing molecular dynamics utilities, and an .xyz trajectory file was written out for visualization in VMD.
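The core update can be sketched as follows: a minimal, illustrative NumPy version of a velocity Verlet step with Lennard-Jones forces, the minimum image convention and periodic boundary conditions, in reduced units with an assumed cubic box of side `box` (not the original project code):

```python
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0):
    """Lennard-Jones forces with the minimum image convention (reduced units)."""
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        rij = pos[i] - pos[i + 1:]
        rij -= box * np.round(rij / box)              # minimum image convention
        r2 = np.sum(rij**2, axis=1)
        sr6 = (sigma**2 / r2) ** 3
        f = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2    # force magnitude divided by r
        forces[i] += (f[:, None] * rij).sum(axis=0)
        forces[i + 1:] -= f[:, None] * rij
    return forces

def velocity_verlet(pos, vel, box, dt, n_steps, mass=1.0):
    """Yield particle positions after each velocity Verlet step."""
    acc = lj_forces(pos, box) / mass
    for _ in range(n_steps):
        pos = (pos + vel * dt + 0.5 * acc * dt**2) % box   # periodic boundary conditions
        new_acc = lj_forces(pos, box) / mass
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
        yield pos
```

Each yielded frame can then be appended to an .xyz trajectory file for visualization in VMD, as described above.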
Gamma Ray Spectroscopy: Positron Annihilation
Jan 2018 - Jan 2018
A gamma-ray spectrometer consisting of two NaI(Tl) scintillation detectors, electronic modules (4890, 416A, 418A, 427A and 772) and an EASY-MCA-2K was used in this experiment, which was divided into two parts: the Inverse Square Law and the Angular Correlation. These measured, respectively, the displacement and angular variation of the annihilation gamma-gamma coincidence counting rate. The Inverse Square Law part verified that the intensity of the gamma rays is inversely proportional to the square of the distance from the ²²Na source. In the Angular Correlation part, the intensity of the gamma rays was seen to exhibit a narrow, Dirac-delta-like distribution, W(θ) = δ(θ − π), when plotted against the angle around 180°.
Data Generator and Maximum Likelihood Fitting
Oct 2018 - Dec 2018
A set of 10,000 random events with a distribution of decay time and angle is generated using the box method. A maximum likelihood fit is performed twice: first with only the decay-time information, and then with both decay time and angle information. For the first fit, the best-fit values of the physics parameters F, τ1 and τ2, together with their proper errors, are determined to be 1.0±0.0, 0.0±0.0 and 0.0±0.0, respectively. For the second fit, F, τ1 and τ2 are determined to be 0.5459(32), 1.3952(93) μs and 2.7088(215) μs, respectively.
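A minimal sketch of how the decay-time part of such a study could look in Python, assuming a hypothetical two-lifetime model p(t) = F/τ1·exp(-t/τ1) + (1-F)/τ2·exp(-t/τ2) and using SciPy for the unbinned fit (illustrative only, not the original assignment code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def pdf(t, F, tau1, tau2):
    """Hypothetical two-lifetime decay-time model."""
    return F / tau1 * np.exp(-t / tau1) + (1 - F) / tau2 * np.exp(-t / tau2)

def generate_box(n, F, tau1, tau2, t_max=10.0):
    """Box (accept-reject) method: throw (t, y) points in a box and keep y < pdf(t)."""
    events = []
    y_max = pdf(0.0, F, tau1, tau2)            # this pdf is largest at t = 0
    while len(events) < n:
        t = rng.uniform(0.0, t_max, size=n)
        y = rng.uniform(0.0, y_max, size=n)
        events.extend(t[y < pdf(t, F, tau1, tau2)])
    return np.array(events[:n])

def nll(params, t):
    """Negative log-likelihood for the unbinned maximum likelihood fit."""
    F, tau1, tau2 = params
    if not (0.0 < F < 1.0 and tau1 > 0.0 and tau2 > 0.0):
        return np.inf
    return -np.sum(np.log(pdf(t, F, tau1, tau2)))

t_events = generate_box(10_000, F=0.55, tau1=1.4, tau2=2.7)
fit = minimize(nll, x0=[0.5, 1.0, 3.0], args=(t_events,), method="Nelder-Mead")
print(fit.x)   # fitted F, tau1 and tau2
```

The quoted errors would come from the curvature of the log-likelihood around its minimum (or a tool such as iminuit), and the second fit would extend the model with the angular term.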

Education
University of Edinburgh
Bachelor of Science in Physics, Physics
Sep 2015 - Jul 2019
University of Malaya
Foundation in Science, Physical Sciences
Jun 2014 - Apr 2015
Blockchain: Beyond the Basics
Sep 2018
Arctic Code Vault Contributor
Ethereum: Building Blockchain Decentralized Apps (DApps)
Sep 2018
