Handling Imbalanced Data - UnderSampling

Credit Card Fraud Detection

  • Context

It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.

  • Content

The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions. It contains only numerical input variables, which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data.

Features V1, V2, … V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'.

Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset.

The feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning.

Feature 'Class' is the response variable; it takes the value 1 in case of fraud and 0 otherwise.
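As noted above, the 'Amount' feature can serve as a per-example misclassification cost. The following is a hedged illustration of example-dependent cost-sensitive learning, not part of this notebook's undersampling workflow: one simple approximation is to pass per-transaction weights to a scikit-learn classifier via sample_weight. It assumes the DataFrame df is the one loaded further below.

In [ ]:
from sklearn.linear_model import LogisticRegression

# Per-example weights: each transaction is weighted by its amount,
# so misclassifying a large transaction costs more than a small one.
weights = 1.0 + df['Amount'].values

X_cost = df.drop('Class', axis=1).values
y_cost = df['Class'].values

clf = LogisticRegression(max_iter=1000)
clf.fit(X_cost, y_cost, sample_weight=weights)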

Download Dataset

Importing the Relevant Libraries

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

Importing the Dataset

In [2]:
df = pd.read_csv('creditcard.csv')

df.head()
Out[2]:
Time V1 V2 V3 V4 V5 V6 V7 V8 V9 ... V21 V22 V23 V24 V25 V26 V27 V28 Amount Class
0 0.0 -1.359807 -0.072781 2.536347 1.378155 -0.338321 0.462388 0.239599 0.098698 0.363787 ... -0.018307 0.277838 -0.110474 0.066928 0.128539 -0.189115 0.133558 -0.021053 149.62 0
1 0.0 1.191857 0.266151 0.166480 0.448154 0.060018 -0.082361 -0.078803 0.085102 -0.255425 ... -0.225775 -0.638672 0.101288 -0.339846 0.167170 0.125895 -0.008983 0.014724 2.69 0
2 1.0 -1.358354 -1.340163 1.773209 0.379780 -0.503198 1.800499 0.791461 0.247676 -1.514654 ... 0.247998 0.771679 0.909412 -0.689281 -0.327642 -0.139097 -0.055353 -0.059752 378.66 0
3 1.0 -0.966272 -0.185226 1.792993 -0.863291 -0.010309 1.247203 0.237609 0.377436 -1.387024 ... -0.108300 0.005274 -0.190321 -1.175575 0.647376 -0.221929 0.062723 0.061458 123.50 0
4 2.0 -1.158233 0.877737 1.548718 0.403034 -0.407193 0.095921 0.592941 -0.270533 0.817739 ... -0.009431 0.798278 -0.137458 0.141267 -0.206010 0.502292 0.219422 0.215153 69.99 0

5 rows × 31 columns

Number & Percentage of Fraud/Not Fraud

In [3]:
fraud = df[df['Class'] == 1]

not_fraud = df[df['Class'] == 0]

print("Total =", len(df))

print("\nFraud =", len(fraud))
print("Percentage of Fraud = {:.2f} %".format(1.*len(fraud)/len(df)*100.0))
 
print("\nNot Fraud =", len(not_fraud))
print("Percentage of Not Fraud = {:.2f} %".format(1.*len(not_fraud)/len(df)*100.0))
Total = 284807

Fraud = 492
Percentage of Fraud = 0.17 %

Not Fraud = 284315
Percentage of Not Fraud = 99.83 %

Countplot (Fraud/Not Fraud)

In [4]:
sns.countplot(x='Class', data=df, palette='Set1')

plt.title("Transaction Class Distribution")
LABELS = ["Normal", "Fraud"]
plt.xticks(range(2), LABELS)
plt.xlabel("Class")
plt.ylabel("Frequency")
plt.show()

Declaring the Dependent & the Independent Variables

In [5]:
X = df.iloc[:, :-1].values

y = df.iloc[:, -1].values
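NearMiss (used below) selects majority-class samples by distance to minority-class samples, so the two untransformed features 'Time' and 'Amount', which sit on much larger scales than the PCA components, can dominate the neighbour computation. Here is an optional, minimal sketch that scales those two columns before resampling, assuming scikit-learn's StandardScaler; this step is not part of the original workflow.

In [ ]:
from sklearn.preprocessing import StandardScaler

# Columns 0 and -1 of X are 'Time' and 'Amount' (V1–V28 sit in between).
scaler = StandardScaler()
X_scaled = X.astype(float).copy()
X_scaled[:, [0, -1]] = scaler.fit_transform(X_scaled[:, [0, -1]])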

Installing Imbalanced-Learn Library

In [ ]:
!pip install imbalanced-learn

Implementing Undersampling for Handling Imbalanced Data

In [6]:
from imblearn.under_sampling import NearMiss

nm = NearMiss()

X_res, y_res = nm.fit_resample(X, y)
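NearMiss() with default parameters corresponds to version=1. As a hedged comparison sketch, imbalanced-learn also provides NearMiss version=2 and version=3, as well as RandomUnderSampler, which simply drops majority-class samples at random; any of these could be substituted here.

In [ ]:
from imblearn.under_sampling import NearMiss, RandomUnderSampler

# Another NearMiss variant; versions 1-3 differ in how the retained
# majority samples are chosen relative to the minority class.
nm3 = NearMiss(version=3)
X_nm3, y_nm3 = nm3.fit_resample(X, y)

# RandomUnderSampler drops majority samples uniformly at random.
rus = RandomUnderSampler(random_state=42)
X_rus, y_rus = rus.fit_resample(X, y)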

Counter

In [7]:
from collections import Counter

print('Original dataset:', Counter(y) )

print('\nResampled dataset:', Counter(y_res))
Original dataset: Counter({0: 284315, 1: 492})

Resampled dataset: Counter({0: 492, 1: 492})
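One practical caveat worth adding here: resampling is usually applied only to the training split, so the model is evaluated on data that keeps the original class distribution. A minimal sketch, assuming scikit-learn's train_test_split:

In [ ]:
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import NearMiss

# Split first, then undersample only the training portion so that
# no resampling decision leaks into the evaluation set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

X_train_res, y_train_res = NearMiss().fit_resample(X_train, y_train)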