Logistic Regression

Predicting Customer Click

You have been hired as a consultant to a start-up that runs targeted marketing ads on Facebook. The company wants to analyze customer behaviour by predicting which customers click on the advertisement. The customer data is as follows:

Inputs:

  • Name
  • e-mail
  • Country
  • Time on Facebook
  • Estimated Salary (derived from other parameters)

Outputs:

  • Click (1: customer clicked on Ad, 0: Customer did not click on the Ad)

source: Dr. Ryan @STEMplicity

Importing the Relevant Libraries

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

Importing the Dataset

In [2]:
url = "https://datascienceschools.github.io/Machine_Learning/Classification_Models_CaseStudies/Facebook_Ads.csv"

dataset = pd.read_csv(url, encoding = 'latin_1')

dataset.head()
Out[2]:
Names emails Country Time Spent on Site Salary Clicked
0 Martina Avila cubilia.Curae.Phasellus@quisaccumsanconvallis.edu Bulgaria 25.649648 55330.06006 0
1 Harlan Barnes eu.dolor@diam.co.uk Belize 32.456107 79049.07674 1
2 Naomi Rodriquez vulputate.mauris.sagittis@ametconsectetueradip... Algeria 20.945978 41098.60826 0
3 Jade Cunningham malesuada@dignissim.com Cook Islands 54.039325 37143.35536 1
4 Cedric Leach felis.ullamcorper.viverra@egetmollislectus.net Brazil 34.249729 37355.11276 0
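Before modelling, it is worth confirming that the columns have the expected types and no missing values. A minimal sketch on a stand-in DataFrame with the same schema as Facebook_Ads.csv (the rows here are illustrative, not the real data):

```python
import pandas as pd

# Stand-in rows with the same columns as the dataset above,
# used to sanity-check column types and missing values.
sample = pd.DataFrame({
    'Names': ['Martina Avila', 'Harlan Barnes'],
    'emails': ['a@example.edu', 'b@example.co.uk'],
    'Country': ['Bulgaria', 'Belize'],
    'Time Spent on Site': [25.649648, 32.456107],
    'Salary': [55330.06006, 79049.07674],
    'Clicked': [0, 1],
})

print(sample.isnull().sum().sum())     # total missing values: 0
print(sample.dtypes['Clicked'])        # target is integer-coded
```

The same checks (`dataset.isnull().sum()`, `dataset.dtypes`) apply unchanged to the real dataset.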

Explore Dataset

Number & Percentage of Customers Who Clicked / Did Not Click

In [3]:
click = dataset[dataset['Clicked'] == 1]

not_click = dataset[dataset['Clicked'] == 0]

print("Total customers =", len(dataset))

print("\nNumber of customers who clicked on Ad =", len(click))
print("Percentage Clicked = {:.2f} %\n".format(len(click) / len(dataset) * 100))

print("Number of customers who did not click on Ad =", len(not_click))
print("Percentage who did not Click = {:.2f} %".format(len(not_click) / len(dataset) * 100))
Total customers = 499

Number of customers who clicked on Ad = 250
Percentage Clicked = 50.10 %

Number of customers who did not click on Ad = 249
Percentage who did not Click = 49.90 %
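The same class-balance check can be done in one line with `value_counts`. A sketch on a stand-in `Clicked` series with the counts reported above (250 clicks, 249 non-clicks):

```python
import pandas as pd

# Stand-in target column with the same class counts as the dataset.
clicked = pd.Series([1] * 250 + [0] * 249, name='Clicked')

counts = clicked.value_counts()                    # absolute counts per class
shares = clicked.value_counts(normalize=True) * 100  # percentages per class

print(counts[1], counts[0])        # 250 249
print(round(shares[1], 2))         # 50.1
```

A near 50/50 split like this means accuracy is a meaningful headline metric; with heavy imbalance, precision/recall would matter more.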

Scatterplot ('Time Spent on Site' vs 'Salary')

In [4]:
sns.scatterplot(data = dataset, x = 'Time Spent on Site', y = 'Salary', hue = 'Clicked')

plt.title('Facebook Ad: Customer Click')

plt.show()

Boxplot ('Clicked' vs 'Salary')

In [5]:
plt.figure(figsize =(5,5))

sns.boxplot(data = dataset, x = 'Clicked', y = 'Salary')

plt.show()

Boxplot ('Clicked' vs 'Time Spent on Site')

In [6]:
plt.figure(figsize =(5,5))

sns.boxplot(data = dataset, x = 'Clicked', y = 'Time Spent on Site')

plt.show()

Histogram (Distribution of 'Salary')

In [7]:
plt.hist(dataset['Salary'], bins=40)

plt.title('Distribution of Salary')

plt.show()

Histogram (Distribution of 'Time Spent on Site')

In [8]:
plt.hist(dataset['Time Spent on Site'], bins=20)

plt.title('Distribution of Time Spent on Site')

plt.show()

Declaring the Dependent & the Independent Variables

- The identifier columns (Names, emails, Country) carry no predictive signal, so the cell below selects only 'Time Spent on Site' and 'Salary' as features via iloc

- An equivalent, more explicit approach:

    - dataset.drop(['Names', 'emails', 'Country'], axis=1, inplace=True)
    - X = dataset.drop('Clicked', axis=1).values
    - y = dataset['Clicked'].values
In [9]:
X = dataset.iloc[:, 3:-1].values

y = dataset.iloc[:, -1].values

Splitting the Dataset into the Training Set and Test Set

In [10]:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 7)
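With a near-balanced target like this one, a plain random split usually preserves the class ratio, but passing `stratify=y` guarantees it in both partitions. A sketch on synthetic labels (`X_demo`/`y_demo` are illustrative, not the dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 100 samples, exactly half in each class.
X_demo = np.arange(100).reshape(-1, 1)
y_demo = np.array([0] * 50 + [1] * 50)

# stratify=y_demo forces the 50/50 ratio into both splits.
Xtr, Xte, ytr, yte = train_test_split(
    X_demo, y_demo, test_size=0.2, random_state=7, stratify=y_demo)

print(len(yte), yte.sum())   # 20 test samples, exactly 10 of class 1
```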

Feature Scaling

In [11]:
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()

X_train = sc.fit_transform(X_train)

X_test = sc.transform(X_test)
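`StandardScaler` applies z = (x - mean) / std column-wise, with the mean and std learned from the training data only (hence `fit_transform` on train, `transform` on test). A quick sketch verifying the result on toy numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two columns on very different scales, like time-on-site vs salary.
X_demo = np.array([[10., 100.],
                   [20., 200.],
                   [30., 300.]])

sc_demo = StandardScaler()
X_scaled = sc_demo.fit_transform(X_demo)

# After scaling, each column has mean 0 and (population) std 1.
print(np.allclose(X_scaled.mean(axis=0), 0))   # True
print(np.allclose(X_scaled.std(axis=0), 1))    # True
```

Without this step, the salary column (tens of thousands) would dominate the distance-like quantities inside the optimiser and the decision-boundary plots below would be badly stretched.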

Training the Logistic Regression Model

In [12]:
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(random_state = 0)

model.fit(X_train, y_train)
Out[12]:
LogisticRegression(random_state=0)
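Under the hood, logistic regression fits a linear score w·x + b and passes it through the sigmoid 1 / (1 + e^(-z)) to get the click probability. A sketch on toy data (names like `X_demo` are illustrative) confirming that `predict_proba` is exactly the sigmoid of `decision_function`:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary problem: class 1 when the two features sum to > 0.
rng = np.random.RandomState(0)
X_demo = rng.randn(50, 2)
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(int)

clf = LogisticRegression(random_state=0).fit(X_demo, y_demo)

scores = clf.decision_function(X_demo)      # linear score w.x + b
p1 = 1.0 / (1.0 + np.exp(-scores))          # sigmoid of the score

print(np.allclose(p1, clf.predict_proba(X_demo)[:, 1]))  # True
```

`predict` then simply thresholds this probability at 0.5.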

Predicting the Test Set Results

In [13]:
y_pred = model.predict(X_test)

Confusion Matrix

In [14]:
from sklearn.metrics import confusion_matrix, accuracy_score

cm = confusion_matrix(y_test, y_pred)

accuracy = accuracy_score(y_test, y_pred)

print("Accuracy is: {:.2f}%".format(accuracy*100))

sns.heatmap(cm, annot = True, fmt="d")

plt.show()
Accuracy is: 95.00%

Classification Report

In [15]:
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.98      0.92      0.95        51
           1       0.92      0.98      0.95        49

    accuracy                           0.95       100
   macro avg       0.95      0.95      0.95       100
weighted avg       0.95      0.95      0.95       100
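Every number in the report can be recomputed from the 2x2 confusion matrix. The entries below are not the notebook's raw output but are reconstructed to be consistent with the report above (supports 51 and 49, recalls 0.92 and 0.98, overall accuracy 0.95):

```python
import numpy as np

# Rows = true class, columns = predicted class:
# [[TN, FP],
#  [FN, TP]]
cm_demo = np.array([[47, 4],
                    [1, 48]])

tn, fp = cm_demo[0]
fn, tp = cm_demo[1]

accuracy  = (tp + tn) / cm_demo.sum()   # fraction of all correct
precision = tp / (tp + fp)              # of predicted clicks, how many real
recall    = tp / (tp + fn)              # of real clicks, how many caught

print(round(accuracy, 2))    # 0.95
print(round(precision, 2))   # 0.92
print(round(recall, 2))      # 0.98
```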

K-Fold Cross Validation

In [16]:
from sklearn.model_selection import cross_val_score

accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10)

print("Accuracy: {:.2f} %".format(accuracies.mean()*100))

print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
Accuracy: 90.47 %
Standard Deviation: 3.53 %
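`cross_val_score` is shorthand for fitting the estimator on each of the k training folds and scoring it on the held-out fold. A sketch on synthetic data (`make_classification` stands in for the real features) showing the loop it replaces:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic two-feature binary problem standing in for the dataset.
X_demo, y_demo = make_classification(n_samples=200, n_features=2,
                                     n_informative=2, n_redundant=0,
                                     random_state=0)

cv = KFold(n_splits=5)
scores = cross_val_score(LogisticRegression(), X_demo, y_demo, cv=cv)

# Manual equivalent: fit on each training fold, score on the held-out fold.
manual = []
for tr, te in cv.split(X_demo):
    clf = LogisticRegression().fit(X_demo[tr], y_demo[tr])
    manual.append(clf.score(X_demo[te], y_demo[te]))

print(np.allclose(scores, manual))  # True
```

The mean of `scores` is the cross-validated accuracy and its standard deviation measures how sensitive the model is to the particular split, exactly as reported above.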

Visualising the Training Set Results

In [17]:
from matplotlib.colors import ListedColormap

X_set, y_set = X_train, y_train

X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))

plt.contourf(X1, X2, model.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('magenta', 'blue')))

plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())

for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                color = ListedColormap(('magenta', 'blue'))(i), label = j)

plt.title('Logistic Regression (Training set)')
plt.xlabel('Time Spent on Site')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
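The straight boundary between the two coloured regions is the set of points where w1·x1 + w2·x2 + b = 0, i.e. where the predicted probability is exactly 0.5. A sketch on a toy model (`X_demo`, `w1`, `w2`, `b` are illustrative) solving the boundary equation for x2 and checking the probability there:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy separable problem: class 1 when x1 > x2.
rng = np.random.RandomState(1)
X_demo = rng.randn(100, 2)
y_demo = (X_demo[:, 0] - X_demo[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X_demo, y_demo)
(w1, w2), b = clf.coef_[0], clf.intercept_[0]

# Pick any x1 and solve w1*x1 + w2*x2 + b = 0 for x2.
x1 = 0.3
x2 = -(w1 * x1 + b) / w2

# On the boundary, the model is exactly undecided.
p = clf.predict_proba([[x1, x2]])[0, 1]
print(round(p, 6))   # 0.5
```

This is why the filled-contour trick works: `predict` over the meshgrid flips from 0 to 1 precisely along this line.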

Visualising the Test Set Results

In [18]:
from matplotlib.colors import ListedColormap

X_set, y_set = X_test, y_test

X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))

plt.contourf(X1, X2, model.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('magenta', 'blue')))

plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())

for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                color = ListedColormap(('magenta', 'blue'))(i), label = j)
    
plt.title('Logistic Regression (Test set)')
plt.xlabel('Time Spent on Site')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()