Categorical variables are variables whose values are selected from a group of categories or labels. For example, the variable Gender with the values of male or female is categorical, and so is the variable Marital status with the values of never married, married, divorced, or widowed. In some categorical variables, the labels have an intrinsic order; for example, in the variable Student's grade, the values of A, B, C, or Fail are ordered, A being the highest grade and Fail the lowest. These are called ordinal categorical variables. Variables in which the categories do not have an intrinsic order are called nominal categorical variables, such as the variable City, with the values of London, Manchester, Bristol, and so on.
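The ordinal case above can be made concrete with pandas, which supports ordered categoricals directly. The grade values below are a made-up illustration, not part of the dataset used later:

```python
import pandas as pd

# An ordered (ordinal) categorical: Fail < C < B < A
grades = pd.Series(pd.Categorical(['B', 'A', 'Fail', 'C'],
                                  categories=['Fail', 'C', 'B', 'A'],
                                  ordered=True))

# Because the categorical is ordered, comparisons are meaningful
better_than_c = grades > 'C'
```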

Categorical variables are often encoded as strings, which scikit-learn cannot process directly, so we need to replace those strings with numbers. This is known as categorical encoding. In this post we will cover the following techniques:

- Creating binary variables through one-hot encoding
- Performing one-hot encoding of frequent categories
- Replacing categories with ordinal numbers
- Replacing categories with counts or frequency of observations
- Encoding with integers in an ordered manner
- Encoding with the mean of the target
- Encoding with the Weight of Evidence
- Grouping rare or infrequent categories
- Performing binary encoding
- Performing feature hashing

```
import random
import pandas as pd
import numpy as np
```

```
data = pd.read_csv('crx.data', header=None)
```

```
cols = ['A'+str(s) for s in range(1, 17)]
data.columns = cols
data.head()
```

```
# The raw data uses '?' as a missing-value marker; uncomment to inspect:
# for i in data.columns:
#     print(i, data[i].unique())
```

```
# Replacing ? with np.nan
data.replace('?', np.nan, inplace=True)
# Re-casting
data['A2'] = data['A2'].astype('float')
data['A14'] = data['A14'].astype(float)
# Recoding the target variable A16 as binary
data['A16'] = data['A16'].map({'+':1, '-':0})
# Making a list of categorical and numerical columns in the dataset
cat_cols = [c for c in data.columns if data[c].dtype == 'O']
num_cols = [c for c in data.columns if data[c].dtype != 'O']
```

```
cat_cols, num_cols
```

```
# Filling missing data
data[num_cols] = data[num_cols].fillna(0)
data[cat_cols] = data[cat_cols].fillna('Missing')
# Saving the data
data.to_csv('creditAppUCI_1.csv', index=False)
```

#### Creating binary variables through one-hot encoding

In one-hot encoding, we represent a categorical variable as a group of binary variables, where each binary variable represents one category. The binary variable indicates whether the category is present in an observation (1) or not (0). A categorical variable with k unique categories can be encoded using k-1 binary variables. For Gender, k is 2 as it contains two labels (male and female); therefore, we need to create only one binary variable (k – 1 = 1) to capture all of the information.

For the color variable, which has three categories (k=3; red, blue, and green), we need to create two (k – 1 = 2) binary variables to capture all the information, so that the following occurs:

- If the observation is red, it will be captured by the variable red (red = 1, blue = 0).
- If the observation is blue, it will be captured by the variable blue (red = 0, blue = 1).
- If the observation is green, it will be captured by the combination of red and blue (red = 0, blue = 0).
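The three-colour scheme above can be sketched with pandas. Note that get_dummies() drops the *first* category, so here we declare 'green' first so that the remaining red/blue columns match the description above:

```python
import pandas as pd

# Declare the category order so that 'green' is the dropped baseline
colour = pd.Series(pd.Categorical(['red', 'blue', 'green'],
                                  categories=['green', 'red', 'blue']))

# k-1 encoding: 3 colours -> 2 binary columns ('red' and 'blue')
dummies = pd.get_dummies(colour, drop_first=True)
```

The 'green' observation ends up as red = 0, blue = 0, exactly the combination case in the list above.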

```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
```

```
data_one = pd.read_csv('creditAppUCI_1.csv')
data_one.head()
```

```
# Applying the train test split
x_train, x_test, y_train, y_test = train_test_split(
    data_one.drop(labels=['A16'], axis=1), data_one['A16'],
    test_size=0.3, random_state=0)
```

```
# Inspecting the unique categories A4 in training set
x_train['A4'].unique()
```

Let’s encode A4 into k-1 binary variables using pandas and then inspect the first five rows. get_dummies() creates one indicator (dummy) column per category of the variable.

```
temp = pd.get_dummies(x_train['A4'], drop_first=True)
```

To encode the variable into k binaries instead, use drop_first=False.

```
temp.head() # 4-1 columns to capture all the data of A4
```

```
cat_cols # List of all categorical variables
```

```
# Let's encode all the categorical variables and get a new df
x_train_enc = pd.get_dummies(x_train[cat_cols], drop_first=True)
x_test_enc = pd.get_dummies(x_test[cat_cols], drop_first=True)
```

```
x_train_enc.head()
```

If there are more categories in the train set than in the test set, get_dummies() will return more columns in the transformed train set than in the transformed test set.
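A common workaround for this column mismatch is to reindex the transformed test set against the train columns, filling any absent categories with 0. A minimal sketch with made-up A4 values (not the actual dataset split):

```python
import pandas as pd

# Toy train/test frames where the test set is missing category 'l'
train = pd.DataFrame({'A4': ['u', 'y', 'l']})
test = pd.DataFrame({'A4': ['u', 'y']})

train_enc = pd.get_dummies(train['A4'], drop_first=True)  # columns: 'u', 'y'
test_enc = pd.get_dummies(test['A4'], drop_first=True)    # columns: 'y' only

# Align the test columns to the train columns; missing ones become all-zero
test_enc = test_enc.reindex(columns=train_enc.columns, fill_value=0)
```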

#### Encoding using OneHotEncoder

Let’s create a OneHotEncoder transformer that encodes into k-1 binary variables

```
encoder = OneHotEncoder(categories='auto', sparse=False, drop='first')
# Note: scikit-learn >= 1.2 renames sparse to sparse_output
encoder.fit(x_train[cat_cols])
```

OneHotEncoder(drop='first', sparse=False)

Scikit-learn’s OneHotEncoder() will only encode the categories learned from the train set. If new categories appear in the test set, we can instruct the encoder to ignore them with the handle_unknown='ignore' argument, or to raise an error with handle_unknown='error'.

```
x_train_enc = encoder.transform(x_train[cat_cols]) # returns numpy arrays
x_test_enc = encoder.transform(x_test[cat_cols])
```

Unfortunately, the feature names are not preserved in the NumPy array, so identifying which binary column was derived from which variable is not straightforward: the encoder creates (k-1) new columns for every feature with k categories, and the array itself carries no column names to map them back to a DataFrame.

```
x_train_enc
```

#### Performing one-hot encoding of frequent categories

Sometimes a feature has so many categories that one-hot encoding expands the number of columns until the dataset becomes very difficult to handle. This problem can be mitigated by encoding only the most frequent categories of the feature.
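Before reaching for a library, the idea can be sketched in plain pandas: rank categories by frequency, then build an indicator column for the top ones only. The toy series and the is_ column names below are illustrative choices, not part of the dataset:

```python
import pandas as pd

s = pd.Series(['a', 'a', 'a', 'b', 'b', 'c'])

# Keep only the 2 most frequent categories
top = s.value_counts().head(2).index

# One indicator column per frequent category; rare 'c' gets no column
encoded = pd.DataFrame({f'is_{cat}': (s == cat).astype(int) for cat in top})
```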

**We’re using feature-engine’s OneHotEncoder (called OneHotCategoricalEncoder in older feature-engine releases) to make things simple**

```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from feature_engine.encoding import OneHotEncoder
```

```
data = pd.read_csv('creditAppUCI_1.csv')
data.head()
```

```
x_train, x_test, y_train, y_test = train_test_split(data.drop(labels='A16', axis=1), data['A16'],test_size=0.3, random_state=0)
```

```
# Creating OneHotEncoder for the top five frequent categories of the features A6 & A7
one_enc = OneHotEncoder(top_categories=5, variables=['A6', 'A7'], drop_last=False)
```

```
# Fitting the encoder
one_enc.fit(x_train)
```

OneHotEncoder(top_categories=5, variables=['A6', 'A7'])

```
one_enc.encoder_dict_ # Top 5 categories learned for A6 & A7
```

```
x_train = one_enc.transform(x_train)
x_test = one_enc.transform(x_test)
x_train.head()
```

```
x_test.head()
```
