How to Create a Movie Recommendation System


In this notebook, I will use a few recommendation algorithms (content-based, popularity-based and collaborative filtering) and build an ensemble of these models to arrive at our final movie recommendation system. We have two MovieLens datasets at our disposal.

Full Dataset: contains 26,000,000 ratings and 750,000 tag applications applied to 45,000 movies by 270,000 users. Includes tag genome data with 12 million relevance scores across 1,100 tags.
Small Dataset: contains 100,000 ratings and 1,300 tag applications applied to 9,000 movies by 700 users.
I will build a Simple Recommender using movies from the Full Dataset, whereas all personalized recommender systems will use the small dataset (due to the limited computing power available to me). As a first step, I will build my Simple Recommender.

%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from ast import literal_eval
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import wordnet
from surprise import Reader, Dataset, SVD, evaluate  # note: evaluate() was removed in newer Surprise versions; use surprise.model_selection.cross_validate there

import warnings; warnings.simplefilter('ignore')

Simple Recommender

The Simple Recommender provides general recommendations to every user based on movie popularity and (sometimes) genre. The basic idea behind this recommender is that movies that are more popular and more critically acclaimed will have a higher probability of being liked by the average audience. This model does not give personalized recommendations.

Implementing this model is very simple. All we have to do is sort our movies based on ratings and popularity and display the top movies of our list. As an additional step, we can pass in a genre argument to get the top movies of a particular genre.

md = pd.read_csv('../input/movies_metadata.csv')
md.head()


md.head() (output truncated): the first five rows of movies_metadata.csv — Toy Story, Jumanji, Grumpier Old Men, Waiting to Exhale and Father of the Bride Part II, all 1995 releases — with columns including adult, belongs_to_collection, budget, genres, id, imdb_id, overview, release_date, revenue, runtime, tagline, title, vote_average and vote_count.

5 rows × 24 columns

In [3]:

md['genres'] = md['genres'].fillna('[]').apply(literal_eval).apply(lambda x: [i['name'] for i in x] if isinstance(x, list) else [])
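As a quick illustration of what this line does (the sample string below is made up, but mirrors the format of the genres column):

```python
from ast import literal_eval

# The genres column stores each cell as a *string* that looks like a list of
# dicts; literal_eval safely parses it back into real Python objects.
raw = "[{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}]"
parsed = literal_eval(raw)
names = [g['name'] for g in parsed]  # same extraction as the lambda above
print(names)  # → ['Animation', 'Comedy']
```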

I use the TMDB Ratings to come up with our Top Movies Chart. I will use IMDB’s weighted rating formula to construct my chart. Mathematically, it is represented as follows:

Weighted Rating (WR) = (v/(v+m))·R + (m/(v+m))·C


  • v is the number of votes for the movie
  • m is the minimum number of votes required to be listed on the chart
  • R is the average rating of the movie
  • C is the mean vote across the whole report

The next step is to determine an appropriate value for m, the minimum number of votes required to be listed on the chart. We will use the 95th percentile as our cutoff. In other words, for a movie to feature on the chart, it must have more votes than at least 95% of the movies on the list.
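To make the percentile cutoff concrete, here is a toy sketch (the vote counts below are invented for illustration) of how pandas computes the cutoff m:

```python
import pandas as pd

# Invented vote counts for ten movies; quantile(0.95) interpolates between
# the two largest values, so only the very top movie clears the bar.
counts = pd.Series([10, 20, 50, 100, 200, 400, 800, 1600, 3200, 6400])
m = counts.quantile(0.95)
print(round(m, 6))           # → 4960.0
print((counts >= m).sum())   # → 1, i.e. only one movie qualifies
```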

I will build our overall Top 250 Chart and will then define a function to build charts for a specific genre. Let’s get started!

vote_counts = md[md['vote_count'].notnull()]['vote_count'].astype('int')
vote_averages = md[md['vote_average'].notnull()]['vote_average'].astype('int')
C = vote_averages.mean()



In [5]:

m = vote_counts.quantile(0.95)



In [6]:

md['year'] = pd.to_datetime(md['release_date'], errors='coerce').apply(lambda x: str(x).split('-')[0] if pd.notnull(x) else np.nan)

In [7]:

qualified = md[(md['vote_count'] >= m) & (md['vote_count'].notnull()) & (md['vote_average'].notnull())][['title', 'year', 'vote_count', 'vote_average', 'popularity', 'genres']]
qualified['vote_count'] = qualified['vote_count'].astype('int')
qualified['vote_average'] = qualified['vote_average'].astype('int')


qualified.shape

(2274, 6)

Therefore, in order to qualify for consideration on the chart, the movie must have at least 434 votes in the TMDB. We also see that the average movie rating on TMDB is 5.244 out of 10. 2274 movies are eligible for our chart.
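As a hand check of the formula with the values just reported (m = 434, C = 5.244; the movie's vote count v and rating R below are made up):

```python
# Hedged sanity check of the weighted-rating formula using the values
# reported above: m = 434 votes, C = 5.244 mean rating.
m, C = 434, 5.244
v, R = 1000, 8.0  # a hypothetical movie: 1000 votes, average rating 8.0
wr = (v / (v + m)) * R + (m / (v + m)) * C
print(round(wr, 3))  # → 7.166: the rating is pulled down toward C
```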

def weighted_rating(x):
    v = x['vote_count']
    R = x['vote_average']
    return (v/(v+m) * R) + (m/(m+v) * C)

In [9]:

qualified['wr'] = qualified.apply(weighted_rating, axis=1)

In [10]:

qualified = qualified.sort_values('wr', ascending=False).head(250)

Top Movies

In [11]:



qualified.head(15)

       title                                              year  vote_count  vote_average  popularity  genres                                             wr
15480  Inception                                          2010  14075       8             29.1081     [Action, Thriller, Science Fiction, Mystery, A…   7.917588
12481  The Dark Knight                                    2008  12269       8             123.167     [Drama, Action, Crime, Thriller]                   7.905871
22879  Interstellar                                       2014  11187       8             32.2135     [Adventure, Drama, Science Fiction]                7.897107
2843   Fight Club                                         1999  9678        8             63.8696     [Drama]                                            7.881753
4863   The Lord of the Rings: The Fellowship of the Ring  2001  8892        8             32.0707     [Adventure, Fantasy, Action]                       7.871787
292    Pulp Fiction                                       1994  8670        8             140.95      [Thriller, Crime]                                  7.868660
314    The Shawshank Redemption                           1994  8358        8             51.6454     [Drama, Crime]                                     7.864000
7000   The Lord of the Rings: The Return of the King      2003  8226        8             29.3244     [Adventure, Fantasy, Action]                       7.861927
351    Forrest Gump                                       1994  8147        8             48.3072     [Comedy, Drama, Romance]                           7.860656
5814   The Lord of the Rings: The Two Towers              2002  7641        8             29.4235     [Adventure, Fantasy, Action]                       7.851924
256    Star Wars                                          1977  6778        8             42.1497     [Adventure, Action, Science Fiction]               7.834205
1225   Back to the Future                                 1985  6239        8             25.7785     [Adventure, Comedy, Science Fiction, Family]       7.820813
834    The Godfather                                      1972  6024        8             41.1093     [Drama, Crime]                                     7.814847
1154   The Empire Strikes Back                            1980  5998        8             19.471      [Adventure, Action, Science Fiction]               7.814099
46     Se7en                                              1995  5915        8             18.4574     [Crime, Mystery, Thriller]                         7.811669

We see that three Christopher Nolan films, Inception, The Dark Knight and Interstellar, occur at the very top of our chart. The chart also indicates a strong bias of TMDB users towards particular genres and directors.

Let us now construct our function that builds charts for particular genres. For this, we will relax our default condition from the 95th percentile to the 85th.

In [12]:

s = md.apply(lambda x: pd.Series(x['genres']), axis=1).stack().reset_index(level=1, drop=True)
s.name = 'genre'
gen_md = md.drop('genres', axis=1).join(s)

In [13]:

def build_chart(genre, percentile=0.85):
    df = gen_md[gen_md['genre'] == genre]
    vote_counts = df[df['vote_count'].notnull()]['vote_count'].astype('int')
    vote_averages = df[df['vote_average'].notnull()]['vote_average'].astype('int')
    C = vote_averages.mean()
    m = vote_counts.quantile(percentile)
    qualified = df[(df['vote_count'] >= m) & (df['vote_count'].notnull()) & (df['vote_average'].notnull())][['title', 'year', 'vote_count', 'vote_average', 'popularity']]
    qualified['vote_count'] = qualified['vote_count'].astype('int')
    qualified['vote_average'] = qualified['vote_average'].astype('int')
    qualified['wr'] = qualified.apply(lambda x: (x['vote_count']/(x['vote_count']+m) * x['vote_average']) + (m/(m+x['vote_count']) * C), axis=1)
    qualified = qualified.sort_values('wr', ascending=False).head(250)
    return qualified

Let us see our method in action by displaying the Top 15 Romance Movies (Romance almost didn’t feature at all in our Generic Top Chart despite being one of the most popular movie genres).

Top Romance Movies

In [14]:



build_chart('Romance').head(15)

       title                        year  vote_count  vote_average  popularity  wr
10309  Dilwale Dulhania Le Jayenge  1995  661         9             34.457      8.565285
351    Forrest Gump                 1994  8147        8             48.3072     7.971357
40251  Your Name.                   2016  1030        8             34.461252   7.789489
883    Some Like It Hot             1959  835         8             11.8451     7.745154
1132   Cinema Paradiso              1988  834         8             14.177      7.744878
37863  Sing Street                  2016  669         8             10.672862   7.689483
882    The Apartment                1960  498         8             11.9943     7.599317
38718  The Handmaiden               2016  453         8             16.727405   7.566166
3189   City Lights                  1931  444         8             10.8915     7.558867
24886  The Way He Looks             2014  262         8             5.71127     7.331363
45437  In a Heartbeat               2017  146         8             20.82178    7.003959
19731  Silver Linings Playbook      2012  4840        7             14.4881     6.970581

The top romance movie according to our metrics is Bollywood’s Dilwale Dulhania Le Jayenge. This Shahrukh Khan starrer also happens to be one of my personal favorites.

Content Based Recommender

The recommender we built in the previous section suffers from some severe limitations. For one, it gives the same recommendation to everyone, regardless of the user’s personal taste. If a person who loves romantic movies (and hates action) were to look at our Top 15 Chart, s/he probably wouldn’t like most of the movies. If s/he were to go one step further and look at our charts by genre, s/he still wouldn’t be getting the best recommendations.

For instance, consider a person who loves Dilwale Dulhania Le Jayenge, My Name is Khan and Kabhi Khushi Kabhi Gham. One inference we can draw is that the person loves the actor Shahrukh Khan and the director Karan Johar. Even if s/he were to access the romance chart, s/he wouldn’t find these as the top recommendations.

To personalize our recommendations more, I am going to build an engine that computes similarities between movies based on certain metrics and suggests movies that are most similar to a particular movie that a user liked. Since we will be using movie metadata (or content) to build this engine, this is also known as Content-Based Filtering.

I will build two Content-Based Recommenders based on:

  • Movie Overviews and Taglines
  • Movie Cast, Crew, Keywords and Genre

Also, as mentioned in the introduction, I will be using a subset of all the movies available to us due to the limited computing power available to me.

In [15]:

links_small = pd.read_csv('../input/links_small.csv')
links_small = links_small[links_small['tmdbId'].notnull()]['tmdbId'].astype('int')

In [16]:

md = md.drop([19730, 29503, 35587])

In [17]:

#Check EDA Notebook for how and why I got these indices.
md['id'] = md['id'].astype('int')

In [18]:

smd = md[md['id'].isin(links_small)]


smd.shape

(9099, 25)

We have 9099 movies available in our small movies metadata dataset which is 5 times smaller than our original dataset of 45000 movies.

Movie Description Based Recommender

Let us first try to build a recommender using movie descriptions and taglines. We do not have a quantitative metric to judge our machine’s performance so this will have to be done qualitatively.

In [19]:

smd['tagline'] = smd['tagline'].fillna('')
smd['description'] = smd['overview'] + smd['tagline']
smd['description'] = smd['description'].fillna('')

In [20]:

tf = TfidfVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
tfidf_matrix = tf.fit_transform(smd['description'])

In [21]:



tfidf_matrix.shape

(9099, 268124)

Cosine Similarity

I will be using Cosine Similarity to calculate a numeric quantity that denotes the similarity between two movies. Mathematically, it is defined as follows:

cosine(x, y) = (x · y⊤) / (||x|| · ||y||)

Since we have used the TF-IDF Vectorizer, calculating the dot product will directly give us the cosine similarity score. Therefore, we will use sklearn’s linear_kernel instead of cosine_similarity since it is much faster.

In [22]:

cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
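The reason linear_kernel suffices here is that TfidfVectorizer L2-normalises each row by default (norm='l2'), so the plain dot product already equals the cosine similarity. A small check on an invented three-document corpus:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity

docs = ["a boy and his toys", "toys that come to life", "a jungle board game"]
tfidf = TfidfVectorizer(stop_words='english').fit_transform(docs)

# Rows are already unit-length, so the plain dot product (linear_kernel)
# coincides with the explicitly normalised cosine_similarity.
print(np.allclose(linear_kernel(tfidf, tfidf), cosine_similarity(tfidf, tfidf)))  # → True
```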

In [23]:



cosine_sim[0]

array([ 1.        ,  0.00680476,  0.        , ...,  0.        ,
        0.00344913,  0.        ])

We now have a pairwise cosine similarity matrix for all the movies in our dataset. The next step is to write a function that returns the 30 most similar movies based on the cosine similarity score.

In [24]:

smd = smd.reset_index()
titles = smd['title']
indices = pd.Series(smd.index, index=smd['title'])

In [25]:

def get_recommendations(title):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:31]
    movie_indices = [i[0] for i in sim_scores]
    return titles.iloc[movie_indices]

We’re all set. Let us now try and get the top recommendations for a few movies and see how good the recommendations are.

In [26]:

get_recommendations('The Godfather').head(10)


973      The Godfather: Part II
8387                 The Family
3509                       Made
4196         Johnny Dangerously
29               Shanghai Triad
5667                       Fury
2412             American Movie
1582    The Godfather: Part III
4221                    8 Women
2159              Summer of Sam
Name: title, dtype: object

In [27]:

get_recommendations('The Dark Knight').head(10)


7931                      The Dark Knight Rises
132                              Batman Forever
1113                             Batman Returns
8227    Batman: The Dark Knight Returns, Part 2
7565                 Batman: Under the Red Hood
524                                      Batman
7901                           Batman: Year One
2579               Batman: Mask of the Phantasm
2696                                        JFK
8165    Batman: The Dark Knight Returns, Part 1
Name: title, dtype: object

We see that for The Dark Knight, our system is able to identify it as a Batman film and subsequently recommend other Batman films as its top recommendations. But unfortunately, that is all this system can do at the moment. This is not of much use to most people, as it doesn’t take into consideration very important features such as cast, crew, director and genre, which determine the rating and the popularity of a movie. Someone who liked The Dark Knight probably likes it more because of Nolan and would hate Batman Forever and every other substandard movie in the Batman franchise.

Therefore, we are going to use much more suggestive metadata than Overview and Tagline. In the next subsection, we will build a more sophisticated recommender that takes genre, keywords, cast and crew into consideration.

Metadata Based Recommender

To build our standard metadata based content recommender, we will need to merge our current dataset with the crew and the keyword datasets. Let us prepare this data as our first step.

In [28]:

credits = pd.read_csv('../input/credits.csv')
keywords = pd.read_csv('../input/keywords.csv')

In [29]:

keywords['id'] = keywords['id'].astype('int')
credits['id'] = credits['id'].astype('int')
md['id'] = md['id'].astype('int')

In [30]:



md.shape

(45463, 25)

In [31]:

md = md.merge(credits, on='id')
md = md.merge(keywords, on='id')

In [32]:

smd = md[md['id'].isin(links_small)]


smd.shape

(9219, 28)

We now have our cast, crew, genres and credits, all in one dataframe. Let us wrangle this a little more using the following intuitions:

  1. Crew: From the crew, we will only pick the director as our feature since the others don’t contribute that much to the feel of the movie.
  2. Cast: Choosing Cast is a little more tricky. Lesser known actors and minor roles do not really affect people’s opinion of a movie. Therefore, we must only select the major characters and their respective actors. Arbitrarily we will choose the top 3 actors that appear in the credits list.

In [33]:

smd['cast'] = smd['cast'].apply(literal_eval)
smd['crew'] = smd['crew'].apply(literal_eval)
smd['keywords'] = smd['keywords'].apply(literal_eval)
smd['cast_size'] = smd['cast'].apply(lambda x: len(x))
smd['crew_size'] = smd['crew'].apply(lambda x: len(x))

In [34]:

def get_director(x):
    for i in x:
        if i['job'] == 'Director':
            return i['name']
    return np.nan

In [35]:

smd['director'] = smd['crew'].apply(get_director)

In [36]:

smd['cast'] = smd['cast'].apply(lambda x: [i['name'] for i in x] if isinstance(x, list) else [])
smd['cast'] = smd['cast'].apply(lambda x: x[:3] if len(x) >=3 else x)

In [37]:

smd['keywords'] = smd['keywords'].apply(lambda x: [i['name'] for i in x] if isinstance(x, list) else [])

My approach to building the recommender is going to be extremely hacky. What I plan on doing is creating a metadata dump for every movie which consists of genres, director, main actors and keywords. I then use a Count Vectorizer to create our count matrix as we did in the Description Recommender. The remaining steps are similar to what we did earlier: we calculate the cosine similarities and return movies that are most similar.

These are the steps I follow to prepare the genres and credits data:

  1. Strip Spaces and Convert to Lowercase for all our features. This way, our engine will not confuse Johnny Depp with Johnny Galecki.
  2. Mention Director 3 times to give it more weight relative to the entire cast.
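A small demonstration of why step 1 matters (the two actor names are from the text above; the rest is a sketch): without stripping spaces, CountVectorizer tokenises on whitespace, so both actors share the token 'johnny'; after stripping, each name becomes one distinct token.

```python
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer()
cv.fit(['Johnny Depp', 'Johnny Galecki'])
print(sorted(cv.vocabulary_))   # → ['depp', 'galecki', 'johnny'] — a shared token

cv.fit(['johnnydepp', 'johnnygalecki'])
print(sorted(cv.vocabulary_))   # → ['johnnydepp', 'johnnygalecki'] — now distinct
```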

In [38]:

smd['cast'] = smd['cast'].apply(lambda x: [str.lower(i.replace(" ", "")) for i in x])

In [39]:

smd['director'] = smd['director'].astype('str').apply(lambda x: str.lower(x.replace(" ", "")))
smd['director'] = smd['director'].apply(lambda x: [x,x, x])


We will do a small amount of pre-processing of our keywords before putting them to any use. As a first step, we calculate the frequency counts of every keyword that appears in the dataset.

In [40]:

s = smd.apply(lambda x: pd.Series(x['keywords']), axis=1).stack().reset_index(level=1, drop=True)
s.name = 'keyword'

In [41]:

s = s.value_counts()
s[:5]


independent film        610
woman director          550
murder                  399
duringcreditsstinger    327
based on novel          318
Name: keyword, dtype: int64

Keywords occur in frequencies ranging from 1 to 610. We do not have any use for keywords that occur only once. Therefore, these can be safely removed. Finally, we will convert every word to its stem so that words such as Dogs and Dog are considered the same.In [42]:

s = s[s > 1]

In [43]:

stemmer = SnowballStemmer('english')
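As a quick sanity check of the stemming step (the example words are mine, not from the dataset):

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')
# Plural and singular collapse to the same stem, so 'dogs' and 'dog'
# will count as one keyword after this step.
print(stemmer.stem('dogs'), stemmer.stem('dog'))  # → dog dog
```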



In [44]:

def filter_keywords(x):
    words = []
    for i in x:
        if i in s:
            words.append(i)
    return words

In [45]:

smd['keywords'] = smd['keywords'].apply(filter_keywords)
smd['keywords'] = smd['keywords'].apply(lambda x: [stemmer.stem(i) for i in x])
smd['keywords'] = smd['keywords'].apply(lambda x: [str.lower(i.replace(" ", "")) for i in x])

In [46]:

smd['soup'] = smd['keywords'] + smd['cast'] + smd['director'] + smd['genres']
smd['soup'] = smd['soup'].apply(lambda x: ' '.join(x))

In [47]:

count = CountVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
count_matrix = count.fit_transform(smd['soup'])

In [48]:

cosine_sim = cosine_similarity(count_matrix, count_matrix)

In [49]:

smd = smd.reset_index()
titles = smd['title']
indices = pd.Series(smd.index, index=smd['title'])

We will reuse the get_recommendations function that we had written earlier. Since our cosine similarity scores have changed, we expect it to give us different (and probably better) results. Let us check for The Dark Knight again and see what recommendations I get this time around.

In [50]:

get_recommendations('The Dark Knight').head(10)


8031         The Dark Knight Rises
6218                 Batman Begins
6623                  The Prestige
2085                     Following
7648                     Inception
4145                      Insomnia
3381                       Memento
8613                  Interstellar
7659    Batman: Under the Red Hood
1134                Batman Returns
Name: title, dtype: object

I am much more satisfied with the results this time around. The recommendations seem to have recognized other Christopher Nolan movies (due to the high weightage given to the director) and placed them as top recommendations. I enjoyed watching The Dark Knight as well as some of the others on the list, including Batman Begins, The Prestige and The Dark Knight Rises.

We can of course experiment on this engine by trying out different weights for our features (directors, actors, genres), limiting the number of keywords that can be used in the soup, weighing genres based on their frequency, only showing movies with the same languages, etc.

Let me also get recommendations for another movie, Mean Girls, which happens to be my girlfriend’s favorite movie.

In [51]:

get_recommendations('Mean Girls').head(10)


3319               Head Over Heels
4763                 Freaky Friday
1329              The House of Yes
6277              Just Like Heaven
7905         Mr. Popper's Penguins
7332    Ghosts of Girlfriends Past
6959     The Spiderwick Chronicles
8883                      The DUFF
6698         It's a Boy Girl Thing
7377       I Love You, Beth Cooper
Name: title, dtype: object

Popularity and Ratings

One thing that we notice about our recommendation system is that it recommends movies regardless of ratings and popularity. It is true that Batman and Robin shares a lot of characters with The Dark Knight, but it was a terrible movie that shouldn’t be recommended to anyone.

Therefore, we will add a mechanism to remove bad movies and return movies which are popular and have had a good critical response.

I will take the top 25 movies based on similarity scores and calculate the vote count of the 60th percentile movie. Then, using this as the value of m, we will calculate the weighted rating of each movie using IMDB’s formula, as we did in the Simple Recommender section.

In [52]:

def improved_recommendations(title):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:26]
    movie_indices = [i[0] for i in sim_scores]
    movies = smd.iloc[movie_indices][['title', 'vote_count', 'vote_average', 'year']]
    vote_counts = movies[movies['vote_count'].notnull()]['vote_count'].astype('int')
    vote_averages = movies[movies['vote_average'].notnull()]['vote_average'].astype('int')
    C = vote_averages.mean()
    m = vote_counts.quantile(0.60)
    qualified = movies[(movies['vote_count'] >= m) & (movies['vote_count'].notnull()) & (movies['vote_average'].notnull())]
    qualified['vote_count'] = qualified['vote_count'].astype('int')
    qualified['vote_average'] = qualified['vote_average'].astype('int')
    qualified['wr'] = qualified.apply(weighted_rating, axis=1)
    qualified = qualified.sort_values('wr', ascending=False).head(10)
    return qualified

In [53]:

improved_recommendations('The Dark Knight')


      title                               vote_count  vote_average  year  wr
6623  The Prestige                        4510        8             2006  7.758148
8031  The Dark Knight Rises               9263        7             2012  6.921448
6218  Batman Begins                       7511        7             2005  6.904127
1134  Batman Returns                      1706        6             1992  5.846862
132   Batman Forever                      1529        5             1995  5.054144
9024  Batman v Superman: Dawn of Justice  7189        5             2016  5.013943
1260  Batman & Robin                      1447        4             1997  4.287233

Let me also get the recommendations for Mean Girls, my girlfriend’s favorite movie.

In [54]:

improved_recommendations('Mean Girls')


      title                                    vote_count  vote_average  year  wr
1547  The Breakfast Club                       2189        7             1985  6.709602
390   Dazed and Confused                       588         7             1993  6.254682
8883  The DUFF                                 1372        6             2015  5.818541
3712  The Princess Diaries                     1063        6             2001  5.781086
4763  Freaky Friday                            919         6             2003  5.757786
6277  Just Like Heaven                         595         6             2005  5.681521
6959  The Spiderwick Chronicles                593         6             2008  5.680901
7494  American Pie Presents: The Book of Love  454         5             2009  5.119690
7332  Ghosts of Girlfriends Past               716         5             2009  5.092422
7905  Mr. Popper’s Penguins                    775         5             2011  5.087912

Unfortunately, Batman & Robin does not disappear from our recommendation list. This is probably due to the fact that it is rated a 4, which is only slightly below average on TMDB. It certainly doesn’t deserve a 4 when amazing movies like The Dark Knight Rises have only a 7. However, there is not much we can do about this. Therefore, we will conclude our Content Based Recommender section here and come back to it when we build a hybrid engine.

Collaborative Filtering

Our content based engine suffers from some severe limitations. It is only capable of suggesting movies which are close to a certain movie. That is, it is not capable of capturing tastes and providing recommendations across genres.

Also, the engine that we built is not really personal in that it doesn’t capture the personal tastes and biases of a user. Anyone querying our engine for recommendations based on a movie will receive the same recommendations for that movie, regardless of who s/he is.

Therefore, in this section, we will use a technique called Collaborative Filtering to make recommendations to movie watchers. Collaborative Filtering is based on the idea that users similar to me can be used to predict how much I will like a particular product or service that those users have used/experienced but I have not.
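The intuition can be sketched with a tiny made-up rating matrix (3 users × 4 movies, 0 meaning unrated); this only illustrates the idea of user similarity, not the SVD algorithm used below:

```python
import numpy as np

# Made-up ratings: users 0 and 1 have similar tastes; user 2 does not.
R = np.array([[5.0, 4.0, 0.0, 1.0],
              [4.0, 5.0, 0.0, 2.0],
              [1.0, 1.0, 5.0, 4.0]])

def cos(a, b):
    # Cosine similarity between two users' rating vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_01 = cos(R[0], R[1])  # high: similar users
sim_02 = cos(R[0], R[2])  # low: dissimilar users
# User 1's ratings are therefore the better guide when predicting for user 0.
print(sim_01 > sim_02)  # → True
```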

I will not be implementing Collaborative Filtering from scratch. Instead, I will use the Surprise library, which uses extremely powerful algorithms like Singular Value Decomposition (SVD) to minimise RMSE (Root Mean Square Error) and give great recommendations.

In [55]:

reader = Reader()

In [56]:

ratings = pd.read_csv('../input/ratings_small.csv')



In [57]:

data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)

In [58]:

svd = SVD()
evaluate(svd, data, measures=['RMSE', 'MAE'])

Evaluating RMSE, MAE of algorithm SVD.

Fold 1
RMSE: 0.8952
MAE:  0.6908
Fold 2
RMSE: 0.8971
MAE:  0.6899
Fold 3
RMSE: 0.8946
MAE:  0.6892
Fold 4
RMSE: 0.8951
MAE:  0.6911
Fold 5
RMSE: 0.8944
MAE:  0.6879
Mean RMSE: 0.8953
Mean MAE : 0.6898



We get a mean Root Mean Square Error of 0.8953, which is more than good enough for our case. Let us now train on our dataset and arrive at predictions.

In [59]:

trainset = data.build_full_trainset()
svd.train(trainset)  # train on the full data; on newer Surprise versions use svd.fit(trainset)

Let us pick user 1 and check the ratings s/he has given.

In [60]:

ratings[ratings['userId'] == 1]



In [61]:

svd.predict(1, 302, 3)


Prediction(uid=1, iid=302, r_ui=3, est=2.8779447226327712, details={'was_impossible': False})

For the movie with ID 302, we get an estimated prediction of 2.878. One startling feature of this recommender system is that it doesn’t care what the movie is (or what it contains). It works purely on the basis of an assigned movie ID and tries to predict the rating based on how the other users have rated the movie.

Hybrid Recommender

In this section, I will try to build a simple hybrid recommender that brings together techniques we have implemented in the content based and collaborative filter based engines. This is how it will work:

  • Input: User ID and the Title of a Movie
  • Output: Similar movies sorted on the basis of expected ratings by that particular user.

In [62]:

def convert_int(x):
    try:
        return int(x)
    except (TypeError, ValueError):
        return np.nan

In [63]:

id_map = pd.read_csv('../input/links_small.csv')[['movieId', 'tmdbId']]
id_map['tmdbId'] = id_map['tmdbId'].apply(convert_int)
id_map.columns = ['movieId', 'id']
id_map = id_map.merge(smd[['title', 'id']], on='id').set_index('title')
#id_map = id_map.set_index('tmdbId')

In [64]:

indices_map = id_map.set_index('id')

In [65]:

def hybrid(userId, title):
    idx = indices[title]
    tmdbId = id_map.loc[title]['id']
    movie_id = id_map.loc[title]['movieId']
    sim_scores = list(enumerate(cosine_sim[int(idx)]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:26]
    movie_indices = [i[0] for i in sim_scores]
    movies = smd.iloc[movie_indices][['title', 'vote_count', 'vote_average', 'year', 'id']]
    movies['est'] = movies['id'].apply(lambda x: svd.predict(userId, indices_map.loc[x]['movieId']).est)
    movies = movies.sort_values('est', ascending=False)
    return movies.head(10)

In [66]:

hybrid(1, 'Avatar')


      title                               vote_count  vote_average  year  id      est
1011  The Terminator                      4208.0      7.4           1984  218     3.083605
522   Terminator 2: Judgment Day          4274.0      7.7           1991  280     2.947712
8658  X-Men: Days of Future Past          6155.0      7.5           2014  127585  2.935140
1621  Darby O’Gill and the Little People  35.0        6.7           1959  18887   2.899612
8401  Star Trek Into Darkness             4479.0      7.4           2013  54138   2.806536
2014  Fantastic Planet                    140.0       7.6           1973  16306   2.789457
922   The Abyss                           822.0       7.1           1989  2756    2.774770
4966  Hercules in New York                63.0        3.7           1969  5227    2.703766
4017  Hawk the Slayer                     13.0        4.5           1980  25628   2.680591

In [67]:

hybrid(500, 'Avatar')


      title                       vote_count  vote_average  year  id      est
8401  Star Trek Into Darkness     4479.0      7.4           2013  54138   3.238226
7265  Dragonball Evolution        475.0       2.9           2009  14164   3.195070
831   Escape to Witch Mountain    60.0        6.5           1975  14821   3.149360
1668  Return from Witch Mountain  38.0        5.6           1978  14822   3.138147
522   Terminator 2: Judgment Day  4274.0      7.7           1991  280     3.067221
8658  X-Men: Days of Future Past  6155.0      7.5           2014  127585  3.043710
1011  The Terminator              4208.0      7.4           1984  218     3.040908
2014  Fantastic Planet            140.0       7.6           1973  16306   3.018178

We see that for our hybrid recommender, we get different recommendations for different users although the movie is the same. Hence, our recommendations are more personalized and tailored towards particular users.


Conclusion

In this notebook, I have built 4 different recommendation engines based on different ideas and algorithms. They are as follows:

  1. Simple Recommender: This system used overall TMDB Vote Count and Vote Averages to build Top Movies Charts, in general and for a specific genre. The IMDB Weighted Rating System was used to calculate ratings on which the sorting was finally performed.
  2. Content Based Recommender: We built two content based engines; one that took movie overviews and taglines as input and another which took metadata such as cast, crew, genre and keywords to come up with predictions. We also devised a simple filter to give greater preference to movies with more votes and higher ratings.
  3. Collaborative Filtering: We used the powerful Surprise library to build a collaborative filter based on singular value decomposition. The RMSE obtained was less than 1 and the engine gave estimated ratings for a given user and movie.
  4. Hybrid Engine: We brought together ideas from content based and collaborative filtering to build an engine that gave movie suggestions to a particular user based on the estimated ratings that it had internally calculated for that user.

Happy Learning 🙂
