
Why some recommendations fall flat: Recommendation engines & their challenges

Using algorithms to make purchasing suggestions is big business. Netflix reported that its recommendation engine contributes $1 billion to its bottom line every year. However, sometimes the suggestions are way off.

Take, for example, an ad I received to apply for a job as a van driver. I have never been a professional driver, I don't even like driving, and I have never owned a van. It's clear that this recommendation engine knows nothing about me.

There are several different ways recommendation algorithms can reach the wrong conclusions. Here are just a few examples for each type of recommendation engine.

1. Collaborative filtering

This filtering method is based on collecting and analyzing information about user preferences. The assumption is that if two users share one interest, they will share others, so items one of them liked can be recommended to the other. The benefit of this type of analysis is that the algorithm doesn't need deep learning to understand the item being recommended; it only needs to identify users with similar interests.
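
To make that concrete, here is a minimal sketch of user-based collaborative filtering on an invented toy ratings matrix: it scores a user's unrated items by the similarity-weighted ratings of other users. All numbers, and the choice of cosine similarity, are assumptions for illustration rather than a production design.

```python
import numpy as np

# Toy user-item rating matrix (rows = users, columns = items); 0 means "not rated".
# All values are invented for illustration.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 5, 1],
    [1, 0, 1, 5],
    [0, 1, 2, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user_idx, ratings, top_n=1):
    """Score the target user's unrated items by similarity-weighted ratings of other users."""
    target = ratings[user_idx]
    sims = np.array([
        cosine_similarity(target, other) if i != user_idx else 0.0
        for i, other in enumerate(ratings)
    ])
    scores = sims @ ratings / (sims.sum() + 1e-9)  # weighted average rating per item
    unrated = np.where(target == 0)[0]             # only recommend items the user hasn't rated
    return unrated[np.argsort(scores[unrated])[::-1][:top_n]]

print(recommend(0, ratings))  # -> [2]: the unrated item that the most similar user rated highly
```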

However, one downside of collaborative filtering is that it needs a large dataset of active users who have rated or purchased products in order to make accurate predictions. With little user activity, it is much harder to generate good-quality recommendations. The number of items sold on major e-commerce sites is extremely large, yet most of those items receive very few ratings. This is known as the long-tail, or data-sparsity, problem.
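
A quick simulation makes the sparsity problem visible. The catalogue size, rating count and popularity curve below are all invented for illustration; the point is simply that with a Zipf-like popularity distribution, the vast majority of items end up with only a handful of ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_ratings = 10_000, 50_000, 200_000   # invented sizes for illustration

# Zipf-like popularity: a few head items collect most ratings, the rest form the long tail.
ranks = np.arange(1, n_items + 1)
popularity = 1.0 / ranks
popularity /= popularity.sum()
rated_items = rng.choice(n_items, size=n_ratings, p=popularity)

density = n_ratings / (n_users * n_items)
ratings_per_item = np.bincount(rated_items, minlength=n_items)

print(f"fraction of the rating matrix that is filled: {density:.4%}")
print(f"items with fewer than 5 ratings: {(ratings_per_item < 5).mean():.0%}")
```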

Collaborative filtering also has no way to handle new items that have never been rated (the cold-start problem).

In addition, many of the environments in which these systems make recommendations involve millions of users and products. A large amount of computing power is therefore needed for the required calculations, so many companies limit the amount of data their models ingest, which can hurt accuracy.

2. Content-based filtering

Content-based filtering methods use keywords that describe an item to match recommendations to people. For example, when recommending jobs, keywords from the job description can be matched against keywords in the user's resume.
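
As a rough sketch of the idea, the snippet below matches a job description to a resume by simple keyword overlap (Jaccard similarity). The example texts are invented, and a real system would typically use TF-IDF weighting or embeddings rather than raw token sets.

```python
import re

def keywords(text):
    """Lower-cased word tokens, dropping very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 2}

def keyword_match(job_description, resume):
    """Jaccard overlap between the job's keywords and the resume's keywords."""
    job, cv = keywords(job_description), keywords(resume)
    return len(job & cv) / len(job | cv) if (job | cv) else 0.0

# Invented example texts.
job = "Data engineer with Python and SQL experience building ETL pipelines"
resume = "Experienced Python developer who built SQL-based ETL pipelines for analytics"
print(f"match score: {keyword_match(job, resume):.2f}")
```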

The biggest downside of this model is that it can only make recommendations based on the user's existing characteristics. It also requires text analysis, which can introduce mistakes when the algorithm has to recognize keywords that express the same thing in different words, for example: instructor, trainer, teacher or facilitator.
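
One common mitigation, sketched below, is to normalize variants onto a canonical keyword before matching. The synonym map here is a hypothetical, hand-written stand-in for what would normally come from a curated taxonomy or word embeddings.

```python
# Hypothetical synonym map; in practice this would come from a curated taxonomy
# or embeddings rather than a hand-written dictionary.
SYNONYMS = {"instructor": "teacher", "trainer": "teacher", "facilitator": "teacher"}

def normalize(keyword_set):
    """Collapse known variants onto one canonical keyword before matching."""
    return {SYNONYMS.get(w, w) for w in keyword_set}

print(normalize({"trainer", "python"}) & normalize({"instructor", "python"}))
# -> {'teacher', 'python'}: the differently written keywords now count as a match
```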

This type of recommendation engine is also challenged when the solution is multilingual and requires translating and comparing words and phrases in different languages.

3. Hybrid recommendation engines

Hybrid recommendation systems use collaborative filtering and content-based filtering in tandem to recommend a broader range of products to customers with greater precision.

Hybrid recommendation systems can generate predictions separately and then combine them, or the capabilities of collaborative methods can be added to a content-based approach (and vice versa). In addition, many hybrid recommendation engines fold in demographic analysis and knowledge-based algorithms, which make inferences about users' needs and preferences from explicit domain knowledge.
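
A minimal illustration of the "combine them" approach is a weighted blend of the two scores, with a content-only fallback for cold-start items. The 0.7 weight and the fallback rule are assumptions made for the sketch, not recommended values.

```python
def hybrid_score(collab_score, content_score, weight=0.7):
    """Blend the two signals; the 0.7 weight is an illustrative assumption, not a standard value."""
    return weight * collab_score + (1 - weight) * content_score

def score_item(collab_score, content_score):
    """Fall back to content-based scoring alone when an item has no collaborative signal yet."""
    if collab_score is None:          # e.g., a brand-new item with no ratings
        return content_score
    return hybrid_score(collab_score, content_score)

print(score_item(0.8, 0.4))   # ~0.68, blended score
print(score_item(None, 0.4))  # 0.4, cold-start fallback
```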

However, even if hybrid recommendation engines can improve accuracy, they can suffer from longer compute times. How much speed matters depends on the application. Movie and e-commerce recommendation systems can learn at a slower pace, while an application that recommends who to follow on Twitter deals with a feed that changes constantly, forcing the engine to make predictions in close to real time on fresh data.

In addition, personal interests have different levels of time sensitivity. An interest in individual sports like running or swimming is long-term, while interest tied to live sporting events, such as a favorite team's championship run, shifts all the time. Recommendations driven by real-time signals need to be updated far more frequently.

Improving accuracy for all types of recommendation engines

In all cases, to be more reliable, a recommendation engine should produce varied suggestions, adapt quickly to new trends, and scale up quickly to process more data. One way for developers to improve accuracy is to use off-the-shelf pretrained models and to invest in MLOps tools that speed up putting models into production and regularly monitor them for drift.
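
As one example of the kind of drift check an MLOps pipeline might run, the sketch below compares the score distribution at deployment time against a recent window using the population stability index. The simulated scores, and the rough 0.2 threshold mentioned in the comment, are illustrative assumptions rather than a prescribed setup.

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population stability index between two score distributions.
    A rough rule of thumb treats values above ~0.2 as drift worth investigating."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    cur_pct = np.histogram(current, bins=edges)[0] / len(current) + 1e-6
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.5, 0.10, 10_000)   # simulated scores at deployment time
this_week = rng.normal(0.6, 0.15, 10_000)  # simulated scores today, after a shift
print(f"PSI: {psi(baseline, this_week):.3f}")  # a large value flags the distribution shift
```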

I am personally always happy to see recommendations for restaurants, bars, books and music performances. Even if the predictions are way out there, I can be convinced to try new things.  But using more complex models that are pretrained with more data will reduce the likelihood that I will be prompted to apply for a job as a van driver.

Michael Galarnyk is an AI evangelist at cnvrg.io.
