How To Build a Caching System With Redis in Node.js?

In today’s world, you don’t want your application to be slow. If a website takes too long to respond, there is a good chance the user will never visit it again, so boosting the performance of your Node.js application matters.

In this article, we are going to implement a caching system with Redis in Node.js. Before doing that, we will look at what exactly caching is, how it improves the performance of a website, and how it helps scale a Node.js application. We will also see how to use Redis in our own application.

What is Caching? 

Caching is the process of storing frequently used data in an in-memory database. For example, if a certain page is requested regularly, we can store that page’s data in Redis so the next request is served from memory instead of being recomputed or refetched, boosting the Node.js application.
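As a minimal sketch of this idea (assuming the redis npm package with the callback-style API used later in this article, and a local Redis server running on the default port; the key "greeting" is purely illustrative):

const redis = require('redis');
// Connect to a local Redis server on the default port (6379)
const client = redis.createClient(6379);

// Store a value under a key with a 60-second expiry...
client.setex('greeting', 60, 'Hello from the cache');

// ...and read it back; later reads are served straight from memory
client.get('greeting', (err, value) => {
  if (err) return console.error(err);
  console.log(value); // "Hello from the cache"
});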

Why do we need Caching?

Some reasons to add caching to your Node.js application:

  • It reduces the response time of your application, since repeated requests are served from memory instead of being recomputed or refetched

To summarize, caching is a win-win for your customers and yourself: responses are faster and the cost of running the website goes down.

How to Scale Your Node.js Application?

To follow this tutorial, you need the following software and packages installed on your operating system:

  • Node.js: JavaScript runtime used to run the web server
  • NPM: Node Package Manager, used to install packages
  • Postman: a tool for API testing
  • Code Editor (VS Code): for this application, we are using VS Code as the editor

Make sure you have Node.js installed on your system. If you don’t, you can download it from the Node.js website. NPM is installed automatically along with Node.js.
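To confirm the installation, you can check the installed versions from a terminal:

node -v
npm -v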

Getting Started

To get started, create a new project directory and move into it by running the following command:

mkdir cache-app && cd cache-app

Initialize your package.json file by running the following command:

npm init -y

Install the redis, axios, and express packages using the Node Package Manager (NPM):

npm install axios redis express

Axios: a promise-based HTTP client used to make network requests from the Node.js backend

Redis: the package we are using for caching

Express: the package used for creating the web server

Create a simple Express server application

Now we are going to request the food recipe API (Recipe Puppy) for various food items.

Create an index.js file in the project directory and paste the following code:

const express = require('express');
const axios = require('axios');

const app = express();
const port = 3000;

// Fetch recipes for the requested food item from the Recipe Puppy API
app.get('/recipe/:fooditem', async (req, res) => {
  try {
    const foodItem = req.params.fooditem;
    const recipe = await axios.get(`http://www.recipepuppy.com/api/?q=${foodItem}`);
    return res.status(200).send({
      error: false,
      data: recipe.data.results
    });
  } catch (error) {
    console.log(error);
  }
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

module.exports = app;

This code makes a request to the food recipe API, using the Axios library, for the specific food item passed in the URL.

Now start the server by running node index.js. To test it, open Postman and make a request to the food recipe endpoint.
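If you prefer the command line, you can make the same request with curl ("pasta" is just an example food item; any item works):

curl http://localhost:3000/recipe/pasta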

As you can see, the request we made took a significant 615ms, which is not good for any website. We will improve this by using a Redis cache.

Update your index.js file with the following code to enable caching:

const express = require('express');
const axios = require('axios');
const redis = require('redis');

const app = express();
const port = 3000;

// Connect to the local Redis instance on the standard port (6379)
const client = redis.createClient(6379);

client.on('error', (error) => {
  console.error(error);
});

app.get('/recipe/:fooditem', (req, res) => {
  try {
    const foodItem = req.params.fooditem;

    // Check the cache first
    client.get(foodItem, async (err, recipe) => {
      if (recipe) {
        // Cache hit: return the stored result
        return res.status(200).send({
          error: false,
          message: `Recipe for ${foodItem}`,
          data: JSON.parse(recipe)
        });
      } else {
        // Cache miss: fetch from the API and store the result for 1440 seconds
        const recipe = await axios.get(`http://www.recipepuppy.com/api/?q=${foodItem}`);
        client.setex(foodItem, 1440, JSON.stringify(recipe.data.results));
        return res.status(200).send({
          error: false,
          message: `Recipe for ${foodItem} from the server`,
          data: recipe.data.results
        });
      }
    });
  } catch (error) {
    console.log(error);
  }
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

module.exports = app;

First, using the standard Redis port (6379), we constructed a Redis client and connected it to the local Redis instance.
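Note that this code assumes a Redis server is already running locally. If you have Redis installed, you can usually start one in a separate terminal with:

redis-server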

Then, in the /recipe route handler, we look up the requested food item as a key in our Redis store. If a result is found, it is delivered to the client straight from the cache, saving us from making another request to the external API.

However, if the key is not present in our Redis store, a request is sent to the external API, and when it responds, the result is stored in the Redis store under that key.

So long as the cached data hasn’t expired, subsequent requests to the same endpoint with the same parameter will always be fetched from the cache. The key is set to hold a string value in the store for a specific number of seconds, in this example 1440 (24 minutes), using the Redis client’s setex method.
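If you want to check how long a cached entry has left before it expires, here is a small sketch using the same callback-style client (the key "pasta" is just an illustrative example):

const redis = require('redis');
const client = redis.createClient(6379); // same local Redis instance as above

// TTL returns the remaining time-to-live of a key, in seconds
client.ttl('pasta', (err, secondsLeft) => {
  if (err) return console.error(err);
  console.log(`pasta expires in ${secondsLeft} seconds`);
});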

Now we can test the application again; the second request for the same food item is served from the cache and takes far less time.
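One simple way to compare the two requests (assuming the server is running locally and using "pasta" as an example item) is to print the total request time with curl:

curl -o /dev/null -s -w "%{time_total}s\n" http://localhost:3000/recipe/pasta
curl -o /dev/null -s -w "%{time_total}s\n" http://localhost:3000/recipe/pasta

The first call hits the external API and populates the cache; the second should come back from Redis noticeably faster.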

Conclusion

I hope this article has shown you how to use Redis in your application. As a developer, you will find Redis useful in many applications, whether you are building a chat application or any other web server, because it improves web performance. It is well worth learning. Thank you!
