Server-Side Pagination Using Node.js and MongoDB

Server-side pagination is a technique that limits the amount of data retrieved from a database and sends it to the client in chunks, or pages. It is especially useful for large datasets, where it improves performance and reduces load on both the server and the client. Here's a simple example of server-side pagination using Node.js and MongoDB:

  1. Set up your Node.js project:

    Start by initializing your Node.js project and installing the necessary packages:

                    
        npm init -y
        npm install express mongoose

  2. Create an Express server:

    Create a file (e.g., app.js) and set up a basic Express server:

                    
        const express = require('express');
        const mongoose = require('mongoose');
        const app = express();
        const PORT = process.env.PORT || 3000;

        // Connect to MongoDB (options such as useNewUrlParser and
        // useUnifiedTopology are no longer needed as of Mongoose 6)
        mongoose
          .connect('mongodb://localhost:27017/your_database_name')
          .catch((error) => console.error('MongoDB connection error:', error));

        // Define a simple mongoose schema and model
        const ItemSchema = new mongoose.Schema({
          name: String,
        });

        const Item = mongoose.model('Item', ItemSchema);

        // Route for fetching paginated data
        app.get('/items', async (req, res) => {
          // Parse the query parameters, falling back to defaults and clamping
          // to positive integers so skip() never receives a negative value
          const page = Math.max(1, parseInt(req.query.page, 10) || 1);
          const pageSize = Math.max(1, parseInt(req.query.pageSize, 10) || 10);

          try {
            const items = await Item.find()
              .sort({ _id: 1 }) // a stable order keeps pages consistent between requests
              .skip((page - 1) * pageSize)
              .limit(pageSize);

            res.json(items);
          } catch (error) {
            console.error(error);
            res.status(500).json({ error: 'Internal Server Error' });
          }
        });

        // Start the server
        app.listen(PORT, () => {
          console.log(`Server is running on http://localhost:${PORT}`);
        });
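
    In practice, clients usually also need to know how many items or pages exist. Below is a minimal sketch of a variant route that returns pagination metadata alongside the results, using Mongoose's countDocuments(); the /items-with-meta path and the response shape are illustrative choices, not a fixed convention.

        // A variant of the /items route that also reports totals so a client
        // can render page controls. The response shape here is one possible choice.
        app.get('/items-with-meta', async (req, res) => {
          const page = Math.max(1, parseInt(req.query.page, 10) || 1);
          const pageSize = Math.max(1, parseInt(req.query.pageSize, 10) || 10);

          try {
            // Run the page query and the total count in parallel
            const [items, totalItems] = await Promise.all([
              Item.find()
                .sort({ _id: 1 })
                .skip((page - 1) * pageSize)
                .limit(pageSize),
              Item.countDocuments(),
            ]);

            res.json({
              items,
              page,
              pageSize,
              totalItems,
              totalPages: Math.ceil(totalItems / pageSize),
            });
          } catch (error) {
            console.error(error);
            res.status(500).json({ error: 'Internal Server Error' });
          }
        });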
                    
                

  3. Populate your MongoDB database:

    You can use a tool like MongoDB Compass or a short script to populate your database with sample data; a minimal seed script is sketched below.
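
    For example, a standalone seed script (the file name seed.js and the sample data are just for illustration) can insert a batch of documents with Mongoose's insertMany():

        // seed.js - populate the items collection with sample data
        const mongoose = require('mongoose');

        const ItemSchema = new mongoose.Schema({ name: String });
        const Item = mongoose.model('Item', ItemSchema);

        async function seed() {
          await mongoose.connect('mongodb://localhost:27017/your_database_name');

          // Generate 100 sample documents: "Item 1", "Item 2", ...
          const items = Array.from({ length: 100 }, (_, i) => ({
            name: `Item ${i + 1}`,
          }));

          await Item.insertMany(items);
          console.log(`Inserted ${items.length} items`);

          await mongoose.disconnect();
        }

        seed().catch((error) => {
          console.error(error);
          process.exit(1);
        });

    Run it once with node seed.js before testing the pagination route.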

  4. Test the pagination:

    Run your server (node app.js) and navigate to http://localhost:3000/items in your browser. You can also use tools like Postman or curl for testing.

    To implement client-side pagination, you would typically make requests to /items?page=1&pageSize=10, /items?page=2&pageSize=10, and so on.
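
    For example, with curl (assuming the server is running locally on port 3000; quote the URL so the shell doesn't interpret the &):

        curl "http://localhost:3000/items?page=1&pageSize=10"
        curl "http://localhost:3000/items?page=2&pageSize=10"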

Remember to handle edge cases, such as validating and sanitizing input, handling errors, and considering sorting options. Additionally, consider using environment variables for sensitive information like the MongoDB connection string and port, as sketched below.
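
A minimal sketch of those last points, assuming a MONGODB_URI environment variable and an arbitrary cap of 100 on pageSize; parsePagination is a hypothetical helper you could call from the route:

        const mongoose = require('mongoose');

        // Read configuration from the environment, with local defaults for development
        const MONGODB_URI =
          process.env.MONGODB_URI || 'mongodb://localhost:27017/your_database_name';
        const PORT = process.env.PORT || 3000;

        mongoose
          .connect(MONGODB_URI)
          .catch((error) => console.error('MongoDB connection error:', error));

        // Clamp pagination parameters so a single request can't fetch an
        // unbounded number of documents; the cap of 100 is an arbitrary choice
        const MAX_PAGE_SIZE = 100;

        function parsePagination(query) {
          const page = Math.max(1, parseInt(query.page, 10) || 1);
          const pageSize = Math.min(
            MAX_PAGE_SIZE,
            Math.max(1, parseInt(query.pageSize, 10) || 10)
          );
          return { page, pageSize };
        }

You would then start the server with something like MONGODB_URI=mongodb://host:27017/db node app.js.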
