- Initialize the node project.
npm init
- Create a public folder and, inside it, a temp folder (public/temp). These start out empty, and git tracks files, not empty folders, so it won't push them. We still need them in the repo, so create a .gitkeep file inside temp.
public > temp > .gitkeep --- git will now track this folder and push it.
- Create a .gitignore file in the root dir and paste the output of a .gitignore generator from the internet.
- Create a .env file.
- Create a src folder. In it, create app.js, constants.js and index.js.
- Modify the package.json file: use module format (not CommonJS), and add a script to restart the server automatically on changes.
- To use import statements, add "type": "module".
{
"name": "backend",
"version": "1.0.0",
"description": "",
"main": "index.js",
"type": "module", // here module is used...
"scripts": {
"start": "nodemon src/index.js" // here the nodemon used..
},
"keywords": [
"JS",
"code"
],
"author": "shubham alhat",
"license": "ISC",
"devDependencies": {
"nodemon": "^3.1.10"
}
}
- Now, create these folders in src: controllers, db, models, routes, middlewares and utils.
- For consistently formatted code, install prettier as a dev dependency.
npm i -D prettier
- After that, create a file named .prettierrc.
.prettierrc code -
{
"singleQuote": false,
"bracketSpacing": true,
"tabWidth": 2,
"trailingComma": "es5",
"semi": true
}
- Create a file .prettierignore and put the lines below in it.
*env
.env
env*
node_modules
/.vscode
./dist

Hey! I totally get how intimidating backend and database stuff can feel at first, but don't worry, you're not alone, and I'll guide you step-by-step to set up MongoDB Atlas and connect it to your application.
MongoDB Atlas is a cloud database service for MongoDB. It's where your app's data will be stored: user profiles, messages, posts, etc. And don't worry, you don't need to install MongoDB locally.
- Go to https://www.mongodb.com/cloud/atlas
- Click on "Start Free" or Sign Up (use Google/GitHub or email)
- Once you're in, choose "Shared Cluster" (free tier).
- Choose any cloud provider (AWS is fine) and a nearby region.
- Keep everything default and click "Create Cluster".
- This will take a few minutes.
- Go to Database > Database Access
- Click "Add New Database User"
- Set a username and password (you'll need it later in your code)
- Select "Read and Write to Any Database"
- Click "Add User"
- Go to Network Access
- Click "Add IP Address"
- Click "Allow Access from Anywhere" (0.0.0.0/0) (only for beginners and practice projects)
- Save it
- Go to Databases > Browse Collections
- Click "Add My Own Data"
- Give it a database name (e.g., myAppDB)
- Give it a collection name (e.g., users)
- Click Create
- Go to Database > Connect > Connect Your Application
- Copy the connection string. It looks like:
mongodb+srv://<username>:<password>@cluster0.abcd.mongodb.net/?retryWrites=true&w=majority
- Replace <username> and <password> with the ones you created earlier.
Install MongoDB driver:
npm install mongodb
Or if you use Mongoose (easier):
npm install mongoose

const mongoose = require("mongoose");
mongoose
  .connect("your_connection_string_here", {
    useNewUrlParser: true, // note: these two options are no-ops in Mongoose 6+
    useUnifiedTopology: true, // and can be omitted there
  })
  .then(() => console.log("MongoDB connected"))
  .catch((err) => console.error("MongoDB connection error:", err));

Store your connection string safely in a .env file like:
MONGO_URI=mongodb+srv://youruser:[email protected]/?retryWrites=true&w=majority
And access it in your code using:
require('dotenv').config();
mongoose.connect(process.env.MONGO_URI, { ... });

const UserSchema = new mongoose.Schema({
name: String,
email: String,
password: String,
});
const User = mongoose.model("User", UserSchema);
module.exports = User;
You're now ready to:
- Store users, posts, comments, etc.
- Read/write/delete from your frontend using APIs (Express.js etc.)
- Load env variables in the .env file.
PORT=8000
MONGO_URI=mongodb+srv://youruser:[email protected]
- In constants.js, define your database name.
export const DB_NAME = "youtube";
- Install express, dotenv and mongoose.
npm install express dotenv mongoose

IMP NOTE ON DATABASE - Whenever sending and receiving requests, errors may occur. Also, the database may sit in another continent, so calls take time. Therefore follow the lines below.
- Use try-catch in every DB code path for error handling.
- Use async-await to handle asynchronous tasks.
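A minimal sketch of that pattern in isolation (fakeQuery is a hypothetical stand-in for a real driver call, just for illustration):

```javascript
// Every DB call: await inside try/catch, so a slow or failing call
// is handled instead of crashing the process.
// fakeQuery is a hypothetical stand-in for a real database call.
const fakeQuery = async (shouldSucceed) => {
  if (!shouldSucceed) throw new Error("network timeout");
  return { rows: 1 };
};

const run = async (shouldSucceed) => {
  try {
    const result = await fakeQuery(shouldSucceed);
    return result.rows;
  } catch (err) {
    // log and degrade gracefully instead of letting the error escape
    return -1;
  }
};

run(true).then((n) => console.log(n)); // 1
run(false).then((n) => console.log(n)); // -1
```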
- Connecting MongoDB to our app (not the professional way).
- Here, we are using an IIFE (Immediately Invoked Function Expression) with async to connect to MongoDB, which is perfectly fine.
index.js
import mongoose from "mongoose";
import { DB_NAME } from "./constants.js"; // ESM imports need the .js extension
(async () => {
try {
if (!process.env.MONGODB_URI) {
throw new Error("MONGODB_URI is not defined in environment variables");
}
await mongoose.connect(`${process.env.MONGODB_URI}/${DB_NAME}`);
console.log("MongoDB connected successfully");
} catch (error) {
console.log("MongoDB connection error:", error);
throw error;
}
})();
- Connecting using the professional approach (recommended).
- Create a folder db. In that folder, create a file named connection.js.
connection.js
import mongoose from "mongoose";
import { DB_NAME } from "../constants.js";
const connectToDB = async () => {
try {
const connectionInstance = await mongoose.connect(
`${process.env.MONGODB_URI}/${DB_NAME}`
);
console.log(
`\n database connected.. DB_HOST: ${connectionInstance.connection.host}`
);
} catch (error) {
console.log("MONGO_DB CONNECTION ERROR:", error);
process.exit(1); // to exit the process
}
};
export default connectToDB;
- Now, call it in index.js. We also modify the start script in package.json:
Old: "start": "nodemon src/index.js"
New: "start": "nodemon -r dotenv/config --experimental-json-modules src/index.js" // preloads environment variables
index.js
import dotenv from "dotenv";
import connectToDB from "./db/connection.js";
// dotenv looks for a file named .env, so the path must be "./.env" (or just call dotenv.config() with no arguments)
dotenv.config({
  path: "./.env",
});
connectToDB();
- Run the following command to start the app.
npm start
- My password contains '@', which conflicts with the MongoDB URL and caused an error. I converted @ into %40.
- Also, while importing local files, use the .js extension as well.
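The @-in-a-password problem above can also be handled in code. A small sketch (buildMongoUri is a hypothetical helper, not part of the project):

```javascript
// encodeURIComponent percent-encodes reserved characters such as "@",
// so the password cannot be confused with the host part of the URI.
const buildMongoUri = (user, password, host) =>
  `mongodb+srv://${user}:${encodeURIComponent(password)}@${host}`;

console.log(buildMongoUri("shubham", "p@ss", "cluster0.abcd.mongodb.net"));
// the "@" inside the password becomes "%40"
```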
- In app.js, create the express application and export it.
app.js
import express from "express";
const app = express();
export { app };
- After this, we still have to start listening and wire up routes.
- When the DB gets connected, the app should start listening. Therefore, use .then and .catch on the promise returned by the connection method.
index.js
import dotenv from "dotenv";
import connectToDB from "./db/connection.js";
import { app } from "./app.js";
// dotenv looks for a file named .env, so give the path as "./.env" (or call dotenv.config() with no arguments)
dotenv.config({
  path: "./.env",
});
connectToDB()
.then(() => {
app.listen(process.env.PORT || 8000, () => {
console.log(`Server running at port ${process.env.PORT}`);
});
})
.catch((err) => {
console.log("database connection failed :", err);
});
- Install cookie-parser and cors.
npm install cookie-parser cors
- cors (Cross-Origin Resource Sharing): allows your backend server (API) to accept requests from different origins (domains). By default, browsers block cross-origin requests for security reasons. If your frontend is on a different domain/port than your backend (like React on localhost:3000 and Express on localhost:5000), requests will fail unless you enable CORS.
import cors from "cors";
app.use(cors());
- You can customize it.
app.use(
cors({
origin: "http://localhost:3000", // allow only this frontend
credentials: true, // allow cookies to be sent
})
);
- cookie-parser (parsing cookies in request headers): helps Express read and access cookies sent by the client (usually the browser).
app.js
import express from "express";
import cors from "cors";
import cookieParser from "cookie-parser";
const app = express();
app.use(
cors({
origin: process.env.CORS_ORIGIN,
credentials: true,
})
);
app.use(express.json({ limit: "16kb" })); // parse incoming JSON request bodies
app.use(express.urlencoded({ extended: true, limit: "16kb" })); // parse URL-encoded form bodies. eg " " = %20 or +, @ = %40
app.use(express.static("public")); // serve static files from the public folder (e.g. our temporary upload folder)
app.use(cookieParser()); // allow express to set and read the client's browser cookies
export { app };
IMP NOTE - When talking to the database, we use async-await and try-catch everywhere. So let's create a utility wrapper for this, so we don't need to write that boilerplate every time.
- Create a file asyncHandler.js in the utils folder.
asyncHandler.js
const asyncHandler = (requestHandler) => {
  // must RETURN the wrapped handler, otherwise the wrapper yields undefined
  return (req, res, next) => {
    Promise.resolve(requestHandler(req, res, next)).catch((err) => next(err));
  };
};
export { asyncHandler };
// const asyncHandler = (func) => async (req, res, next) => {
// try {
// await func(req, res, next);
// } catch (error) {
// res.status(error.code || 400).json({
// success: false, // for frontend developer
// message: error.message,
// });
// }
// };
Same for apiError.js, apiResponse.js.
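The notes don't show those two files. A minimal sketch of what apiError.js and apiResponse.js could contain (the field names here are assumptions, chosen to match how the controllers below use them):

```javascript
// apiError.js - a throwable error that carries an HTTP status code.
class ApiError extends Error {
  constructor(statusCode, message = "Something went wrong", errors = []) {
    super(message);
    this.statusCode = statusCode;
    this.errors = errors;
    this.success = false;
    this.data = null;
  }
}

// apiResponse.js - a uniform success envelope for JSON responses.
class ApiResponse {
  constructor(statusCode, data, message = "Success") {
    this.statusCode = statusCode;
    this.data = data;
    this.message = message;
    this.success = statusCode < 400; // 2xx/3xx count as success
  }
}
// (in the real files, export them: export { ApiError } / export { ApiResponse })

const err = new ApiError(404, "user does not exist");
console.log(err.statusCode, err.success); // 404 false
```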
- Create user.model.js inside models.
user.model.js
Here, don't forget the 2nd argument, { timestamps: true }.
import mongoose, { Schema } from "mongoose";
const userSchema = new Schema(
{
username: {
type: String,
required: true,
unique: true,
lowercase: true,
trim: true,
index: true,
},
email: {
type: String,
required: true,
unique: true,
lowercase: true,
trim: true,
},
fullName: {
type: String,
required: true,
trim: true,
index: true,
},
avatar: {
type: String, // cloudinary url
required: true,
},
coverImage: {
type: String, // cloudinary url
},
watchHistory: [
{
type: Schema.Types.ObjectId,
ref: "Video",
},
],
password: {
type: String,
required: [true, "password is required."],
},
refreshToken: {
type: String,
},
},
{ timestamps: true }
);
export const User = mongoose.model("User", userSchema);
Same for videoSchema.
- Now install following packages.
npm install bcrypt jsonwebtoken mongoose-aggregate-paginate-v2
user.model.js
import mongoose, { Schema } from "mongoose";
import jwt from "jsonwebtoken";
import bcrypt from "bcrypt";
const userSchema = new Schema(
{
username: {
type: String,
required: true,
unique: true,
lowercase: true,
trim: true,
index: true,
},
email: {
type: String,
required: true,
unique: true,
lowercase: true,
trim: true,
},
fullName: {
type: String,
required: true,
trim: true,
index: true,
},
avatar: {
type: String, // cloudinary url
required: true,
},
coverImage: {
type: String, // cloudinary url
},
watchHistory: [
{
type: Schema.Types.ObjectId,
ref: "Video",
},
],
password: {
type: String,
required: [true, "password is required."],
},
refreshToken: {
type: String,
},
},
{ timestamps: true }
);
// function that should run before saving into the database
userSchema.pre("save", async function (next) {
if (!this.isModified("password")) return next(); // checks if password field is modified. here, this refer to userSchema.
this.password = await bcrypt.hash(this.password, 10);
next();
});
// here, we write custom methods for our user document. we created `isPasswordCorrect`
userSchema.methods.isPasswordCorrect = async function (password) {
return await bcrypt.compare(password, this.password);
}; // here it returns true or false
// method for generate access token.
userSchema.methods.generateAccessToken = function () {
return jwt.sign(
{
_id: this._id,
email: this.email,
username: this.username,
fullName: this.fullName,
},
process.env.ACCESS_TOKEN_SECRET,
{
expiresIn: process.env.ACCESS_TOKEN_EXPIRY,
}
);
};
// method for generating refresh token
userSchema.methods.generateRefreshToken = function () {
return jwt.sign(
{
_id: this._id,
},
process.env.REFRESH_TOKEN_SECRET,
{
expiresIn: process.env.REFRESH_TOKEN_EXPIRY,
}
);
};
export const User = mongoose.model("User", userSchema);

About: if (!this.isModified("password")) return next();
this refers to the user document.
.isModified("password") checks if the password field has been changed (for example, during sign-up or password update).
If the password was not modified, the function ends early by calling next() (which continues saving without doing anything extra).
This prevents hashing the password again if it's already hashed.
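That guard can be mimicked in plain JS (hashFn is a hypothetical stand-in for bcrypt.hash, just for illustration):

```javascript
// Only (re)hash when the password field actually changed, mirroring
// the pre("save") hook above. hashFn stands in for bcrypt.hash.
const hashFn = (plain) => `hashed(${plain})`;

function beforeSave(doc, modifiedFields) {
  if (!modifiedFields.includes("password")) return doc; // like calling next() early
  doc.password = hashFn(doc.password);
  return doc;
}

console.log(beforeSave({ password: "abc" }, ["password"]).password);
// "hashed(abc)"
console.log(beforeSave({ password: "hashed(abc)" }, ["email"]).password);
// unchanged: "hashed(abc)"
```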
About: userSchema.methods.isPasswordCorrect
userSchema.methods is where you define custom functions (methods) for your user documents.
You're creating a method called isPasswordCorrect. It can be used on any document created from the userSchema. Think of this like adding a new ability to your user objects.
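A plain-JS analogy of schema.methods (no mongoose or bcrypt here; plain string equality stands in for bcrypt.compare):

```javascript
// Functions attached to a shared prototype become available on every
// instance - that is what userSchema.methods does for documents.
function UserDoc(password) {
  this.password = password;
}
// like userSchema.methods.isPasswordCorrect (bcrypt.compare in real code)
UserDoc.prototype.isPasswordCorrect = function (candidate) {
  return candidate === this.password;
};

const u = new UserDoc("secret");
console.log(u.isPasswordCorrect("secret")); // true
console.log(u.isPasswordCorrect("wrong")); // false
```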
video.model.js
import mongoose, { Schema } from "mongoose";
import mongooseAggregatePaginate from "mongoose-aggregate-paginate-v2";
const videoSchema = new Schema(
{
videoFile: {
type: String,
required: true,
},
thumbnail: {
type: String, // cloudinary url
required: true,
},
title: {
type: String,
required: true,
},
description: {
type: String,
required: true,
},
duration: {
type: Number, // from cloudinary
required: true,
},
views: {
type: Number,
default: 0,
},
isPublished: {
type: Boolean,
default: true,
},
owner: {
type: Schema.Types.ObjectId,
ref: "User",
},
},
{ timestamps: true }
);
videoSchema.plugin(mongooseAggregatePaginate);
export const Video = mongoose.model("Video", videoSchema);
Here, the file uploader will be a standalone utility method (we are going to create it): reusable code for uploading avatars, videos and other files, which can also be used as middleware wherever we need it.
- Sign up / log in to Cloudinary.
- Install cloudinary and multer.
npm i cloudinary multer
- Frontend form:
<form action="/upload" method="POST" enctype="multipart/form-data">
<input type="file" name="photo" />
<button type="submit">Upload</button>
</form>
- Backend (Node.js/Express) with Multer:
const express = require("express");
const multer = require("multer");
const app = express();
// Use Multer to store files in a folder named "uploads"
const upload = multer({ dest: "uploads/" });
// Route to handle file upload
app.post("/upload", upload.single("photo"), (req, res) => {
console.log(req.file); // Multer gives you the uploaded file here
res.send("File uploaded!");
});
- Now, while uploading and storing files, there are two steps to follow:
- Take the file from the input form and store it on our server temporarily using multer (we will use multer as middleware).
- Upload this file from our server to Cloudinary.
- Create a file cloudinary.js in the utils folder.
- Take your API keys from the Cloudinary site.
cloudinary.js
// Here, the file is already on our server; this code uploads it from OUR SERVER to Cloudinary.
// Once the file is successfully uploaded to Cloudinary, we don't need it on OUR SERVER anymore, so we remove it.
import { v2 as cloudinary } from "cloudinary";
import fs from "fs";
// Configuration
cloudinary.config({
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET,
});
const uploadOnCloudinary = async (localFilePath) => {
try {
// Check if localFilePath is there or not
if (!localFilePath) return null;
// upload file on cloudinary
const response = await cloudinary.uploader.upload(localFilePath, {
resource_type: "auto",
});
// file has uploaded successfully
console.log("file uploaded successfully on cloudinary", response.url);
return response;
} catch (error) {
fs.unlinkSync(localFilePath); // remove the locally saved temporary file since the upload failed (sync variant; plain fs.unlink needs a callback)
return null;
}
};
export { uploadOnCloudinary };
- Create a file multer.middleware.js in the middlewares folder.
multer.middleware.js
import multer from "multer";
const storage = multer.diskStorage({
destination: function (req, file, cb) {
cb(null, "./public/temp");
},
filename: function (req, file, cb) {
cb(null, file.originalname);
},
});
export const upload = multer({ storage: storage });

HTTP status code classes:
- 1xx: Informational
- 2xx: Success
- 3xx: Redirection
- 4xx: Client Error
- 5xx: Server Error
- 100 Continue
- 102 Processing
- 200 OK
- 201 Created
- 202 Accepted
- 307 Temporary Redirect
- 308 Permanent Redirect
- 400 Bad Request
- 401 Unauthorized
- 402 Payment Required
- 404 Not Found
- 500 Internal Server Error
- 504 Gateway Timeout
- Create user.controller.js in the controllers folder.
user.controller.js
import { asyncHandler } from "../utils/asyncHandler.js"; // named import, matching export { asyncHandler }
const registerUser = asyncHandler(async (req, res) => {
res.status(200).json({
message: "ok",
});
});
export { registerUser };
- Create user.routes.js in the routes folder.
user.routes.js
import { Router } from "express";
const router = Router();
export default router;
Note - We wire routes up in app.js, where we import this router. The route imports are written after the middleware setup. See the code below.
app.js
import express from "express";
import cors from "cors";
import cookieParser from "cookie-parser";
const app = express();
app.use(
cors({
origin: process.env.CORS_ORIGIN,
credentials: true,
})
);
app.use(express.json({ limit: "16kb" })); // parse incoming JSON request bodies
app.use(express.urlencoded({ extended: true, limit: "16kb" })); // parse URL-encoded form bodies. eg " " = %20 or +, @ = %40
app.use(express.static("public")); // serve static files from the public folder (e.g. our temporary upload folder)
app.use(cookieParser()); // allow express to set and read the client's browser cookies
// routes import
import userRouter from "./routes/user.routes.js";
// routes declaration
app.use("/api/v1/users", userRouter);
export { app };
user.routes.js
import { Router } from "express";
import { registerUser, loginUser } from "../controllers/user.controller.js"; // loginUser must be imported too, since it is used below
const router = Router();
router.route("/register").post(registerUser);
router.route("/login").post(loginUser);
export default router;- Get user details from frontend - Here we will use Postman to get user data.
- Validation - Whether user send empty string and details.
- Check if user already exist - we will check by username and email.
- Check for images - check for avatar because it is must.
- If avatar and coverimage is available, Store it in cloudinary.
- Create user object - create entry in db.
- Remove password and refresh token field from response.
- Check for user creation.
- return response.
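The validation step can be isolated like this. One subtlety in the `.some((field) => field?.trim() === "")` check used in the controller: if a field is missing entirely, `field?.trim()` is `undefined`, not `""`, so it slips through. The sketch below (hasEmptyField, a hypothetical helper) also catches missing fields:

```javascript
// Flag any field that is missing, or empty after trimming whitespace.
const hasEmptyField = (...fields) =>
  fields.some((f) => !f || f.trim() === "");

console.log(hasEmptyField("Shubham", "a@b.com", "user1", "pw")); // false
console.log(hasEmptyField("Shubham", "a@b.com", "  ", "pw")); // true
console.log(hasEmptyField("Shubham", undefined, "user1", "pw")); // true
```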
HTML Code Example (as you mentioned):
<form action="http://localhost:3000/register" method="POST">
<input name="name" />
<input name="email" />
<input name="password" />
<button type="submit">Register</button>
</form>
What happens after you click "Register"?
Step 1: The browser packs the data (form submission).
The browser sees method="POST" and action="http://localhost:3000/register".
It collects all form fields that have a name attribute.
Then it encodes the data using the application/x-www-form-urlencoded format by default.
Example output (what is sent):
name=Swayam&email=swayam%40gmail.com&password=12345
This is called URL-encoded data. It's a string of key=value&key2=value2....
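You can watch this decoding happen in plain Node, which is conceptually what express.urlencoded() later does to the request body:

```javascript
// Decode an application/x-www-form-urlencoded body into an object.
const body = "name=Swayam&email=swayam%40gmail.com&password=12345";
const parsed = Object.fromEntries(new URLSearchParams(body));

console.log(parsed);
// { name: 'Swayam', email: 'swayam@gmail.com', password: '12345' }
```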
Step 3: The server receives the request.
Your Express.js backend receives the request at the route:
app.post("/register", (req, res) => {
  console.log(req.body); // you want this
});
BUT WAIT: you need middleware to decode the body first.
Step 4: Body-parsing middleware decodes it.
You need this middleware:
app.use(express.urlencoded({ extended: true }));
This tells Express to read and parse the incoming URL-encoded string and convert it into an object.
Now, req.body will look like:
{
"name": "Swayam",
"email": "[email protected]",
"password": "12345"
}
Now, in user.controller.js
import { asyncHandler } from "../utils/asyncHandler.js";
import { ApiError } from "../utils/apiError.js";
import { User } from "../models/user.model.js";
import { uploadOnCloudinary } from "../utils/cloudinary.js";
import { ApiResponse } from "../utils/apiResponse.js";
const registerUser = asyncHandler(async (req, res) => {
// 1. get user details from frontend
// when data comes through form submission or direct JSON, we get it from req.body
const { fullName, email, username, password } = req.body;
// check validation of fields
if (
[fullName, email, username, password].some((field) => field?.trim() === "")
) {
throw new ApiError(400, "All fields are required");
}
// Check user if already exist
const existedUser = await User.findOne({
// This checks if username OR email exist
$or: [{ username }, { email }],
});
// return with error if user already exist
if (existedUser) {
throw new ApiError(409, "User with email or username already exists");
}
// Now, firstly access images from fields
const avatarLocalPath = req.files?.avatar[0]?.path;
const coverImageLocalPath = req.files?.coverImage[0]?.path; // NOTE: this throws when no coverImage is sent; the later version below fixes it
// check if avatar is given or not (avatar is compulsory).
if (!avatarLocalPath) {
throw new ApiError(400, "avatar file is required");
}
// upload them on cloudinary
const avatar = await uploadOnCloudinary(avatarLocalPath);
const coverImage = await uploadOnCloudinary(coverImageLocalPath);
// check if avatar is uploaded on cloudinary or not by checking its above response
if (!avatar) {
throw new ApiError(400, "avatar file is required");
}
// Create user object - create entry in database
const user = await User.create({
fullName,
avatar: avatar.url,
coverImage: coverImage?.url || "",
email,
password,
username: username.toLowerCase(),
});
// 1. check if the user obj was created in the database. Also select and remove password and refreshToken.
// 2. we remove refreshToken and password because the success response sent to the frontend should not contain sensitive information
const createdUser = await User.findById(user._id).select(
"-password -refreshToken"
);
// check if user created in database or not
if (!createdUser) {
throw new ApiError(500, "Something went wrong while registering user.");
}
// Return response to frontend for successful entry in db
return res
.status(201)
.json(new ApiResponse(200, createdUser, "User register successfully"));
});
export { registerUser };
Testing with Postman (raw JSON):
- We can use it the previous way, like below.
- Paste the URL to test, like http://localhost:8000/api/v1/users/register.
- Select the method (here we select POST).
- Select body. In that, select raw and write the data you want in object form.
{
"email": "[email protected]",
"password": "shubham292004"
}
- Click send.
- If you console.log the data, check the terminal; the data will be logged there.

Testing with Postman (form-data, for file uploads):
- Open a new tab by pressing +.
- Paste the URL, like http://localhost:8000/api/v1/users/register.
- Select the method (here we select POST).
- Select body.
- In body, select form-data.
- Here, we write data in key-value pairs, like key: fullName and value: Shubham Alhat, etc.
- To upload files, select the file type near the key tab.
user.controller.js
import { asyncHandler } from "../utils/asyncHandler.js";
import { ApiError } from "../utils/apiError.js";
import { User } from "../models/user.model.js";
import { uploadOnCloudinary } from "../utils/cloudinary.js";
import { ApiResponse } from "../utils/apiResponse.js";
// generate access and refresh token method
const generateAccessAndRefreshToken = async (userId) => {
try {
// find user by id
const user = await User.findById(userId);
// generate tokens here
const accessToken = user.generateAccessToken();
const refreshToken = user.generateRefreshToken();
// store refresh token in database
user.refreshToken = refreshToken;
// This is an option "validateBeforeSave:false" passed to save() to tell Mongoose NOT to run schema validations before saving.
// "Save this user without checking all validation rules. Just save what I changed."
await user.save({ validateBeforeSave: false });
// return access and refresh token
return { refreshToken, accessToken };
} catch (error) {
throw new ApiError(
500,
"something went wrong while generating access and refresh token"
);
}
};
// ----- steps for registering user ------
// Get user details from frontend - Here we will use Postman to get user data.
// Validation - Whether user send empty string and details.
// Check if user already exist - we will check by username and email.
// Check for images - check for avatar because it is must.
// If avatar and coverimage is available, Store it in cloudinary.
// Create user object - create entry in db.
// Remove password and refresh token field from response.
// Check for user creation.
// return response.
const registerUser = asyncHandler(async (req, res) => {
// 1. get user details from frontend
// when data comes through form submission or direct JSON, we get it from req.body
const { fullName, email, username, password } = req.body;
// check validation of fields
if (
[fullName, email, username, password].some((field) => field?.trim() === "")
) {
throw new ApiError(400, "All fields are required");
}
// Check user if already exist
const existedUser = await User.findOne({
// This checks if username OR email exist
$or: [{ username }, { email }],
});
// return with error if user already exist
if (existedUser) {
throw new ApiError(409, "User with email or username already exists");
}
// Now, firstly access images from fields
const avatarLocalPath = req.files?.avatar[0]?.path;
// handle the case where the user did not upload a coverImage
let coverImageLocalPath;
if (
req.files &&
Array.isArray(req.files.coverImage) &&
req.files.coverImage.length > 0
) {
coverImageLocalPath = req.files.coverImage[0].path;
}
// check if avatar is given or not (avatar is compulsory).
if (!avatarLocalPath) {
throw new ApiError(400, "avatar file is required");
}
// upload them on cloudinary
const avatar = await uploadOnCloudinary(avatarLocalPath);
const coverImage = await uploadOnCloudinary(coverImageLocalPath);
// check if avatar is uploaded on cloudinary or not by checking its above response
if (!avatar) {
throw new ApiError(400, "avatar file is required");
}
// Create user object - create entry in database
const user = await User.create({
fullName,
avatar: avatar.url,
coverImage: coverImage?.url || "",
email,
password,
username: username.toLowerCase(),
});
// 1. check if the user obj was created in the database. Also select and remove password and refreshToken.
// 2. we remove refreshToken and password because the success response sent to the frontend should not contain sensitive information
const createdUser = await User.findById(user._id).select(
"-password -refreshToken"
);
// check if user created in database or not
if (!createdUser) {
throw new ApiError(500, "Something went wrong while registering user.");
}
// Return response to frontend for successful entry in db
return res
.status(201)
.json(new ApiResponse(200, createdUser, "User register successfully"));
});
// Login the user
// 1. Get data from req.body - email/username and password
// 2. username and email
// 3. find the user
// 4. if found, check password
// 5. Generate access and refresh token
// 6. Send them in cookies
const loginUser = asyncHandler(async (req, res) => {
// Get data from req.body
const { email, username, password } = req.body;
// check that the user provided at least an email or a username
if (!email && !username) {
throw new ApiError(400, "username or email is required");
}
// find the user
const user = await User.findOne({
// This checks if username OR email exist
$or: [{ username }, { email }],
});
// if not found, throw error that user was never registered
if (!user) {
throw new ApiError(404, "user does not exist");
}
// check if password is right
const isPasswordValid = await user.isPasswordCorrect(password);
if (!isPasswordValid) {
throw new ApiError(401, "invalid user credentials");
}
// Generate access and refresh token
const { accessToken, refreshToken } = await generateAccessAndRefreshToken(
user._id
);
// Decide what data to send back to the user (OPTIONAL STEP) -------
const loggedInUser = await User.findById(user._id).select(
"-password -refreshToken"
);
// Send them in cookies
// cookies can be modified from the frontend as well; these options make the cookie readable only by the server (httpOnly) and sent only over HTTPS (secure)
const options = {
httpOnly: true,
secure: true,
};
return res
.status(200)
.cookie("accessToken", accessToken, options)
.cookie("refreshToken", refreshToken, options)
.json(
new ApiResponse(
200,
{
user: loggedInUser,
accessToken,
refreshToken,
},
"User logged In Successfully"
)
);
});
// Logout user
const logoutUser = asyncHandler(async (req, res) => {
// here req.user comes from the auth middleware, which injects the user object into req
await User.findByIdAndUpdate(
req.user._id,
{
// $set with undefined is ignored by Mongoose; $unset actually removes the field
$unset: {
refreshToken: 1,
},
},
{
new: true,
}
);
const options = {
httpOnly: true,
secure: true,
};
return res
.status(200)
.clearCookie("accessToken", options)
.clearCookie("refreshToken", options)
.json(new ApiResponse(200, {}, "User Logged Out"));
});
export { registerUser, loginUser, logoutUser };
auth.middleware.js
// This middleware verifies the JWT and attaches the logged-in user to the request
import { User } from "../models/user.model.js";
import { ApiError } from "../utils/apiError.js";
import { asyncHandler } from "../utils/asyncHandler.js";
import jwt from "jsonwebtoken";
export const verifyJWT = asyncHandler(async (req, res, next) => {
try {
const token =
req.cookies?.accessToken ||
req.header("Authorization")?.replace("Bearer ", "");
if (!token) {
throw new ApiError(401, "Unauthorized request");
}
const decodedToken = jwt.verify(token, process.env.ACCESS_TOKEN_SECRET);
const user = await User.findById(decodedToken?._id).select(
"-password -refreshToken"
);
if (!user) {
// TODO : Discussion in next video about this part
throw new ApiError(401, "Invalid access token");
}
// if user is there, create new obj "user" in req
req.user = user;
next();
} catch (error) {
throw new ApiError(401, error?.message || "Invalid access token");
}
});
In user.controller.js
What we are basically going to do is join another collection (e.g. the subscription schema) to the User schema. That is what an aggregation pipeline with $lookup does.
user.controller.js
const getWatchHistory = asyncHandler(async (req, res) => {
const user = await User.aggregate([
{
$match: {
_id: new mongoose.Types.ObjectId(req.user._id),
},
},
{
$lookup: {
from: "videos",
localField: "watchHistory",
foreignField: "_id",
as: "watchHistory",
pipeline: [
{
$lookup: {
from: "users",
localField: "owner",
foreignField: "_id",
as: "owner",
pipeline: [
{
$project: {
fullName: 1,
username: 1,
avatar: 1,
},
},
],
},
},
{
$addFields: {
owner: {
$first: "$owner",
},
},
},
],
},
},
]);
return res
.status(200)
.json(
new ApiResponse(
200,
user[0].watchHistory,
"Watch history fetched successfully"
)
);
});
Explanation
Great question, Shubham. You're looking at a nested aggregation pipeline, and it's totally normal to feel confused at first, especially when $lookup is used inside another $lookup. Let's break it down step-by-step, like a real-world story.
You're trying to get the videos a user has watched, and for each video, also get info about the person who uploaded (owned) that video.
Let's walk through it like a story:
{
  $match: {
    _id: new mongoose.Types.ObjectId(req.user._id)
  }
}
We start by finding the logged-in user using req.user._id.
Imagine you say:
"Hey MongoDB, give me the data for this user only."
{
$lookup: {
from: "videos",
localField: "watchHistory", // array of videoIds user watched
foreignField: "_id", // match those videoIds
as: "watchHistory", // store result in a new field
pipeline: [ ... ]
}
}
Now you say:
"This user has a list of video IDs in watchHistory. Go to the videos collection and find the full video data for each of those IDs."
But it doesn't stop there...
Inside that first $lookup, you have another $lookup:
{
$lookup: {
from: "users",
localField: "owner", // field in video document
foreignField: "_id", // match to users
as: "owner",
pipeline: [
{
$project: {
fullName: 1,
username: 1,
avatar: 1
}
}
]
}
}
You're saying:
"For each video in the watch history, get info about the uploader (the owner field in the video). Then from that owner's data, keep only fullName, username, and avatar."
{
  $addFields: {
    owner: {
      $first: "$owner"
    }
  }
}
You're saying:
"Because $lookup gives an array, but we know there is only one owner, just pick the first one and make owner a normal object."
So instead of:
owner: [
{ fullName: "Shubham", username: "shub", avatar: "pic.jpg" }
]

You make it:

owner: { fullName: "Shubham", username: "shub", avatar: "pic.jpg" }

At the end, you're returning only this:

user[0].watchHistory;

So your response is just the list of video objects, each one looking something like this:
{
"_id": "videoId123",
"title": "How to build a YouTube clone",
"owner": {
"fullName": "Shubham",
"username": "shub",
"avatar": "pic.jpg"
}
}

| Step | What it does |
|---|---|
| 1️⃣ | Find the user by ID |
| 2️⃣ | Use $lookup to get all videos in watchHistory |
| 3️⃣ | For each video, use another $lookup to get the video's owner info |
| 4️⃣ | Simplify the owner array into a single object using $addFields |
| ✅ | Return the processed list of video documents with owner info |
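The same four steps can be mirrored in plain JavaScript to make the data flow concrete. This is illustration only — the real joins happen inside MongoDB, and all sample data below is made up:

```javascript
// Plain-JavaScript mirror of the four pipeline steps (illustration only).
const users = [{ _id: "u123", watchHistory: ["v1", "v2"] }];
const videos = [
  { _id: "v1", title: "How to build YouTube", owner: "owner123" },
  { _id: "v2", title: "Learn MongoDB", owner: "owner456" },
];
const owners = {
  owner123: { fullName: "Aryan Singh", username: "aryansingh", avatar: "avatar.jpg" },
  owner456: { fullName: "Neha Patel", username: "neha_p", avatar: "neha.jpg" },
};

// Step 1: $match -- select one user by _id
const user = users.find((u) => u._id === "u123");

// Step 2: $lookup -- expand each videoId into the full video document
const watchHistory = user.watchHistory.map((videoId) => {
  const video = videos.find((v) => v._id === videoId);
  // Steps 3 + 4: nested $lookup + $project + $first --
  // attach the trimmed owner as a single object, not an array
  return { ...video, owner: owners[video.owner] };
});

console.log(watchHistory[0].owner.username); // aryansingh
```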
[User]
|
└── watchHistory: [videoId1, videoId2]
|
└── Get full video data (from videos collection)
|
└── For each video, get its owner (from users)
|
└── Keep only name, username, avatar

Perfect, Shubham. Now we'll go super practical.
I'll show you:

- ✅ What we're actually trying to do
- 🔁 Stage-by-stage flow
- 📤 What each stage returns
- ➡️ How the next stage uses that output
You want to return a list of videos that a user has watched (their watchHistory), and for each video, include some basic information about who uploaded it (video owner).
Let's assume this sample data:
{
_id: ObjectId("u123"),
name: "Shubham",
watchHistory: [
ObjectId("v1"),
ObjectId("v2")
]
}

A sample video document:

{
_id: ObjectId("v1"),
title: "How to build YouTube",
owner: ObjectId("owner123")
}

And the video's owner (a user document):

{
_id: ObjectId("owner123"),
fullName: "Aryan Singh",
username: "aryansingh",
avatar: "avatar.jpg"
}

{
  $match: {
    _id: new mongoose.Types.ObjectId(req.user._id)
  }
}

📤 Output:
[
{
"_id": "u123",
"name": "Shubham",
"watchHistory": ["v1", "v2"]
}
]

➡️ Sent to next stage: this array with just 1 user and their watchHistory.
{
$lookup: {
from: "videos",
localField: "watchHistory",
foreignField: "_id",
as: "watchHistory",
pipeline: [...]
}
}

💡 What happens:

- For each videoId in `watchHistory` (v1, v2)
- MongoDB looks into the `videos` collection and fetches matching documents.

📤 Output (after this $lookup):
[
{
"_id": "u123",
"name": "Shubham",
"watchHistory": [
{
"_id": "v1",
"title": "How to build YouTube",
"owner": "owner123"
},
{
"_id": "v2",
"title": "Learn MongoDB",
"owner": "owner456"
}
]
}
]

➡️ Next stage runs inside the pipeline of this $lookup, meaning each video now goes through its own pipeline (the next lookup).
{
$lookup: {
from: "users",
localField: "owner",
foreignField: "_id",
as: "owner",
pipeline: [
{
$project: {
fullName: 1,
username: 1,
avatar: 1
}
}
]
}
}

💡 What happens:

- Each video has `owner: ObjectId(...)`
- MongoDB looks up matching `_id` in the `users` collection
- Only keeps `fullName`, `username`, `avatar`

📤 Output for one video:
{
"_id": "v1",
"title": "How to build YouTube",
"owner": [
{
"fullName": "Aryan Singh",
"username": "aryansingh",
"avatar": "avatar.jpg"
}
]
}

➡️ Still, owner is an array; we fix that in the next step.

{
  $addFields: {
    owner: {
      $first: "$owner"
    }
  }
}

📤 Output for one video:
{
"_id": "v1",
"title": "How to build YouTube",
"owner": {
"fullName": "Aryan Singh",
"username": "aryansingh",
"avatar": "avatar.jpg"
}
}

✅ So now, instead of this:

owner: [ { ... } ]

We have this:

owner: { ... }

The full output at this point looks like:

[
{
"_id": "u123",
"name": "Shubham",
"watchHistory": [
{
"_id": "v1",
"title": "How to build YouTube",
"owner": {
"fullName": "Aryan Singh",
"username": "aryansingh",
"avatar": "avatar.jpg"
}
},
{
"_id": "v2",
"title": "Learn MongoDB",
"owner": {
"fullName": "Neha Patel",
"username": "neha_p",
"avatar": "neha.jpg"
}
}
]
}
]

user[0].watchHistory;

You're just returning the list of processed video objects, not the full user.
[Users Collection]
   |
   | $match: find one user by _id
   ↓
[User: { watchHistory: [v1, v2] }]
   |
   | $lookup: Get video docs from videoIds
   ↓
[watchHistory: [ video1, video2 ]]
   |
   | $lookup (nested): Get video.owner from users
   ↓
[Each video now has owner info]
   |
   | $addFields: convert owner from array → object
   ↓
Return: watchHistory videos with clean owner info
| Stage | Input | Output | Purpose |
|---|---|---|---|
| $match | All users | Only logged-in user | Select relevant user |
| $lookup (videos) | watchHistory ids | Full video documents | Join user's watched videos |
| $lookup (owners) | video.owner | User info of uploader | Join owner info into video |
| $addFields | owner (array) | owner (object) | Clean up output |
| Final | user[0].watchHistory | List of videos with owner info | Final response |
Want me to turn this into a flowchart image or help you refactor this into simpler functions? Or add filters like "watch history this week only"? Let's build next level 🚀
[
{
_id: "c1",
content: "Great video!",
video: "v1",
owner: {
// Now it's a SINGLE OBJECT (not array)
_id: "u1",
fullName: "John Doe",
username: "john",
avatar: "john.jpg",
},
},
{
_id: "c2",
content: "Nice work!",
video: "v1",
owner: {
// Single object
_id: "u2",
fullName: "Jane Smith",
username: "jane",
avatar: "jane.jpg",
},
},
];

Absolutely Shubham. Here's a practical list of HTTP status codes you'll use 99% of the time in real-world backend development, especially in REST APIs like your YouTube clone.
| Situation | Status | Meaning |
|---|---|---|
| Created successfully | 201 | Created |
| Validation failed (e.g., empty input) | 400 | Bad Request |
| Auth required but missing | 401 | Unauthorized |
| Resource conflict (e.g., duplicate email) | 409 | Conflict |
| Authenticated but no permission | 403 | Forbidden |
| Situation | Status | Meaning |
|---|---|---|
| Data found and returned | 200 | OK |
| No data found (empty list okay) | 200 | OK (still valid) |
| Resource not found (invalid ID) | 404 | Not Found |
| Situation | Status | Meaning |
|---|---|---|
| Updated successfully | 200 | OK |
| Validation or missing input | 400 | Bad Request |
| Trying to update something not owned by user | 403 | Forbidden |
| Target not found (e.g., comment ID) | 404 | Not Found |
| Situation | Status | Meaning |
|---|---|---|
| Deleted successfully | 200 | OK |
| Already deleted or not found | 404 | Not Found |
| Not authorized to delete it | 403 | Forbidden |
| Situation | Status | Meaning |
|---|---|---|
| Logged in / token generated | 200 | OK |
| Token expired / invalid | 401 | Unauthorized |
| Trying to access something you're not allowed to | 403 | Forbidden |
| Situation | Status | Meaning |
|---|---|---|
| Something crashed or exception | 500 | Internal Server Error |
- Use `200` for almost all successful GET, PATCH, DELETE responses.
- Use `201` only when you're creating something.
- Use `400` if the client did something wrong (missing field, invalid type, etc.).
- Use `401` if token is missing or invalid.
- Use `403` if user is logged in but not allowed (e.g., trying to edit someone else's comment).
- Use `500` if your server code broke.
{
"success": true,
"message": "Comment added successfully",
"data": { ... } // or [] or null
}

If something goes wrong:
{
"success": false,
"message": "Video not found",
"error": "Invalid videoId or it may have been deleted"
}

Let me know if you want a helper function to send consistent responses (like a sendResponse() util).
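For reference, such a helper could be as small as this. A minimal sketch: the `buildPayload`/`sendResponse` names and shapes are hypothetical, modeled on the JSON formats shown above:

```javascript
// Hypothetical helper matching the response shapes above.
// buildPayload is a pure function (easy to test); sendResponse just
// wires it to an Express-style res object.
function buildPayload(success, message, dataOrError = null) {
  return success
    ? { success: true, message, data: dataOrError }
    : { success: false, message, error: dataOrError };
}

function sendResponse(res, statusCode, success, message, dataOrError) {
  return res.status(statusCode).json(buildPayload(success, message, dataOrError));
}

// Usage in a controller:
//   sendResponse(res, 201, true, "Comment added successfully", comment);
//   sendResponse(res, 404, false, "Video not found", "Invalid videoId");
console.log(buildPayload(false, "Video not found", "Invalid videoId"));
// → { success: false, message: "Video not found", error: "Invalid videoId" }
```

Keeping the payload builder pure means every route returns the same shape, and the frontend only ever has to check `success`.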
Perfect, Shubham. Let's keep it super clear and focused on your use case:

🎥 "Just fetch latest videos to feed using mongoose-aggregate-paginate-v2"
Run this in terminal:
npm install mongoose-aggregate-paginate-v2

Inside your video model (video.model.js):
import mongoose, { Schema } from "mongoose";
import mongooseAggregatePaginate from "mongoose-aggregate-paginate-v2";
const videoSchema = new Schema(
{
videoFile: String,
thumbnail: String,
title: String,
description: String,
duration: Number,
views: { type: Number, default: 0 },
isPublished: { type: Boolean, default: true },
owner: { type: Schema.Types.ObjectId, ref: "User" },
},
{ timestamps: true }
);
// Register the plugin
videoSchema.plugin(mongooseAggregatePaginate);

// Export the model
export const Video = mongoose.model("Video", videoSchema);

Your simplified getAllVideos function:
import { Video } from "../models/video.model.js";
import { asyncHandler } from "../utils/asyncHandler.js"; // assuming the usual utils path
const getAllVideos = asyncHandler(async (req, res) => {
const { page = 1, limit = 10 } = req.query;
// Step 1: Build aggregation pipeline
const aggregateQuery = Video.aggregate([
{
$match: {
isPublished: true,
},
},
    {
      $sort: {
        createdAt: -1, // Latest videos first
      },
    },
    {
      // Attach owner info with $lookup -- aggregation pipelines
      // cannot use Mongoose populate
      $lookup: {
        from: "users",
        localField: "owner",
        foreignField: "_id",
        as: "owner",
        pipeline: [{ $project: { username: 1, avatar: 1 } }],
      },
    },
    { $addFields: { owner: { $first: "$owner" } } },
  ]);
  // Step 2: Define options
  // Note: unlike mongoose-paginate-v2, the aggregate version has no
  // populate option -- join related data with $lookup in the pipeline
  // if you need it.
  const options = {
    page: parseInt(page),
    limit: parseInt(limit),
  };
// Step 3: Run aggregation with pagination
const videos = await Video.aggregatePaginate(aggregateQuery, options);
// Step 4: Return response
res.status(200).json({
success: true,
message: "Fetched all latest videos",
data: videos,
});
});

This is what the response will look like:
{
"success": true,
"message": "Fetched all latest videos",
"data": {
"docs": [
{
"_id": "abcd123",
"title": "How to Code",
"description": "Best tutorial",
"owner": {
"_id": "user123",
"username": "shubham",
"avatar": "https://cloudinary.com/avatar.jpg"
},
"createdAt": "2024-12-05T12:00:00Z"
},
...
],
"totalDocs": 50,
"limit": 10,
"page": 1,
"totalPages": 5,
"hasNextPage": true,
"hasPrevPage": false,
"nextPage": 2,
"prevPage": null
}
}

- You build a MongoDB aggregation pipeline.
- mongoose-aggregate-paginate-v2:
  - Adds `.skip()` and `.limit()` based on `page` and `limit`.
  - Returns metadata like `totalPages`, `nextPage`, etc.
  - Handles heavy lifting so your code stays clean.
Install → plugin in schema → use aggregatePaginate() → send paginated, sorted data to frontend.
Want me to give you a Postman test example or make a dummy video seed script to test the pagination?