SEO and Metadata optimization on NextJS

Fri Feb 02 2024


SEO is one of the most important pieces of a website because, when it's done correctly, it allows our website to be discovered more easily by search engines. In this post I'll show you how I do SEO and metadata optimization on my NextJS websites without using any external libraries.

For this post I'll use the source code of my Caesium project, which I use as the website for The Batuhan's Network SMP server. Let's start.

Metadata optimization

What I mean by metadata optimization is improving the way our sites handle metadata so that search engines and social media sites can show information about the content without loading the entire page. That also makes our website more accessible when it's shared on places like Mastodon or Reddit.

For that, we're going to add an Open Graph section to our metadata. If you're wondering what Open Graph is, it's a protocol created by Facebook to standardize how metadata is handled on a webpage. With NextJS's built-in support, we can easily add Open Graph to our webpage.

Static Metadata

Static metadata is the main source of information about our website. It's typically defined in the app/layout.jsx file and used whenever dynamic metadata is not defined. You can create static metadata simply by exporting a metadata object from your layout.jsx file like the one below.

const websiteURL = process.env.NODE_ENV === 'production'  
    ? `https://www.tbnmc.xyz/`  
    : 'http://localhost:3000/'  
  
export const metadata = {  
  metadataBase: new URL(websiteURL),
  title: "Caesium",  
  description: "Caesium is a simple markdown based website designed to be used as a website for simple Minecraft Servers.",  
}
  • As you can see, this metadata object is quite small. That's because we only added the fields that browsers will use. For Open Graph and Twitter, we'll add separate sections.
  • The code also contains a websiteURL variable that's used as metadataBase in our metadata. For now it does nothing for our site, but later it'll be needed for generated Open Graph images.
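To see what metadataBase actually does, here's a small standalone sketch (plain Node, not NextJS-specific): relative URLs in your metadata, such as Open Graph paths, are resolved against it with standard WHATWG URL semantics.

```javascript
// metadataBase resolution behaves like the WHATWG URL constructor:
// relative metadata URLs are resolved against the base.
const metadataBase = new URL("https://www.tbnmc.xyz/")

// A relative path like the ones we'll use for posts later...
const relativePath = "/posts/my-first-post"

// ...is turned into the absolute URL that ends up in the meta tags.
const resolved = new URL(relativePath, metadataBase).toString()

console.log(resolved) // https://www.tbnmc.xyz/posts/my-first-post
```

This is why setting metadataBase once in the layout lets us write short relative URLs everywhere else.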

Open Graph and Twitter / X

Now it's time to add our Open Graph and Twitter sections. It's as simple as adding the sections below to the end of our metadata object.

openGraph: {
    title: "Caesium",
    description: "Caesium is a simple markdown based website designed to be used as a website for simple Minecraft Servers.",
    url: websiteURL,
    siteName: "Caesium Project",
    type: "website",
    images: [
        {
            url: "https://raw.githubusercontent.com/B4tuhanY1lmaz/Caesium/main/public/photos/9.png",
            secureUrl: "https://raw.githubusercontent.com/B4tuhanY1lmaz/Caesium/main/public/photos/9.png",
            width: 1200,
            height: 900,
            alt: "Preview image for TbnMC Website"
        }
    ]
},
twitter: {
    card: "summary_large_image", // One of "summary", "summary_large_image", "app" or "player".
    site: "@caesium", // Replace this with your Twitter username.
    title: "Caesium",
    description: "Caesium is a simple markdown based website designed to be used as a website for simple Minecraft Servers.",
    creator: "@caesium", // Replace this with your Twitter username.
    images: {
        url: "https://raw.githubusercontent.com/B4tuhanY1lmaz/Caesium/main/public/photos/9.png",
        alt: "Preview image"
    }
}

After that, when we build our site and look at the generated HTML, we can see our metadata like the example below.

Meta tags

This means our static metadata works as expected.

Dynamic Metadata

The static metadata approach isn't a good option for things like blog pages: the metadata of these pages should describe what's on that specific page.

Unlike static metadata, we need to create a function named generateMetadata() in our dynamic route and return our metadata from there. It sounds a bit complicated, but in the end it's as simple as creating static metadata. Here are all the steps you need.

  1. Create and export a function named generateMetadata and pass it the params object.
  2. Fetch the page data. For me this is a simple function exported from another file, but you can use your own way of fetching the page data.
  3. Return the metadata object.

After that you should end up with a function that looks like this.

import config from "@/config/siteconfig.json";
import { getPostContent } from "@/libs/getPostContent";

export async function generateMetadata({ params }) {
    const slug = params.slug // Our dynamic route
    const postContent = getPostContent(slug)

    return {
        title: postContent.data.title,
        description: postContent.data.description,
        openGraph: {
            title: postContent.data.title,
            description: postContent.data.description,
            url: `/posts/${slug}`,
            type: "article",
            siteName: config.siteName,
            publishedTime: new Date(postContent.data.date).toISOString(),
            modifiedTime: new Date(postContent.data.date).toISOString(),
            authors: config.authorName,
            images: [
                {
                    url: `https://${config.siteUrl}/${postContent.data.image}`,
                    secureUrl: `https://${config.siteUrl}/${postContent.data.image}`,
                    width: 1200,
                    height: 630,
                    alt: `Preview image for ${postContent.data.title}`,
                }
            ],
        },
        twitter: {
            card: "summary_large_image", // One of "summary", "summary_large_image", "app" or "player".
            site: "@caesium", // Replace this with your Twitter username.
            title: postContent.data.title,
            description: postContent.data.description,
            creator: "@caesium", // Replace this with your Twitter username.
            images: {
                url: postContent.data.image,
                alt: `Preview image for post ${postContent.data.title}`
            }
        }
    }
}
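The getPostContent helper comes from elsewhere in the project and isn't shown in this post. As a rough idea of what such a helper does, here's a minimal, dependency-free sketch. The function name and the frontmatter format are my assumptions, not Caesium's actual code; real projects usually reach for a library like gray-matter instead.

```javascript
// Hypothetical sketch of the parsing a getPostContent-style helper does:
// split a markdown file into frontmatter data and the markdown body.
function parseFrontmatter(raw) {
    // Frontmatter is the block between the opening and closing "---" fences.
    const match = raw.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n?/)
    if (!match) return { data: {}, content: raw }

    // Parse each "key: value" line into the data object.
    const data = {}
    for (const line of match[1].split(/\r?\n/)) {
        const idx = line.indexOf(":")
        if (idx === -1) continue
        data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim()
    }

    // Everything after the closing fence is the markdown body.
    return { data, content: raw.slice(match[0].length) }
}

const post = parseFrontmatter(
    "---\ntitle: Hello World\ndate: 2024-02-02\n---\n# My post body"
)
console.log(post.data.title) // Hello World
```

Whatever shape your own helper returns, the only requirement is that generateMetadata can read the title, description, date, and image from it.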

After that, when you go to your blog page, your HTML should contain every metadata field we returned from our function. Here is an example from one of my posts.

Dynamic Meta Tags

Automated image generation for Open Graph and Twitter

You may have seen that sites like GitHub automatically generate a preview image whenever you share a link somewhere like Discord or Mastodon.

github example

And as you'll see, generating them is actually much simpler than you might think, thanks to NextJS's built-in support.

We need to create a file named opengraph-image.jsx in the directory for which we want to generate the image, like the example below.

// app/blog/[slug]/opengraph-image.jsx
import { ImageResponse } from "next/og"
import { getPostContent } from "@/libs/getPostContent"

export const contentType = 'image/png'
export const alt = 'Example automated Open Graph image'

export default function Image({ params }) {
    const slug = params.slug
    const postContent = getPostContent(slug)

    // Note: this JSX is rendered by Satori, not a browser, so className
    // (including classes from next/font) has no effect here. Custom fonts
    // can instead be passed via the fonts option in ImageResponse's
    // second argument.
    return new ImageResponse((
        <div
        style={{
            fontSize: 150,
            background: 'white',
            width: '100%',
            height: '100%',
            display: 'flex',
            alignItems: 'center',
            justifyContent: 'center'
        }}
        >
            {postContent.data.title}
        </div>
    ), {
        width: 1200,
        height: 630
    })
}

I know this example looks a bit complicated, right? Let me explain what each part does.

Image generation is really just a piece of JSX that gets rendered whenever something requests the image. At the top of the file we import ImageResponse and our post helper, and export the content type and alt text of the image. At the end, we generate the image from our JSX by returning an ImageResponse with the desired dimensions.

After that, when you check the meta tags of your website, you'll see the generated image in the og:image tag.

And that's it! We're done with our metadata optimization, and the only things left are creating our sitemap and robots.txt files.

Sitemap

sitemap.xml, as the name suggests, is a file that maps all the pages of our website, allowing search engines to discover every page we have. With NextJS's built-in support for sitemaps, we can create one simply by adding a new file named sitemap.js inside our app/ directory.

Here is my sitemap file below.

import { getPostMetadata } from "@/libs/getPostMetadata"  
import config from "@/config/siteconfig.json"  
  
export default async function sitemap() {
    const allPosts = getPostMetadata() // Get all posts

    const home = {
        url: `https://${config.siteUrl}`,
        lastModified: new Date().toISOString()
    }  // Home page

    if (!allPosts) return [home]  // Return only the homepage if there are no posts.

    const posts = allPosts.map((post) => ({
        url: `${home.url}/blog/${post.slug}`,
        lastModified: post.date
    }))  // Map posts to an array.

    return [
        home,
        ...posts
    ]
}

In this code I'm using the getPostMetadata function to fetch all the posts and their metadata. After fetching them, I map them into the posts array in a format NextJS understands and return everything at the end of the file. When you save the file and go to localhost:3000/sitemap.xml, you should get an output like the one below.

<urlset>
    <url>
        <loc>https://example.com</loc>
        <lastmod>2024-02-02T18:58:43.000Z</lastmod>
    </url>
    <url>
        <loc>https://example.com/blog/2023-09-30-markdown-test-post</loc>
        <lastmod>2023-10-30T00:00:00.000Z</lastmod>
    </url>
</urlset>
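The mapping step can also be tried in isolation. Here's a small pure-function version (buildSitemapEntries is a hypothetical name, not part of NextJS) that takes the site root and a post list and returns the array shape a sitemap() function is expected to return:

```javascript
// Builds the array that a NextJS sitemap() function returns.
// siteRoot is the absolute origin; allPosts is an array of { slug, date }.
function buildSitemapEntries(siteRoot, allPosts) {
    const home = {
        url: siteRoot,
        lastModified: new Date().toISOString()
    }

    // With no posts, the sitemap only lists the homepage.
    if (!allPosts || allPosts.length === 0) return [home]

    return [
        home,
        ...allPosts.map((post) => ({
            url: `${siteRoot}/blog/${post.slug}`,
            lastModified: post.date
        }))
    ]
}

const entries = buildSitemapEntries("https://example.com", [
    { slug: "2023-09-30-markdown-test-post", date: "2023-10-30T00:00:00.000Z" }
])
console.log(entries[1].url) // https://example.com/blog/2023-09-30-markdown-test-post
```

Keeping this logic in a plain function like the above also makes it easy to unit-test without running the NextJS build.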

Now we can create our robots.txt file and build our site for production.

Robots.txt

The robots.txt file guides search engine crawlers on which pages to access or ignore. Unlike server-side rules such as .htaccess, it's only advisory, and not all bots follow it. Just like sitemaps, NextJS has had built-in support for robots.txt generation since NextJS 13.3.

Here is my robots.js file that allows all crawlers on our website.

import config from "@/config/siteconfig.json"  
  
export default function robots() {  
    return {  
        rules: {  
            userAgent: '*',  
            allow: '/',  
        },  
        sitemap: `https://${config.siteUrl}/sitemap.xml`,  
    }  
}

Now when you go to localhost:3000/robots.txt you should get an output like this.

User-Agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml

And this is it! Now the only things left are submitting your sitemap to Google, creating content for your site, and sharing it in as many places as possible.

Hopefully this helps you build your next NextJS project. If you have any questions or need any help, you can always send me an email or join our Discord server. As always, thanks for reading, and see you in the next one!


Thank You!

02.02.2024 - 40/100