Benjamin Raffetseder

Merging Videos with FFmpeg in a React Application

In this blog post, we will explore a fascinating use case of FFmpeg in a React application. We will be merging video clips fetched from an API, adjusting their frame rates, and ensuring they all have the same resolution.

Project Overview

One of the key features of our application is the ability to merge video clips. To achieve this, we use FFmpeg WASM, a powerful tool that can read, write, and convert multimedia files, compiled to WebAssembly (WASM) for use in the browser. This allows us to merge videos on the client-side without having to send the videos to a server.
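Before we get into the FFmpeg details, here is a rough sketch of how the merge flow could hang together on the React side. The ClipMerger component and the mergeClips helper are illustrative assumptions, not the application's actual code; mergeClips would simply wrap the fetch, normalize and concat steps described below.

import { useState } from "react"
import { mergeClips } from "./mergeClips" // hypothetical helper wrapping the steps below

// Sketch: a button that kicks off the merge for the currently selected clips
function ClipMerger({ selectedClips }: { selectedClips: string[] }) {
  const [isMerging, setIsMerging] = useState(false)

  const handleMerge = async () => {
    setIsMerging(true)
    try {
      await mergeClips(selectedClips) // fetch, normalize and concat
    } finally {
      setIsMerging(false)
    }
  }

  return (
    <button onClick={handleMerge} disabled={isMerging || selectedClips.length < 2}>
      {isMerging ? "Merging…" : "Merge clips"}
    </button>
  )
}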

Fetching and Preparing the Clips

The first step in our process is to fetch the video clips. We do this by making a POST request to our API endpoint (/api/twitch) for each selected clip. The response is a Blob object, which we convert to an ArrayBuffer for further processing.

// Download each selected clip through our API and keep it as an ArrayBuffer
const clipArrayBuffers = await Promise.all(
  selectedClips.map(async (clip) => {
    const response = await axios.post(
      `/api/twitch`,
      {
        clipUrl: clip,
      },
      {
        responseType: "blob",
      },
    )
    const blob = new Blob([response.data], { type: "video/mp4" })

    return await blob.arrayBuffer()
  }),
)

Initializing FFmpeg and Adjusting Frame Rate

Next, we create an FFmpeg instance and load the WebAssembly core. We then write each clip to FFmpeg's in-memory filesystem, adjust the frame rate to 30 fps with the fps filter, and normalize the resolution with the scale filter so that every clip matches.

import { createFFmpeg } from "@ffmpeg/ffmpeg"

const ffmpeg = createFFmpeg({ log: true })
await ffmpeg.load()

const adjustedClipNames: string[] = []
for (let index = 0; index < clipArrayBuffers.length; index++) {
  const buffer = clipArrayBuffers[index]
  const inputFileName = `clip${index + 1}.mp4`
  const outputFileName = `clip${index + 1}-30fps.mp4`

  // Write the clip into FFmpeg's in-memory filesystem
  ffmpeg.FS("writeFile", inputFileName, new Uint8Array(buffer!))

  // Adjust frame rate and resolution (-2 keeps the width even, as H.264 requires)
  await ffmpeg.run(
    "-i",
    inputFileName,
    "-vf",
    "fps=30,scale=-2:720",
    outputFileName,
  )

  adjustedClipNames.push(outputFileName)
}
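Transcoding in the browser can take a while, so it is worth surfacing progress in the UI. The createFFmpeg API exposes a setProgress callback that you can register right after ffmpeg.load(); the setMergeProgress state setter below is a hypothetical example of wiring it into React state, not part of the original code.

// Sketch: forward FFmpeg's progress (a ratio from 0 to 1) into React state
ffmpeg.setProgress(({ ratio }) => {
  setMergeProgress(Math.round(ratio * 100)) // hypothetical useState setter
})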

Merging the Clips

Finally, we merge the clips using FFmpeg's concat filter. Because the concat filter requires its inputs to share the same frame rate and dimensions, the normalization step above is what makes this possible. The result is a single video file that contains all the selected clips, each playing one after the other.

// Build the input arguments and the filter_complex string for FFmpeg
const inputs = adjustedClipNames.flatMap((name) => ["-i", name])
const filter = `concat=n=${adjustedClipNames.length}:v=1:a=1`

// Run FFmpeg to merge the clips into a single file
await ffmpeg.run(
  ...inputs,
  "-filter_complex",
  filter,
  "output.mp4",
)
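A quick note on how FFmpeg wires the streams here: with unlabeled pads, the concat filter picks up the first unused video and audio stream of each input in order, which is exactly what we want. If you prefer to make that wiring explicit (for example, to -map the outputs yourself), a labeled variant for two clips would look roughly like this, using the file names produced in the loop above:

// Sketch: the same merge with explicit stream labels, assuming two clips
await ffmpeg.run(
  "-i", "clip1-30fps.mp4",
  "-i", "clip2-30fps.mp4",
  "-filter_complex",
  "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]",
  "-map", "[v]",
  "-map", "[a]",
  "output.mp4",
)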

We then read the merged video from FFmpeg's in-memory filesystem and convert it to a Blob. From the Blob we create an object URL and use it to trigger a download of the video.

// Read the merged file out of the in-memory filesystem
const data = ffmpeg.FS("readFile", "output.mp4")
const videoBlob = new Blob([data.buffer], { type: "video/mp4" })

// Trigger the download via a temporary anchor element
const url = URL.createObjectURL(videoBlob)
const a = document.createElement("a")

a.href = url
a.download = "merged.mp4"
a.click()
a.remove()
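The intermediate files still live in FFmpeg's in-memory filesystem at this point, so once the download has been triggered it is worth cleaning up after ourselves. A small sketch; the unlink calls mirror the file names used above:

// Release the object URL and remove the files from the in-memory filesystem
URL.revokeObjectURL(url)
adjustedClipNames.forEach((name, index) => {
  ffmpeg.FS("unlink", `clip${index + 1}.mp4`) // original download
  ffmpeg.FS("unlink", name) // 30 fps version
})
ffmpeg.FS("unlink", "output.mp4")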

Conclusion

By leveraging the power of FFmpeg and the flexibility of WebAssembly, we can perform complex video processing tasks directly in the browser. This approach is versatile, since FFmpeg supports a wide range of media formats, and because nothing ever leaves the client there is no upload round trip. It also means we never have to run the videos through a server, which can be a huge cost saver.

However, there is a downside to this approach: we are limited by the resources available on the client's machine. If the client's computer is not powerful enough, the video processing may take a long time to complete. And even though WebAssembly is much faster than JavaScript, it still can't match the performance of native code, especially for heavy tasks like video processing.