Code2Video is a simple project that converts code snippets into video format (mp4). This is achieved by using the `carbon-now-cli` tool to generate images from code, and `ffmpeg` to assemble these images into a video.
This project is divided into four main components, each responsible for a specific task in the conversion process:
- `code2partial.py`:
  - Purpose: Splits the input code into smaller parts or segments, simulating the effect of typing out the code step by step. This is particularly useful for creating videos that visually mimic the process of code being written or revealed gradually.
  - Input: A code file (e.g., `main.py`).
  - Output: A set of smaller code files, each containing a portion of the original code, which can then be sequentially converted into images.
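The splitting step can be sketched as cumulative prefixes of the source lines. This is only a sketch of the idea; the function names and the one-file-per-step output layout are assumptions, not the project's actual implementation:

```python
from pathlib import Path


def split_code(source: str) -> list[str]:
    """Return cumulative snapshots of the code, one per line,
    simulating the code being typed out line by line."""
    lines = source.splitlines()
    return ["\n".join(lines[: i + 1]) for i in range(len(lines))]


def write_partials(source_file: str, out_dir: str) -> None:
    """Write each snapshot as <out_dir>/<n>.py (hypothetical layout)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    code = Path(source_file).read_text()
    for i, snapshot in enumerate(split_code(code), start=1):
        (out / f"{i}.py").write_text(snapshot)
```

Numbering the output files `1.py`, `2.py`, … keeps them in the sequential order that the later `%d.png` pattern for `ffmpeg` relies on.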
- `partial2image.py`:
  - Purpose: Converts each code segment into an image using the `carbon-now-cli` tool.
  - Input: The partial code files generated by `code2partial.py`.
  - Output: Image files (e.g., `.png` or `.jpg`) representing each code segment.
Example configuration for carbon-now-cli:

```json
{
  "latest-preset": {
    "theme": "monokai",
    "backgroundColor": "#272822",
    "windowTheme": "none",
    "windowControls": false,
    "fontFamily": "Fira Code",
    "fontSize": "30px",
    "lineNumbers": true,
    "firstLineNumber": "1",
    "dropShadow": false,
    "dropShadowOffsetY": "20px",
    "dropShadowBlurRadius": "68px",
    "selectedLines": "*",
    "widthAdjustment": false,
    "lineHeight": "133%",
    "paddingVertical": "60px",
    "paddingHorizontal": "40px",
    "squaredImage": false,
    "watermark": false,
    "exportSize": "4x",
    "type": "png"
  }
}
```
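Invoking `carbon-now-cli` per segment can be sketched as below. Note that the CLI's flags differ between releases: `--save-to`/`--save-as` are from recent versions, while older releases used `-l`/`-t`, so check `carbon-now --help` for your installed version before relying on this:

```python
import subprocess


def carbon_command(code_file: str, out_dir: str, name: str) -> list[str]:
    """Build a carbon-now-cli invocation for one code segment.
    Flag names are an assumption; verify against your installed version."""
    return [
        "carbon-now", code_file,
        "--save-to", out_dir,
        "--save-as", name,
    ]


def render_segment(code_file: str, out_dir: str, name: str) -> None:
    # Runs the CLI; requires carbon-now-cli to be installed globally via npm.
    subprocess.run(carbon_command(code_file, out_dir, name), check=True)
```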
- `image2frame.py`:
  - Purpose: Ensures that all the generated images are resized or adjusted to the same dimensions, preparing them for seamless video creation.
  - Input: The image files generated by `partial2image.py`.
  - Output: Resized image files, ready for video conversion.
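One way to normalize the frames is to pad each image onto a fixed-size canvas rather than stretching it. This is a minimal Pillow-based sketch, assuming the canvas color matches the carbon background and the target size is chosen by the caller; it is not the project's actual code:

```python
from PIL import Image  # requires Pillow (pip install Pillow)


def even(n: int) -> int:
    """Round up to the nearest even number (codec-friendly)."""
    return n + (n % 2)


def pad_to_canvas(img: Image.Image, width: int, height: int,
                  color: str = "#272822") -> Image.Image:
    """Center the image on a fixed-size canvas instead of stretching it."""
    canvas = Image.new("RGB", (width, height), color)
    x = (width - img.width) // 2
    y = (height - img.height) // 2
    canvas.paste(img, (x, y))
    return canvas
```

A typical driver would compute the canvas size once, as the even-rounded maximum width and height across all generated images, then pad every frame to that size.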
- `cover.py`:
  - Purpose: Generates a cover image for the video. This image can serve as a title slide or any other introductory visual that you wish to include at the beginning of the video.
  - Input:
    - `image_path`: The path to a background image that will be used for the cover.
    - `title`: The text that will be displayed on the cover image, typically the title of the video or any relevant introductory text.
  - Output: A cover image file.
After the images are generated and formatted, use `ffmpeg` to compile these images into a video. This tool is responsible for converting the sequence of images into a final `.mp4` video file.
```sh
ffmpeg -loop 1 -t 1 -i cover.png \
  -framerate 10 -i $(GENERATED_FRAMES_FOLDER)/%d.png \
  -filter_complex "[0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0];[1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1];[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]" \
  -map "[v]" -c:v libx264 -r 10 output.mp4
```
Explanation of Each Parameter (Automatically Generated by ChatGPT):
- `-loop 1`:
  - Loops the input image indefinitely. The value `1` means to loop the image, while `0` would mean no looping.
- `-t 1`:
  - Sets the duration of the first input (the cover image) to 1 second. This means the cover image will be shown for 1 second in the video.
- `-i cover.png`:
  - Specifies the first input file, which is the `cover.png` image. This image will be used as the cover slide at the beginning of the video.
- `-framerate 10`:
  - Sets the frame rate for the following input (the generated frames) to 10 frames per second (fps). This means the video will display 10 frames per second.
- `-i $(GENERATED_FRAMES_FOLDER)/%d.png`:
  - Specifies the second input, which is a sequence of images located in the folder specified by the environment variable `GENERATED_FRAMES_FOLDER`. The `%d` is a placeholder for the image sequence numbers (e.g., `1.png`, `2.png`, etc.).
- `-filter_complex "[0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0];[1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1];[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]"`:
  - `-filter_complex`: Specifies a complex filter chain. Here's a breakdown of the filter steps:
    - `[0:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v0]`: Scales the first input (cover image) to an even width and height by truncating the input width (`iw`) and height (`ih`) to the nearest even number. This is necessary because some video codecs require even dimensions. The result is stored in the alias `[v0]`.
    - `[1:v]scale=trunc(iw/2)*2:trunc(ih/2)*2[v1]`: Similarly, scales the second input (the sequence of images) to even dimensions. The result is stored in the alias `[v1]`.
    - `[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]`: Concatenates the two video streams `[v0]` (cover image) and `[v1]` (image sequence) into a single video. The `n=2` specifies the number of video segments to concatenate, `v=1` specifies one video stream, and `a=0` specifies no audio streams. The `format=yuv420p` ensures the output video is in the YUV 4:2:0 pixel format, which is widely compatible with most video players. The final output is stored in the alias `[v]`.
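The `trunc(iw/2)*2` expression simply rounds a dimension down to the nearest even number. In Python terms:

```python
def ffmpeg_even(dim: int) -> int:
    """Equivalent of ffmpeg's trunc(d/2)*2: round down to an even number."""
    return (dim // 2) * 2
```

So an odd width like 1921 becomes 1920, while an already even dimension like 1080 is unchanged.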
- `-map "[v]"`:
  - Maps the filtered video output `[v]` to the final output file. This tells `ffmpeg` to use the result of the previous filter chain as the video stream in the output file.
- `-c:v libx264`:
  - Specifies the video codec to use for encoding the video. `libx264` is a widely used codec for H.264 video compression.
- `-r 10`:
  - Sets the output video frame rate to 10 frames per second. This ensures that the final video will run at 10 fps.
- `output.mp4`:
  - The name of the final output video file.
- carbon-now-cli: This tool is used to convert code snippets into beautiful images. Install it via npm:

  ```sh
  npm install -g carbon-now-cli
  ```

- ffmpeg: A versatile tool to handle multimedia data, specifically used here for video creation. Installation instructions can be found on the ffmpeg website.
1. Split the code: Run `code2partial.py` to divide your code into smaller parts.
2. Convert to images: Use `partial2image.py` to generate images from these code segments.
3. Adjust image size: Run `image2frame.py` to ensure all images are the same size.
4. Create a cover image (optional): Use `cover.py` if you want a custom cover for your video.
5. Generate the video: Compile everything into a video using `ffmpeg`.
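The final `ffmpeg` step can also be assembled programmatically, which avoids shell-quoting issues with the filter string. This is a sketch of the exact command shown above; the function name and defaults are assumptions:

```python
import subprocess


def build_ffmpeg_cmd(cover: str, frames_dir: str, fps: int = 10,
                     out: str = "output.mp4") -> list[str]:
    """Assemble the ffmpeg invocation documented above as an argv list."""
    scale = "scale=trunc(iw/2)*2:trunc(ih/2)*2"
    filt = (f"[0:v]{scale}[v0];[1:v]{scale}[v1];"
            "[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]")
    return [
        "ffmpeg",
        "-loop", "1", "-t", "1", "-i", cover,
        "-framerate", str(fps), "-i", f"{frames_dir}/%d.png",
        "-filter_complex", filt,
        "-map", "[v]",
        "-c:v", "libx264",
        "-r", str(fps),
        out,
    ]


# Example (requires ffmpeg on PATH and the generated frames to exist):
# subprocess.run(build_ffmpeg_cmd("cover.png", "frames"), check=True)
```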