Clips AI is an open-source Python library that automatically converts long videos into clips. With just a few lines of code, you can segment a video into multiple clips and resize its aspect ratio from 16:9 to 9:16.
Note: Clips AI is designed for audio-centric, narrative-based videos such as podcasts, interviews, speeches, and sermons. Our clipping algorithm analyzes a video's transcript to identify and create clips. Our resizing algorithm dynamically reframes videos to focus on the current speaker, converting the video into various aspect ratios.
For full documentation, visit Clips AI Documentation. Check out a UI demo with clips generated by this library.
pip install clipsai
pip install whisperx@git+https://github.com/m-bain/whisperx.git
Since clips are found using the video's transcript, the video must first be transcribed. Transcription is done with WhisperX, an open-source wrapper around Whisper that adds word-level timestamps, i.e. start and stop times for each word.
from clipsai import ClipFinder, Transcriber
transcriber = Transcriber()
transcription = transcriber.transcribe(audio_file_path="/abs/path/to/video.mp4")
clipfinder = ClipFinder()
clips = clipfinder.find_clips(transcription=transcription)
print("StartTime: ", clips[0].start_time)
print("EndTime: ", clips[0].end_time)
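ClipFinder typically returns many candidate clips, each exposing start and end times in seconds. As a minimal post-processing sketch (the Clip tuple below is a stand-in for the library's clip objects, since running the real pipeline requires a video file), candidates can be filtered to a target duration range:

```python
from typing import NamedTuple

class Clip(NamedTuple):
    """Stand-in for the library's clip objects; times are in seconds."""
    start_time: float
    end_time: float

def filter_by_duration(clips, min_s=15.0, max_s=60.0):
    """Keep only clips whose duration falls within [min_s, max_s]."""
    return [c for c in clips if min_s <= c.end_time - c.start_time <= max_s]

candidates = [Clip(0.0, 8.0), Clip(12.5, 47.0), Clip(50.0, 180.0)]
shortlist = filter_by_duration(candidates)
print(shortlist)  # only the 34.5 s clip survives
```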
To actually trim the video using the returned clips, you'll first need to install ffmpeg (and possibly libmagic as well). Note that these are command-line tools, not Python libraries. Once installed, run the following code.
import clipsai

media_editor = clipsai.MediaEditor()
# use this if the file contains audio stream only
media_file = clipsai.AudioFile("/abs/path/to/audio_only_file.mp4")
# use this if the file contains both audio and video stream
media_file = clipsai.AudioVideoFile("/abs/path/to/video.mp4")
clip = clips[0] # select the clip you'd like to trim
clip_media_file = media_editor.trim(
media_file=media_file,
start_time=clip.start_time,
end_time=clip.end_time,
trimmed_media_file_path="/abs/path/to/clip.mp4", # doesn't exist yet
)
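For reference, the trim performed here corresponds to a plain ffmpeg cut. A hedged sketch of building the equivalent command (this is my own helper for illustration, not necessarily how MediaEditor is implemented):

```python
def build_trim_cmd(src, start, end, dst):
    """Build an ffmpeg argument list that cuts out [start, end] seconds of src."""
    return [
        "ffmpeg",
        "-ss", f"{start:.3f}",  # seek to clip start
        "-to", f"{end:.3f}",    # stop at clip end
        "-i", src,
        "-c", "copy",           # stream copy: fast, no re-encode
        dst,
    ]

cmd = build_trim_cmd("/abs/path/to/video.mp4", 12.5, 47.0, "/abs/path/to/clip.mp4")
print(" ".join(cmd))
```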
A Hugging Face access token is required to resize a video, since Pyannote is used for speaker diarization. Pyannote is free to use; setup instructions are on the Pyannote Hugging Face page. To resize the original video to the desired aspect ratio, refer to the resizing reference.
from clipsai import resize
crops = resize(
video_file_path="/abs/path/to/video.mp4",
pyannote_auth_token="pyannote_token",
aspect_ratio=(9, 16)
)
print("Crops: ", crops.segments)
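The crop geometry itself follows from simple arithmetic: for a 9:16 crop of a 16:9 frame, the full height is kept and the width is derived from the target ratio. A small sketch of that calculation (my own helper, not part of the library):

```python
def crop_dims(src_w, src_h, ar_w, ar_h):
    """Largest ar_w:ar_h crop that fits inside a src_w x src_h frame."""
    if src_w * ar_h >= src_h * ar_w:
        # Source is wider than the target ratio: keep full height, narrow the width.
        return (src_h * ar_w // ar_h, src_h)
    # Source is taller than the target ratio: keep full width, shorten the height.
    return (src_w, src_w * ar_h // ar_w)

print(crop_dims(1920, 1080, 9, 16))  # → (607, 1080): full height, ~9:16 width
```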
As with trimming, applying the returned crops requires ffmpeg (and possibly libmagic) to be installed; these are command-line tools, not Python libraries. Once installed, run the following code.
import clipsai

media_editor = clipsai.MediaEditor()
# use this if the file contains video stream only
media_file = clipsai.VideoFile("/abs/path/to/video_only_file.mp4")
# use this if the file contains both audio and video stream
media_file = clipsai.AudioVideoFile("/abs/path/to/video.mp4")
resized_video_file = media_editor.resize_video(
original_video_file=media_file,
resized_video_file_path="/abs/path/to/resized/video.mp4", # doesn't exist yet
width=crops.crop_width,
height=crops.crop_height,
segments=crops.to_dict()["segments"],
)
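The segments passed in above describe where the crop window sits over time, one entry per stretch of video. As a stand-in sketch (the field names below are assumptions for illustration, not a documented schema), downstream code can sanity-check that the segments tile the video with no gaps:

```python
# Hypothetical segment dicts mimicking crops.to_dict()["segments"];
# the field names are assumptions for illustration.
segments = [
    {"start_time": 0.0, "end_time": 12.4, "x": 656, "y": 0, "speakers": [0]},
    {"start_time": 12.4, "end_time": 30.1, "x": 120, "y": 0, "speakers": [1]},
]

def check_contiguous(segs):
    """Verify each segment starts exactly where the previous one ended."""
    return all(a["end_time"] == b["start_time"] for a, b in zip(segs, segs[1:]))

print(check_contiguous(segments))  # True for the sample above
```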