My previous version has flaws. It works well for opaque GIFs, but OpenCV is notoriously poor at handling transparency. So instead of OpenCV I use imageio to iterate over the GIF frames and PIL to paste each frame onto the static background.
Why do more?
The original algorithm left artifacts; look at the cyan block at the intersection of the GIF frame and a darker abstract part:
Initially, I aimed to make one big post, but I am splitting it up. This Part 1 is about how I made the frames to generate videos from.
I have been working on this project on and off for a few months. Many things did not work for me – from ffmpeg itself to simply installing OpenCV and ffmpeg on Apple M1 systems – and to this day I am not 100% sure what made it work. I plan to put myself in a situation where I have to do it again, and I will make a guide then. For now, I can only say that this video tutorial on running native ffmpeg and this guide from OpenCV themselves are good places to start.
The last time I used OpenCV was in 2017 when I worked on my graduate work. Back then, I processed prerecorded videos with no need to worry about audio. So I had some background prior to this project, but I was in no way ready for the ride.
BTW, if you want to run ffmpeg or any other command inside a Python script (a very reasonable idea), use subprocess:
command = "ffmpeg -f concat -safe 0 -i list.txt -c copy final_video.avi -y"
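One way to actually execute such a command string is the sketch below. The `run_command` helper is my own name for illustration; `shlex.split` turns the string into an argument list so you can avoid `shell=True` and its quoting pitfalls:

```python
import shlex
import subprocess

def run_command(command: str) -> subprocess.CompletedProcess:
    # shlex.split converts the command string into an argument list,
    # so the command runs without invoking a shell
    return subprocess.run(shlex.split(command), capture_output=True, text=True, check=True)

# The ffmpeg concat command from above would then be invoked as:
# run_command("ffmpeg -f concat -safe 0 -i list.txt -c copy final_video.avi -y")
```

With `check=True`, a nonzero ffmpeg exit code raises `CalledProcessError` instead of failing silently, and `result.stderr` holds ffmpeg's log output for debugging.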
As a result, the videos got smashed together, and some frames never appeared at all.