
Building an Autonomous Sunset Timelapse System: Part 2 - Historical Processing


historical-processing api-integration video-processing data-pipeline automation python projects

When your live sunset capture system fails, you need a backup plan. Here’s how I built a system to recover lost footage from the camera’s memory card and process previously saved videos.


The Backup Plan: When Live Capture Isn’t Enough

My automated sunset timelapse system works great… until it doesn’t. Network outages, power interruptions, or even a simple WiFi hiccup can cause missed captures. But here’s the thing: I’ve configured the camera to record the daily sunset timeframe to its SD card anyway.

This created two opportunities:

  1. Recovery: When the live system fails, I needed a way to recover those missed sunsets from the camera’s memory
  2. Historical Access: The camera had lots of existing footage that could be turned into timelapses

The solution? Build a historical analyzer that could dig through the camera’s stored videos, find the sunset moments, and create timelapses from footage that was already sitting there waiting.

Project Overview: Historical Processing Architecture

Hardware Setup

  • Processing: MacBook Pro (for CPU-intensive video processing)
  • Camera: Same Reolink RLC810-WA with SD card storage
  • Network: WiFi, but I really need to get off my rear and run a CAT-6 cable to my patio for better speed and reliability

Software Stack (Historical Focus)

  • Python 3.9+ with concurrent processing capabilities
  • reolinkapipy: Community library for reliable Reolink API access
  • OpenCV: Video analysis and frame extraction
  • FFmpeg: Bulk video processing and format conversion
  • tqdm: Progress tracking for long-running operations
  • ThreadPoolExecutor: Parallel downloads and processing

Historical Processing Capabilities

  • SD card footage retrieval via multiple API approaches
  • Bulk video downloads with chunked streaming and progress tracking
  • Frame extraction pipeline converting motion videos to timelapse frames
  • Batch video creation for multiple days simultaneously
  • Automatic deduplication and quality validation

The Great API Hunt: Three Attempts to Talk to My Camera

Getting data from a security camera turns out to be surprisingly tricky. Cameras aren’t really designed to be friendly to DIY projects—they’re built for security systems that use specific protocols. I tried three different approaches before finding one that actually worked.

Attempt #1: The “Universal” Standard (Spoiler: It Wasn’t)

I started with something called ONVIF, which is supposed to be a universal way to talk to any security camera.

The reality: ONVIF was a disaster. It would work sometimes, fail other times, give me the wrong timestamps, or just time out completely. ONVIF works fine for basic stuff like “show me the camera feed,” but asking it to reliably find specific recordings? Not happening.

Attempt #2: The “Official” HTTP API

My research suggested that Reolink cameras have their own native HTTP API for developers. Great! This should be straightforward, right? I found some documentation online and started building requests to talk directly to the camera’s web interface.

The reality: The HTTP API existed, but it was definitely not designed with developers in mind. It felt like trying to use a tool that was never quite finished.

The problems:

  • Authentication tokens would expire at random times, breaking everything mid-process
  • Every camera firmware update seemed to change how the API worked
  • The documentation was sparse and often outdated
  • Large video downloads would fail partway through with no way to resume
  • Error messages were cryptic or nonexistent

Attempt #3: The Community Solution

Just when I was about to give up, I discovered reolinkapipy—a library built by other developers who had encountered the same frustrations. Instead of everyone solving the same problems individually, this was a collaborative effort to create a reliable way to work with Reolink cameras.

This library actually worked. It handled authentication properly, dealt with firmware differences, and provided clean, consistent responses.

Why the community library was the winner:

Battle-tested reliability: The library was built by developers actually using Reolink cameras day-to-day. They’d already encountered and solved all the edge cases I was discovering—firmware compatibility issues, authentication problems, network hiccups, and failed large file downloads.

Clean data handling: Instead of wrestling with messy API responses full of nested objects and cryptic field names, the library gave me clean, consistent data structures that were easy to work with.

Active maintenance: When issues came up, there was usually already discussion and solutions available on GitHub. I wasn’t debugging camera quirks alone.

It simply worked: Fewer failed downloads, more reliable authentication, and better error handling when things did go wrong.
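
To show what that looks like in practice, here’s a minimal sketch of searching the SD card for recordings and downloading them. The IP address and credentials are placeholders, and the method names reflect my reading of the library’s recording and download helpers, so verify them against the project’s README for the version you install.

```python
import os
from datetime import datetime
from reolinkapi import Camera  # pip install reolinkapi

# Placeholders: substitute your camera's IP and credentials
cam = Camera("192.168.1.50", "admin", "my_password")

# Search the SD card for motion recordings inside the sunset window.
# get_motion_files()/get_file() are my reading of the library's API
# and may differ between versions -- check the README.
start = datetime(2024, 1, 15, 17, 0)
end = datetime(2024, 1, 15, 19, 0)
recordings = cam.get_motion_files(start=start, end=end, streamtype="main")

# Pull each recording down to a local folder
os.makedirs("downloads", exist_ok=True)
for rec in recordings:
    local_name = os.path.join("downloads", rec["filename"].replace("/", "_"))
    cam.get_file(rec["filename"], output_path=local_name)
```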

The Download Challenge: Moving Gigabytes of Data

Even with a working API, I still had to actually download all that video data. We’re talking about 35+ video files per day, each around 50MB, totaling ~2GB per day. That’s a lot of data moving over my home WiFi.

The Smart Download Strategy

The key insight was to download in small chunks rather than trying to pull entire 50MB files at once (see the sketch after this list). This “chunked streaming” approach solved several problems:

  • Memory efficiency: Instead of loading a 50MB file entirely into RAM, the system only holds 8KB at a time
  • Resume capability: If a download fails partway through, you can pick up where you left off
  • Progress tracking: Real-time updates on download speed and time remaining
  • Network resilience: Brief internet hiccups don’t kill the entire download
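
Here’s a minimal sketch of that download loop using requests and tqdm. The URL and authentication are placeholders, since in the real pipeline the playback URL comes from the camera API, and resuming assumes the endpoint honors HTTP Range requests:

```python
import os
import requests
from tqdm import tqdm

CHUNK_SIZE = 8192  # 8KB chunks keep memory usage tiny

def download_chunked(url, dest_path, auth=None):
    """Stream a large file to disk, resuming from a partial download if one exists."""
    resume_from = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    headers = {"Range": f"bytes={resume_from}-"} if resume_from else {}

    with requests.get(url, headers=headers, auth=auth, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        total = int(resp.headers.get("Content-Length", 0)) + resume_from
        mode = "ab" if resume_from else "wb"
        with open(dest_path, mode) as f, tqdm(
            total=total, initial=resume_from, unit="B", unit_scale=True,
            desc=os.path.basename(dest_path),
        ) as bar:
            for chunk in resp.iter_content(chunk_size=CHUNK_SIZE):
                f.write(chunk)          # only 8KB ever held in memory
                bar.update(len(chunk))
```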

Parallel Processing Pipeline

For bulk historical processing, the system runs up to 4 downloads simultaneously. This parallel approach processes multiple days of footage efficiently while keeping the system responsive—you can still browse the web while it’s downloading gigabytes in the background.
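
A sketch of that fan-out with ThreadPoolExecutor, reusing the download_chunked helper from the previous sketch over a hypothetical list of (url, path) pairs:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(jobs, max_workers=4):
    """Run up to four downloads at once; collect failures instead of aborting."""
    failures = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(download_chunked, url, path): path for url, path in jobs}
        for future in as_completed(futures):
            path = futures[future]
            try:
                future.result()
            except Exception as exc:
                failures.append((path, exc))  # one bad file doesn't stop the batch
    return failures
```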

Frame Extraction: From Motion Videos to Timelapse Frames

Once videos are downloaded, they need to be converted into individual frames for the timelapse. Each motion video contains 5 minutes of footage, but I only need frames at precise 5-second intervals.

The extraction process:

  1. Opens each video file and analyzes its properties (frame rate, duration, etc.)
  2. Calculates which frames to extract based on the 5-second interval requirement
  3. Saves each selected frame as a timestamped JPEG image
  4. Processes all 35+ daily video files to extract hundreds of precisely timed frames

The system handles this efficiently, processing about 100 frames per minute and organizing them by date for easy timelapse creation.
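
At its core the extraction is a frame-rate calculation: at 30 fps, a 5-second interval means keeping every 150th frame. A simplified version of that loop with OpenCV (file naming and output layout are illustrative):

```python
import os
import cv2  # OpenCV

def extract_frames(video_path, out_dir, interval_s=5):
    """Save one JPEG every `interval_s` seconds of footage."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30   # fall back if metadata is missing
    step = int(round(fps * interval_s))     # e.g. 30 fps * 5 s = every 150th frame

    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            seconds = index / fps
            name = os.path.join(out_dir, f"frame_{seconds:08.1f}s.jpg")
            cv2.imwrite(name, frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```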

Bulk Processing: Handling Weeks of Data

The historical processor can handle large date ranges efficiently. You can tell it “process everything from January 1st to January 30th” and it will:

  1. Process each date sequentially, retrieving footage, extracting frames, and creating timelapses
  2. Automatically upload to YouTube if requested
  3. Continue processing even if individual days fail
  4. Provide detailed progress reports showing what succeeded and what didn’t

This makes it practical to recover from extended outages—the system can process weeks of historical data in a few hours.
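
The driver loop behind that is short. Here’s a sketch where process_day and the optional upload callable are hypothetical stand-ins for the per-day pipeline described above:

```python
from datetime import date, timedelta

def process_range(start: date, end: date, process_day, upload=None):
    """Walk a date range one day at a time; a bad day is logged and the loop keeps going.

    `process_day` is the per-day pipeline (download -> extract frames -> build timelapse)
    and `upload` an optional publish step; both live elsewhere in the project.
    """
    results = {}
    day = start
    while day <= end:
        try:
            video_path = process_day(day)
            if upload:
                upload(video_path, day)
            results[day] = "ok"
        except Exception as exc:
            results[day] = f"failed: {exc}"   # keep going; report at the end
        day += timedelta(days=1)
    return results
```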

The Lesson: Don’t Reinvent What Works

After ONVIF’s spectacular failures and the fragility of the HTTP API, the community library was the only reliable option left. It represented thousands of hours of real-world testing across different camera models, firmware versions, and network conditions that I simply couldn’t replicate in a few weeks of development.

Key takeaways:

  • Community libraries often solve problems you haven’t encountered yet
  • Chunked downloads are essential for large file transfers over consumer internet
  • Layered fallbacks provide resilience without complexity
  • Parallel processing dramatically improves bulk operation performance
  • Real-world testing beats theoretical correctness

Storage Management and Cleanup

Processing gigabytes of video data requires smart storage management. The system automatically:

  • Removes downloaded video files after extracting frames (no need to keep 50MB video files once you have the individual images)
  • Compresses frame directories older than 30 days into archives to save disk space
  • Manages temporary storage to prevent running out of disk space during large bulk operations

This keeps storage usage reasonable while preserving the important extracted frames for future timelapse creation.
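
A sketch of that cleanup pass, assuming frames are stored in per-date folders named YYYY-MM-DD under a single frames root (the layout is illustrative):

```python
import shutil
from datetime import date, datetime, timedelta
from pathlib import Path

FRAMES_ROOT = Path("frames")   # per-date folders named YYYY-MM-DD (illustrative layout)
ARCHIVE_AFTER_DAYS = 30

def cleanup(downloaded_videos):
    """Drop source videos once frames exist, and archive frame folders past 30 days."""
    # 1. The 50MB source videos are disposable after frame extraction
    for video in downloaded_videos:
        Path(video).unlink(missing_ok=True)

    # 2. Compress dated frame directories older than the cutoff into .zip archives
    cutoff = date.today() - timedelta(days=ARCHIVE_AFTER_DAYS)
    for day_dir in FRAMES_ROOT.glob("*-*-*"):
        if not day_dir.is_dir():
            continue
        try:
            day = datetime.strptime(day_dir.name, "%Y-%m-%d").date()
        except ValueError:
            continue  # skip anything that isn't a dated folder
        if day < cutoff:
            shutil.make_archive(str(day_dir), "zip", root_dir=day_dir)
            shutil.rmtree(day_dir)

    # 3. Flag low disk space before the next bulk run eats the rest of it
    free_gb = shutil.disk_usage(FRAMES_ROOT).free / 1e9
    if free_gb < 10:
        print(f"Warning: only {free_gb:.1f} GB free")
```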

Error Recovery and Data Integrity

When dealing with large file downloads over home internet, things go wrong. The system includes robust error handling:

  • Automatic retries with exponential backoff (wait 1 second, then 2, then 4, etc.)
  • File integrity verification by checking that downloaded file sizes match what the camera reported
  • Graceful failure handling - if one video fails, continue processing the rest
  • Detailed logging to track what succeeded and what needs manual attention

This makes the system resilient to temporary network issues, camera glitches, or corrupted files.
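
The retry logic is only a few lines. Here’s a sketch that wraps the chunked download helper from earlier and checks the result against an expected_size value, which in the real pipeline would come from the recording metadata the camera reports:

```python
import os
import time

def download_with_retries(url, dest_path, expected_size=None, attempts=4):
    """Retry a flaky download with exponential backoff and verify the result's size."""
    delay = 1
    for attempt in range(1, attempts + 1):
        try:
            download_chunked(url, dest_path)  # chunked helper from earlier
            if expected_size and os.path.getsize(dest_path) != expected_size:
                os.remove(dest_path)  # throw away the bad copy and retry from scratch
                raise IOError("size mismatch: file is incomplete or corrupted")
            return True
        except Exception as exc:
            print(f"Attempt {attempt}/{attempts} failed: {exc}")
            if attempt == attempts:
                return False          # give up; the bulk loop moves on to the next file
            time.sleep(delay)
            delay *= 2                # 1s, 2s, 4s, ...
```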

Looking Forward: Historical Processing Enhancements

Future Improvements

  • Cloud storage: Off-site backup for processed frames and videos
  • Machine learning: Automated analysis to detect sunset color brilliance and highlight the most vivid captures

This is Part 2 of a two-part technical series. Part 1: Live Capture Engineering covers the real-time autonomous capture system running on Raspberry Pi.

Source code: github.com/lolovespi/sunset-timelapse