i2vStudio
Version 0.9.6 Pre-release is live!

The AI workspace for Generative Video.

Bridge the gap between open-source models and proprietary workflows. A complete local environment for Image Analysis, LLM Prompt Generation, and Batch Video Queuing.

$ curl -sSL https://i2v.timnetworks.net/install.sh | sudo bash

Why i2vStudio?

Moving from a reference image to a consistent cinematic prompt—and tracking parameters across hundreds of generations—requires a dedicated space.


100% Local & Private

No cloud dependencies for generation. i2vStudio ties directly into your local or remote ComfyUI node and locally hosted Ollama models, keeping your assets strictly private.


Intelligent Orchestration

Let LLMs do the heavy lifting. The built-in orchestrator converts natural language into perfect FLUX image prompts and formats strict 80-word Wan 2.2 cinematic sequences.
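Under the hood, this runs against the local Ollama API. Purely as an illustration (the prompt templates and post-processing are i2vStudio's own, and the host and model here are assumptions based on the setup section below), an orchestration step boils down to a request of this shape:

# Illustrative only: a natural-language-to-prompt request against the local Ollama API.
# Host, port, and model are assumptions; i2vStudio's actual prompt templates differ.
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Rewrite the following as a strict 80-word cinematic video prompt: a lighthouse at dusk, waves crashing",
  "stream": false
}'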


Batch Management

Design dozens of variations using dynamically saved taxonomy tokens. Push massive jobs to the ComfyUI queue instantly, or export your sequences to portable ZIP manifests.
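Queuing itself is handled by i2vStudio, but for reference, each queued job ultimately reduces to a POST against ComfyUI's /prompt endpoint. A rough sketch, where workflow_api.json stands in for an API-format workflow export and the host is assumed to be local:

# Rough sketch of a single queued job at the ComfyUI API level
# (workflow_api.json is a placeholder for an API-format workflow export).
curl -s -X POST http://127.0.0.1:8188/prompt \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": $(cat workflow_api.json)}"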

Application Features

Explore the specialized tools designed to streamline your video pipeline.

Setup & Prerequisites

i2vStudio acts as a centralized brain, meaning you must configure your worker engines properly to accept incoming API instructions.

ComfyUI Configuration

The ComfyUI API is expected on its standard port, 8188.

# Start ComfyUI, ensuring it listens on all interfaces if running remotely
python main.py --listen 0.0.0.0 --port 8188
  • Install necessary nodes for Wan 2.2 and/or FLUX architecture.
  • Ensure ComfyUI's input/ and output/ directory paths match the paths set in the app.py configuration (default: /media/tish/inference/ComfyUI/).
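
Before pointing i2vStudio at the worker, it is worth confirming the API is reachable. A quick check against ComfyUI's system_stats endpoint (replace 127.0.0.1 with the remote host if ComfyUI runs on another machine):

# Should return a JSON blob of system and device info if the API is listening
curl -s http://127.0.0.1:8188/system_stats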

Ollama Engine Configuration

The Ollama API is expected on its default port, 11434. Ollama is required for VLM extraction and Agent logic.

# Pull the required models
ollama pull llava
ollama pull gemma2
# If hosting Ollama on a dedicated machine, ensure it listens externally:
sudo systemctl edit ollama.service
# Add the following under [Service]:
Environment="OLLAMA_HOST=0.0.0.0"
# Save, then restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama
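
Once Ollama has restarted, you can confirm it is reachable and that the models above are installed (replace 127.0.0.1 with the Ollama host if it runs on a dedicated machine):

# Should list llava and gemma2 among the installed models
curl -s http://127.0.0.1:11434/api/tags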

Ready to deploy?

System Execution Commands

  • Install:   curl -sSL https://i2v.timnetworks.net/install.sh | sudo bash -s -- install
  • Update:    curl -sSL https://i2v.timnetworks.net/install.sh | sudo bash -s -- update
  • Uninstall: curl -sSL https://i2v.timnetworks.net/install.sh | sudo bash -s -- uninstall
