
Reel #66 LLM file bucket


OpenAI Files

Pre-upload frames to OpenAI Files for Custom GPTs / Assistants / Responses.

Multipart POST every frame to `/v1/files` with `purpose=vision`, then upload a stitched `manifest.json` with `purpose=assistants`. File IDs are ready to hand to a Responses API call, a Custom GPT Project, or an Assistants vector store.
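The two-step flow above can be sketched as an upload plan. This is an illustration, not peepshow's actual code: the function name `plan_uploads` and the manifest fields are assumptions; only the `/v1/files` endpoint and the `vision`/`assistants` purposes come from the description above.

```python
import json

# Assumed default endpoint; the sink lets you override it via env.
API_URL = "https://api.openai.com/v1/files"

def plan_uploads(frame_paths, manifest):
    """Yield (filename, purpose, body) in upload order: frames first, manifest last."""
    for path in frame_paths:
        # Frame bytes would go in the multipart 'file' field of the POST.
        yield (path, "vision", None)
    yield ("manifest.json", "assistants", json.dumps(manifest).encode())

uploads = list(plan_uploads(["frame-0001.png", "frame-0002.png"],
                            {"frames": 2, "transcript": None}))
```

Each tuple maps to one multipart POST against `API_URL`; the manifest is always last so it can reference every frame's returned `file_id`.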

drop · process · openai-files

What it does

Upload every extracted frame to the OpenAI Files API so Custom GPTs, Projects, Assistants file-search, and the Responses API can reference the frames by `file_id` without re-uploading them each turn. Frames go up with `purpose=vision` (the purpose bucket image inputs use); the manifest ships with `purpose=assistants` so file-search tools can discover it. No SDK — native fetch + FormData. Transcripts ride along inside the manifest when transcription is enabled.
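The exact manifest schema is peepshow's own; as a rough illustration of the kind of record described above (every field name here is an assumption), it might look like:

```json
{
  "source": "demo.mp4",
  "frames": [
    {"file_id": "file-abc123", "timestamp": 0.0},
    {"file_id": "file-def456", "timestamp": 4.2}
  ],
  "transcript": "present only when the audio pass transcribes successfully"
}
```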

When to reach for it

  • Wire peepshow into a Custom GPT / Assistant file-search flow without a per-turn upload step
  • Stage frames for a Responses API call where `input_image` content parts reference `file_id`
  • Team workflows where an `OPENAI_FILES_ORG` header pins uploads to a shared organisation
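For the Responses API case in the list above, a request body can reference the staged frames by `file_id` through `input_image` content parts. A minimal sketch, assuming placeholder file IDs and model name:

```python
def responses_body(file_ids, question):
    """Build a Responses API request body referencing pre-uploaded frames."""
    return {
        "model": "gpt-4o",
        "input": [{
            "role": "user",
            "content": [{"type": "input_text", "text": question}]
                       + [{"type": "input_image", "file_id": fid} for fid in file_ids],
        }],
    }

body = responses_body(["file-abc123", "file-def456"],
                      "What changes between these frames?")
```

Because the frames are already uploaded, this body is cheap to rebuild every turn; only the text part changes.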

Install

npm i -g peepshow

Use it

export OPENAI_API_KEY="sk-..."
peepshow sinks add openai-files
peepshow ./demo.mp4

Make it automatic

Register the sink once — every run fires it afterward. Scope it with `--when` so it only runs for matching videos.

peepshow sinks add openai-files
peepshow sinks add openai-files --when extension=mp4,mov
peepshow sinks add openai-files --when path=/Volumes/Work/

Configuration

  • `OPENAI_API_KEY` (required) Standard OpenAI key. Sent as `Authorization: Bearer <key>`.
  • `OPENAI_FILES_PURPOSE` Purpose for frame uploads. One of `vision`, `assistants`, `user_data`, `batch`. Default `vision`. The manifest always uploads as `assistants`.
  • `OPENAI_FILES_ORG` Adds an `OpenAI-Organization` header when set.
  • `OPENAI_FILES_API_URL` Overrides the `/v1/files` base URL. Useful for proxies and mocks.
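The table above translates to a small resolution step. This sketch is not peepshow's implementation — `resolve_config` is a hypothetical helper — but it shows the defaults, the required key, and the conditional org header:

```python
import os

ALLOWED_PURPOSES = {"vision", "assistants", "user_data", "batch"}

def resolve_config(env=None):
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is required")
    purpose = env.get("OPENAI_FILES_PURPOSE", "vision")
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"unsupported purpose: {purpose}")
    headers = {"Authorization": f"Bearer {key}"}
    if env.get("OPENAI_FILES_ORG"):
        # Only pin uploads to an org when explicitly configured.
        headers["OpenAI-Organization"] = env["OPENAI_FILES_ORG"]
    return {
        "api_url": env.get("OPENAI_FILES_API_URL",
                           "https://api.openai.com/v1/files"),
        "headers": headers,
        "purpose": purpose,
    }

cfg = resolve_config({"OPENAI_API_KEY": "sk-test"})
```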

Use with an LLM agent

Every peepshow sink reads its config from env vars and receives a single JSON payload on stdin. An LLM agent (Claude Code, Cursor, Windsurf, Gemini, Codex) can drive the OpenAI Files sink automatically when three things are true:

  • the env vars below are exported in the agent's shell (or a project `.env` it can load),
  • the peepshow CLI is on `PATH` — install with `npm i -g peepshow`,
  • a peepshow auto-sink is registered for the run (optional but recommended — makes invocation zero-argument).

1. Set the environment

# Add to ~/.zshrc, ~/.bashrc, or a project .env the agent can load
export OPENAI_API_KEY="..."

2. Register as an auto-sink

peepshow sinks add openai-files
peepshow sinks add openai-files --when extension=mp4,mov

3. Example LLM session

You → drop a .mov into Claude Code.

Claude → auto-invokes `/peepshow:slides ./clip.mov`. peepshow extracts frames + audio, and the OpenAI Files sink uploads them to the configured OpenAI Files target. Claude replies with a summary and the returned file IDs.

The transcript rides along in the payload whenever the audio pass transcribes successfully.

Write your own

A sink is any executable that reads the `--emit json` payload on stdin. Shell, Node, Python, Go — the spec's in `docs/PLUGINS.md`. Register persistent ones with `peepshow sinks add-cmd 'your-command'`.
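A minimal custom sink, as a sketch: read the JSON payload from stdin and print a one-line summary. The payload field names here (`source`, `frames`) are illustrative — consult `docs/PLUGINS.md` for the real schema.

```python
#!/usr/bin/env python3
import json
import sys

def summarize(payload):
    """One-line summary of a peepshow run payload (field names assumed)."""
    frames = payload.get("frames", [])
    return f"{payload.get('source', '?')}: {len(frames)} frame(s)"

if __name__ == "__main__" and not sys.stdin.isatty():
    raw = sys.stdin.read()
    if raw.strip():
        print(summarize(json.loads(raw)))
```

Save it as an executable script and register it with `peepshow sinks add-cmd 'your-command'` so every run pipes its payload through.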