Early Access

Glimpse-v1

A lightweight, open vision-language model built to understand and summarize home security camera events.

ollama run llmvision/glimpse-v1

Install Ollama, then paste this command into a terminal. See the documentation for more information.
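Once the model is pulled, you can also call it programmatically. A minimal sketch, assuming Ollama is serving on its default port (11434) and using its standard `/api/generate` endpoint, which accepts base64-encoded images; the prompt text here is illustrative:

```python
# Sketch: send a camera snapshot to a locally running Glimpse-v1
# through Ollama's REST API (default endpoint http://localhost:11434).
import base64
import json
import urllib.request

def build_request(image_path, prompt="Summarize this security camera event."):
    """Encode the snapshot and build the JSON payload Ollama expects."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "llmvision/glimpse-v1",
        "prompt": prompt,
        "images": [image_b64],  # Ollama accepts base64-encoded images here
        "stream": False,        # return one complete JSON response
    }

def summarize_event(image_path):
    """POST the snapshot to the local Ollama server and return the summary."""
    payload = json.dumps(build_request(image_path)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `summarize_event("snapshot.jpg")` returns a short natural-language summary of the event in the image.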

Compact Model

Glimpse-v1 is a compact 4-billion-parameter model, built to run efficiently on hardware with limited memory.

Model Size

4 Billion

Parameters

Glimpse-v1 is a compact 4-billion-parameter model built on the Gemma 3 architecture.

1.9x

Improvement in accuracy over the base model.

Glimpse-v1 shows a significant improvement in accuracy over the base model when summarizing home security camera events.

Trained on over

5,000

Samples

Glimpse-v1 was trained on a diverse dataset of real-world home security camera events for robust performance across different scenarios.

Domain Knowledge

Understands events around your home.

Glimpse-v1 is trained to recognize and summarize different scenarios around your home.

Deliveries
Not only spots deliveries, but also recognizes the delivery carrier. Glimpse-v1 also outperforms the base model at telling the difference between a delivery and a visitor, reducing false positives.
False Positives
Knows the difference between a false positive and a real event.
Vehicles
Improved understanding of vehicles and their movements.
Animals
Can spot pets and other animals even in low-light conditions.

Privacy

Private, local AI for your Home.

We believe privacy is fundamental to smart homes.

By building specialized, compact models that can run locally on hardware with limited memory and compute resources, we aim to make local AI accessible to everyone.
By running Glimpse-v1 locally, your data stays private and secure in your home, and you save on API costs.

Try Glimpse-v1 today

ollama run llmvision/glimpse-v1

Install Ollama, then paste this command into a terminal. See the documentation for more information.

Get Started