A video is the core output of Demomatic. You describe what you want to demonstrate in a prompt, and Demomatic navigates your live application, records the interaction, adds narration, captions, music, and effects, then stores the finished video in your library.

Video properties

Each video has the following properties:
| Field | Description |
| --- | --- |
| Internal name | The display name for the video in your library |
| Duration | Length of the video in seconds |
| Public | Whether the video can be viewed by anyone with the link |
| Saved | Whether the video has been added to your library |
| Creator | The team member who generated the video |
| Folder | The library folder the video is organized into |
| CTA enabled | Whether the domain’s call-to-action is shown at the end of the video |
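For reference, a video object might be represented like this. `is_public` matches the field used for sharing later on this page; the other key names are illustrative assumptions, not confirmed API field names:

```json
{
  "id": "vid_123",
  "internal_name": "Onboarding walkthrough",
  "duration": 94,
  "is_public": true,
  "saved": true,
  "creator": "alex@example.com",
  "folder": "onboarding",
  "cta_enabled": false
}
```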

Generation options

When you generate a video, you can customize how it looks and sounds:
  • Font — typeface for on-screen text ("Inter", "OpenSans", "Playwrite", "Poppins", "Roboto")
  • Voice — AI voice for narration ("ash", "onyx", "nova", "fable")
  • Music — background music track ("observer", "lawrence", "all_i_am", "lust", "denied_access", "75_and_lower")
  • Background — visual background style
  • Captions — whether closed captions are included
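The options above could be combined into a generation request along these lines. The request shape is a hypothetical sketch; the font, voice, and music values come from the lists above, and the background value is a placeholder:

```json
{
  "prompt": "Show how a new user invites a teammate",
  "font": "Inter",
  "voice": "nova",
  "music": "observer",
  "background": "default",
  "captions": true
}
```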

How video generation works

After you submit a prompt, Demomatic processes it through the following steps:
1. Researching domain context: Demomatic analyzes your domain to understand your application’s structure, pages, and available interactions.
2. Authenticating: Demomatic logs in to your application using the login steps you configured on the domain.
3. Determining actions and recording: The AI determines the browser actions needed to fulfill your prompt, then navigates and records the real application.
4. Analyzing and grouping actions into flows: Recorded interactions are analyzed and grouped into logical flows with titles and captions.
5. Trimming recorded video: The raw recording is trimmed to remove pauses and irrelevant footage.
6. Preparing video segments: The video is split into segments, ready for narration and effects to be applied.
7. Generating narration and adding effects: AI-generated narration is synthesized, and music, captions, B-roll, and text animations are added.
8. Storing video in library: The finished video is stored and made available in your library.
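The eight steps above can be sketched as a sequential pipeline. Every function name here is a placeholder that only mirrors the step it stands for; none of this is Demomatic's actual implementation:

```python
# Sketch of the generation pipeline described above. Each stage is a stub
# that records when it runs, so the example only demonstrates the ordering.

STAGES = []

def stage(name):
    """Return a stub that logs the stage name when invoked."""
    def run(*args):
        STAGES.append(name)
        return name
    return run

research_domain = stage("research")
authenticate = stage("authenticate")
record = stage("record")
group_flows = stage("group_flows")
trim = stage("trim")
segment = stage("segment")
narrate = stage("narrate")
store = stage("store")

def generate_video(prompt: str, domain: str) -> str:
    context = research_domain(domain)              # 1. research domain context
    session = authenticate(domain)                 # 2. log in with configured steps
    recording = record(prompt, context, session)   # 3. determine actions and record
    flows = group_flows(recording)                 # 4. group into titled flows
    trimmed = trim(recording)                      # 5. cut pauses and dead footage
    segments = segment(trimmed, flows)             # 6. split into segments
    final = narrate(segments)                      # 7. narration, music, effects
    return store(final)                            # 8. save to the library

generate_video("Show how to invite a teammate", "app.example.com")
print(STAGES)
```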

Video library

Your videos are organized in a library. You can create folders to group related videos—for example, by use case, audience, or feature area. When you retrieve videos from the API, you can filter by folder.
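As an illustration of folder-based filtering, the snippet below narrows a list of videos to one folder. The response shape (a list of objects with a `folder` key) is an assumption, not the documented API response:

```python
# Hypothetical video records; only the "folder" grouping mirrors this page.
videos = [
    {"internal_name": "Onboarding walkthrough", "folder": "onboarding"},
    {"internal_name": "Billing overview", "folder": "billing"},
    {"internal_name": "Invite a teammate", "folder": "onboarding"},
]

def by_folder(videos, folder):
    """Return only the videos organized into the given folder."""
    return [v for v in videos if v["folder"] == folder]

onboarding = by_folder(videos, "onboarding")
print([v["internal_name"] for v in onboarding])
```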

Sharing

Videos can be public or private:
  • Public videos are accessible to anyone with the link—no sign-in required.
  • Private videos require an authenticated session to view.
Toggle the is_public field on a video to control its visibility.
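A visibility toggle might be expressed as an update like the following. `is_public` is the field named on this page, but the endpoint path and HTTP method are assumptions; check the API reference before relying on them:

```python
def visibility_update(video_id: str, public: bool) -> dict:
    """Describe a request that toggles a video's is_public field."""
    return {
        "method": "PATCH",                # assumed method
        "path": f"/videos/{video_id}",    # assumed path
        "body": {"is_public": public},    # field named on this page
    }

req = visibility_update("vid_123", True)
print(req["body"])
```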

Embedding

You can embed any public video on your website or in your product using the embed endpoint:
GET /videos/:id/embed
This returns a self-contained HTML page with a full-screen video player. Drop it into an <iframe> to embed it anywhere.
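For example, a public video could be embedded like this; the host is a placeholder and `vid_123` stands in for the video's ID:

```html
<!-- Placeholder host; the path follows GET /videos/:id/embed -->
<iframe
  src="https://app.example.com/videos/vid_123/embed"
  width="800"
  height="450"
  frameborder="0"
  allowfullscreen>
</iframe>
```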

Self-heal

Videos can be configured to automatically regenerate when your application changes, so your demos stay current without manual effort. See Self-Heal for details.