Turn Any Text Into Realistic Audio

Instantly convert your blog posts, scripts, and PDFs into natural-sounding voiceovers.

What is Siri TTS? How to Use It and When You’ll Need More

Learn how to set up and customize Siri TTS on your devices. Improve your workflow with crystal-clear text-to-speech technology today.

You’ve heard Siri speak countless times on your iPhone or Mac, but have you ever wondered how that natural-sounding voice actually works? Siri TTS (text-to-speech) technology powers the familiar voices that read your messages, give you directions, and respond to your questions. Whether you’re creating content, building apps, or simply want to understand how to harness Siri’s voice capabilities for your own projects, knowing how to access and use Siri TTS opens up possibilities you might not have considered.

The good news is that AI voice agents can help you tap into this technology more effectively than ever before. These tools bridge the gap between understanding what Siri TTS offers and actually implementing high-quality voice output in your work. By learning how voice synthesis works on Apple devices and exploring the speech-generation options available in iOS and macOS, you can create audio content that sounds professional and engaging without requiring expensive recording equipment or voice talent.

Summary

  • Apple’s text-to-speech engine powers spoken content across iOS and macOS for over 500 million users globally, but most people don’t realize they’re experiencing sophisticated speech synthesis technology, not just a chatbot. Siri TTS refers to three distinct things: the voice that responds to “Hey Siri” commands, the Speak Screen accessibility feature that reads on-screen text aloud, and the speech synthesis APIs developers use to build voice-enabled apps.
  • Research shows that 71% of consumers prefer to query by voice rather than typing, reflecting a broader shift toward audio interfaces that extend beyond search. People want to listen to content while driving, exercising, or resting their eyes after screen time. Accessibility features make this essential for individuals with visual impairments, but the use cases now span language learning, multitasking during commutes, and voice narration for tutorials and social media content.
  • Apple’s Speech framework enables developers to trigger text-to-speech in iOS and macOS apps using the AVSpeechSynthesizer class, allowing them to control speech rate, pitch, and voice selection. This functionality remains bound to Apple’s ecosystem and licensing terms. You cannot legally extract audio files for redistribution, use these voices in commercial audio products, or deploy them outside your app.
  • The quality gap between early robotic versions and modern Siri TTS comes from deep neural networks trained on vast human speech datasets. Modern synthesis engines analyze text for context, adjusting pronunciation based on grammar and sentence structure while handling contractions, acronyms, and punctuation naturally. 
  • Most people searching for “Siri TTS download” are looking for something that doesn’t exist in the form they imagine. Siri TTS is infrastructure embedded in Apple’s operating system, accessible through specific interfaces but not extractable or redistributable as standalone audio files. Screen recording can capture Siri’s voice output for personal use, but redistributing that audio commercially violates Apple’s terms of service.

AI voice agents address this gap by offering studio-quality text-to-speech with commercial licensing, API access for workflow automation, and voice customization options that work across platforms beyond the Apple ecosystem.

What is Siri TTS (And What People Actually Mean by It)


Siri TTS refers to Apple’s built-in text-to-speech engine that powers spoken content across iOS, macOS, and other Apple devices. When people say “Siri TTS,” they’re usually talking about one of three things: 

  • The voice you hear when Siri responds to commands
  • The Speak Screen feature that reads on-screen text aloud
  • The underlying speech synthesis APIs that developers use to build voice-enabled apps

These are related but distinct technologies, and understanding the difference matters if you’re trying to actually use or integrate voice output in your work.

The Educational and Accessibility Impact of Siri TTS

According to Wikipedia, more than 500 million users interact with Siri globally, making Apple’s voice technology one of the most widely deployed speech systems worldwide. That scale means millions of people encounter Siri’s voice daily, but most don’t realize they’re experiencing a sophisticated text-to-speech engine, not just a chatbot with a pleasant accent.

The Three Faces of Siri TTS

The confusion starts because “Siri” means different things depending on context. Siri, the assistant, is what responds when you say, “Hey Siri, set a timer.” That’s a conversational interface built on natural language processing, query interpretation, and task execution. Siri TTS, on the other hand, is the underlying speech synthesis layer. 

It’s what converts written text into audible speech, whether that’s Siri reading your calendar event, VoiceOver narrating a webpage for accessibility, or Speak Screen reading an article while you fold laundry.

How Apple’s Frameworks Protect Digital Assets

Then there’s the developer side. Apple provides a Speech framework that enables app developers to integrate system voices into their applications. This isn’t a standalone “Siri voice generator” you can download and use freely. 

It’s an API bound by Apple’s licensing terms that operates only within the Apple ecosystem. You can’t extract Siri’s voice as an MP3 file or use it in a YouTube video without violating those terms.

Understanding IP and Consent

The misconception that there’s a public Siri voice generator tool causes real frustration. People search for ways to “download Siri’s voice” or “use Siri TTS for my podcast,” only to discover that Apple doesn’t offer that. The voice is baked into the operating system, accessible through specific features or developer tools, but not exportable at will.

Why People Want Siri TTS

The appeal is obvious. Siri’s voice sounds natural, familiar, and polished. Research from Keywords Everywhere shows that 71% of consumers prefer to search by voice rather than typing, signaling a broader shift toward audio interfaces. That preference extends beyond search. 

People want to listen to content while: 

  • Driving
  • Cooking
  • Exercising
  • Simply resting their eyes after hours of screen time

Accessibility drives much of this demand. For individuals with visual impairments or reading difficulties, text-to-speech isn’t a convenience feature. It’s essential infrastructure. Siri TTS makes iPhones and Macs usable for millions who would otherwise struggle with traditional interfaces.

How Siri TTS Bridges the Gap Between Literacy and Fluency

But the use cases extend far beyond accessibility. Language learners use Siri TTS to hear correct pronunciation. Multitaskers listen to emails and articles while commuting. Content creators experiment with voice narration for tutorials and social media content.

The problem is that most of these use cases bump up against Apple’s walled garden. You can use Siri TTS on your device, but you can’t easily export it, customize it for brand-specific needs, or integrate it into enterprise workflows.

What You Can Actually Do With Siri TTS

Your options depend entirely on your role. Casual users can enable Speak Screen or Speak Selection in iOS accessibility settings. Swipe down with two fingers from the top of the screen, and Siri TTS reads whatever’s displayed. 

It’s simple, effective, and requires no technical knowledge.

Why Sandbox Licensing Matters

Developers have more flexibility. Using Apple’s AVSpeechSynthesizer class, you can trigger text-to-speech within your app, choose from available system voices, and control speech rate and pitch. 
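For a sense of how little code that takes, here is a minimal sketch in Swift, assuming an iOS or macOS app with AVFoundation available. The sample text and the en-GB voice choice are purely illustrative; if a requested voice isn’t installed, the system falls back to the default voice for the current locale.

```swift
import AVFoundation

// Minimal sketch: speak a short string with a chosen system voice,
// adjusting rate and pitch. Keep a strong reference to the synthesizer
// in a real app, or speech stops when it is deallocated.
let synthesizer = AVSpeechSynthesizer()

let utterance = AVSpeechUtterance(string: "Your meeting starts in ten minutes.")
// Request a voice by BCP-47 code; this returns nil if the voice isn't
// installed, in which case the system uses the default for the locale.
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
// Rate sits between AVSpeechUtteranceMinimumSpeechRate and
// AVSpeechUtteranceMaximumSpeechRate; the default is a comfortable middle.
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
utterance.pitchMultiplier = 1.1   // 0.5 to 2.0; 1.0 is the voice's natural pitch
utterance.volume = 0.9            // 0.0 to 1.0

synthesizer.speak(utterance)
```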

This works well for in-app notifications, reading lists, and accessibility enhancements. The limitation is that you’re still bound to Apple’s ecosystem and licensing terms. You can’t use these voices in commercial audio products or redistribute them outside your app.

Digital Rights Management (DRM) and the Legal Landscape of Synthetic Media

Then there’s the gray area: people trying to create “Siri-like” voiceovers for projects. Technically, you can screen-record Siri TTS output for personal use, but redistributing that audio commercially violates Apple’s terms. This is where many content creators hit a wall. They want the quality and familiarity of Siri’s voice without the legal and technical restrictions.

The gap between consumer-grade voice assistants and enterprise-grade voice synthesis becomes clear here. Apple built Siri TTS for device interaction and accessibility, not for scalable voice production, brand customization, or integration into customer-facing applications. When businesses need voice output that sounds professional, adapts to their specific terminology, and deploys across platforms beyond iOS, they quickly discover that Siri TTS wasn’t designed for that.

Voice Interoperability and Open Standards

Platforms like AI voice agents address this gap by offering studio-quality text-to-speech that businesses can: 

  • Customize
  • Deploy on-premise or in the cloud
  • Integrate into existing tech stacks through APIs and SDKs

While Siri TTS serves its purpose within Apple’s ecosystem, enterprises building voice experiences at scale need synthesis engines built for flexibility, compliance, and human-like output that works across channels, not just on iPhones.

The Technology Underneath

Siri’s voice quality has improved dramatically over the years. Early versions sounded robotic and stilted. Modern Siri TTS uses deep neural networks trained on vast datasets of human speech, learning to replicate: 

  • Intonation
  • Rhythm
  • Emotional nuance

The result is a voice that sounds conversational rather than mechanical. Apple’s speech synthesis engine analyzes text for context and adjusts pronunciation based on grammar and sentence structure. It handles contractions, acronyms, and punctuation cues naturally. When Siri reads “Dr. Smith’s appt. at 3 PM,” it knows to say “Doctor Smith’s appointment at three PM,” rather than spell out every abbreviation.
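As a rough illustration of what that normalization step involves (and only an illustration; Apple’s actual pipeline is context-aware and far more sophisticated), a toy version in Swift might expand a handful of hard-coded abbreviations before synthesis:

```swift
// Toy sketch only: a real TTS front end uses grammar, context, and phonetic
// rules rather than a lookup table, and also verbalizes numbers ("3" -> "three").
let expansions: [String: String] = [
    "Dr.": "Doctor",
    "appt.": "appointment",
    "PM": "p.m."
]

func naiveNormalize(_ text: String) -> String {
    var result = text
    for (abbreviation, spoken) in expansions {
        result = result.replacingOccurrences(of: abbreviation, with: spoken)
    }
    return result
}

print(naiveNormalize("Dr. Smith's appt. at 3 PM"))
// "Doctor Smith's appointment at 3 p.m."
```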

The Science of Phonetic Localization: Why Accents Matter in AI Trust

The engine also supports multiple languages and regional accents. You can choose British English, Australian English, or Indian English, each with distinct pronunciation patterns. This localization matters for users who want voices that match their linguistic context, but it also highlights a limitation: you’re choosing from Apple’s preset options rather than creating custom voices tailored to your brand or audience.

That constraint becomes significant when you’re building customer-facing voice applications. A healthcare company might need a voice that sounds reassuring and authoritative. A children’s app might want something playful and energetic. Siri TTS offers quality, but not that level of customization.

What Most People Miss

The biggest misunderstanding is thinking Siri TTS is a product you can “get” or “use” independently. It’s not. It’s infrastructure embedded in Apple’s operating system, accessible through specific interfaces but not extractable or redistributable. When people search for “Siri TTS download,” they’re looking for something that doesn’t exist in the form they imagine.

Another blind spot: assuming that because Siri sounds good on your iPhone, it’s suitable for any voice application. Siri TTS was optimized for short, conversational utterances like “Your timer is done” or “Here’s what I found on the web.” It performs well in those contexts but wasn’t designed for long-form narration, complex technical content, or brand-specific voice personas.

Matching Speech Architecture to Enterprise Goals

The real question isn’t “How do I use Siri TTS?” but “What am I actually trying to accomplish?” If you want to listen to articles on your iPhone, Speak Screen works perfectly. If you’re a developer building an iOS app with voice feedback, Apple’s Speech framework is the right tool. 

But if you’re creating scalable voice experiences for enterprise applications, customer service, or content production, you need synthesis technology built from the ground up for those use cases.


How to Use Siri Text-to-Speech on iPhone, iPad, and Mac: Step-by-Step Guide to Make Siri Read Text Out Loud


Activating Siri’s text-to-speech on your device takes about 30 seconds. The feature is located in Accessibility settings, not in Siri’s main configuration, because Apple designed it primarily for users who need auditory support when navigating their devices. Once enabled, you can trigger it with a swipe gesture or voice command, and Siri reads whatever appears on your screen.

For iPhone and iPad

Open the Settings app and scroll to Accessibility. This section includes all assistive technologies, from magnification tools to motor-control adaptations. 

How to:

  • Tap Spoken Content, where you’ll find several voice output options. The most useful feature for most people is Speak Screen. Toggle it on. 
  • Now, swipe down from the top edge of your screen with two fingers anywhere in iOS, and a small control panel appears. Siri begins reading the visible text aloud, starting from the top. The control panel lets you pause, adjust speed, or skip forward and backward through sentences.
  • Speak Selection works differently. It highlights text you manually select, then offers a “Speak” button in the contextual menu. This is better for spot-checking specific paragraphs or hearing how a sentence sounds before sending an email.

Both features require downloading voice files if you haven’t used Siri TTS before. iOS prompts you automatically, but the download happens in the background. Expect a few minutes on slower connections. The voices consume storage space, typically 100-300MB per language and accent, so if you enable multiple regional variants, monitor your available capacity.

For iMac and MacBooks

How to:

  • Click the Apple menu at the top left, then System Settings. Navigate to Accessibility, then Spoken Content. Check the boxes for Speak Selection and Speak Screen.
  • On macOS, keyboard shortcuts make this faster. Option + Esc triggers Speak Selection by default. You can customize shortcuts under Keyboard Shortcuts within Accessibility if those defaults conflict with other tools you use. Some developers remap these constantly because they overlap with terminal commands or code editor functions.

The Mac version includes a feature called “Speak item under the pointer,” which reads aloud whatever your cursor hovers over. This sounds niche, but it’s surprisingly useful when reviewing dense documents or proofreading web content where you want to catch awkward phrasing without reading silently.

How to Use Siri Voice Text-to-Speech to Read Text Aloud

  • Open any app where text appears: Safari, Notes, Messages, Mail, or third-party apps such as Kindle or Pocket. Say “Hey Siri, speak screen,” or use the two-finger swipe gesture on iOS. On Mac, select text and hit your keyboard shortcut.
  • A control panel materializes, showing playback controls. The tortoise and hare icons adjust speed. Tap the forward or backward arrows to jump sentences. If the panel disappears after a few seconds, tap the side of your screen to bring it back.

The feature respects the app structure. In Safari, Siri reads article text but skips navigation menus and ads. In Messages, conversation threads are read in chronological order. In Mail, sender names are announced before the message body. This context awareness makes the experience less robotic, though it occasionally stumbles on poorly coded websites where text hierarchy isn’t properly marked up.

Customizing Siri’s Voice and Speech Options

Apple provides several voices per language, each with distinct characteristics. In Spoken Content settings, tap Voices to explore options. You’ll see categories like Siri Voice, Premium, and Enhanced Quality. Siri Voice uses the neural engine for more natural prosody. Premium voices sound smoother but require larger downloads. Enhanced Quality voices are older, smaller files that sound more mechanical.
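If you’re curious which tiers are actually installed on a given device, a short Swift sketch can list them. Note that the .premium quality value only exists on iOS 16 / macOS 13 and later, so this assumes a recent SDK:

```swift
import AVFoundation

// List installed English voices and their quality tier. The same code can
// sound different across devices depending on which voices were downloaded.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language.hasPrefix("en") {
    let tier: String
    switch voice.quality {
    case .premium:  tier = "Premium"
    case .enhanced: tier = "Enhanced"
    default:        tier = "Default"
    }
    print("\(voice.name) [\(voice.language)] - \(tier)")
}
```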

Regional accents matter more than most people expect. British English Siri pronounces “schedule” as “shed-yule,” while American English says “sked-yule.” Australian English handles slang differently. Indian English adapts intonation for local speech patterns. If you’re listening to content written in a specific regional style, matching the voice to that style reduces cognitive friction.

How Siri TTS Enhances Literacy

The Speaking Rate slider lets you speed up or slow down playback. Most people start at the default midpoint, then gradually increase speed as they acclimate. I’ve seen language learners set it to a slower speed to catch pronunciation details, while commuters crank it up to 1.5x or 2x to read articles faster. The upper limit sounds frantic but remains intelligible if you’re used to listening to podcasts at high speeds.

Highlight Content adds visual tracking. As Siri reads, words or sentences are highlighted in real time. You can choose to underline words, change sentence background colors, or both. This helps maintain focus during long passages, especially for readers who process information more effectively when auditory and visual inputs are synchronized.

Why Type to Siri is an Accessibility Anchor

Type to Siri is unrelated to text-to-speech output but lives in the same settings area. It lets you type requests to Siri instead of speaking them, useful in quiet environments. The feature confuses people because it sits next to voice customization options, but it controls the input method, not the output voice.

Utilizing Siri Text-to-Speech on macOS Devices

macOS offers the same core features as iOS but integrates them differently. The Speak Selection shortcut works across all apps, including terminal windows, code editors, and design tools. Developers use this to proofread commit messages or documentation. Writers listen to drafts to catch awkward phrasing that looks fine on the page but sounds clunky when spoken aloud.

System Voice settings let you choose a default voice for all spoken content. Unlike iOS, where Siri’s voice is tightly coupled to the assistant, macOS separates the system voice from the Siri assistant voice. You can have Siri respond to “Hey Siri” in one accent while Speak Selection uses another. This separation matters if you prefer a specific voice for long-form listening but want Siri’s assistant responses to match your regional accent.

Cognitive Pacing and the ‘Interruption Cost’ of Audio Alerts

The Announce Notifications feature reads incoming alerts aloud when you’re wearing AirPods or other connected audio devices. This works well for hands-free workflows, such as cooking or exercising, but it interrupts audio playback, which frustrates music or podcast listeners. You can configure which apps trigger announcements to reduce interruptions.

Advanced Tips and Personalization

Create Siri Shortcuts to automate repetitive listening tasks. For example, create a shortcut that opens your news app, navigates to your saved articles, and automatically starts Speak Screen. 

Another shortcut could read your daily calendar aloud every morning at 7 AM. Shortcuts eliminate the manual steps of opening apps and triggering speech, which matters when you repeat the same routine daily.
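If you’re a developer and want your own app to show up as a building block in those routines, a hedged sketch using the App Intents framework (iOS 16+) could expose a custom “Speak Text” action to the Shortcuts app. The intent name and parameter below are illustrative, not built-in Apple actions:

```swift
import AppIntents
import AVFoundation

// Exposes a simple "Speak Text" action to the Shortcuts app so a routine
// can pipe text (for example, a calendar summary) into speech.
struct SpeakTextIntent: AppIntent {
    static var title: LocalizedStringResource = "Speak Text"

    @Parameter(title: "Text")
    var text: String

    // Keep the synthesizer alive for the life of the process.
    static let synthesizer = AVSpeechSynthesizer()

    func perform() async throws -> some IntentResult {
        let utterance = AVSpeechUtterance(string: text)
        Self.synthesizer.speak(utterance)
        return .result()
    }
}
```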

Balancing Personalization With Device Privacy

Sync settings across devices through iCloud. Voice preferences, speaking rate, and highlight settings carry over when you sign in on a new iPhone or Mac. This consistency reduces setup friction but also makes it harder to maintain separate configurations for each device. If you prefer faster playback on your phone but slower on your Mac, you’ll need to adjust manually each time you switch.

Enable Announce Notifications selectively. Most people don’t want every app interrupting them, but hearing text messages or calendar reminders aloud while driving or exercising adds genuine value. Go to Settings > Siri & Search > Announce Notifications, then choose which apps receive voice priority.

How Siri TTS Navigates Multilingual Fluidity

External voices exist, but Apple restricts third-party voice installation more than Android does. Some apps bundle their own TTS engines, such as audiobook players or language-learning tools, but these voices only work within those apps. You can’t set them as system-wide defaults.

Multilingual Mode automatically switches languages if your device’s language settings support it. Siri detects when text changes from English to Spanish mid-paragraph and adjusts pronunciation accordingly. This works better in theory than in practice. Detection isn’t perfect, and mixed-language content can sometimes cause awkward transitions or mispronunciations.
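Developers who need similar behavior in their own apps can approximate it with Apple’s NaturalLanguage framework, detecting the dominant language of each chunk and picking a matching voice. This is a sketch of the general idea, not a description of how Siri TTS works internally, and the language-to-voice mapping below is an assumption you’d tailor to your supported locales:

```swift
import AVFoundation
import NaturalLanguage

let synthesizer = AVSpeechSynthesizer()

// Map detected languages to BCP-47 voice codes your app supports.
let voiceCodes: [NLLanguage: String] = [
    .english: "en-US",
    .spanish: "es-ES",
    .french:  "fr-FR"
]

func speakDetectingLanguage(_ chunk: String) {
    let recognizer = NLLanguageRecognizer()
    recognizer.processString(chunk)
    let code = recognizer.dominantLanguage.flatMap { voiceCodes[$0] } ?? "en-US"

    let utterance = AVSpeechUtterance(string: chunk)
    utterance.voice = AVSpeechSynthesisVoice(language: code)
    synthesizer.speak(utterance)   // queued utterances play in order
}

speakDetectingLanguage("Hello, how are you today?")
speakDetectingLanguage("Hola, ¿cómo estás hoy?")
```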

How Minimalist Interfaces Unlock Deep Learning

iOS Reader Mode in Safari strips away clutter before Siri reads web pages. Tap the “AA” icon in the address bar, select Show Reader View, then trigger Speak Screen. 

The result is cleaner narration, with no ads, pop-ups, or navigation elements interrupting the flow.

From Personal Utility to Professional Liability

The gap between personal listening and professional production becomes apparent when you try to export Siri TTS audio. You can’t. Apple doesn’t provide a “save as audio file” option because the feature was designed for real-time accessibility, not content creation. 

Screen recording captures Siri’s voice, but that violates Apple’s terms if you redistribute the audio commercially. This limitation frustrates podcasters, video creators, and marketers who want Siri’s quality without the legal restrictions.

Escaping Vendor Lock-In for Global Scale

Platforms like AI voice agents address this by offering studio-quality synthesis, full commercial licensing, API access, and customization options that let you tailor voices to specific brand needs. 

While Siri TTS serves personal listening well, businesses building voice experiences need tools designed from the start for scale, compliance, and integration flexibility.


How to Generate Siri-Style Voice Audio for Projects


If you’re building an iOS or macOS app, Apple’s AVSpeechSynthesizer gives you programmatic access to system voices. You initialize the synthesizer, pass it a string of text wrapped in an AVSpeechUtterance object, and call the speak method. 

The device’s built-in TTS engine handles the rest, converting your text into spoken audio using whatever system voice the user has selected.

The Power of Programmatic Speech Control

This approach works well for in-app notifications, reading list features, or accessibility enhancements where voice output happens in real time. 

You can adjust speech rate, pitch, and volume programmatically. You can pause, resume, or stop playback mid-sentence. The API integrates cleanly with SwiftUI and UIKit, making implementation straightforward for developers already familiar with Apple’s frameworks.
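A compact sketch of that control surface, assuming a plain Swift class rather than a full SwiftUI view model, might look like the following. The delegate callback shown is one of several AVSpeechSynthesizerDelegate methods you could hook into:

```swift
import AVFoundation

// Minimal playback controller. Keep strong references to the synthesizer
// and its delegate, or speech stops when they are deallocated.
final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    // Pause at the next word boundary instead of cutting off mid-phoneme.
    func pause()  { _ = synthesizer.pauseSpeaking(at: .word) }
    func resume() { _ = synthesizer.continueSpeaking() }
    func stop()   { _ = synthesizer.stopSpeaking(at: .immediate) }

    // Fires when an utterance finishes playing.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        print("Finished speaking")
    }
}
```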

Navigating the Intellectual Property of Synthetic Speech

The catch is licensing. Apple’s Speech framework lets you trigger system speech within your app, but you cannot legally extract audio files for redistribution. You can’t render Siri’s voice to an MP3 and upload it to YouTube. You can’t use it in a podcast intro. You can’t include it in a commercial video project. The voices are licensed for device-based, real-time synthesis only. 

Starting with iOS 10, Apple Machine Learning Research introduced deep learning models that significantly improved voice naturalness, but those improvements remain locked within Apple’s ecosystem and are accessible only through approved APIs.

Creating Assistant-Style Voices for Content Projects

This is where clarity matters. You should not attempt to impersonate Siri specifically. Apple’s “Siri” name and the specific voice identity are protected intellectual property. Using an identical voice or claiming it’s Siri in commercial content violates trademark and platform policies. 

The legal risk isn’t theoretical. Companies have faced cease-and-desist letters for using voice clones that too closely mimic recognizable assistants.

Voice Persona and the Psychology of Trust

You can create a clean, neutral, assistant-style AI voice that serves the same functional purpose without crossing legal boundaries. If you need a voiceover for a tutorial, explainer video, or podcast, you want something that sounds: 

  • Professional
  • Clear
  • Approachable

That doesn’t require copying Siri. It requires selecting high-quality text-to-speech that aligns with your project’s tone.

Start by selecting a commercial TTS platform that offers proper licensing for your use case. Look for neutral American English voices if you’re targeting a U.S. audience, or choose regional accents that match your content’s context. Most platforms let you preview voices before committing, so test several to find one that suits your script.

The Science of Sustained Engagement

Adjust pacing and prosody to match your content. Siri’s voice works well for short, conversational responses, but wasn’t optimized for long-form narration. If you’re reading a 10-minute script, you’ll want a voice that maintains listener engagement without sounding rushed or monotonous. 

Many TTS platforms let you: 

  • Control sentence-level pacing
  • Insert pauses
  • Adjust emphasis on specific words

Avoid branding references to Siri or any other trademarked assistant. Don’t title your video “Made with Siri TTS” or describe the voice as “Siri-like” in promotional materials. This isn’t about hiding what you’re doing. It’s about respecting intellectual property boundaries while still achieving your creative goals.

Managing Biometrics and Data Integrity

The gap between consumer voice assistants and production-ready synthesis becomes obvious when you need: 

  • Customization
  • Compliance documentation
  • API access for automation

Platforms like AI voice agents offer studio-quality voices with full commercial licensing, enabling you to: 

  • Generate audio at scale
  • Integrate synthesis into existing workflows via APIs
  • Deploy voices that match your brand’s specific tone without legal ambiguity

While Apple’s Speech framework serves developers building within iOS, businesses creating voice content for distribution need synthesis engines that are flexible, compliant, and capable of human-like output across channels.

Practical Workflow Considerations

If you’re recording a voiceover for a project, write your script first. TTS engines perform better when you structure sentences clearly, avoid excessive jargon, and break complex ideas into digestible chunks. Run your script through the synthesis engine and listen critically. 

  • Does the pacing feel natural? 
  • Are there awkward pauses or mispronunciations? 

Most platforms let you adjust these issues with pronunciation guides or SSML tags.
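SSML support and the exact tags honored vary from platform to platform, so treat the snippet below as a sketch rather than a universal recipe. On Apple platforms, iOS 16 and macOS 13 added a way to build an utterance directly from SSML, which gives you explicit pauses and emphasis without hand-tuning rates:

```swift
import AVFoundation

// Requires iOS 16+ / macOS 13+. The initializer returns nil if the SSML
// fails to parse, and not every tag is guaranteed to be honored.
let ssml = """
<speak>
  Welcome back. <break time="400ms"/> Here is <emphasis level="strong">today's</emphasis> summary.
</speak>
"""

let synthesizer = AVSpeechSynthesizer()
if let utterance = AVSpeechUtterance(ssmlRepresentation: ssml) {
    synthesizer.speak(utterance)
} else {
    print("Invalid SSML")
}
```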

Why Bitrates and Bandwidth Shape Clarity

Export settings matter more than most people expect. 

  • Choose lossless formats like WAV for editing flexibility, then compress to MP3 or AAC for final delivery (see the sketch after this list). 
  • Lower bitrates save bandwidth but reduce audio quality, especially in the frequency ranges where speech clarity lives. 
  • Test your final audio on different devices. Voices that sound crisp on studio monitors sometimes lose intelligibility on phone speakers or laptop audio.
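Here is a sketch of that WAV-to-AAC delivery step using AVFoundation’s export session. The file URLs are placeholders, and newer SDKs add async export variants; this sticks to the long-standing completion-handler API:

```swift
import AVFoundation

// Compress an edited WAV master into an AAC (.m4a) file for delivery.
func compressToAAC(wavURL: URL, outputURL: URL) {
    let asset = AVURLAsset(url: wavURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        print("Could not create export session")
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a   // AAC audio in an .m4a container

    export.exportAsynchronously {
        switch export.status {
        case .completed:
            print("Saved compressed audio to \(outputURL.lastPathComponent)")
        case .failed:
            print("Export failed: \(export.error?.localizedDescription ?? "unknown error")")
        default:
            break
        }
    }
}
```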

Version Control and Batch Efficiency

Version control helps when iterating. If your script changes after initial synthesis, re-generate only the affected segments rather than the entire audio file. This saves time and maintains consistency across takes. Some TTS platforms support batch processing, allowing you to queue multiple scripts for generation overnight.

The goal isn’t to copy Siri. It’s to choose high-quality AI voice synthesis that fits your project while respecting licensing constraints. That means understanding what you’re legally allowed to do, selecting tools designed for your use case, and focusing on output quality rather than brand imitation.

Related Reading

• Duck Text To Speech

• Most Popular Text To Speech Voices

• Boston Accent Text To Speech

• Brooklyn Accent Text To Speech

• Jamaican Text To Speech

• NPC Voice Text To Speech

• TTS To WAV

• Text To Speech Voicemail

• Premiere Pro Text To Speech

Need More Than Built-In Siri TTS? Turn Text Into Studio-Quality Voice Instantly With Voice AI

Siri TTS works inside Apple devices. But when you need downloadable, customizable, production-ready audio, you need something built for creators. The built-in tools weren’t designed for exporting files, scaling across platforms, or matching brand-specific tones. 

That gap forces teams to choose between limiting their projects to Apple’s ecosystem or finding synthesis engines that offer the flexibility commercial work demands.

Emotional Contagion in AI Speech

Voice AI delivers human-like AI voices with emotional range, natural pacing, and multi-language support. It’s designed for YouTube videos, explainer content, podcasts, apps, and customer experiences where voice quality directly affects audience engagement. 

No robotic narration. No complicated setup. You generate studio-quality audio files you can download, edit, and distribute without worrying about licensing restrictions or platform lock-in.

Why AI Voice is a Production Powerhouse

The difference shows up when you’re creating content at scale. A single voice actor recording a 50-video tutorial series can take weeks and cost thousands of dollars. Revisions mean scheduling another session, waiting for delivery, and hoping the tone stays consistent across takes. 

Voice AI lets you generate that same series in hours, adjust pacing or emphasis instantly, and maintain perfect consistency across every file. You control the workflow instead of waiting on external schedules.

Try Voice AI Free Today

Try Voice AI free today and upgrade your voiceovers in minutes. The platform handles everything from short social media clips to hour-long training modules, giving you the flexibility to test different voices, adjust scripts on the fly, and produce finished audio without technical barriers slowing you down.

What to read next

Create lifelike speech with the best Australian accent text-to-speech. Use ElevenLabs TTS for realistic AI voice audio and free English media.
Convert text to speech with Google TTS voices.
Turn PDFs into natural speech for school or work.
Create lifelike Australian English audio for your videos.