
OpenAI Whisper

1.Overview

There are several open-source speech-to-text solutions that can be used to transcribe YouTube videos and save the output as a text file. The most popular and widely used open-source speech-to-text engine is OpenAI’s Whisper, which is highly accurate and supports multiple languages. Other notable open-source options include DeepSpeech, Kaldi, Vosk, and SpeechBrain.

OpenAI Whisper is arguably the most capable open-source speech-to-text model, able to transcribe audio extracted from video files with high accuracy. It can be installed via pip and used in Python scripts to transcribe downloaded YouTube audio.

2.How to develop a Python program with Whisper to transcribe YouTube videos

1.Install Required Packages:

  • openai-whisper (OpenAI’s Whisper Python package – installed with pip install openai-whisper)
  • pytube or yt-dlp to download YouTube video/audio
  • ffmpeg (external tool to handle media conversion)

2.Download the YouTube Video Audio:

  • Use pytube or yt-dlp to download the video or audio stream from YouTube.

3.Transcribe Audio with Whisper:

  • Load the downloaded audio or video file with Whisper and generate the transcript.

4.Save Transcript to Text File.

Example Python Script:

import whisper
from pytube import YouTube

def download_audio(url, filename="audio.mp4"):
    yt = YouTube(url)
    audio_stream = yt.streams.filter(only_audio=True).first()
    audio_stream.download(filename=filename)
    return filename

def transcribe_audio(file):
    model = whisper.load_model("base")
    result = model.transcribe(file)
    return result["text"]

def save_transcript(text, filename="transcript.txt"):
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)

if __name__ == "__main__":
    url = "https://www.youtube.com/watch?v=YOUR_VIDEO_ID"
    audio_file = download_audio(url)
    transcript = transcribe_audio(audio_file)
    save_transcript(transcript)
    print("Transcription saved to transcript.txt")
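pytube breaks regularly when YouTube changes its page markup, so yt-dlp is usually the more robust choice for the download step. Below is a minimal sketch of the same download helper using the yt_dlp Python API; the option values and the function name download_audio_ytdlp are assumptions, not part of the script above:

```python
def build_opts(filename="audio"):
    # "bestaudio/best" skips the video track to save bandwidth; outtmpl sets
    # the output name and lets yt-dlp append the real container extension.
    return {"format": "bestaudio/best", "outtmpl": filename + ".%(ext)s"}

def download_audio_ytdlp(url, filename="audio"):
    from yt_dlp import YoutubeDL  # pip install yt-dlp
    with YoutubeDL(build_opts(filename)) as ydl:
        info = ydl.extract_info(url, download=True)
        return ydl.prepare_filename(info)  # actual path of the saved file
```

Whisper accepts whatever container yt-dlp produces, as long as ffmpeg is on PATH.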

Additional Notes:

  • Whisper is free to use locally if you run the model on your machine.

  • You do not need API keys unless using the online OpenAI API for Whisper, which may have usage costs.

  • The model versions range from tiny (fast but less accurate) to large (best accuracy but resource intensive).

  • ffmpeg is required for media processing; installation instructions are at https://ffmpeg.org/.

  • This workflow downloads audio only to save bandwidth and simplify processing.

3.Real-time YouTube Video Transcription – Client
# whisper_live_real_time.py
#
# Streams audio from a Voicemeeter capture device to a local whisper_live
# server. The transcribed text is printed automatically by TranscriptionClient
# whenever a sentence is fully transcribed; there is no explicit print() for
# the transcript anywhere in this script – client.feed_audio(audio) hands the
# samples to the library, which manages transcription and printing internally.

from whisper_live.client import TranscriptionClient
import sounddevice as sd
import time
import numpy as np

DEVICE_ID = 26   # your working Voicemeeter device (see sd.query_devices())

client = TranscriptionClient(
    "localhost",        # host of the whisper_live server (here: this machine)
    port=9090,          # must match the port the server listens on
    lang="en",
    model="medium",     # or "large-v3" for best accuracy
    use_vad=True,       # Voice Activity Detection: detects when speech starts
                        # and stops, so text is emitted only after a complete
                        # spoken segment ends, not during silence or
                        # mid-sentence.
)

# This callback is never called explicitly. sounddevice's InputStream captures
# audio in a background thread and invokes the callback automatically for every
# captured block, so the program receives audio continuously in small chunks
# without blocking the main loop.
def callback(indata, frames, time, status):
    audio = indata.copy().mean(axis=1).astype(np.float32)  # downmix to mono float32
    client.feed_audio(audio)  # the client transcribes (and prints) internally

stream = sd.InputStream(samplerate=16000, device=DEVICE_ID, channels=1,
                        dtype='float32', blocksize=512, callback=callback)
stream.start()

print("\nLIVE CAPTIONS ACTIVE – full sentences, no repetition!\n")

# The loop only keeps the process alive; capture and transcription run
# asynchronously in the background. time.sleep(0.1) avoids burning CPU while
# waiting, Ctrl+C (KeyboardInterrupt) stops the session gracefully, and the
# finally block releases the audio stream before the program exits.
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print("\nStopped.")
finally:
    stream.stop()
    stream.close()
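Hardcoding DEVICE_ID = 26 is brittle, because sounddevice device indices differ between machines and can change after reboots. A small helper can locate the device by name instead; sd.query_devices() yields dicts with "name" and "max_input_channels" keys (the "Stereo Mix" name below is just an example):

```python
def find_input_device(devices, name_substring):
    """Index of the first input-capable device whose name contains name_substring."""
    for idx, dev in enumerate(devices):
        if dev["max_input_channels"] > 0 and name_substring.lower() in dev["name"].lower():
            return idx
    return None  # no matching input device found

# usage:
# import sounddevice as sd
# DEVICE_ID = find_input_device(sd.query_devices(), "Stereo Mix")
```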


4.Real-time YouTube Video Transcription – Server
# whisper-server.py
import whisper

# Pre-load the "medium" Whisper model: this downloads the model files if they
# are not already present and caches them under download_root.
model = whisper.load_model("medium", download_root="C:/Users/szdav/.cache/whisper-live")

from whisper_live.server import TranscriptionServer

server = TranscriptionServer()
server.run("0.0.0.0", 9090)  # runs the server on all interfaces at port 9090
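Before launching the client, it is worth checking that the server is actually listening on port 9090. A stdlib-only probe (host and port match the scripts above):

```python
import socket

def server_reachable(host="localhost", port=9090, timeout=1.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```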
5.How to install and start a whisper_live server

1.Install onnxruntime
The CPU-only version is recommended unless you have a compatible GPU and want GPU acceleration:

pip install onnxruntime

If you want GPU support for onnxruntime (with CUDA), you can install:

pip install onnxruntime-gpu

but ensure you have compatible CUDA drivers installed.

You can verify by importing the packages in Python:

import onnxruntime
print("Packages installed correctly!")

Warning:

.venv\Lib\site-packages\onnxruntime\capi\onnxruntime_validation.py:26: UserWarning: Unsupported Windows version (11). ONNX Runtime supports Windows 10 and above, only.
  warnings.warn(

This is just an informational warning: ONNX Runtime's OS-version detection has not been fully updated to recognize Windows 11 explicitly, even though the runtime supports Windows 10 and later, including Windows 11.
The warning does not affect the functionality or performance of ONNX Runtime on Windows 11 and is safe to ignore.
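If the notice clutters your logs, it can be silenced with a standard warnings filter installed before onnxruntime is imported (the message pattern matches the warning text shown above):

```python
import warnings

# Must run before "import onnxruntime", which emits the warning at import time.
warnings.filterwarnings(
    "ignore",
    message="Unsupported Windows version",
    category=UserWarning,
)
```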

2.Install websockets package

pip install websockets

3.Install ffmpeg
Download FFmpeg for Windows:
Go to the official FFmpeg builds by gyan.dev: https://www.gyan.dev/ffmpeg/builds/
Download the “ffmpeg-git-full.7z” for the latest full build.

Extract the Downloaded File:
Use a tool like 7-Zip (https://www.7-zip.org/) to extract the .7z archive.
Extract it to a folder such as C:\ffmpeg.

Add FFmpeg to System PATH:
– Open “Edit the system environment variables” (search in Start menu).
– Click Environment Variables…
– In System variables, select Path and click Edit…
– Click New, then add the path to FFmpeg’s bin folder, e.g. C:\ffmpeg\bin
– Click OK on all dialogs to apply.

Verify Installation:
Open a new Command Prompt and type:

ffmpeg -version

If installed correctly, you will see version info and configuration details.
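The same check can be done from Python, which is handy inside the transcription scripts themselves (a stdlib sketch):

```python
import shutil

def ffmpeg_available():
    """True if an ffmpeg executable is found on PATH."""
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found - add C:\\ffmpeg\\bin to PATH")
```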

4.Download Whisper model files
Whisper model files are downloaded automatically the first time you load a specific model in your script. By default, on Windows, these model files are stored in this directory:

C:\Users\<username>\.cache\whisper\

For example, loading the “medium” model downloads and stores it at:

C:\Users\<username>\.cache\whisper\medium

You can run a small Python snippet to explicitly load a model and force downloading the files to your cache directory, for example:

import whisper
model = whisper.load_model("medium", download_root="C:/your/custom/folder")

This downloads the models to the given folder if not already present.
Also, you can use the command line to specify your preferred model directory when downloading.
Once downloaded, the model files do not need to be downloaded again unless removed.

Summary:
– Whisper models download automatically on first use.
– Default cache directory (Windows): C:\Users\<username>\.cache\whisper
– Use download_root in whisper.load_model to specify a custom path.
– Models are large (~1.5 GB for medium), so pre-download if bandwidth is limited.
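A quick way to check whether a model is already cached before triggering a large download: Whisper stores each checkpoint as a single <name>.pt file under the cache directory (the default-path logic below is a sketch that mirrors the Windows location described above; check your actual cache folder):

```python
import os

def whisper_model_path(name, download_root=None):
    """Expected on-disk path of a Whisper checkpoint (stored as <name>.pt)."""
    root = download_root or os.path.join(os.path.expanduser("~"), ".cache", "whisper")
    return os.path.join(root, name + ".pt")

# e.g. skip the ~1.5 GB download if the file is already present:
# if os.path.exists(whisper_model_path("medium")): ...
```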

5.Install whisper-live package
You can install the whisper_live server package via pip:

pip install whisper-live

Alternatively, clone the GitHub repo and install from source.

6.Run the server
Use a Python script or interactive session to run the WebSocket transcription server on a desired host and port (e.g. localhost and 9090):

python whisper-server.py

This will start a WebSocket server that accepts audio input and provides live transcription using Whisper.

7.Client connection
Your transcription client script should connect to the same host and port used by the server, e.g. localhost:9090.

8.Optional: Run with Docker for GPU support
If you have an Nvidia GPU, you can build and run a Docker container for whisper_live with GPU acceleration:

docker build . -t whisper-live -f docker/Dockerfile.gpu
docker run -it --gpus all -p 9090:9090 whisper-live:latest
6.Errors
  1. Client/package version mismatch
Traceback (most recent call last):
  File "C:\Users\szdav\PycharmProjects\winhide\whisper-stream.py", line 86, in <module>
    client(
TypeError: TranscriptionTeeClient.__call__() got an unexpected keyword argument 'device'

Your script targets the original whisper-live API, in which client() accepts a device= keyword, but the environment actually has the newer fork of whisper-live by ahmetoner, which renamed the class to TranscriptionTeeClient and removed the device keyword from __call__ in its most recent versions (after ~June 2025).
That fork is now very popular and is what most people get when they run pip install whisper-live.
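Before debugging further, confirm which package versions your environment actually has. The stdlib importlib.metadata works for any pip-installed package:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Installed version string for package, or None if it is not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print("whisper-live:", installed_version("whisper-live"))
print("openai-whisper:", installed_version("openai-whisper"))
```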

7.Start over
cd C:\Users\szdav\PycharmProjects\winhide
rmdir /s .venv
python -m venv .venv
.venv\Scripts\activate
pip install --upgrade pip
pip install openai-whisper sounddevice numpy

Working script:

# Perfect real-time full-sentence captions with original OpenAI Whisper + Voicemeeter - November 2025
import whisper
import sounddevice as sd
import numpy as np
import queue
import threading

print("Loading medium model... (first time 10-30 seconds)")
model = whisper.load_model("medium")   # or "large-v3" for best accuracy, "small" for faster

print(sd.query_devices())
DEVICE_ID = 26  # Your Voicemeeter device

q = queue.Queue()
silence_threshold = 0.02  # adjust if needed
buffer = np.array([], dtype=np.float32)
min_audio_length = 16000 * 2  # at least 2 seconds before transcribing

def callback(indata, frames, time, status):
    audio = indata.copy().mean(axis=1).astype(np.float32)
    q.put(audio)

def process_buffer():
    global buffer
    while True:
        audio_chunk = q.get()
        if audio_chunk is None:
            break
        buffer = np.append(buffer, audio_chunk)
        # Simple energy-based VAD
        if np.mean(np.abs(buffer[-16000:])) < silence_threshold and len(buffer) > min_audio_length:
            # Silence detected after speech → transcribe
            result = model.transcribe(buffer, language="en", fp16=False)
            text = result["text"].strip()
            if text:
                print(text)
            buffer = np.array([], dtype=np.float32)  # clear buffer after sentence

threading.Thread(target=process_buffer, daemon=True).start()

print("\nLIVE CAPTIONS ACTIVE – Perfect full sentences from Voicemeeter!\n")
with sd.InputStream(samplerate=16000, device=DEVICE_ID, channels=1, dtype='float32', blocksize=512, callback=callback):
    try:
        while True:
            sd.sleep(1000)
    except KeyboardInterrupt:
        q.put(None)
        print("\nStopped.")
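The silence_threshold = 0.02 value is device-dependent. To calibrate it, print the same energy measure the script's VAD check uses (mean absolute amplitude) during a second of known silence and a second of speech, then pick a threshold between the two readings. A sketch of the measure:

```python
import numpy as np

def level(chunk):
    """Mean absolute amplitude of a float32 audio chunk in [-1, 1] -
    the same energy measure the script's silence check uses."""
    return float(np.mean(np.abs(chunk)))

# e.g. temporarily inside the callback: print(f"{level(audio):.4f}")
```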

(.venv) C:\Users\szdav\PycharmProjects\winhide>python whisper_live.py
Loading medium model... (first time 10-30 seconds)
    0 Microsoft Sound Mapper - Input, MME (2 in, 0 out)
>   1 Stereo Mix (Realtek(R) Audio), MME (2 in, 0 out)
    2 Voicemeeter Out B2 (VB-Audio Vo, MME (8 in, 0 out)
    3 Voicemeeter Out A3 (VB-Audio Vo, MME (8 in, 0 out)
    4 Voicemeeter Out A2 (VB-Audio Vo, MME (8 in, 0 out)
    5 Voicemeeter Out A4 (VB-Audio Vo, MME (8 in, 0 out)
    6 Voicemeeter Out A1 (VB-Audio Vo, MME (8 in, 0 out)
    7 CABLE Output (VB-Audio Virtual , MME (16 in, 0 out)
    8 Voicemeeter Out A5 (VB-Audio Vo, MME (8 in, 0 out)
    9 Voicemeeter Out B1 (VB-Audio Vo, MME (8 in, 0 out)
   10 Voicemeeter Out B3 (VB-Audio Vo, MME (8 in, 0 out)
   11 Microphone (Realtek(R) Audio), MME (2 in, 0 out)
   12 Microsoft Sound Mapper - Output, MME (0 in, 2 out)
<  13 Voicemeeter Input (VB-Audio Voi, MME (0 in, 8 out)
   14 CABLE Input (VB-Audio Virtual C, MME (0 in, 16 out)
   15 Voicemeeter In 5 (VB-Audio Voic, MME (0 in, 8 out)
   16 Voicemeeter In 3 (VB-Audio Voic, MME (0 in, 8 out)
   17 Speakers (Realtek(R) Audio), MME (0 in, 2 out)
   18 1 - GF276 (AMD High Definition , MME (0 in, 2 out)
   19 Voicemeeter In 4 (VB-Audio Voic, MME (0 in, 8 out)
   20 Voicemeeter In 1 (VB-Audio Voic, MME (0 in, 8 out)
   21 Voicemeeter In 2 (VB-Audio Voic, MME (0 in, 8 out)
   22 Voicemeeter AUX Input (VB-Audio, MME (0 in, 8 out)
   23 Voicemeeter VAIO3 Input (VB-Aud, MME (0 in, 8 out)
   24 CABLE In 16ch (VB-Audio Virtual, MME (0 in, 16 out)
   25 Primary Sound Capture Driver, Windows DirectSound (2 in, 0 out)
   26 Stereo Mix (Realtek(R) Audio), Windows DirectSound (2 in, 0 out)
   27 Voicemeeter Out B2 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   28 Voicemeeter Out A3 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   29 Voicemeeter Out A2 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   30 Voicemeeter Out A4 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   31 Voicemeeter Out A1 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   32 CABLE Output (VB-Audio Virtual Cable), Windows DirectSound (16 in, 0 out)
   33 Voicemeeter Out A5 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   34 Voicemeeter Out B1 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   35 Voicemeeter Out B3 (VB-Audio Voicemeeter VAIO), Windows DirectSound (8 in, 0 out)
   36 Microphone (Realtek(R) Audio), Windows DirectSound (2 in, 0 out)
   37 Primary Sound Driver, Windows DirectSound (0 in, 2 out)
   38 Voicemeeter Input (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   39 CABLE Input (VB-Audio Virtual Cable), Windows DirectSound (0 in, 16 out)
   40 Voicemeeter In 5 (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   41 Voicemeeter In 3 (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   42 Speakers (Realtek(R) Audio), Windows DirectSound (0 in, 2 out)
   43 1 - GF276 (AMD High Definition Audio Device), Windows DirectSound (0 in, 2 out)
   44 Voicemeeter In 4 (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   45 Voicemeeter In 1 (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   46 Voicemeeter In 2 (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   47 Voicemeeter AUX Input (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   48 Voicemeeter VAIO3 Input (VB-Audio Voicemeeter VAIO), Windows DirectSound (0 in, 8 out)
   49 CABLE In 16ch (VB-Audio Virtual Cable), Windows DirectSound (0 in, 16 out)
   50 CABLE Input (VB-Audio Virtual Cable), Windows WASAPI (0 in, 2 out)
   51 Voicemeeter In 5 (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   52 Voicemeeter In 3 (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   53 Speakers (Realtek(R) Audio), Windows WASAPI (0 in, 2 out)
   54 1 - GF276 (AMD High Definition Audio Device), Windows WASAPI (0 in, 2 out)
   55 Voicemeeter Input (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   56 Voicemeeter In 4 (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   57 Voicemeeter In 1 (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   58 Voicemeeter In 2 (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   59 Voicemeeter AUX Input (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   60 Voicemeeter VAIO3 Input (VB-Audio Voicemeeter VAIO), Windows WASAPI (0 in, 2 out)
   61 CABLE In 16ch (VB-Audio Virtual Cable), Windows WASAPI (0 in, 2 out)
   62 Voicemeeter Out B2 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   63 Voicemeeter Out A3 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   64 Voicemeeter Out A2 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   65 Voicemeeter Out A4 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   66 Voicemeeter Out A1 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   67 CABLE Output (VB-Audio Virtual Cable), Windows WASAPI (2 in, 0 out)
   68 Voicemeeter Out A5 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   69 Stereo Mix (Realtek(R) Audio), Windows WASAPI (2 in, 0 out)
   70 Voicemeeter Out B1 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   71 Voicemeeter Out B3 (VB-Audio Voicemeeter VAIO), Windows WASAPI (2 in, 0 out)
   72 Microphone (Realtek(R) Audio), Windows WASAPI (2 in, 0 out)
   73 CABLE Output (VB-Audio Point), Windows WDM-KS (16 in, 0 out)
   74 Output (VB-Audio Point), Windows WDM-KS (0 in, 16 out)
   75 Input (VB-Audio Point), Windows WDM-KS (16 in, 0 out)
   76 Speakers 1 (Realtek HD Audio output with HAP), Windows WDM-KS (0 in, 2 out)
   77 Speakers 2 (Realtek HD Audio output with HAP), Windows WDM-KS (0 in, 2 out)
   78 PC Speaker (Realtek HD Audio output with HAP), Windows WDM-KS (2 in, 0 out)
   79 Headphones 1 (Realtek HD Audio 2nd output with HAP), Windows WDM-KS (0 in, 2 out)
   80 Headphones 2 (Realtek HD Audio 2nd output with HAP), Windows WDM-KS (0 in, 2 out)
   81 PC Speaker (Realtek HD Audio 2nd output with HAP), Windows WDM-KS (2 in, 0 out)
   82 Microphone (Realtek HD Audio Mic input), Windows WDM-KS (2 in, 0 out)
   83 Stereo Mix (Realtek HD Audio Stereo input), Windows WDM-KS (2 in, 0 out)
   84 Output (AMD HD Audio HDMI out #0), Windows WDM-KS (0 in, 2 out)
   85 Voicemeeter Out 3 (Voicemeeter Point 3), Windows WDM-KS (8 in, 0 out)
   86 Voicemeeter Out 6 (Voicemeeter Point 6), Windows WDM-KS (8 in, 0 out)
   87 Output (Voicemeeter Point 2), Windows WDM-KS (0 in, 8 out)
   88 Input (Voicemeeter Point 2), Windows WDM-KS (8 in, 0 out)
   89 Output (Voicemeeter Point 5), Windows WDM-KS (0 in, 8 out)
   90 Input (Voicemeeter Point 5), Windows WDM-KS (8 in, 0 out)
   91 Output (Voicemeeter Point 8), Windows WDM-KS (0 in, 8 out)
   92 Input (Voicemeeter Point 8), Windows WDM-KS (8 in, 0 out)
   93 Voicemeeter Out 1 (Voicemeeter Point 1), Windows WDM-KS (8 in, 0 out)
   94 Voicemeeter Out 4 (Voicemeeter Point 4), Windows WDM-KS (8 in, 0 out)
   95 Voicemeeter Out 7 (Voicemeeter Point 7), Windows WDM-KS (8 in, 0 out)
   96 Output (Voicemeeter Point 3), Windows WDM-KS (0 in, 8 out)
   97 Input (Voicemeeter Point 3), Windows WDM-KS (8 in, 0 out)
   98 Output (Voicemeeter Point 6), Windows WDM-KS (0 in, 8 out)
   99 Input (Voicemeeter Point 6), Windows WDM-KS (8 in, 0 out)
  100 Voicemeeter Out 2 (Voicemeeter Point 2), Windows WDM-KS (8 in, 0 out)
  101 Voicemeeter Out 5 (Voicemeeter Point 5), Windows WDM-KS (8 in, 0 out)
  102 Voicemeeter Out 8 (Voicemeeter Point 8), Windows WDM-KS (8 in, 0 out)
  103 Output (Voicemeeter Point 1), Windows WDM-KS (0 in, 8 out)
  104 Input (Voicemeeter Point 1), Windows WDM-KS (8 in, 0 out)
  105 Output (Voicemeeter Point 4), Windows WDM-KS (0 in, 8 out)
  106 Input (Voicemeeter Point 4), Windows WDM-KS (8 in, 0 out)
  107 Output (Voicemeeter Point 7), Windows WDM-KS (0 in, 8 out)
  108 Input (Voicemeeter Point 7), Windows WDM-KS (8 in, 0 out)
  109 Headset Earphone (@System32\drivers\bthhfenum.sys,#2;%1 Hands-Free%0
;(SRS-XB12)), Windows WDM-KS (0 in, 1 out)
  110 Headset Microphone (@System32\drivers\bthhfenum.sys,#2;%1 Hands-Free%0
;(SRS-XB12)), Windows WDM-KS (1 in, 0 out)
  111 Speakers (), Windows WDM-KS (0 in, 2 out)

LIVE CAPTIONS ACTIVE – Perfect full sentences from Voicemeeter!

Now I'm going to install the new filter on.
Always take a quick check on your
mating surface of your
your motor to your filter to make sure there isn't a.
a gasket left on there. We don't want a double gasket.
at that, that would create a leak.
So now we're going to put this filter on.
Always
start your oil filter by hand. You don't want to.
cross thread something.
instructions say to go once this
touches so here it is just touching the
mating surface.
