Easiest way to get your own Local AI: Ollama | Docker WSL Tutorial

With Ollama Web UI you’ll not only get the easiest way to run your own Local AI on your computer (thanks to Ollama), but you also get OllamaHub support, where you can find Prompts, Modelfiles (to give your AI a personality) and more, all of it powered by the community (a quick Modelfile sketch follows below).
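A Modelfile, by the way, is just a small text file that layers a persona on top of a base model. A minimal sketch, assuming the "mistral" base model and a made-up persona (my own example, not from the video):

  # Modelfile: build a custom persona on top of a base model
  FROM mistral
  PARAMETER temperature 0.8
  SYSTEM """You are a cheerful tech-support assistant who explains things in plain language."""

  # Register it with Ollama, then chat with it:
  ollama create my-assistant -f Modelfile
  ollama run my-assistant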

FixtSE Web: https://fixtse.com/blog/ollama-webui

Main Project Page: https://ollama.ai/

00:00 Prerequisites
00:47 Install it on WSL
02:06 Docker Installation (Linux/WSL)
03:21 Activate GPU Compatibility
04:11 Installation
05:00 How to update it
05:19 Ollama WebUI
05:42 Install a New Model
06:36 Use your new model
07:17 OllamaHub
09:00 Windows Limitations
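
For quick reference, the install chapters above boil down to a few terminal commands. A rough sketch, assuming Ubuntu on WSL2 with an NVIDIA card (the NVIDIA repo-setup steps are omitted here; follow the blog post above for the full walkthrough):

  # Install Ollama inside WSL (official install script)
  curl https://ollama.ai/install.sh | sh

  # Install Docker via the convenience script
  curl -fsSL https://get.docker.com | sh

  # Activate GPU compatibility in Docker (NVIDIA Container Toolkit)
  sudo apt-get install -y nvidia-container-toolkit
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker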
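The Web UI itself runs as a single Docker container. The command below mirrors the project’s README at the time of the video; treat the image name, tag, and port mapping as assumptions that may have changed since:

  # Run Ollama Web UI on port 3000, talking to Ollama on the WSL host
  docker run -d -p 3000:8080 \
    --add-host=host.docker.internal:host-gateway \
    -v ollama-webui:/app/backend/data \
    --name ollama-webui --restart always \
    ghcr.io/ollama-webui/ollama-webui:main

  # Update it later with Watchtower (one-shot run)
  docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --run-once ollama-webui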
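Installing and using a new model also works from the terminal if you prefer it over the Web UI ("mistral" is just an example; pick any model from the Ollama library):

  # Download a model, then chat with it
  ollama pull mistral
  ollama run mistral "Explain WSL in one short paragraph."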

If you like my work, please consider supporting me on Ko-fi! : https://ko-fi.com/fixtse
Patreon: https://patreon.com/fixtSE
or join this channel to get access to perks:
https://www.youtube.com/channel/UCOY6oNxodGWbFg6CjXtae5g/join

You can find me on:
Web: https://fixtse.com/
Instagram: https://www.instagram.com/fixtse/

Hope this was useful, and if you have any questions, leave me a comment below.
Thank you for watching (~ ̄▽ ̄)~

Source: https://www.youtube.com/watch?v=oYgH0BWXzIE
