
🧠 Naren GPT - Your Personal Local AI Chat Assistant

Naren GPT is a self-hosted, privacy-first conversational assistant that brings ChatGPT-like capabilities directly to your desktop - no subscriptions, no limits, no cloud dependencies. Powered by Ollama, it lets you run powerful LLMs (such as LLaMA 3, Qwen, and Mistral) locally with a clean, persistent chat interface.



πŸ‘‰πŸΌ https://github.com/naren4b/naren-gpt

✨ Features

  • ✅ Runs locally - your data never leaves your machine
  • ✅ No per-token or monthly subscription costs
  • ✅ Works offline - no internet connection required
  • ✅ No rate limits or usage throttling
  • ✅ Fast, low-latency responses
  • ✅ Easy model switching with Ollama
  • ✅ Use arbitrary prompt templates, agents, and pipelines
  • ✅ Extend with voice, vision, or image generation
  • ✅ Compatible with CPUs and GPUs - no special hardware needed

🏁 Getting Started

1. Prerequisites

  • Python 3.10+
  • Ollama installed: curl -fsSL https://ollama.com/install.sh | sh
  • At least 8–16 GB RAM for optimal performance (depending on model)
  • Streamlit installed: pip install streamlit
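Before cloning, you can sanity-check the prerequisites with a short shell snippet (a sketch; the `ollama` check only reports whether the binary is on your PATH):

```shell
# Check the Python version (should be 3.10 or newer)
python3 --version

# Check whether the Ollama binary is installed without failing the script
if command -v ollama >/dev/null 2>&1; then
  echo "ollama: found"
else
  echo "ollama: not found - install with: curl -fsSL https://ollama.com/install.sh | sh"
fi
```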

2. Clone and Run

```shell
# Server mode: start Ollama
ollama serve

# Pull the model
ollama pull llama3

# Verify the server is up and list available models
curl http://127.0.0.1:11434/api/tags

# Clone and run the app
git clone https://github.com/naren4b/naren-gpt.git
cd naren-gpt
# python3 -m pip install --upgrade pip
pip install -r requirements.txt
bash run.sh
```
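Under the hood, the app talks to Ollama's local HTTP API. Here is a minimal sketch of that call in plain Python (the `build_request` and `ask` helpers are illustrative, not part of the repo; assumes the server is running on its default port 11434 and that `llama3` has been pulled):

```python
import json
import urllib.request

# Default endpoint for Ollama's one-shot generation API
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask("Why is the sky blue?")` returns the model's full reply as a string; setting `"stream": True` instead would yield the response incrementally.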

Open the browser at:

http://localhost:8501

*Note: responses may be slow on local machines, depending on the model and hardware.

Naren GPT - A Local AI Chat App using Streamlit + Ollama + LLMs

Naren GPT is a lightweight, locally hosted AI assistant built with Streamlit and powered by Ollama and open-source LLMs like LLaMA 3. It provides a clean, persistent chat interface - similar to ChatGPT - but with full control, privacy, and offline functionality.

When to Self-Host?

  • Scale: cost-effective once GPU utilization is high.
  • Performance: better for specialized workloads (e.g., RAG, embeddings).
  • Privacy/Sovereignty: legal or regulatory constraints, on-prem, or hybrid/multi-cloud deployments.

🚀 Features

  • 🧠 Supports local LLMs (LLaMA 3, Mistral, etc.)
  • 💬 Persistent multi-turn chat with conversation memory
  • 🎨 Simple, clean UI built using Streamlit
  • 🔒 Runs completely offline - your data stays with you
  • ⚙️ Easily customizable and extendable