TSF – Comprehensive IT Solutions for SMBs | HCM

P9 - Build Local AI Telegram Bot Fast (Ollama Guide)

🚀 AI Tutorial – P9: Create a Local AI Telegram Bot with Ollama in Minutes

Building a Local AI Telegram Bot is one of the fastest ways to bring AI into real-world usage. By combining Ollama with a simple Python script, you can create a powerful chatbot that runs entirely on your local machine.

In this guide, you’ll learn how to deploy a Telegram bot, connect it to a local AI model, and run it as a background service for continuous operation.


🎯 Why Use a Local AI Telegram Bot?

Running a local AI bot offers several advantages:

  • 🔒 Full privacy (no external API calls)
  • ⚡ Instant response from local models
  • 💰 Zero API cost
  • 🔧 Fully customizable behavior

🔗 Part 1: Connecting the Telegram Bot


🤖 Step 1: Create the Telegram Bot

Create your bot via Telegram (BotFather) and obtain your API token. BotFather replies with a message like:

 
Use this token to access the HTTP API: <your-bot-token>
 

Keep this token secret: anyone who has it can send messages as your bot.

📁 Step 2: Create Bot Directory

On your Ubuntu server:

 
mkdir telegram-ai-bot
cd telegram-ai-bot
 

📦 Step 3: Install Python and Libraries

Install pip (if not available):

 
sudo apt install python3-pip -y
 

Install required libraries:

 
pip install python-telegram-bot requests
 

Upgrade if needed:

 
pip install --upgrade python-telegram-bot
 

💻 Step 4: Create the Bot Script

Create the Python file:

 
nano bot.py
 

Paste the following code:

 
import requests
from telegram import Update
from telegram.ext import ApplicationBuilder, MessageHandler, filters, ContextTypes

TOKEN = ""  # paste your BotFather token here
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen2.5:7b-instruct"

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    user_text = update.message.text

    # Non-streaming request to the local Ollama API
    data = {
        "model": MODEL,
        "prompt": user_text,
        "stream": False
    }

    response = requests.post(OLLAMA_URL, json=data, timeout=120)

    # Fall back to a short notice if the model returns nothing
    ai_reply = response.json().get("response", "No response from model.")

    await update.message.reply_text(ai_reply)

app = ApplicationBuilder().token(TOKEN).build()

# Handle plain text messages; ignore commands like /start
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))

print("Bot is running...")

app.run_polling()
 
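The script above assumes Ollama's non-streaming response carries the generated text in a top-level "response" field. A minimal sketch of parsing that shape defensively (the sample payload below is illustrative, not captured from a live server):

```python
import json

def extract_reply(raw: str) -> str:
    """Pull the generated text out of an Ollama /api/generate
    non-streaming response; fall back to a notice if the expected
    "response" field is missing."""
    payload = json.loads(raw)
    return payload.get("response", "Sorry, the model returned no text.")

# Illustrative sample of the non-streaming response shape
sample = '{"model": "qwen2.5:7b-instruct", "response": "Hello!", "done": true}'
print(extract_reply(sample))  # Hello!
```

Using `.get()` instead of direct indexing keeps the bot from crashing on an unexpected payload, e.g. when the model name is wrong and Ollama returns an error object instead.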

▶️ Step 5: Run the Bot

Start the bot:

 
python3 bot.py
 

If successful, you will see:

 
Bot is running...
 

🧪 Step 6: Test the Bot

  • Open Telegram
  • Search for your bot
  • Send a message

👉 The AI will respond using your local Ollama model.
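One thing to watch while testing: Telegram rejects messages longer than 4,096 characters, and local models can easily exceed that. A hedged sketch of a splitter you could call before `reply_text` (the 4096 limit is Telegram's documented maximum; the newline-preferring chunking strategy is this sketch's own choice):

```python
TELEGRAM_MAX = 4096  # Telegram's per-message character limit

def split_reply(text: str, limit: int = TELEGRAM_MAX) -> list[str]:
    """Split a long model reply into Telegram-sized chunks,
    preferring to break at a newline when one falls inside the limit."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit  # no newline available: hard cut at the limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

In `handle_message` you would then loop over `split_reply(ai_reply)` and call `reply_text` once per chunk.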


⚙️ Part 2: Run Bot as a Background Service


📍 Step 1: Determine Bot Path

Example path:

 
/home/bao/telegram-ai-bot/bot.py
 

Check current path:

 
pwd
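The absolute path systemd needs can also be printed in one step; a small sketch, assuming you run it from inside the telegram-ai-bot directory:

```shell
# Print the absolute path to bot.py for the ExecStart= line
echo "$(pwd)/bot.py"
```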
 

🛠️ Step 2: Create systemd Service

Create service file:

 
sudo nano /etc/systemd/system/telegram-ai-bot.service
 

Add the following content:

 
[Unit]
Description=Telegram AI Bot
After=network.target

[Service]
User=bao
WorkingDirectory=/home/bao/telegram-ai-bot
ExecStart=/usr/bin/python3 /home/bao/telegram-ai-bot/bot.py
Restart=always

[Install]
WantedBy=multi-user.target
 

Save the file.
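Hard-coding the token in bot.py works, but systemd can also inject it as an environment variable, keeping the secret out of the script. A sketch, assuming you change bot.py to read the token via `os.environ["TELEGRAM_TOKEN"]` (the variable name `TELEGRAM_TOKEN` is this sketch's own choice):

```ini
# Added to the [Service] section of telegram-ai-bot.service
Environment=TELEGRAM_TOKEN=<your-bot-token>
```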


🔄 Step 3: Enable and Start Service

Reload systemd:

 
sudo systemctl daemon-reload
 

Enable service:

 
sudo systemctl enable telegram-ai-bot
 

Start bot:

 
sudo systemctl start telegram-ai-bot
 

Check status:

 
sudo systemctl status telegram-ai-bot
 

If everything is working, the status output will show:

 
active (running)
 

✅ Final Result

After completing all steps, your Local AI Telegram Bot will be fully operational:

  • 🤖 Telegram bot connected to Ollama
  • 🧠 Running Qwen AI model locally
  • 🔄 Auto-start with systemd
  • ⚡ Real-time AI responses

💡 Use Cases

This setup is ideal for:

  • Personal AI assistants
  • Internal business bots
  • Automation workflows
  • AI-powered chat systems

🎯 Final Thoughts

Creating a Local AI Telegram Bot with Ollama is one of the simplest and most powerful ways to deploy AI in real-world scenarios.

With just a few steps, you can:

  • Run AI locally without cloud dependency
  • Integrate AI into messaging platforms
  • Build scalable and automated systems

🚀 Follow this series to explore more advanced AI integrations like multi-account routing, OpenClaw integration, and automation pipelines.
