
What You're Actually Building

We're adding a chatbot to a web app: the user types a message, the backend sends it to the OpenAI API, and the app displays the response. We'll include conversation history, with streaming as an optional upgrade. Let's build it.

Setup — Python (Django/Flask)

Install & Configure
pip install openai python-decouple

# .env
OPENAI_API_KEY=sk-...
Python — Basic Chat Completion
from openai import OpenAI
from decouple import config

client = OpenAI(api_key=config('OPENAI_API_KEY'))

def chat(messages: list) -> str:
    """
    messages format:
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Tell me about Tunisia"}
    ]
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Cheapest, very capable
        messages=messages,
        max_tokens=500,
        temperature=0.7
    )
    return response.choices[0].message.content
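
Before sending a request, it helps to sanity-check roughly how many tokens a conversation will consume, since you pay per token. A minimal sketch using the common rule of thumb of about 4 characters per token (the helper name and per-message overhead are illustrative; for exact counts you'd use OpenAI's tiktoken library):

```python
def estimate_tokens(messages: list) -> int:
    """Rough token estimate: ~4 characters per token, plus a small
    per-message overhead for role/formatting tokens."""
    total_chars = sum(len(m["content"]) for m in messages)
    return total_chars // 4 + 4 * len(messages)

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about Tunisia"},
]
print(estimate_tokens(history))  # a rough estimate, not an exact count
```

This is deliberately crude, but it's enough to warn you before a long history blows past your `max_tokens` budget.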

Django API Endpoint

views.py — Chat API Endpoint
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

# chat() is the helper defined above; import it from the module where you put it

conversation_history = {}  # In-memory only; use Redis in production!

@csrf_exempt
def chat_view(request):
    if request.method != 'POST':
        return JsonResponse({'error': 'POST only'}, status=405)
    
    try:
        data = json.loads(request.body)
    except json.JSONDecodeError:
        return JsonResponse({'error': 'Invalid JSON'}, status=400)
    session_id = data.get('session_id', 'default')
    user_message = data.get('message', '')
    if not user_message:
        return JsonResponse({'error': 'Empty message'}, status=400)
    
    # Get or create conversation history
    if session_id not in conversation_history:
        conversation_history[session_id] = [
            {"role": "system", "content": "You are a helpful assistant for a Tunisian software agency."}
        ]
    
    # Add user message
    conversation_history[session_id].append({
        "role": "user", "content": user_message
    })
    
    # Get AI response
    reply = chat(conversation_history[session_id])
    
    # Add to history
    conversation_history[session_id].append({
        "role": "assistant", "content": reply
    })
    
    return JsonResponse({'reply': reply})
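
Each turn appends two messages, so the history (and your token bill) grows without bound. A minimal trimming sketch you could apply to the session list before passing it to `chat()`; the helper name and the 20-message cap are illustrative choices, not part of any API:

```python
def trim_history(messages: list, max_messages: int = 20) -> list:
    """Keep the system prompt plus the most recent messages."""
    if len(messages) <= max_messages:
        return messages
    # Preserve system prompts; keep only the tail of the conversation
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-(max_messages - len(system)):]
```

Smarter strategies exist (summarizing old turns into a single message, for example), but a hard cap is the simplest guard against runaway costs.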

Streaming — Real-Time Response

Streaming Response (like ChatGPT)
from django.http import StreamingHttpResponse

def stream_chat(request):
    user_message = request.GET.get('message', '')
    
    def generate():
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
            stream=True
        )
        for chunk in stream:
            # Some chunks arrive with an empty choices list or a None delta
            if chunk.choices and chunk.choices[0].delta.content:
                yield f"data: {chunk.choices[0].delta.content}\n\n"
        yield "data: [DONE]\n\n"
    
    return StreamingHttpResponse(
        generate(),
        content_type='text/event-stream'
    )
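
On the client side, each server-sent event arrives as a `data: ...` line followed by a blank line. A minimal pure-Python sketch of reassembling such a stream (the sample payload and function name are illustrative):

```python
def parse_sse(raw: str) -> str:
    """Concatenate the data payloads of an SSE stream, stopping at [DONE]."""
    parts = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        parts.append(payload)
    return "".join(parts)

raw = "data: Hello\n\ndata:  world\n\ndata: [DONE]\n\n"
print(parse_sse(raw))  # Hello world
```

One caveat with the raw-text framing used above: a model chunk containing a newline would break the SSE format. A more robust variant JSON-encodes each chunk before yielding it and decodes on the client.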

GPT-4o-mini costs about $0.15 per million input tokens and $0.60 per million output tokens. A typical chat message is 50-100 tokens. You can handle thousands of conversations for a few dollars per month.
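
That claim is easy to check with back-of-the-envelope arithmetic. The traffic numbers below are illustrative assumptions, not measurements; plug in your own:

```python
INPUT_PRICE = 0.15 / 1_000_000   # $ per input token (gpt-4o-mini)
OUTPUT_PRICE = 0.60 / 1_000_000  # $ per output token (gpt-4o-mini)

conversations = 5_000   # per month (assumed)
turns = 6               # messages per conversation (assumed)
input_tokens = 300      # avg prompt + history per turn (assumed)
output_tokens = 150     # avg reply per turn (assumed)

monthly = conversations * turns * (
    input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
)
print(f"${monthly:.2f}/month")  # $4.05/month
```

Even at 30,000 API calls a month, the bill stays in single digits.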

Flutter — Calling the API

Flutter — Chat API Call
import 'package:http/http.dart' as http;
import 'dart:convert';

Future<String> sendMessage(String message, String sessionId) async {
  final response = await http.post(
    Uri.parse('https://your-api.com/chat/'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({
      'message': message,
      'session_id': sessionId,
    }),
  );
  
  if (response.statusCode == 200) {
    final data = jsonDecode(response.body);
    return data['reply'];
  }
  throw Exception('Failed to get response');
}

What to Build With This

Customer support bot for your SaaS. FAQ assistant for a Tunisian e-commerce site. Document analyzer for a legal app. Product recommender. The OpenAI API is the fastest way to add intelligence to any app.
