FastAPI + LangChain: Building an AI Backend in 2026 (3 Key Steps)
Artificial intelligence (AI) technologies are advancing faster than ever. AI agents, RAG (Retrieval-Augmented Generation) systems, and the arrival of the latest language models such as Claude Opus 4.5, Gemini 3, and GPT-5.2 are creating demand for tailored solutions in every industry. In particular, the ability to build backend systems with complex AI functionality quickly and efficiently has become essential. In this article we walk through the 3 key steps of building a modern AI backend with FastAPI and LangChain. These technologies let you build powerful, flexible, and easily extensible systems.
Why Are FastAPI and LangChain an Ideal Choice for an AI Backend?
FastAPI is a modern, high-performance web framework for Python, built on standard Python type hints. Its main advantages are:
- Speed: FastAPI is built on Starlette and Pydantic and is among the fastest Python frameworks, with performance competitive with Node.js and Go.
- Faster development: roughly 30-40% faster feature development.
- Rich features: it ships with automatic interactive documentation (Swagger UI and ReDoc).
- Fewer bugs: type hints and Pydantic validation catch many errors before they reach production.
LangChain, in turn, is a toolkit designed to make it easy to integrate and work with large language models (LLMs). With it you can build agents, chains, memory, and orchestration of how agents interact. LangChain is especially effective for building RAG systems, giving LLMs access to external data sources. By 2026, combining the two is becoming standard practice for building AI service backends.
Step 1: Basic FastAPI Structure and LLM Integration
The first step in building an AI backend is setting up the basic structure of the FastAPI application and, most importantly, wiring up a connection to your chosen LLM. Using APIs of models such as GPT-5.2, Gemini 3, or Claude Opus 4.5 is common practice today.
Project Structure:
A simple project might look like this:
my_ai_backend/
│
├── app/
│   ├── __init__.py
│   ├── main.py              # main FastAPI application file
│   ├── api/
│   │   ├── __init__.py
│   │   └── endpoints.py     # API endpoints
│   ├── core/
│   │   ├── __init__.py
│   │   └── config.py        # configuration settings
│   ├── models/
│   │   ├── __init__.py
│   │   └── schemas.py       # Pydantic models
│   └── services/
│       ├── __init__.py
│       └── llm_service.py   # service for talking to the LLM
│
└── requirements.txt
The LLM Service (app/services/llm_service.py):
Here we use LangChain's LLM integrations. For example, to use the OpenAI API:
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import os

load_dotenv()

# Read the API key from an environment variable
openai_api_key = os.getenv("OPENAI_API_KEY")

def get_llm_model():
    return ChatOpenAI(
        model="gpt-4o",  # or the newest model available in 2026
        openai_api_key=openai_api_key,
    )

# A RAG system may call for a different model configuration
def get_rag_llm_model():
    return ChatOpenAI(
        model="gpt-5.2-turbo",  # dedicated RAG model
        openai_api_key=openai_api_key,
        temperature=0.1,  # lower creativity for RAG
    )
The FastAPI Application (app/main.py):
from fastapi import FastAPI
from .api.endpoints import router

app = FastAPI(
    title="AI Backend Service",
    description="FastAPI and LangChain powered AI backend.",
    version="1.0.0",
)
app.include_router(router)

# Because of the relative import above, in practice start the app from the
# project root with: uvicorn app.main:app --reload
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
The Endpoint (app/api/endpoints.py):
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from typing import List
from langchain.schema import AIMessage, HumanMessage, SystemMessage
from ..services.llm_service import get_llm_model

router = APIRouter()

class ChatRequest(BaseModel):
    message: str
    history: List[dict] = []

class ChatResponse(BaseModel):
    response: str

@router.post("/chat", response_model=ChatResponse)
async def chat_endpoint(request: ChatRequest):
    llm = get_llm_model()
    try:
        # Call the LLM via LangChain. For stateful conversations you could
        # also use a conversational chain; here we build the message list
        # directly: a system prompt, the prior turns, then the new message.
        messages = [SystemMessage(content="You are a helpful AI assistant.")]
        for m in request.history:
            # History items are dicts such as {"role": "user", "content": "..."}
            if m.get("role") == "assistant":
                messages.append(AIMessage(content=m["content"]))
            else:
                messages.append(HumanMessage(content=m["content"]))
        messages.append(HumanMessage(content=request.message))
        result = (await llm.ainvoke(messages)).content
        return ChatResponse(response=result)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
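For reference, a request body for POST /chat might look like this (the values are illustrative; each history item carries the message content, plus a role field if you track who said what):

```json
{
  "message": "And how do I deploy it?",
  "history": [
    {"role": "user", "content": "What is FastAPI?"},
    {"role": "assistant", "content": "FastAPI is a Python web framework."}
  ]
}
```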
At this point we have the core code needed to start the FastAPI server and serve a simple chatbot. The requirements.txt file should list packages such as fastapi, uvicorn, langchain, langchain-openai, and python-dotenv.
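A requirements.txt covering both this step and the RAG code in Step 2 might look like the following (versions omitted here; pin them in a real project; faiss-cpu and langchain-community are only needed for the FAISS-based RAG service):

```
fastapi
uvicorn
langchain
langchain-openai
langchain-community
faiss-cpu
python-dotenv
```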
Step 2: Integrating LangChain Agents and RAG Systems
Modern AI backends do more than answer questions. They need to handle complex tasks such as searching for information, processing data, and even working with external tools (for example, calling APIs). This is where LangChain agents and RAG systems come in.
Preparing the Data Store for RAG:
A RAG (Retrieval-Augmented Generation) system lets the LLM draw on information beyond its training data: specific documents, databases, or websites. Vector databases are typically used for this; popular options today include ChromaDB, Pinecone, and Weaviate.
For a simple example we can use FAISS (Facebook AI Similarity Search), which runs locally and integrates well with LangChain.
The RAG Service (app/services/rag_service.py):
import os

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.documents import Document

# Sample data (in a real project this comes from files or a database)
TEXT_DATA = """
As of March 24, 2026, Uzbekistan has shown a marked improvement in economic growth.
The agricultural sector grew by 5.2% and industry by 7.8%. The number of foreign tourists rose by 15%.
These gains follow government policies to improve the investment climate and adopt new technologies.
"""

async def create_vector_db():
    embeddings = OpenAIEmbeddings(api_key=os.getenv("OPENAI_API_KEY"))
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = text_splitter.split_text(TEXT_DATA)
    documents = [Document(page_content=chunk) for chunk in chunks]
    vector_db = await FAISS.afrom_documents(documents, embeddings)
    return vector_db

async def query_vector_db(query: str, vector_db):
    results = await vector_db.asimilarity_search(query, k=3)
    return results

# The endpoint layer needs some wiring to use this service (see Step 3).
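To build intuition for what the vector store's similarity search does under the hood, here is a toy, dependency-free sketch. The bag-of-words `embed()` below is a stand-in for real model embeddings such as OpenAIEmbeddings, and all names and data are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a word-frequency vector.
    # Real systems use dense vectors produced by an embedding model.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse vectors
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def similarity_search(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Rank documents by similarity to the query and keep the top k,
    # which is conceptually what FAISS does (far more efficiently).
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine_similarity(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "agriculture grew by 5.2 percent",
    "industry grew by 7.8 percent",
    "tourism saw 15 percent more foreign visitors",
]
print(similarity_search("how much did tourism grow", docs, k=1))
```

The retrieved top-k chunks are what RAG passes to the LLM as context, which is why chunk size and overlap matter.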
LangChain Agents:
Agents are systems in which the LLM itself decides how to act by using tools. For example, an agent may know how to run a web search, use a calculator, or call an API of its own.
Agent types such as the "OpenAI Functions Agent" or the "ReAct Agent" are widely used today.
# Showing this part requires additional code in the main files.
# For example, an agent can be created with LangChain's
# initialize_agent and run through an AgentExecutor,
# then wired up to an API endpoint.
By 2026, AI agents will become still more capable, collaborating with one another to solve complex problems autonomously. Our backend should be ready to support such agents.
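The ReAct loop such agents run can be sketched in plain Python. Everything below is a toy illustration: `fake_llm` is a hard-coded stand-in for a real model, and the single `calculator` tool is hypothetical; LangChain's AgentExecutor implements this loop for you with real LLM calls:

```python
def calculator(expression: str) -> str:
    # Tool: evaluate a simple arithmetic expression (no builtins exposed)
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(scratchpad: list[str]) -> str:
    # Stand-in for a real model: it decides to use the calculator once,
    # then answers based on the observation it received.
    if not any(s.startswith("Observation:") for s in scratchpad):
        return "Action: calculator[12 * 7]"
    return "Final Answer: 84"

def run_agent(question: str, max_steps: int = 5) -> str:
    # The ReAct loop: thought/action -> tool call -> observation -> repeat
    scratchpad = [f"Question: {question}"]
    for _ in range(max_steps):
        step = fake_llm(scratchpad)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and execute the named tool
        tool_name, _, tool_input = step.removeprefix("Action: ").partition("[")
        observation = TOOLS[tool_name](tool_input.rstrip("]"))
        scratchpad.append(f"Observation: {observation}")
    return "Agent stopped after reaching the step limit."

print(run_agent("What is 12 * 7?"))
```

The `max_steps` cap matters in production too: without it, a confused agent can loop on tool calls indefinitely.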
Step 3: Extending the API Endpoints and Security
For the backend to be production-ready, the API endpoints must be extended and secured. This includes:
- End-to-end conversation handling: storing the chat history between user and bot and passing it to the LLM.
- Agent endpoints: dedicated endpoints for launching agents and returning their results.
- RAG query endpoint: retrieving documents relevant to the user's question from the vector store and passing them to the LLM.
- Security: managing API keys, authenticating users, and checking permissions.
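The key-checking part of that security work can be sketched in plain Python (the `SERVICE_API_KEY` variable name and the `dev-secret-key` fallback are illustrative, not part of any framework):

```python
import hmac
import os

# In production, load this from the environment or a secret manager
EXPECTED_API_KEY = os.environ.get("SERVICE_API_KEY", "dev-secret-key")

def verify_api_key(provided_key) -> bool:
    # Reject missing keys, then compare in constant time to avoid
    # leaking information through timing differences
    if not provided_key:
        return False
    return hmac.compare_digest(provided_key, EXPECTED_API_KEY)
```

In FastAPI this check would typically live in a dependency built on `fastapi.security.APIKeyHeader`, raising an HTTPException with status 401 when `verify_api_key` returns False.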
A RAG Endpoint Example:
# additions to app/api/endpoints.py
# Managing vector_db via app state or configuration is better practice;
# it is kept as a module-level global here for simplicity.
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from ..services.rag_service import create_vector_db
from ..services.llm_service import get_rag_llm_model

# Build the vector DB once, at startup
vector_db_instance = None

@router.on_event("startup")  # in newer FastAPI, prefer a lifespan handler
async def startup_event():
    global vector_db_instance
    vector_db_instance = await create_vector_db()

class RagRequest(BaseModel):
    question: str

class RagResponse(BaseModel):
    answer: str

@router.post("/rag_query", response_model=RagResponse)
async def rag_query_endpoint(request: RagRequest):
    if not vector_db_instance:
        raise HTTPException(status_code=500, detail="Vector database not initialized.")
    try:
        llm = get_rag_llm_model()
        retriever = vector_db_instance.as_retriever()

        # History awareness: rephrase follow-up questions into standalone ones
        history_aware_retriever_prompt = ChatPromptTemplate.from_messages([
            ("system",
             "Given a conversation history and a follow up question, "
             "rephrase the follow up question to be a standalone question."),
            MessagesPlaceholder("chat_history"),
            ("human", "{input}"),
        ])
        history_aware_retriever = create_history_aware_retriever(
            llm,  # the LLM that rewrites the question
            retriever,
            history_aware_retriever_prompt,
        )

        # The main RAG prompt
        rag_chain_prompt = ChatPromptTemplate.from_template(
            """
            You are an AI assistant that helps people find information.
            Use the following pieces of context to answer the question at the end.
            If you don't know the answer, just say that you don't know;
            don't try to make up an answer.
            ALWAYS return a "SOURCES" section in your answer.
            <context>
            {context}
            </context>
            Question: {input}
            """
        )

        # Stuff the retrieved documents into the prompt
        document_chain = create_stuff_documents_chain(llm, rag_chain_prompt)
        # Retrieval chain: retrieve relevant chunks, then answer
        rag_chain = create_retrieval_chain(history_aware_retriever, document_chain)

        chat_history = []  # plug in the real conversation history here
        response = await rag_chain.ainvoke({
            "input": request.question,
            "chat_history": chat_history,
        })

        # Post-process the answer here if needed (e.g. extract the sources)
        return RagResponse(answer=response["answer"])
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
Security and Configuration:
API keys, database connection strings, and other settings should live in core/config.py, ideally loaded with python-dotenv.
For API security you can use FastAPI's security utilities. API keys must be transmitted over HTTPS and protected on the server side. In 2026, securing services with OAuth2, JWT tokens, and API gateways is standard practice.
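To make the JWT part concrete, here is a minimal sketch of how an HS256 token is signed and verified. This is for illustration only; in production use a maintained library such as PyJWT rather than hand-rolling token code:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as required by the JWT spec (RFC 7519)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    # header.payload.signature, each part base64url-encoded
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

def verify_jwt(token: str, secret: str) -> bool:
    # Recompute the signature and compare in constant time
    header, body, signature = token.split(".")
    expected = b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

token = sign_jwt({"sub": "user-42"}, "server-secret")
```

A real setup would also put an expiry claim (`exp`) in the payload and check it during verification.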
Final Thoughts
The combination of FastAPI and LangChain lets you build a powerful, flexible AI backend. Together with the latest AI models (GPT-5.2, Gemini 3, Claude Opus 4.5), AI agents, and RAG systems, these frameworks help you ship competitive products quickly and efficiently. By 2026, such systems will play a decisive role in automating business processes and delivering a better customer experience.
If you need help building an AI backend with FastAPI + LangChain, the TrendoAI team is ready to assist.
For a free consultation: t.me/Akramjon1984