
MocktailVerse Bedrock

Enterprise GenAI data platform powered by AWS Bedrock, RAG, and Vector Search.

AWS · Bedrock LLMs | RAG · Retrieval Pipeline | pgvector · Vector Store | E2E · GenAI Platform
MocktailVerse Bedrock — demo

Enterprise-grade GenAI data engineering platform using AWS Bedrock for LLM inference, pgvector for semantic search, and a full RAG pipeline over a cocktail knowledge base. Demonstrates the full AWS GenAI stack from ingestion to retrieval to generation.

The full AWS Bedrock RAG stack — embeddings, vector store, retrieval, generation — in one production-ready platform.
Python · AWS Bedrock · pgvector · RAG · PostgreSQL · LangChain · boto3 · Docker · FastAPI

pgvector Store

PostgreSQL + pgvector for production semantic search. No managed vector DB required.
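A minimal sketch of what that layer can look like. The table and column names (`documents`, `embedding`) and the 1536-dimension size are assumptions for illustration, not taken from the repo; the `<=>` operator and `vector_cosine_ops` index are pgvector's standard cosine-distance pieces.

```python
# Schema and query sketch for a pgvector-backed semantic store.
# Names and dimensions are illustrative assumptions.

DDL = """
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE IF NOT EXISTS documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)          -- embedding dimension (assumed)
);

-- Approximate-NN index over cosine distance
CREATE INDEX IF NOT EXISTS documents_embedding_idx
    ON documents USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);
"""

# `<=>` is pgvector's cosine-distance operator; smaller = more similar.
TOP_K_QUERY = """
SELECT id, content, embedding <=> %s::vector AS distance
FROM documents
ORDER BY embedding <=> %s::vector
LIMIT %s;
"""

def to_vector_literal(vec):
    """Format a Python list as a pgvector text literal, e.g. [0.1,0.2]."""
    return "[" + ",".join(str(x) for x in vec) + "]"
```

With psycopg2 or similar, the query embedding is passed as `(to_vector_literal(qv), to_vector_literal(qv), k)`; no managed vector database is involved.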

Bedrock LLM

Claude and Titan models via the AWS Bedrock API. Swap models without changing application code.
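A sketch of that call path, assuming boto3's `bedrock-runtime` client and the Bedrock Messages request format for Claude. The model ID shown is an assumption; check which models are enabled in your account and region.

```python
import json

# Model ID is an assumption -- swapping models is just a different string here.
CLAUDE = "anthropic.claude-3-haiku-20240307-v1:0"

def build_claude_body(prompt, max_tokens=512):
    """Request body for Claude models behind Bedrock's InvokeModel API."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def generate(prompt, model_id=CLAUDE, region="us-east-1"):
    """Invoke Bedrock; application code never changes when model_id does."""
    import boto3  # deferred so the payload helper works without AWS credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.invoke_model(modelId=model_id, body=build_claude_body(prompt))
    return json.loads(resp["body"].read())["content"][0]["text"]
```

Because the model is named only by `model_id`, pointing the same code at a Titan text model only requires changing that string and the model-specific request body.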

RAG Pipeline

Full retrieval-augmented generation: chunk → embed → store → retrieve → generate.

Enterprise Stack

Production patterns: retry logic, observability hooks, cost tracking per query.
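Two of those patterns can be sketched in a few lines: exponential-backoff retries around throttled Bedrock calls, and per-query cost accounting. The per-1K-token prices below are illustrative placeholders, not real Bedrock pricing.

```python
import time
import functools

# Illustrative per-1K-token prices; real Bedrock pricing varies by model/region.
PRICE_PER_1K = {"input": 0.00025, "output": 0.00125}

def query_cost(input_tokens, output_tokens):
    """USD cost of one query under the assumed price table."""
    return (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]

def with_retries(max_attempts=3, base_delay=0.1):
    """Exponential backoff: retry a flaky call, doubling the delay each time."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return deco
```

In practice the decorator wraps the Bedrock invocation, and token counts from the model response feed `query_cost` into whatever observability sink the platform uses.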

01
Ingest
Cocktail dataset → chunked documents → Bedrock embeddings
02
Store
Vectors → PostgreSQL with pgvector and a cosine-similarity index
03
Retrieve
User query → embed → top-k semantic search
04
Generate
Retrieved context + query → Bedrock Claude → grounded response
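The four steps above can be sketched end to end. This is a toy stand-in, not the repo's code: the bag-of-letters embedder substitutes for Bedrock embeddings, an in-memory list substitutes for the pgvector table, and the final prompt is what would be sent to Claude.

```python
import math

def chunk(text, size=200):
    """01 Ingest: naive fixed-size chunking (real splitters usually overlap)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Toy bag-of-letters vector standing in for Bedrock embeddings."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, k=2):
    """03 Retrieve: embed the query, rank stored chunks by cosine similarity."""
    qv = embed(query)
    ranked = sorted(store, key=lambda d: cosine(qv, d["embedding"]), reverse=True)
    return [d["content"] for d in ranked[:k]]

def build_prompt(query, context):
    """04 Generate: ground the LLM call in the retrieved context."""
    return ("Answer using only this context:\n"
            + "\n".join(context)
            + f"\n\nQuestion: {query}")

# 02 Store: in-memory stand-in for the pgvector table
corpus = ("A mojito mixes rum, lime, mint and soda. "
          "A margarita mixes tequila, lime and triple sec.")
store = [{"content": c, "embedding": embed(c)} for c in chunk(corpus, size=41)]
```

Swapping `embed` for a Bedrock embeddings call and `store` for the pgvector table turns this sketch into the pipeline described above.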