CS undergrad (2027) obsessed with one question: how do AI systems actually think at scale?
I build at the intersection of retrieval, reasoning, and real-world impact: from RAG pipelines to multi-step agents. My work leans backend-heavy, systems-first, and always production-aware.
- 🧠 Focus: AI Systems · RAG Architectures · Agent Frameworks · Backend Engineering
- 🔬 Currently building: CodeRAG, an AI debugging agent
- 🌱 Learning: Advanced agentic systems · System design at scale · Hybrid search architectures
- 🤝 Open to: AI/ML collabs, open-source contributions, research discussions
- 📍 Punjab, India
- 🧩 Strong understanding of AI system design (RAG + Agents)
- ⚙️ Ability to build end-to-end backend systems
- 🎯 Focus on problem-solving, not just implementation
- 📊 Experience with real-world datasets and ML workflows
- 🚀 Builder mindset: turning ideas into working systems
AI / ML · Backend · Frontend · Databases & Search · Data Science · Tools & Platforms
An AI-powered debugging system that finds root causes, not just symptoms.
Most debugging tools tell you where the error is. CodeRAG tells you why it happened, by reasoning across your entire codebase context.
User reports bug → CodeRAG indexes codebase + logs + docs + git history
→ Hybrid search retrieves relevant context
→ LangGraph agent reasons across evidence
→ Root cause + suggested fix returned
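The flow above can be sketched as plain Python. Every name here (`index_sources`, `hybrid_search`, `diagnose`) is an illustrative stand-in, not CodeRAG's actual API; the retrieval and reasoning stages are replaced with toy logic so the sketch runs self-contained:

```python
def index_sources(codebase, logs, docs, git_history):
    """Flatten every evidence source into retrievable documents."""
    return [{"source": name, "text": text}
            for name, texts in [("code", codebase), ("logs", logs),
                                ("docs", docs), ("git", git_history)]
            for text in texts]

def hybrid_search(documents, bug_report, k=3):
    """Toy stand-in for BM25 + vector retrieval: rank by shared terms."""
    query_terms = set(bug_report.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_terms & set(d["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def diagnose(bug_report, context):
    """Toy stand-in for the LangGraph agent: reason over retrieved evidence."""
    evidence = "; ".join(f"[{d['source']}] {d['text']}" for d in context)
    return {"root_cause": f"inferred from: {evidence}",
            "suggested_fix": "patch the offending call site"}

# A hypothetical bug report walked through the pipeline end to end.
documents = index_sources(
    codebase=["def parse(cfg): return cfg['timeout']"],
    logs=["KeyError: 'timeout' in parse"],
    docs=["config schema: timeout is optional"],
    git_history=["commit abc123: made timeout optional"],
)
context = hybrid_search(documents, "KeyError timeout in parse")
report = diagnose("KeyError timeout in parse", context)
```

The point of the sketch is the shape, not the internals: each stage consumes the previous stage's output, so retrieval quality directly bounds what the agent can reason over.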
| Layer | Technology |
|---|---|
| 🚀 Backend | FastAPI |
| 🖥️ Frontend | Next.js |
| 🧬 Code Embeddings | CodeBERT |
| 🗄️ Vector Store | ChromaDB |
| 🔍 Search Engine | Elasticsearch |
| 🤖 Agent Framework | LangGraph |
| 🔎 Search Strategy | Hybrid (BM25 + Vector) |
What makes it different:
- 🔍 Code-aware search: understands functions, classes, and call graphs
- 🔗 Cross-context reasoning: correlates logs, docs, and commits
- 🧩 Multi-step agents: doesn't guess, it traces
- 💡 Actionable output: fix suggestions with reasoning, not just pointers
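Hybrid (BM25 + Vector) results can be combined in several ways; one common fusion method is Reciprocal Rank Fusion. A minimal sketch of that idea, assuming two hypothetical toy rankings in place of real Elasticsearch and ChromaDB output:

```python
def rrf_fuse(keyword_ranking, vector_ranking, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank)."""
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings for the same query from each retriever.
keyword_hits = ["utils.py", "parser.py", "config.py"]  # BM25 side
vector_hits = ["parser.py", "models.py", "utils.py"]   # embedding side

fused = rrf_fuse(keyword_hits, vector_hits)
```

RRF is attractive here because it needs only ranks, not raw scores, so the BM25 and cosine-similarity scales never have to be calibrated against each other.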
class Sneha:
    status = "Shipping AI systems that retrieve, reason, and solve real problems"
    learning = ["Advanced RAG patterns", "Agentic system design", "Distributed systems"]
    exploring = ["Multi-agent orchestration", "Graph-based retrieval", "Eval frameworks for LLMs"]
    open_to = "Interesting problems worth solving"

- 🏆 Winter of Blockchain: Open-source contributor in the Web3 ecosystem
- 🎓 AI/ML Coursework: Deep Learning · NLP · Computer Vision · Generative AI
- 🛠️ Projects shipped: AI debugging systems, web apps, RAG pipelines
- 🏅 GitHub Achievements: Pull Shark × 2 · Quickdraw
"Good software is a system, not a script."
I approach every project by asking:
- What breaks at scale? → Design for it upfront
- Where does reasoning fail? → Add structure and evaluation
- What's the actual problem? → Solve that, not the surface symptom
I prefer boring infrastructure that works over clever code that doesn't.

