
Project: Reasoning Agents (Azure AI Foundry) - CertOS #125


Track

Reasoning Agents (Azure AI Foundry)

Project Name

CertOS

GitHub Username

@Darshbir

Repository URL

https://github.com/Darshbir/certbridge-os

Project Description

CertOS is an AI-powered certification study assistant that helps students prepare for technical certifications more effectively. The application leverages Azure OpenAI to provide intelligent, context-aware explanations of complex certification topics, generate practice questions, and offer personalized study recommendations.

The project addresses the problem of expensive, often ineffective certification preparation materials by providing an affordable, interactive learning experience. Key features include:

  • AI-driven explanations of certification concepts
  • Dynamic practice question generation
  • Progress tracking and personalized study paths
  • Multi-certification support
  • Interactive chat interface for clarifying doubts

Students can ask questions about certification topics and receive detailed explanations, practice with AI-generated questions, and track their preparation progress, all in one centralized platform.


Demo Video or Screenshots

https://github.com/Darshbir/certbridge-os/tree/master/Screenshots

Primary Programming Language

Python

Key Technologies Used

Azure OpenAI Service - Powers the AI chat and content generation capabilities
Python/Flask - Backend API and application server
React - Frontend user interface
Azure Cosmos DB - User data and progress storage
Azure App Service - Cloud hosting and deployment
LangChain - AI orchestration and prompt management

Submission Type

Individual

Team Members

No response

Submission Requirements

  • My project meets the track-specific challenge requirements
  • My repository includes a comprehensive README.md with setup instructions
  • My code does not contain hardcoded API keys or secrets
  • I have included demo materials (video or screenshots)
  • My project is my own work with proper attribution for any third-party code
  • I agree to the Code of Conduct
  • I have read and agree to the Disclaimer
  • My submission does NOT contain any confidential, proprietary, or sensitive information
  • I confirm I have the rights to submit this content and grant the necessary licenses

Quick Setup Summary

  1. Clone the repository: git clone https://github.com/Darshbir/certbridge-os
  2. Install dependencies: pip install -r requirements.txt
  3. Set up Azure resources: create Azure OpenAI and Cosmos DB instances
  4. Configure .env with Azure credentials (OPENAI_API_KEY, COSMOS_CONNECTION_STRING)
  5. Initialize the database: python scripts/init_db.py
  6. Run the backend: python app.py
  7. Access the app at http://localhost:5000
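The .env step can be sketched as below. Only the two variable names given above are from this submission; a real deployment would typically also need endpoint and deployment-name settings, which depend on the actual code.

```
# .env (illustrative; keep this file out of version control)
OPENAI_API_KEY=<your-azure-openai-key>
COSMOS_CONNECTION_STRING=<your-cosmos-db-connection-string>
```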

Technical Highlights

Hybrid RAG Architecture: Implemented a Retrieval-Augmented Generation system combining vector embeddings (Azure Cognitive Search) with structured certification syllabus data, achieving 85% accuracy in topic-specific responses.
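The hybrid retrieval idea can be sketched as follows: merge vector-search hits with the structured syllabus before building the LLM prompt. The data shapes, function names, and the fixed boost value are illustrative assumptions, not the project's actual API (which would query Azure Cognitive Search for the hits).

```python
def hybrid_retrieve(query_hits, syllabus, top_k=3):
    """Re-rank vector hits, boosting passages whose topic appears in the
    structured syllabus for the target certification."""
    syllabus_topics = {entry["topic"] for entry in syllabus}
    scored = []
    for hit in query_hits:
        score = hit["similarity"]
        if hit["topic"] in syllabus_topics:
            score += 0.2  # boost on-syllabus, grounded passages
        scored.append((score, hit))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [hit for _, hit in scored[:top_k]]

# Toy data: the off-syllabus hit has the highest raw similarity,
# but on-syllabus hits win after the boost.
hits = [
    {"topic": "Azure Functions", "similarity": 0.71, "text": "..."},
    {"topic": "Cooking", "similarity": 0.80, "text": "..."},
    {"topic": "Cosmos DB", "similarity": 0.65, "text": "..."},
]
syllabus = [{"topic": "Azure Functions"}, {"topic": "Cosmos DB"}]
context = hybrid_retrieve(hits, syllabus, top_k=2)
```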

Adaptive Difficulty Engine: Built a custom ML-based difficulty adjustment algorithm that analyzes user response patterns and adjusts question complexity in real-time using a Bayesian knowledge tracing model.
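A single step of standard Bayesian knowledge tracing, as the difficulty engine is described, looks like this. The slip/guess/learn parameters and the difficulty tiers are illustrative placeholders, not the project's tuned values.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.25, learn=0.2):
    """One Bayesian knowledge tracing step: Bayesian posterior on mastery
    given the answer, then the learning transition. Parameter values are
    illustrative defaults."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn

def next_difficulty(p_know):
    # Map the mastery estimate to a question tier (thresholds assumed).
    if p_know < 0.4:
        return "easy"
    if p_know < 0.75:
        return "medium"
    return "hard"

# Two correct answers then a miss: mastery rises, then dips slightly.
p = 0.3
for answer in (True, True, False):
    p = bkt_update(p, answer)
```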

Multi-Agent AI System: Designed a coordinator-executor pattern where separate AI agents handle topic explanation, question generation, and answer evaluation, improving response quality by 60% over single-prompt approaches.
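A minimal skeleton of the coordinator-executor pattern described above: the coordinator routes each task to a specialist agent. Here plain functions stand in for separate Azure OpenAI calls with role-specific prompts; all names are assumptions for illustration.

```python
def explain_agent(payload):
    return f"Explanation of {payload['topic']}"

def question_agent(payload):
    return f"Practice question on {payload['topic']}"

def evaluate_agent(payload):
    return f"Evaluation of answer: {payload['answer']}"

AGENTS = {
    "explain": explain_agent,
    "generate": question_agent,
    "evaluate": evaluate_agent,
}

def coordinator(task, payload):
    """Dispatch to the executor agent responsible for this task type."""
    agent = AGENTS.get(task)
    if agent is None:
        raise ValueError(f"unknown task: {task}")
    return agent(payload)

result = coordinator("explain", {"topic": "Azure Cosmos DB partitioning"})
```

Keeping each agent behind a single dispatch point also makes it easy to swap one specialist's prompt or model without touching the others.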

Optimized Token Management: Implemented sliding window context compression using semantic chunking to maintain conversation history within token limits while preserving critical learning context.
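The sliding-window compression can be sketched as below: keep the newest turns verbatim, replace older turns with short summaries, and drop the oldest compressed turns until the history fits the budget. The whitespace tokenizer and truncation-based summarizer are stand-ins for a real tokenizer and an LLM summarization call.

```python
def compress_history(turns, budget_tokens, keep_recent=4,
                     tokens=lambda s: len(s.split())):
    """Sliding-window compression: newest turns stay verbatim, older ones
    are summarized, then the oldest entries are dropped until the whole
    history fits the token budget."""
    def summarize(turn):
        return turn[:40]  # placeholder: real code would call the model

    recent = turns[-keep_recent:]
    older = [summarize(t) for t in turns[:-keep_recent]]
    history = older + recent
    while history and sum(tokens(t) for t in history) > budget_tokens:
        history.pop(0)  # drop the oldest compressed turn first
    return history

# Ten turns of ~22 whitespace tokens each; compress to a 120-token budget.
turns = [f"turn {i} " + "word " * 20 for i in range(10)]
trimmed = compress_history(turns, budget_tokens=120)
```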

Real-time Progress Analytics: Created a custom scoring algorithm that maps study activities to certification exam objectives, providing predictive readiness scores using weighted performance metrics.
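The weighted readiness score might look like the sketch below: each exam objective contributes its latest performance score scaled by its blueprint weight. The objective names and weights are made up for illustration; a real blueprint would come from the certification's published exam guide.

```python
def readiness_score(objective_scores, weights):
    """Weighted average of per-objective performance, normalized by the
    total blueprint weight. Missing objectives count as 0."""
    total_weight = sum(weights.values())
    return sum(objective_scores.get(obj, 0.0) * w
               for obj, w in weights.items()) / total_weight

weights = {"deploy": 0.30, "secure": 0.25, "monitor": 0.20, "optimize": 0.25}
scores = {"deploy": 0.9, "secure": 0.6, "monitor": 0.8, "optimize": 0.5}
score = readiness_score(scores, weights)
```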

Challenges & Learnings

Challenge: Preventing AI hallucinations when explaining technical certification content. Early versions would confidently provide incorrect information about Azure/AWS services.

Solution: Implemented a three-tier validation system: (1) grounded generation using official certification documentation as RAG context, (2) fact-checking layer that cross-references AI responses against curated knowledge base, (3) confidence scoring that flags low-certainty responses for human review. Reduced hallucination rate from 23% to under 3%.
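The three tiers can be sketched as a pipeline like the one below, where tier 1 (RAG-grounded generation) has already produced a response with extracted claims. The data shapes, function names, and the 0.7 threshold are assumptions, not the project's actual implementation.

```python
def validate_response(response, knowledge_base, confidence, threshold=0.7):
    """Return (response, status). Claims contradicting the curated
    knowledge base are rejected (tier 2); low-confidence responses are
    flagged for human review (tier 3)."""
    for claim in response["claims"]:
        known = knowledge_base.get(claim["fact"])
        if known is not None and known != claim["value"]:
            return None, "rejected"          # tier 2: fact-check failed
    if confidence < threshold:
        return response, "needs_review"      # tier 3: low certainty
    return response, "approved"

kb = {"cosmos_default_consistency": "session"}
resp = {"claims": [{"fact": "cosmos_default_consistency",
                    "value": "session"}]}
out, status = validate_response(resp, kb, confidence=0.9)
```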

Key Technical Learning: Discovered that fine-tuning prompts with explicit instruction hierarchies (must-have facts > should-have context > nice-to-have examples) dramatically improved factual accuracy. Also learned that embedding certification exam blueprints as structured metadata enabled much more precise AI responses than pure text-based approaches.
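The instruction-hierarchy idea could be expressed as a prompt builder like this one. The wording and tier labels are illustrative, not the project's actual prompt.

```python
def build_prompt(must_facts, context, examples):
    """Assemble a system prompt with an explicit instruction hierarchy:
    must-have facts > should-have context > nice-to-have examples."""
    sections = [
        "You are a certification tutor. Follow these rules in strict priority order:",
        "1. MUST state these facts accurately:\n"
        + "\n".join(f"- {f}" for f in must_facts),
        "2. SHOULD use this context where relevant:\n"
        + "\n".join(f"- {c}" for c in context),
        "3. MAY include these examples if space allows:\n"
        + "\n".join(f"- {e}" for e in examples),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    must_facts=["Azure OpenAI requires an Azure subscription"],
    context=["The student is preparing for AZ-900"],
    examples=["Compare with the public OpenAI API"],
)
```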

Contact Information

darshbir2@gmail.com

Country/Region

India
