Full-Stack & Applied AI

AI-Powered Quiz & Proctoring Platform

MERN · Local LLM · Real-Time Proctoring

Completed
Full-Stack & Applied AI
MCQ · T/F · FIB · Coding
Question Types
JS · Python · Java · C++
Languages Covered
Local Ollama (qwen2.5-coder:7b)
AI Backend
Face · Tab · Clipboard · Fullscreen
Proctoring Signals

Overview

An end-to-end quiz platform with role-based access (admin, instructor, student), AI-driven question generation backed by a custom Ollama Modelfile (`quiz-master`, built on qwen2.5-coder:7b) with a multi-provider fallback chain (Ollama → Gemini → OpenAI → docker-model-proxy), and live proctoring through face-api.js, tab-switch detection, copy-paste blocking, and full-screen enforcement. Question generation uses a length-aware token budget (~400 tokens per question), a schema-locked prompt, a `questionValidatorService` that rejects malformed output, and type-mix balancing, so the model produces exactly the requested count across MCQ, true/false, fill-in-the-blank, coding, and essay types. Proctoring events stream over a JWT-authenticated raw WebSocket (`/ws/proctor`, `ws` library) and persist to a dedicated `ProctoringEvent` collection linked from each `Result.proctoringLog`, giving instructors an auditable trail. The platform supports JavaScript, Python, Java, and C++, exposes per-question and per-cohort analytics, and runs locally on a Mac Mini M4 with auto-start scripts and an Ngrok tunnel for remote access.
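The fallback chain reduces to a simple loop over providers in priority order. A minimal sketch, where the provider objects are hypothetical stand-ins for the real Ollama/Gemini/OpenAI/docker-model-proxy clients:

```javascript
// Try each provider in order; return the first successful generation,
// or fail with the accumulated errors if every provider is down.
async function generateWithFallback(prompt, providers) {
  const errors = [];
  for (const provider of providers) {
    try {
      const questions = await provider.generate(prompt);
      return { provider: provider.name, questions };
    } catch (err) {
      errors.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```

If the M4 host's Ollama instance is offline, the first call throws and the chain silently moves on to Gemini, so a quiz request still completes.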

The Problem

Online assessments sit in an awkward middle ground: SaaS proctoring tools are expensive, leak student data to third parties, and still produce templated question banks that students memorize within a semester. Instructors want to author quizzes from a topic prompt, not a spreadsheet, and they need proctoring signals they can actually audit. The goal was a self-hostable platform that generates fresh questions on demand and watches the candidate without sending anything to an external API.

The Approach

The frontend is a React/Vite app with role-based dashboards, a Monaco editor for coding questions, and a face-api.js proctoring layer that runs entirely in the browser. The backend is a Node/Express service backed by MongoDB, with JWT auth, a raw `ws` WebSocket at `/ws/proctor` for live proctoring events, a dedicated `ProctoringEvent` collection linked from each `Result.proctoringLog`, and an audit log for every violation. Question generation calls a custom Ollama Modelfile (`quiz-master`, built on qwen2.5-coder:7b) with a length-aware token budget (~400 tokens per question), a schema-locked prompt, a post-parse `questionValidatorService`, type-mix balancing, and a multi-provider fallback chain (Ollama → Gemini → OpenAI → docker-model-proxy), so a request for, say, "12 mixed Python questions" returns exactly that even when the M4 host is offline. The whole stack — Mongo, backend, frontend, Ngrok tunnel — boots from a single Docker Compose file with auto-start scripts for the Mac Mini M4.
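Two pieces of that pipeline are easy to sketch: the length-aware token budget and the type-mix balancing. The ~400-tokens-per-question figure comes from the description above; the clamp bounds and function names are illustrative assumptions, not the project's actual code.

```javascript
// Length-aware token budget: scale with the question count, clamped to
// sane floor/ceiling values (the bounds here are illustrative).
const TOKENS_PER_QUESTION = 400;

function tokenBudget(questionCount, { min = 512, max = 8192 } = {}) {
  return Math.min(max, Math.max(min, questionCount * TOKENS_PER_QUESTION));
}

// Type-mix balancing: split a requested count evenly across question
// types, handing the remainder to the earliest types so the per-type
// counts always sum to exactly the requested total.
function balanceTypeMix(total, types) {
  const base = Math.floor(total / types.length);
  const remainder = total % types.length;
  return types.map((type, i) => ({ type, count: base + (i < remainder ? 1 : 0) }));
}
```

For "12 mixed Python questions" across five types this yields a 4,800-token budget and a 3/3/2/2/2 split, which the validator can then check against the model's output.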

Results

The result is an assessment platform that generates JS/Python/Java/C++ questions on demand with reliable counts and a working multi-signal proctoring layer (face presence, tab switches, clipboard, full-screen). It runs end-to-end on a single Mac Mini M4 with no third-party AI dependency required, exposes admin/instructor/student dashboards with detailed analytics, and ships with seeded demo accounts and a one-command boot.
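One way those signals could be normalized into typed `ProctoringEvent` documents before persisting — the field names and severity mapping below are assumptions for illustration, not the actual schema:

```javascript
// Map each raw client signal to a typed, severity-tagged event document
// linked back to the quiz attempt (resultId). Unknown signal types are
// rejected rather than stored.
const SEVERITY = {
  face_missing: "high",
  tab_switch: "medium",
  clipboard: "medium",
  fullscreen_exit: "high",
};

function toProctoringEvent(resultId, signal) {
  if (!(signal.type in SEVERITY)) {
    throw new Error(`unknown proctoring signal: ${signal.type}`);
  }
  return {
    resultId,
    type: signal.type,
    severity: SEVERITY[signal.type],
    detail: signal.detail ?? null,
    at: new Date().toISOString(),
  };
}
```

Rejecting unknown types at the boundary keeps the stored audit trail queryable: an instructor can filter an attempt's log by severity without worrying about free-form event names.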

Process & Timeline

  1. Phase 1

    Auth & RBAC

    Built the Express + Mongo backbone with JWT auth, role-based access (admin/instructor/student), and audit logging.

  2. Phase 2

    Quiz authoring & question bank

    Authored MCQ, true/false, fill-in-the-blank, coding, and essay schemas with a Monaco-powered editor and a versioned question bank.

  3. Phase 3

    Local-LLM generation & fallback

    Wrote a custom `quiz-master` Modelfile over qwen2.5-coder:7b, a token-budget + validator + type-mix pipeline, and a multi-provider fallback chain (Ollama → Gemini → OpenAI → docker-model-proxy).

  4. Phase 4

    Proctoring layer

    Added face-api.js presence checks, tab-switch and clipboard interceptors, full-screen enforcement, and a JWT-authenticated `ws` WebSocket at `/ws/proctor` persisting typed events to a `ProctoringEvent` collection.

  5. Phase 5

    Packaging & deploy

    Wrapped the stack in Docker Compose with Mongo + backend + frontend + Ngrok, plus Mac Mini M4 auto-start scripts and a public smart-quiz URL.
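The single-file boot from Phase 5 might look roughly like this — service names, ports, images, and the tunnel command are assumptions sketching the shape, not the project's actual compose file:

```yaml
# Illustrative Docker Compose layout: Mongo + backend + frontend + Ngrok.
services:
  mongo:
    image: mongo:7
    volumes: [mongo-data:/data/db]
  backend:
    build: ./backend
    environment:
      - MONGO_URL=mongodb://mongo:27017/quiz
    depends_on: [mongo]
  frontend:
    build: ./frontend
    ports: ["5173:5173"]
    depends_on: [backend]
  ngrok:
    image: ngrok/ngrok
    command: http frontend:5173
    environment:
      - NGROK_AUTHTOKEN=${NGROK_AUTHTOKEN}
volumes:
  mongo-data:
```

With a layout like this, `docker compose up -d` is the one-command boot, and the Mac Mini auto-start scripts only need to invoke it at login.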

Like what you see?

I'm always open to collaborations on AI, robotics, edge computing, or embedded systems.