A "Community-Gated" programming learning platform built with a 3-service architecture on Google Cloud.
- Frontend: Next.js (Stateless, SSR/Static) - `/frontend`
- Backend: Node.js/Express API - `/backend`
- AI Engine: Python/Flask - `/ai-engine`
- Self-Evolution System: Automated code analysis, generation, testing, and history tracking for continuous improvement (see the sketch after this list).
- Assessment Module: Rigorous developer evaluation through coding tests with timers and AI-powered interviews.
- GCP Integration: Monitoring of AI credits and seamless cloud operations.
- Local Migration: Support for Ollama and ChromaDB for cost-free, eternal operation without cloud dependencies.
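A minimal Python sketch of one Analyze → Generate → Test → History iteration follows; all names here (`evolve_once`, `evolution_history.json`, the stubbed `generate`) are hypothetical stand-ins for the real module in `/ai-engine`.

```python
import json
import subprocess
import time
from pathlib import Path

HISTORY_FILE = Path("evolution_history.json")  # assumed history location

def analyze(target: Path) -> dict:
    """Gather simple metrics about the target module (placeholder analysis)."""
    return {"path": str(target), "lines": len(target.read_text().splitlines())}

def generate(analysis: dict) -> str:
    """Ask the model for improved code; stubbed out here."""
    return Path(analysis["path"]).read_text()  # the real engine calls the LLM

def test(candidate: str, target: Path) -> bool:
    """Write the candidate next to the original and run the test suite."""
    scratch = target.parent / f"candidate_{target.name}"
    scratch.write_text(candidate)
    try:
        return subprocess.run(["pytest", "-q"], timeout=300).returncode == 0
    finally:
        scratch.unlink()

def record(analysis: dict, passed: bool) -> None:
    """Append the outcome of this iteration to the history file."""
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []
    history.append({"time": time.time(), "analysis": analysis, "passed": passed})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

def evolve_once(target: Path) -> None:
    analysis = analyze(target)
    candidate = generate(analysis)
    record(analysis, test(candidate, target))
```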
Frontend (Next.js) → Backend (Express) → AI Engine (Flask)
                                          ├─ Self-Evolution
                                          ├─ Assessment
                                          ├─ GCP Integration
                                          └─ Local Models
                                                ↓
                                      Vertex AI  ⟷  Ollama
                                     (Bootstrap)    (Eternal)
The platform uses a 3-service architecture in which the frontend calls the backend, which in turn calls the AI engine. The AI engine starts on Vertex AI during the bootstrap phase and transitions to Ollama for the eternal, locally hosted phase.
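As a rough illustration of that hand-off, the sketch below selects a provider from an assumed `AI_PROVIDER` environment variable; the `generate()` wrapper and model names are illustrative assumptions, not the engine's actual interface.

```python
import os
import requests

def generate(prompt: str) -> str:
    if os.environ.get("AI_PROVIDER", "vertex") == "ollama":
        # Eternal phase: local Ollama server (placeholder model name).
        base = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
        resp = requests.post(
            f"{base}/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    # Bootstrap phase: Vertex AI (requires the google-cloud-aiplatform SDK).
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(
        project=os.environ["GOOGLE_CLOUD_PROJECT"],
        location=os.environ.get("VERTEX_AI_LOCATION", "us-central1"),
    )
    return GenerativeModel("gemini-1.5-flash").generate_content(prompt).text
```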
Deployment is managed by Google Cloud Build via the /****** comment trigger on Pull Requests.
The pipeline (`cloudbuild.yaml`) performs:
- Frontend Install & Lint
- AI Engine Tests
- Parallel Docker Builds
- Image Pushing to Artifact Registry
- Set up Google Cloud Project with Vertex AI enabled.
- Configure environment variables:
  - `GOOGLE_CLOUD_PROJECT`: your GCP project ID
  - `VERTEX_AI_LOCATION`: e.g., `us-central1`
  - `AI_ENGINE_PORT`: `5000`
  - `BACKEND_PORT`: `3001`
  - `FRONTEND_PORT`: `3000`
- Run `gcloud builds submit --config cloudbuild.yaml .`
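Before submitting the build, a quick check like the following (an optional helper, not part of the repo) can confirm the variables above are set:

```python
import os

REQUIRED = ["GOOGLE_CLOUD_PROJECT", "VERTEX_AI_LOCATION",
            "AI_ENGINE_PORT", "BACKEND_PORT", "FRONTEND_PORT"]

# Abort early with a clear message if any variable is missing.
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("Environment complete; ready for: gcloud builds submit --config cloudbuild.yaml .")
```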
- Install Ollama and ChromaDB locally.
- Update environment variables for local models:
  - `OLLAMA_BASE_URL`: `http://localhost:11434`
  - `CHROMA_DB_PATH`: `./chroma_db`
- Use the migration script: `./deploy/migrate_to_local.sh`
- Run services locally with the updated configs.
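After migration, a smoke test along these lines can confirm both local dependencies respond; the model name `llama3` and collection name `lessons` are placeholders, not names from this repo:

```python
import os
import requests
import chromadb

base = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

# 1. Ollama answers a trivial prompt.
resp = requests.post(f"{base}/api/generate",
                     json={"model": "llama3", "prompt": "Say hello.", "stream": False},
                     timeout=120)
resp.raise_for_status()
print("Ollama:", resp.json()["response"][:60])

# 2. ChromaDB persists a document locally.
client = chromadb.PersistentClient(path=os.environ.get("CHROMA_DB_PATH", "./chroma_db"))
collection = client.get_or_create_collection("lessons")
collection.add(ids=["smoke-1"], documents=["hello world"])
print("ChromaDB count:", collection.count())
```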
- Self-evolution cycle: Analyze → Generate → Test → History
- Coding tests: Timer functionality and submission
- Interviews: AI response quality
- GCP integration: Credit monitoring
- Local migration: Ollama/ChromaDB functionality
- Unit tests for AI Engine models and services (see the sketch after this list)
- Integration tests for API endpoints
- End-to-end tests for full workflows
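As a sketch of what such a unit test could look like, the snippet below exercises an assumed `/health` route on a stand-in Flask app; the real application factory and routes live in `/ai-engine` and may differ.

```python
import pytest
from flask import Flask

def create_app() -> Flask:
    """Stand-in factory; the real one lives in /ai-engine."""
    app = Flask(__name__)

    @app.get("/health")
    def health():
        return {"status": "ok"}

    return app

@pytest.fixture
def client():
    return create_app().test_client()

def test_health_endpoint(client):
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.get_json()["status"] == "ok"
```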
- Sandboxed execution for code testing (sketched after this list)
- Isolation between services
- Recommendations: Implement authentication, rate limiting, and input validation
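A minimal sketch of the sandboxing idea, assuming submissions are Python: an isolated interpreter, a wall-clock timeout, and an empty environment. A production sandbox would add containers and resource limits on top of this.

```python
import os
import subprocess
import sys
import tempfile

def run_submission(code: str, timeout_s: int = 5) -> tuple[bool, str]:
    # Write the submission to a temporary file we control.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode (no site/user paths)
            capture_output=True, text=True,
            timeout=timeout_s,
            env={},                        # empty environment for the child
        )
        return result.returncode == 0, result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return False, "timed out"
    finally:
        os.unlink(path)

print(run_submission("print(2 + 2)"))  # -> (True, '4\n')
```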
- Self-evolution cycle: ~5-10 seconds per iteration
- Assessment tests: <2 seconds response time
- GCP API calls: Optimized for cost and speed
- Total code added: ~4500 lines
- New modules: Self-evolution, assessment, GCP integration
- Success criteria: Self-improving AI, cost optimization, rigorous evaluation
- Learning outcomes: Platform adapts and improves autonomously
- IMPLEMENTATION_SUMMARY.md: Detailed implementation notes
- TRANSFORMATION_GUIDE.md: Transformation process
- SECURITY_SUMMARY.md: Security details
This repository includes a manual GitHub Actions workflow that can apply changes from OpenAI Codex.
- Add a repository secret `OPENAI_API_KEY` with an OpenAI API key.
- (Optional) Update the default model in `.github/workflows/codex-bot.yml`.
- Open Actions → 🤖 Codex Apply.
- Click Run workflow.
- Provide plain-text instructions (for example: "Fix the AI Console API proxying to backend").
The workflow will:
- generate a patch via OpenAI,
- apply it to the repo,
- commit and push the result to the current branch.
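In essence, the workflow performs something like the following (a Python sketch of the same steps; the prompt format and model name are placeholders, and the authoritative logic is in `codex-bot.yml`):

```python
import os
import subprocess
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

instructions = "Fix the AI Console API proxying to backend"
completion = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the workflow's default model may differ
    messages=[{"role": "user",
               "content": f"Produce a unified diff for this change: {instructions}"}],
)
patch = completion.choices[0].message.content

with open("codex.patch", "w") as f:
    f.write(patch)

# Apply, commit, and push the patch to the current branch.
subprocess.run(["git", "apply", "codex.patch"], check=True)
subprocess.run(["git", "commit", "-am", f"Codex: {instructions}"], check=True)
subprocess.run(["git", "push"], check=True)
```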
Each service contains its own setup instructions (check package.json or usage files if available).