Open Source · 100% Google Cloud · Kubernetes-Native

AI Agents on GKE,
Powered by Gemini

Sympozium is a Kubernetes-native AI agent orchestration platform — forked and rebuilt from the ground up for Google Cloud. Vertex AI Gemini, Cloud Pub/Sub, Workload Identity, and self-hosted Gemma on GKE GPU and TPU nodes. No non-Google dependencies.

Vertex AI Gemini

Three model tiers — Flash for speed, Pro for balance, Preview for power. gemini-2.5-pro is the default across all agent personas.

Hybrid Inference

Route between Vertex AI and self-hosted Gemma 3 models on GKE GPU (L4, A100) and TPU v5e nodes. Node-probe auto-discovery.
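A minimal sketch of what that routing decision could look like. The backend names, the `prefer_local` flag, and the health field are illustrative assumptions, not the platform's actual API; in the real system, health would come from the node-probe auto-discovery described above.

```python
from dataclasses import dataclass

@dataclass
class InferenceBackend:
    name: str      # e.g. "vertex" or "gemma-l4" (illustrative names)
    healthy: bool  # result of the most recent node probe
    local: bool    # True for self-hosted Gemma on GKE GPU/TPU nodes

def route(backends: list[InferenceBackend], prefer_local: bool = True) -> InferenceBackend:
    """Pick an inference backend: a healthy self-hosted Gemma node first
    when preferred, otherwise fall back to Vertex AI."""
    healthy = [b for b in backends if b.healthy]
    if not healthy:
        raise RuntimeError("no healthy inference backend discovered")
    if prefer_local:
        for b in healthy:
            if b.local:
                return b
    non_local = [b for b in healthy if not b.local]
    return non_local[0] if non_local else healthy[0]
```

Routing on probe health means a drained GPU node pool degrades gracefully to Vertex AI instead of failing requests.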

Cloud Pub/Sub Event Bus

Cloud Pub/Sub replaces NATS JetStream as the event bus for agent-to-agent IPC. Topic auto-creation, trace-context propagation, at-least-once delivery.
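One way trace context can ride along with an event is in Pub/Sub message attributes, using the W3C `traceparent` format so producer and consumer spans join into one trace. This is a stdlib-only sketch; the attribute layout and the fixed version/flags bytes are assumptions, not the platform's wire format.

```python
import json

def make_event(payload: dict, trace_id: str, span_id: str) -> tuple[bytes, dict]:
    """Build a Pub/Sub message body plus attributes carrying W3C trace
    context ('traceparent': version-traceid-spanid-flags)."""
    data = json.dumps(payload).encode("utf-8")
    attributes = {
        # version 00, sampled flag 01 — fixed values for this sketch
        "traceparent": f"00-{trace_id}-{span_id}-01",
        "content-type": "application/json",
    }
    return data, attributes

def extract_traceparent(attributes: dict) -> tuple[str, str]:
    """Consumer side: recover the trace and parent span IDs."""
    _version, trace_id, span_id, _flags = attributes["traceparent"].split("-")
    return trace_id, span_id
```

Because Pub/Sub delivery is at-least-once, consumers should also treat events as idempotent; the trace IDs make duplicate deliveries visible in Cloud Trace.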

Workload Identity

Keyless authentication everywhere. No service account keys. GKE pods authenticate to GCP APIs via Workload Identity Federation.
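On GKE, the wiring is a Kubernetes ServiceAccount annotated with the Google service account it impersonates. A sketch of that manifest — the account and project names (`sympozium-agent`, `sympozium-prod`) are illustrative, not real resources:

```yaml
# KSA annotated for Workload Identity; pods using it get keyless GCP access.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sympozium-agent
  namespace: sympozium
  annotations:
    iam.gke.io/gcp-service-account: sympozium-agent@sympozium-prod.iam.gserviceaccount.com
```

The matching IAM side grants the KSA the `roles/iam.workloadIdentityUser` binding on that Google service account; no JSON key ever exists.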

Google Chat Channel

Cards v2, slash commands, threading, typing indicators, interactive retry buttons. Native integration with Google Workspace.
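For a feel of what the channel emits, here is a minimal Cards v2 payload with an interactive retry button, built with the stdlib only. Field names follow the public Cards v2 schema; the action function name (`retry_task`) and card IDs are assumptions for this sketch.

```python
def status_card(title: str, status: str, task_id: str) -> dict:
    """Minimal Google Chat Cards v2 payload with a retry button."""
    return {
        "cardsV2": [{
            "cardId": f"status-{task_id}",
            "card": {
                "header": {"title": title, "subtitle": status},
                "sections": [{
                    "widgets": [{
                        "buttonList": {"buttons": [{
                            "text": "Retry",
                            "onClick": {"action": {
                                # handler name is hypothetical
                                "function": "retry_task",
                                "parameters": [{"key": "task_id", "value": task_id}],
                            }},
                        }]},
                    }],
                }],
            },
        }],
    }
```

When the button is clicked, Chat posts the action back with the `task_id` parameter, which is how the retry flow can be wired without polling.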

Observability

OpenTelemetry → Cloud Trace + Cloud Monitoring. Google Managed Prometheus + Grafana dashboards for existing Prometheus stacks.
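The OTel-to-Cloud-Trace hop hinges on one naming convention: Cloud Trace addresses spans by a resource name built from the project plus the OpenTelemetry trace ID (32 hex chars) and span ID (16 hex chars). A small sketch of that mapping — the function name is illustrative:

```python
def cloud_trace_name(project_id: str, trace_id: str, span_id: str) -> str:
    """Map an OpenTelemetry trace/span ID pair onto the resource name
    used by the Cloud Trace API: 32 hex chars of trace, 16 of span."""
    assert len(trace_id) == 32, "OTel trace IDs are 128-bit (32 hex chars)"
    assert len(span_id) == 16, "OTel span IDs are 64-bit (16 hex chars)"
    return f"projects/{project_id}/traces/{trace_id}/spans/{span_id}"
```

The same `projects/{project}/traces/{trace_id}` prefix is what lets Cloud Logging correlate log entries to traces.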

Model Tier System

Every agent persona maps to a model tier. Route the right model to the right task.

Fast
gemini-2.5-flash

High-throughput, low-latency tasks. Monitoring, log tailing, status checks.

Balanced · Default
gemini-2.5-pro

General-purpose reasoning. Code review, incident response, architecture decisions.

Powerful
gemini-3.1-pro-preview

Complex multi-step tasks. Security audits, cross-system analysis, planning.

Local
gemma-3-27b-it

Self-hosted on GKE GPU/TPU nodes. Air-gapped environments, cost optimization.
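The tier table above can be sketched as a two-level lookup: persona → tier → model, falling back to the balanced default. The tier names and model IDs come from the table; the persona names and the dict-based mapping are illustrative assumptions, not the platform's config format.

```python
# Tier → model IDs, taken from the tier table above.
TIERS = {
    "fast": "gemini-2.5-flash",
    "balanced": "gemini-2.5-pro",  # default
    "powerful": "gemini-3.1-pro-preview",
    "local": "gemma-3-27b-it",
}

# Persona → tier; persona names here are hypothetical examples.
PERSONA_TIERS = {
    "monitor": "fast",      # log tailing, status checks
    "auditor": "powerful",  # security audits, cross-system analysis
    "airgapped": "local",   # self-hosted Gemma on GKE
}

def model_for(persona: str) -> str:
    """Resolve a persona to a model ID; unmapped personas get the
    balanced default, gemini-2.5-pro."""
    return TIERS[PERSONA_TIERS.get(persona, "balanced")]
```

Keeping the persona-to-tier map separate from the tier-to-model map means a model upgrade (say, a new Flash release) touches one line, not every persona.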