Forge is a secure, portable AI Agent runtime. Run agents locally, in the cloud, or in enterprise environments without exposing inbound tunnels.
Updated Mar 6, 2026 - Go
L0: The Missing Reliability Substrate for AI. Streaming-first. Reliable. Replayable. Deterministic. Multimodal. Retries. Continuation. Fallbacks (provider & model). Consensus. Parallelization. Guardrails. Atomic event logs. Byte-for-byte replays.
Unified local AI interface & LLM runtime (supports GGUF, Ollama, OpenAI, Gemini, etc.). In search of building a sovereign AI system ✨
LLM agent runtime with paged virtual memory and spatial context awareness
Production-grade TypeScript AI runtime focused on reliability, governance, and reproducible LLM systems. Multi-provider gateway, agents, RAG, workflows, policy engine, audit trails, and deterministic testing — built for teams shipping AI in production.
Multi-provider LLM runtime core: routing, key management, and resilient fallback execution for agent orchestration.
mindscript-runtime is the minimal reference implementation of the MindScript engine. It provides:
- a CLI for running .ms / .ms.md files
- a parser that converts MindScript into an internal AST
- a stage-based runtime that executes each stage sequentially
- adapters for different LLM backends (OpenAI, Gemini, local stub)
Agent memory runtime: short/long-term context, vector persistence, compression, and personalization primitives.
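The short/long-term context split above can be sketched as a bounded recent-turn window that evicts overflow into a long-term store; the compression step here is a stand-in (real systems typically summarize with an LLM or embed into a vector store), and all names are hypothetical:

```go
package main

import "fmt"

// Memory holds a bounded short-term window of recent turns; when the
// window overflows, the oldest turn is "compressed" into long-term storage.
type Memory struct {
	shortTerm []string
	longTerm  []string
	limit     int
}

// Add appends a turn and evicts the oldest one past the limit.
func (m *Memory) Add(turn string) {
	m.shortTerm = append(m.shortTerm, turn)
	if len(m.shortTerm) > m.limit {
		oldest := m.shortTerm[0]
		m.shortTerm = m.shortTerm[1:]
		// Stand-in for real compression (an LLM summary or embedding).
		m.longTerm = append(m.longTerm, "summary: "+oldest)
	}
}

func main() {
	m := &Memory{limit: 2}
	m.Add("hi")
	m.Add("how are you")
	m.Add("tell me about Go")
	fmt.Println(len(m.shortTerm), len(m.longTerm)) // 2 1
}
```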