OARC & AgentChef

Advancing AI Agent Development with Local Models and Dataset Generation

About OARC 🌌

Ollama Agent Roll Cage (OARC) is a Python-based framework that combines Ollama LLMs, Coqui-TTS, Keras classifiers, LLaVA vision, Whisper speech recognition, and YOLOv8 object detection into a unified chatbot-agent API for local, custom automation.

Unified API 🔄

Streamline chatbot agent design and deployment with a comprehensive API.

Multimodal Capabilities 🎯

Integrate speech, vision, and data retrieval seamlessly.

Custom Automation 🤖

Build tailored workflows for your unique use cases.

What is an Agent? 🤖🧠

"An agent refers to the algorithmic logic that wraps a model and via iteration, generates the chain of thought output for the model" — Borch

Agentic Multi-modal Superalignment 🎯

OARC aims to achieve multimodal superalignment within agentic action spaces and their inter-communication protocols.

The Agentic Action Space 🚀

Where LLM-generated text prompts transcend mere words and become actionable through programming logic.

Core Components of OARC Agents 🛠️

Modal Flags 🏁

Configure agent capabilities (TTS_FLAG, STT_FLAG, LLAVA_FLAG, etc.)

Models Configuration 📊

Define which language, vision, and speech models the agent uses

Tool Integration 🔧

External services and functions the agent can access
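Taken together, these three components might be wired up as in the sketch below. This is illustrative only, not OARC's actual API: the flag names come from the text above, while the class name, model tags, and tool-registry shape are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an OARC-style agent configuration.
# Flag names (TTS_FLAG, STT_FLAG, LLAVA_FLAG) are from the text;
# everything else here is hypothetical.

@dataclass
class AgentConfig:
    # Modal flags: toggle which capabilities the agent exposes
    TTS_FLAG: bool = False       # text-to-speech output
    STT_FLAG: bool = False       # speech-to-text input
    LLAVA_FLAG: bool = False     # LLaVA vision input
    # Models configuration: which backend each modality uses
    llm_model: str = "llama3"    # example Ollama model tag
    vision_model: str = "llava"
    speech_model: str = "xtts_v2"  # example Coqui-TTS voice model
    # Tool integration: external functions the agent may call
    tools: dict = field(default_factory=dict)

    def register_tool(self, name, fn):
        """Expose a Python callable to the agent under `name`."""
        self.tools[name] = fn

# Usage: a voice-enabled agent with one custom tool
cfg = AgentConfig(TTS_FLAG=True, STT_FLAG=True)
cfg.register_tool("word_count", lambda text: len(text.split()))
print(cfg.tools["word_count"]("hello agent world"))  # → 3
```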

Speech-to-Speech Architecture 👂👄

OARC implements a sophisticated speech-to-speech pipeline that enables fluid voice interaction with agents.
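At a high level, such a pipeline chains recognition, generation, and synthesis in a loop. The sketch below uses stand-in stub functions for the three stages (the real pipeline wraps Whisper, an Ollama LLM, and Coqui-TTS); it shows only the shape of one conversational turn.

```python
# Shape of a speech-to-speech turn; the three stage functions are
# stubs standing in for Whisper STT, an Ollama LLM, and Coqui-TTS.

def transcribe(audio):       # stand-in for Whisper
    return audio["text"]

def generate_reply(prompt):  # stand-in for an Ollama LLM call
    return f"You said: {prompt}"

def synthesize(text):        # stand-in for Coqui-TTS
    return {"waveform": None, "text": text}

def speech_to_speech(audio):
    """One turn: audio in -> transcript -> LLM reply -> audio out."""
    transcript = transcribe(audio)
    reply = generate_reply(transcript)
    return synthesize(reply)

out = speech_to_speech({"text": "hello"})
print(out["text"])  # → You said: hello
```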

Core Features 🎤

Silence-removal preprocessing, smart user interrupts, wake words, and a debate-moderator mode.
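Two of these features are simple to illustrate. A minimal sketch, assuming audio arrives as a list of amplitude samples and transcripts as plain strings; the threshold and the wake phrase are placeholders, not OARC's actual values:

```python
def strip_silence(samples, threshold=0.05):
    """Drop samples whose absolute amplitude is below the threshold --
    a crude stand-in for real silence-removal preprocessing."""
    return [s for s in samples if abs(s) >= threshold]

def has_wake_word(transcript, wake_word="hey oarc"):
    """Check a transcript for the wake phrase (placeholder phrase)."""
    return wake_word in transcript.lower()

audio = [0.0, 0.01, 0.4, -0.3, 0.02, 0.5]
print(strip_silence(audio))                         # → [0.4, -0.3, 0.5]
print(has_wake_word("Hey OARC, what time is it?"))  # → True
```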

Smart Processing 🧠

LLM output is chunked into sentences as a TTS preprocessing step, enabling real-time speech generation.
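Chunking at sentence boundaries lets TTS start speaking before the full LLM response has finished generating. A minimal sketch of the idea (a production version would handle abbreviations and token streams more carefully):

```python
import re

def sentence_chunks(text):
    """Split LLM output into sentence-sized chunks for TTS.
    Naive split on ., !, or ? followed by whitespace."""
    chunks = re.split(r"(?<=[.!?])\s+", text.strip())
    return [c for c in chunks if c]

reply = "Sure. The weather is clear! Anything else?"
for chunk in sentence_chunks(reply):
    # each chunk would be handed to the TTS engine immediately
    print(chunk)
```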

AgentChef 👨‍🍳

A comprehensive Python library for AI research, dataset generation, and conversation management using large language models.

AI-Powered Research 🔍

Search and process web content, ArXiv papers, and GitHub repositories.
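For the ArXiv side, the public arXiv Atom API at export.arxiv.org can be queried over plain HTTP. The sketch below only builds the request URL (no network call is made), and the query terms are just examples:

```python
from urllib.parse import urlencode

# Public arXiv export API endpoint (returns an Atom feed)
ARXIV_API = "http://export.arxiv.org/api/query"

def arxiv_query_url(terms, max_results=5):
    """Build an arXiv API search URL for the given keyword terms."""
    params = {
        "search_query": " AND ".join(f"all:{t}" for t in terms),
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

url = arxiv_query_url(["agent", "alignment"])
print(url)  # fetch this URL to receive Atom-formatted paper metadata
```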

Dataset Generation 📊

Create and expand high-quality conversation datasets for AI training.
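Dataset expansion typically takes seed conversations and asks an LLM to produce variants. The sketch below stubs out the LLM with a trivial paraphrase function, and the `{"role", "content"}` record shape follows the common chat-dataset convention rather than AgentChef's actual schema:

```python
def paraphrase(text):
    """Stub for an LLM paraphrase call (a real pipeline would
    prompt a local Ollama model here)."""
    return f"In other words: {text}"

def expand_dataset(seed_conversations, variants=2):
    """Grow a seed set by paraphrasing each user turn."""
    expanded = list(seed_conversations)
    for convo in seed_conversations:
        for _ in range(variants):
            expanded.append([
                {**turn, "content": paraphrase(turn["content"])}
                if turn["role"] == "user" else dict(turn)
                for turn in convo
            ])
    return expanded

seed = [[{"role": "user", "content": "What is OARC?"},
         {"role": "assistant", "content": "A local agent framework."}]]
data = expand_dataset(seed)
print(len(data))  # 1 seed + 2 variants → 3
```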

Quality Control ✨

Clean and validate generated datasets to ensure high quality.
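Much of this validation can be mechanical: drop malformed or duplicate conversations before training. A minimal sketch of such checks; the rules here (strict user/assistant alternation, non-empty turns, exact-duplicate removal) are illustrative assumptions, not AgentChef's actual criteria:

```python
import json

def is_valid(convo):
    """Valid if roles alternate user/assistant, starting with user,
    and every turn has non-empty content."""
    if not convo:
        return False
    expected = "user"
    for turn in convo:
        if turn.get("role") != expected or not turn.get("content", "").strip():
            return False
        expected = "assistant" if expected == "user" else "user"
    return True

def clean_dataset(conversations):
    """Keep valid conversations and remove exact duplicates."""
    seen, cleaned = set(), []
    for convo in conversations:
        key = json.dumps(convo, sort_keys=True)
        if is_valid(convo) and key not in seen:
            seen.add(key)
            cleaned.append(convo)
    return cleaned

good = [{"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"}]
bad = [{"role": "assistant", "content": ""}]
print(len(clean_dataset([good, bad, good])))  # bad and duplicate dropped → 1
```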


Documentation 📚