sumeetghimire started a new conversation
5mos ago
Multi-Provider Fallback in Laravel AI Orchestrator
Laravel AI Orchestrator now lets you chain multiple AI providers: if one fails, the package automatically tries the next. No manual try/catch blocks, no downtime.
```php
// Text generation: try OpenAI first, then fall back through the list in order.
Ai::prompt("Summarize why caching improves Laravel performance.")
    ->using('openai')
    ->fallback(['anthropic', 'gemini', 'ollama'])
    ->toText();

// Embeddings: the same fallback pattern, starting from a custom provider.
Ai::embed("Laravel is a PHP web application framework")
    ->using('custom')
    ->fallback(['huggingface', 'openai'])
    ->toEmbeddings();
```
If OpenAI fails, the request automatically falls back to Anthropic, then Gemini, then Ollama, until one succeeds.
A clean, Laravel-native approach to reliability and orchestration.
This makes it one of the first Laravel packages to combine multi-level AI fallback with contextual memory and embeddings.
🔗 GitHub: sumeetghimire/Laravel-AI-Orchestrator
sumeetghimire started a new conversation
5mos ago
Laravel AI Orchestrator is a Laravel package similar to Prism. It supports multiple providers (OpenAI, Anthropic, Gemini, and Ollama) with automatic fallback if one fails, structured outputs for clean data handling, and optional contextual memory that lets your AI remember previous conversations using the cache or database. You can test and manage everything with built-in Artisan commands (ai:test, ai:usage, ai:status, etc.) and toggle memory via .env. This is a new package I've built, aiming to create a Laravel-native AI layer that combines Prism's structured approach with advanced orchestration and memory support.
Example:

```php
Ai::remember('support-session-42')
    ->prompt('What did I decide about cache TTL earlier?')
    ->toText();
```
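The post mentions a .env toggle for memory and a cache/database choice, but doesn't show the configuration itself. A minimal sketch of what that might look like; the variable names below are assumptions for illustration, not taken from the package docs:

```env
# Hypothetical .env entries -- names are illustrative, check the package README
AI_DEFAULT_PROVIDER=openai
AI_MEMORY_ENABLED=true
AI_MEMORY_DRIVER=cache   # or "database", per the post's memory options
```

The built-in commands named in the post (e.g. `php artisan ai:status`) can then be used to verify the setup.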
GitHub: sumeetghimire/Laravel-AI-Orchestrator