Engineering
Using Ollama for Local LLM Workflows in a Production-Oriented System
Where local LLMs fit inside GRAXEL and where managed APIs are still the safer choice.
#Ollama #LLM #AI #Fallback #Operations