Updated: 2024-01-18
Deploy the LLM Gateway with Azure OpenAI in under 10 minutes. Follow our guide to integrate Gecholog.ai quickly and efficiently.
In this article, we explore how to deploy the LLM Gateway Gecholog.ai on your own Azure subscription in less than ten minutes. We will guide you through the deployment process and show you how to make your first test call to your LLM API via the gateway. These are the steps:
Gather prerequisites
One-Click Deployment
Make a test request
We will deploy the LLM Gateway Gecholog.ai on Azure and show how to connect it to your Azure OpenAI endpoint. In order to proceed with this tutorial, make sure you have:
Access to an Azure subscription, normally with the Azure Contributor role, so that you can execute a deployment, create a resource group, and create resources.
An Azure OpenAI endpoint, commonly referred to as OPENAI_API_BASE, which normally looks something like this: https://your.openai.azure.com/
An Azure OpenAI API key (the OPENAI_API_KEY), retrieved from your Azure OpenAI deployment.
An Azure OpenAI deployment name, chosen when you created your Azure OpenAI endpoint.
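The prerequisites above can be collected as environment variables and sanity-checked with a short Python sketch. The variable names and the validation rule are conventions assumed here, not requirements of the deployment form:

```python
import os
from urllib.parse import urlparse

# Conventional environment variable names (an assumption for this sketch).
OPENAI_API_BASE = os.environ.get("OPENAI_API_BASE", "https://your.openai.azure.com/")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")

def looks_like_azure_openai_endpoint(url: str) -> bool:
    """Loose sanity check: HTTPS and an *.openai.azure.com host."""
    parsed = urlparse(url)
    return (
        parsed.scheme == "https"
        and parsed.hostname is not None
        and parsed.hostname.endswith(".openai.azure.com")
    )

print(looks_like_azure_openai_endpoint(OPENAI_API_BASE))  # True for the example above
```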
This tutorial shows how quick and easy it is to deploy Gecholog.ai on Azure and connect it to Azure OpenAI. Since Gecholog.ai is a container-based LLM Gateway, you can deploy it in any cloud or on-premises. Since it forwards the traffic, you can integrate it with any LLM provider, either on the same network or over the internet.
On the Gecholog.ai resource GitHub page, you will find the resource definition if you want to inspect the deployment template or run it via the Azure CLI. However, we want to facilitate the "One-Click" deployment, and it's as easy as clicking this link.
Make sure you log in to your Azure account and simply fill out the deployment form.
Create a new resource group with a name of your choice, something like gecholog-eval.
Add your AISERVICE_API_BASE (your Azure OpenAI endpoint) to the deployment form.
Review and create the deployment. Within a few minutes, all the resources will be deployed, and the Gecholog.ai container will be connected, up, and running.
What's included in the deployment? It consists of:
A pre-built but customizable Gecholog.ai Dashboard for DevOps.
The standard gecholog/gecholog:latest container from the public Docker Hub repository.
An Azure Log Analytics workspace for ingesting logs.
An Azure Storage Account.
Gecholog.ai is now configured to forward the traffic to your Azure OpenAI endpoint.
The Azure "One-click" deployment uses managed Azure Container Instances (ACI) to host the Gecholog.ai container. ACI is configured to generate a fully qualified domain name (FQDN) for the container's endpoint, allowing immediate use of the Gecholog.ai service. You can locate the FQDN of the Gecholog.ai container in the Overview section:
The Gecholog.ai LLM Gateway deployed on Azure receives an automatic URL that you can use to send your LLM API requests (which will be forwarded to your Azure OpenAI endpoint). Everything is now deployed under your own Azure subscription.
To make the first request, we will demonstrate using Postman, but you can test any way you want, for example with cURL from the command line or with Python.
First, you need the URL in the form:
gecholog-y0urun1qu3url.northeurope.azurecontainer.io:5380/service/standard/openai/deployments/gpt4/chat/completions?api-version=2023-05-15
Replace y0urun1qu3url and gpt4 with your Azure FQDN and your Azure OpenAI deployment name, respectively. Add the URL and OPENAI_API_KEY into Postman as shown:
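If you prefer to script this, the URL can be assembled in Python. The port 5380 and the path are taken from the example URL above; a minimal sketch:

```python
def gateway_url(fqdn: str, deployment: str, api_version: str = "2023-05-15") -> str:
    """Assemble the Gecholog.ai chat-completions URL from the container's
    ACI FQDN and your Azure OpenAI deployment name (prefix with http:// or
    https:// as appropriate for your setup)."""
    return (
        f"{fqdn}:5380/service/standard/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

print(gateway_url("gecholog-y0urun1qu3url.northeurope.azurecontainer.io", "gpt4"))
```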
Add the standard "Hello World" type body and submit the request.
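The same request can also be sketched in Python instead of Postman. The http:// scheme and the use of the requests library are assumptions here, and the URL, deployment name, and key are placeholders to replace with your own values; the "api-key" header name follows the Azure OpenAI convention:

```python
import json

# A minimal "Hello World" chat-completions body.
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, world!"},
    ]
}

# Placeholder values -- substitute your own FQDN, deployment name, and key.
url = (
    "http://gecholog-y0urun1qu3url.northeurope.azurecontainer.io:5380"
    "/service/standard/openai/deployments/gpt4/chat/completions"
    "?api-version=2023-05-15"
)
headers = {"api-key": "<OPENAI_API_KEY>", "Content-Type": "application/json"}

# Uncomment once your deployment is live (requires: pip install requests):
# import requests
# resp = requests.post(url, headers=headers, data=json.dumps(body), timeout=30)
# print(resp.json())
print(json.dumps(body, indent=2))
```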
And with the response, you have successfully 1) deployed Gecholog.ai on Azure, 2) configured it for your LLM API endpoint, and 3) made your first request to the LLM API via the LLM Gateway.
We recommend the following articles if you want to explore the world of LLM Gateways and Gecholog.ai further:
Azure Deployment Info on our docs site.
Experience the Powers of LLM Gateway: Five Pillars of LLM DevOps
LLM DevOps Optimization: Introduction to Traffic Routing with LLM Gateway
Deploying an LLM Gateway such as Gecholog.ai on Azure can be accomplished in just minutes. This guide highlights the ease of setup, the flexibility of Gecholog.ai across different cloud environments, and the steps to conduct your first API request, providing a compelling solution for businesses and developers seeking effective LLM integration.
Explore the endless possibilities of LLM Gateway and Azure OpenAI integration. Simplify your AI deployment, and begin your journey to smarter, faster, and more efficient AI operations today!