How to Deploy LLM Gateway Gecholog.ai on Azure In Less Than Ten Minutes

Updated: 2024-01-18

Deploy the LLM Gateway with Azure OpenAI in under 10 minutes. Follow our guide to integrate Gecholog.ai quickly and efficiently.


Overview

In this article, we explore how to deploy the LLM Gateway Gecholog.ai on your own Azure subscription in less than ten minutes. We will guide you through the deployment process and show you how to make your first test call to your LLM API via the gateway. These are the steps:

  1. Gather prerequisites

  2. One-Click Deployment

  3. Make a test request

Gather Prerequisites

We will deploy the LLM Gateway Gecholog.ai on Azure and show how to connect it to your Azure OpenAI endpoint. In order to proceed with this tutorial, make sure you have:

  • Access to an Azure subscription, normally with the Azure Contributor role, so that you can run a deployment and create a resource group and its resources.

  • An Azure OpenAI endpoint, commonly referred to as OPENAI_API_BASE, which normally looks something like this: https://your.openai.azure.com/

  • An Azure OpenAI API key (the OPENAI_API_KEY), retrieved from your Azure OpenAI resource.

  • The name of an Azure OpenAI deployment, created when you set up your Azure OpenAI endpoint.
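Before deploying, you can sanity-check the prerequisite values. Below is a minimal Python sketch; the environment-variable names follow the OPENAI_API_BASE / OPENAI_API_KEY convention used in this article, and the endpoint pattern is only the common https://your.openai.azure.com/ shape:

```python
import os
import re

def check_azure_openai_prereqs(env=os.environ):
    """Return a list of problems with the Azure OpenAI settings; empty if OK."""
    problems = []
    base = env.get("OPENAI_API_BASE", "")
    key = env.get("OPENAI_API_KEY", "")
    # The endpoint normally looks like https://your.openai.azure.com/
    if not re.match(r"^https://[\w.-]+\.openai\.azure\.com/?$", base):
        problems.append(f"OPENAI_API_BASE looks wrong: {base!r}")
    if not key:
        problems.append("OPENAI_API_KEY is not set")
    return problems

# Example: check_azure_openai_prereqs() prints nothing to fix when both
# variables are set correctly in your shell environment.
```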

Cloud Agnostic LLM Gateway

This tutorial shows how quick and easy it is to deploy Gecholog.ai on Azure and connect it to Azure OpenAI. Since Gecholog.ai is a container-based LLM Gateway, you can deploy it in any cloud or on-premises. And since it forwards the traffic, you can integrate it with any LLM provider, whether on the same network or over the internet.

Use "One-Click" Deployment of LLM Gateway

On the Gecholog.ai resource GitHub page, you will find the resource definition if you want to inspect the deployment template or run it via the Azure CLI. However, the fastest route is the "One-Click" deployment, which is as easy as clicking the deployment link.

Make sure you log in to your Azure account and simply fill out the deployment form.

Create a New Resource Group

Create a new resource group with a name of your choice, something like gecholog-eval.

Image: Azure Resource Group for LLM Gateway Gecholog.ai

Add your AISERVICE_API_BASE (the value of your Azure OpenAI endpoint, OPENAI_API_BASE) to the deployment form.

Image: Azure OpenAI API Base for LLM Gateway Gecholog.ai

Proceed to Deploy

Review and create the deployment. Within a few minutes, all the resources will be deployed, and the Gecholog.ai container will be connected, up, and running.

What's included in the deployment? The deployment consists of:

  • A pre-built but customizable Gecholog.ai Dashboard for DevOps.

  • The standard gecholog/gecholog:latest container from the public Docker Hub repository.

  • An Azure Log Analytics workspace for ingesting logs.

  • An Azure Storage Account.

Gecholog.ai is now configured to forward the traffic to your Azure OpenAI endpoint.

Send Request to LLM API via LLM Gateway

Find the FQDN of the LLM Gateway

The Azure "One-click" deployment uses managed Azure Container Instances (ACI) to host the Gecholog.ai container. ACI is configured to generate a fully qualified domain name (FQDN) for the container's endpoint, allowing immediate use of the Gecholog.ai service. You can locate the FQDN of the Gecholog.ai container in the Overview section:

Image: FQDN of LLM Gateway Gecholog.ai deployed on Azure

Make your first request via LLM Gateway

The Gecholog.ai LLM Gateway deployed on Azure receives an automatic URL that you can use to send your LLM API requests (which will be forwarded to your Azure OpenAI endpoint). Everything is now deployed under your own Azure subscription.

To make the first request, we will demonstrate the flow in Postman, but you can test it any way you want, for example with cURL from the command line or with Python.

First, you need the URL in the form:

gecholog-y0urun1qu3url.northeurope.azurecontainer.io:5380/service/standard/openai/deployments/gpt4/chat/completions?api-version=2023-05-15

Replace y0urun1qu3url and gpt4 with the information from your Azure FQDN and your Azure OpenAI deployment. Add the URL and OPENAI_API_KEY to Postman as shown:
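The URL pattern above can also be assembled programmatically. Here is a small Python sketch; the port 5380 and the /service/standard/ router path are taken from the example URL in this deployment, and the FQDN and deployment name below are the same placeholders used in the article:

```python
def gateway_chat_url(fqdn, deployment, api_version="2023-05-15",
                     port=5380, router="/service/standard/"):
    """Build the Gecholog.ai gateway URL for an Azure OpenAI chat-completions call."""
    return (
        f"{fqdn}:{port}{router}openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# Example with the article's placeholder FQDN and deployment name:
print(gateway_chat_url(
    "gecholog-y0urun1qu3url.northeurope.azurecontainer.io", "gpt4"))
```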

Image: URL and OPENAI_API_KEY via LLM Gateway using Postman

Add the standard "Hello World" type body and submit the request.

Image: Making a Request to LLM API via LLM Gateway using Postman

And with the response, you have successfully 1) deployed Gecholog.ai on Azure, 2) configured it for your LLM API endpoint, and 3) made your first request to the LLM API via the LLM Gateway.
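The same call can be scripted instead of using Postman. Below is a minimal Python sketch using only the standard library; the http:// scheme, FQDN, deployment name, and key are assumptions/placeholders you must adapt to your own deployment, and the api-key header follows the Azure OpenAI convention, which the gateway forwards:

```python
import json
import urllib.request

def hello_world_payload():
    """A standard 'Hello World'-style chat-completions request body."""
    return {"messages": [{"role": "user", "content": "Hello World"}]}

def call_gateway(url, api_key):
    """POST the payload to the Gecholog.ai gateway, which forwards it to Azure OpenAI."""
    req = urllib.request.Request(
        "http://" + url,  # assumption: plain HTTP to the ACI endpoint; adjust if you terminate TLS
        data=json.dumps(hello_world_payload()).encode(),
        headers={"Content-Type": "application/json", "api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (replace the placeholders with your own FQDN, deployment, and key):
# call_gateway(
#     "gecholog-y0urun1qu3url.northeurope.azurecontainer.io:5380"
#     "/service/standard/openai/deployments/gpt4/chat/completions?api-version=2023-05-15",
#     "YOUR_OPENAI_API_KEY",
# )
```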


Conclusion

Deploying an LLM Gateway such as Gecholog.ai on Azure can be efficiently accomplished in just minutes. This guide highlights the ease of setup, the flexibility of Gecholog.ai across different cloud environments, and the steps to conduct your first API request, providing a compelling solution for businesses and developers seeking effective LLM integration.


