🔥 How I Built a High-Performance AI-Powered Chatbot with Deno and No Frameworks (Seriously!) 👨‍💻🤯
Skip the overhead of traditional frameworks and learn how I used Deno, the fast and secure JavaScript/TypeScript runtime, to create a blazing-fast AI chatbot with zero dependencies on frameworks like Express, Koa, or Fastify. This post walks through the whole build: project setup, a tiny OpenAI client, a hand-rolled HTTP handler, and a quick benchmark against Express.
Let's dive in 🚀
Most tutorials for building AI chatbots jump straight into a heavy setup: Express, a middleware stack, and three layers of abstraction. Frankly, most of that is overkill for a chatbot.
Deno is modern, fast, secure by default, and has out-of-the-box support for TypeScript. When paired with Web APIs like fetch() and built-in HTTP server capabilities, it's more than enough.
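To show how little ceremony that means in practice, here is a complete "hello world" HTTP server; Deno.serve is part of the runtime and defaults to port 8000 (the file name is arbitrary):

```ts
// server.ts, run with: deno run --allow-net server.ts
Deno.serve((_req: Request) => new Response("Hello from Deno!"));
```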
🚫 Say no to:

```sh
npm install express body-parser cors dotenv axios  # ❌
```
Make sure you have Deno installed.
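If you don't have it yet, the official install script is a one-liner:

```sh
curl -fsSL https://deno.land/install.sh | sh
```

Then verify the version: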
```sh
deno --version  # deno 1.42.0 (or higher)
```
Initialize your project:
```sh
mkdir deno-chatbot && cd deno-chatbot
touch main.ts
```
That's it. No package.json, no node_modules.
Create a .env file (Deno works with dotenv via deno-dotenv if you choose to use it, but we won't rely on it for this demo).
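If you do want .env loading, the standard library's std/dotenv has a one-line side-effect import; a minimal sketch (the version pin is an example, use whichever std release you're on):

```ts
// Loads .env into Deno.env at import time; needs --allow-read and --allow-env.
import "https://deno.land/std@0.221.0/dotenv/load.ts";

console.log(Deno.env.get("OPENAI_API_KEY")); // populated from .env
```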
Or you can directly export your API key:
```sh
export OPENAI_API_KEY="sk-xxxxxx"
```
Then in your TypeScript code:
```ts
// openai.ts
export async function sendMessage(prompt: string): Promise<string> {
  const apiKey = Deno.env.get("OPENAI_API_KEY");
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is not set");
  }

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  // Surface API errors instead of crashing on a missing `choices` field.
  if (!response.ok) {
    throw new Error(`OpenAI API error: ${response.status}`);
  }

  const data = await response.json();
  return data.choices[0].message.content;
}
```
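Before wiring up the server, you can sanity-check the helper directly. This self-test block is my own addition, not part of the original module; import.meta.main is true only when the file is executed directly:

```ts
// Append to openai.ts, then run: deno run --allow-net --allow-env openai.ts
if (import.meta.main) {
  console.log(await sendMessage("Tell me a one-line joke"));
}
```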
Let's build a basic server-side chatbot handler:
```ts
// main.ts
import { sendMessage } from "./openai.ts";

// Deno.serve is built into the runtime (stable since Deno 1.35),
// so no std import or framework is needed. It listens on port 8000 by default.
Deno.serve(async (req: Request) => {
  if (req.method === "POST" && new URL(req.url).pathname === "/chat") {
    const { prompt } = await req.json();
    const reply = await sendMessage(prompt);
    return new Response(JSON.stringify({ reply }), {
      headers: { "Content-Type": "application/json" },
    });
  }
  return new Response("404 Not Found", { status: 404 });
});
```
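One note: if a browser frontend on a different origin will call this endpoint, it needs CORS headers. Since we skipped the cors package, here is a minimal hand-rolled sketch of the same handler with CORS added (allowing any origin is an assumption for the demo; tighten it for real use):

```ts
import { sendMessage } from "./openai.ts";

// Demo-only CORS policy: "*" allows any origin (an assumption, not a recommendation).
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

Deno.serve(async (req: Request) => {
  // Browsers send a preflight OPTIONS request before the cross-origin POST.
  if (req.method === "OPTIONS") {
    return new Response(null, { status: 204, headers: CORS_HEADERS });
  }
  if (req.method === "POST" && new URL(req.url).pathname === "/chat") {
    const { prompt } = await req.json();
    const reply = await sendMessage(prompt);
    return new Response(JSON.stringify({ reply }), {
      headers: { "Content-Type": "application/json", ...CORS_HEADERS },
    });
  }
  return new Response("404 Not Found", { status: 404, headers: CORS_HEADERS });
});
```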
You've just written a lean, fast AI chatbot backend 🎉
Run your server:
```sh
OPENAI_API_KEY="sk-xxxxxx" deno run --allow-net --allow-env main.ts
```
Test it via curl or your frontend:
```sh
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell me a joke"}'
```
Here's what I found with wrk after benchmarking both the Deno and Express versions:
```
Running 10s test @ http://localhost:8000/chat
  8 threads and 64 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.23ms   12.22ms  92.56ms   85.00%
    Req/Sec    200.30    11.50    230
```

> Total requests: ~20% higher than the equivalent Express implementation
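For reference, a run matching those thread and connection counts would look something like the command below. The exact invocation is my reconstruction, not the author's script, and wrk needs a small Lua script (-s) to attach the JSON POST body:

```sh
wrk -t8 -c64 -d10s -s post.lua http://localhost:8000/chat
```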
✅ Fewer dependencies
✅ Better performance
✅ Zero configuration
You could extend this further, for example with logging via std/log; a minimal sketch follows below.
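A minimal std/log setup might look like this (the pinned version is an example; use whichever std release matches your project):

```ts
// std/log's default logger prints to the console; no handler setup required.
import * as log from "https://deno.land/std@0.221.0/log/mod.ts";

log.info("chat request received");
log.error("OpenAI request failed");
```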
You don't need a truckload of libraries to build something cool. Deno challenges the Node ecosystem's tendency toward bloat, and this chatbot shows it's more than capable on its own.
Now, go forth and build better bots 🦾!
💡 Want help building your own AI chatbot like this? We offer AI Chatbot Development services.