Most Large Language Models (LLMs) return Markdown by default, so rather than writing yet another custom parser, why not upgrade your UI instantly? With the `Response` component from `@ai-elements/response` (built on shadcn), you can render polished LLM responses out of the box in your React application. This guide walks you through setting up a lean React + TypeScript project with real-time LLM streaming, powered by TailwindCSS, shadcn, and Fency.ai.
Here's an example of what your LLM responses will look like:

1. Create a new Vite project
Start with a fresh React + TypeScript project:
```bash
npm create vite@latest streamdown-example -- --template react-ts
cd streamdown-example
npm install
npm run dev
```
You should now see the default Vite app running at http://localhost:5173.
2. Add TailwindCSS
Install Tailwind and its Vite plugin:
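This is the standard Tailwind v4 install; the `@tailwindcss/vite` plugin it adds is the same one imported in `vite.config.ts` in step 4:

```bash
npm install tailwindcss @tailwindcss/vite
```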
Import Tailwind at the very top of your `index.css`:
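With the v4 plugin, a single `@import` suffices (the same line shown at the top of `index.css` in step 5):

```css
/* top of src/index.css */
@import "tailwindcss";
```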
3. Configure TypeScript paths required by shadcn
Add `baseUrl` and `paths` to `compilerOptions` in `tsconfig.json`:
```json
{
  "files": [],
  "references": [
    { "path": "./tsconfig.app.json" },
    { "path": "./tsconfig.node.json" }
  ],
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```
Add the same `baseUrl` and `paths` to `compilerOptions` in `tsconfig.app.json`:
```json
{
  "compilerOptions": {
    "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.app.tsbuildinfo",
    "target": "ES2022",
    "useDefineForClassFields": true,
    "lib": ["ES2022", "DOM", "DOM.Iterable"],
    "module": "ESNext",
    "types": ["vite/client"],
    "skipLibCheck": true,
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "moduleDetection": "force",
    "noEmit": true,
    "jsx": "react-jsx",
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "erasableSyntaxOnly": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedSideEffectImports": true,
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  },
  "include": ["src"]
}
```
4. Configure Vite for shadcn
Install `@types/node` to enable `path` and `__dirname`:

```bash
npm install -D @types/node
```
Update `vite.config.ts` so Tailwind and path aliases work:
```ts
import tailwindcss from '@tailwindcss/vite'
import react from '@vitejs/plugin-react'
import path from 'path'
import { defineConfig } from 'vite'

export default defineConfig({
  plugins: [react(), tailwindcss()],
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src'),
    },
  },
})
```
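This alias is what shadcn-generated components rely on; a typical shadcn-style import (not code you need to write yourself) looks like:

```ts
// shadcn components resolve shared helpers through the '@' alias,
// which now points at ./src
import { cn } from '@/lib/utils'
```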
5. Set up shadcn and @ai-elements Response
Initialize shadcn:
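At the time of writing, the shadcn CLI is run through npx; the exact invocation may differ for your setup:

```bash
npx shadcn@latest init
```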
You will be asked a few questions to configure `components.json`.
Add the `Response` component from AI Elements:
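AI Elements ships its own CLI for this; assuming its current command syntax, this places the component under `src/components/ai-elements/`, matching the import in `App.tsx` below:

```bash
npx ai-elements@latest add response
```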
Finally, update `index.css` so the styling for `streamdown` works correctly (the `@source` directive tells Tailwind to scan streamdown's compiled output for class names). The start of your `index.css` should now look like this:
@import "tailwindcss";
@import "tw-animate-css";
@source "../node_modules/streamdown/dist/index.js";
@custom-variant dark (&:is(.dark *));
:root {
6. Add Fency.ai
We’ll use Fency to call LLMs from React. If you haven’t already:

- Sign up at app.fency.ai/signup
- Create a new publishable key in the dashboard
Install the npm packages:
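Assuming the package names match the imports used in the next step:

```bash
npm install @fencyai/js @fencyai/react
```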
Update `main.tsx` to include the `FencyProvider` and your newly created publishable key:
```tsx
import { loadFency } from '@fencyai/js'
import { FencyProvider } from '@fencyai/react'
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import App from './App.tsx'
import './index.css'

const fency = loadFency({
  publishableKey: 'fency_pk_replace_with_your_own',
})

createRoot(document.getElementById('root')!).render(
  <StrictMode>
    <FencyProvider fency={fency}>
      <App />
    </FencyProvider>
  </StrictMode>
)
```
7. Build the App
Update your `App.tsx` to include a simple app that streams an OpenAI chat completion and renders it with `<Response>`:
```tsx
import { useStreamingChatCompletions } from '@fencyai/react'
import { Response } from './components/ai-elements/response'

function App() {
  const { latest, createStreamingChatCompletion } =
    useStreamingChatCompletions()

  return (
    <div className="w-screen h-screen flex flex-col p-10">
      <div>
        <button
          onClick={() => {
            createStreamingChatCompletion({
              openai: {
                model: 'gpt-4o-mini',
                messages: [
                  {
                    role: 'user',
                    content:
                      'Show me some react code and 1 example of how to use it.',
                  },
                ],
              },
            })
          }}
          className="px-4 py-2 rounded-lg"
        >
          {latest?.loading
            ? 'Streaming...'
            : 'Create Chat Completion'}
        </button>
      </div>
      <div className="mt-10">
        {latest?.loading && <div>Streaming...</div>}
        {latest?.response && <Response>{latest.response}</Response>}
      </div>
    </div>
  )
}

export default App
```
8. Try It Out 🚀
- Run the app with `npm run dev`
- Click “Create Chat Completion”
- Watch the LLM response stream in, beautifully rendered with markdown, code blocks, and formatting, thanks to `<Response>`

The complete codebase is available at https://github.com/fencyai/streamdown-example.