Getting started
Fency.ai provides seamless LLM integration for React webapps. Get up and running in minutes with our React SDK and start building AI-powered features.
Quick start
To get started with Fency.ai in your React application:
- Sign up: Create a Fency.ai account
- Create a publishable key: Generate your first publishable key in the dashboard
- Start building: Follow our React integration guide
What you can build
With Fency.ai, you can easily add AI capabilities to your React applications:
- Chat interfaces: Build conversational AI experiences
- Data analysis: Analyze and summarize data
- Code assistance: Help users with programming tasks
Supported models
Fency.ai supports leading models from OpenAI, Anthropic, and Gemini:
- OpenAI: GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, GPT-4o, GPT-4o-mini
- Anthropic: Claude Opus 4.0, Claude Sonnet 4.0
- Gemini: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite
Publishable keys
Publishable keys are the primary way to authenticate with Fency.ai from your React webapp. These keys are safe to expose in client-side code and provide secure access to our LLM integration services.
Creating a publishable key
To create a publishable key, navigate to your Fency.ai dashboard and go to the API Keys section. Click "Create new key" and select "Publishable key" as the key type.
Your publishable key will look something like this:
```
pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
```
Using your publishable key
Once you have your publishable key, you can use it to initialize the Fency.ai client in your React webapp:
```tsx
import { loadFency } from '@fencyai/js'
import { FencyProvider } from '@fencyai/react'
import { createRoot } from 'react-dom/client'
import App from './App.tsx'

const fency = loadFency({
  publishableKey:
    'pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef',
})

createRoot(document.getElementById('root')!).render(
  <FencyProvider fency={fency}>
    <App />
  </FencyProvider>
)
```
Allowed origins
Allowed origins (CORS) restrict which domains can use your publishable key, providing an additional layer of security.
Configuring allowed origins
When creating or editing a publishable key, you can specify which domains are allowed to make requests:
- Development: http://localhost:3000, http://localhost:5173
- Production: https://yourdomain.com, https://app.yourdomain.com
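Origins are compared as exact matches on scheme, host, and port rather than as prefixes, which is why both localhost ports above must be listed separately. The sketch below illustrates that matching behavior under standard CORS origin semantics; `isOriginAllowed` is an illustrative helper, not part of the Fency.ai SDK:

```typescript
// Illustrative sketch: an origin is its scheme + host + port, compared
// exactly against the allowed list (no prefix or wildcard matching).
function isOriginAllowed(requestOrigin: string, allowed: string[]): boolean {
  const normalize = (o: string) => new URL(o).origin
  const target = normalize(requestOrigin)
  return allowed.some((o) => normalize(o) === target)
}

const allowed = ['http://localhost:3000', 'https://app.yourdomain.com']

console.log(isOriginAllowed('http://localhost:3000', allowed)) // true
console.log(isOriginAllowed('http://localhost:5173', allowed)) // false (different port)
console.log(isOriginAllowed('https://evil.com', allowed)) // false
```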
Best practices
To maintain security:
- Use separate publishable keys for local development and production.
- Only add origins that actually need access. You can add multiple origins for the same key.
- Regularly review and update your allowed origins list, especially for production.
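One way to follow the first point is to select the key at startup based on where the app is running. A minimal sketch; the `pk_dev_`/`pk_live_` key values and the `publishableKeyFor` helper are placeholders of our own, not Fency.ai conventions:

```typescript
// Hypothetical helper: pick the publishable key for the current environment
// so localhost traffic never uses the production key. Key values are
// placeholders.
const KEYS = {
  development: 'pk_dev_1234567890abcdef1234567890abcdef1234567890abcdef12345678',
  production: 'pk_live_1234567890abcdef1234567890abcdef1234567890abcdef1234567',
} as const

function publishableKeyFor(hostname: string): string {
  const isLocal = hostname === 'localhost' || hostname === '127.0.0.1'
  return isLocal ? KEYS.development : KEYS.production
}

// At startup, pass the selected key to loadFency, e.g.:
// const fency = loadFency({ publishableKey: publishableKeyFor(window.location.hostname) })
```

Each key then only needs its own environment's origins in its allowed list.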
React Integration
Fency.ai provides a dedicated React SDK that makes it easy to integrate LLM capabilities into your React applications. Our SDK includes hooks, components, and utilities designed specifically for React developers.
Installation and setup
This section contains instructions for installing and setting up Fency.ai with React.
Before you start
We assume you have a basic understanding of React and TypeScript, and have created a publishable key in the dashboard.
Installation
Install the Fency.ai SDK for JavaScript and React:
```sh
npm install --save @fencyai/js @fencyai/react
```
Then provide your publishable key to loadFency and wrap your app in the FencyProvider to make the Fency.ai client available to the React hooks we'll use later.
```tsx
import { loadFency } from '@fencyai/js'
import { FencyProvider } from '@fencyai/react'
import { createRoot } from 'react-dom/client'

const fency = loadFency({
  publishableKey:
    'pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef',
})

createRoot(document.getElementById('root')!).render(
  <FencyProvider fency={fency}>
    <>{/* <App /> or similar component */}</>
  </FencyProvider>
)
```
Basic Chat Completions
Use the useBasicChatCompletions hook and its createBasicChatCompletion method to send chat completion requests to the Fency.ai API. This is the most common way to interact with LLM models.
Usage
```tsx
import { useBasicChatCompletions } from '@fencyai/react'

export default function BasicChatCompletionExample() {
  const { createBasicChatCompletion, latest } = useBasicChatCompletions()

  const response = latest?.data?.response
  const error = latest?.error
  const loading = latest?.loading

  return (
    <div>
      <button
        onClick={async () => {
          await createBasicChatCompletion({
            openai: {
              model: 'gpt-4o-mini',
              messages: [
                {
                  role: 'user',
                  content: 'Hello, how are you?',
                },
              ],
            },
          })
        }}
        disabled={loading}
      >
        Send Message
      </button>
      {error && <div>{error.message}</div>}
      {response && <div>{response}</div>}
    </div>
  )
}
```
Structured Chat Completions
Use the useStructuredChatCompletions hook and its createStructuredChatCompletion method to get responses in a specific format using Zod schemas. This is perfect for extracting structured data from AI responses.
Usage
```tsx
import { useStructuredChatCompletions } from '@fencyai/react'
import { useState } from 'react'
import { z } from 'zod'

const responseFormat = z.object({
  name: z.string(),
  age: z.number(),
})

export default function StructuredChatCompletionExample() {
  const { createStructuredChatCompletion, latest } =
    useStructuredChatCompletions()

  const [result, setResult] = useState<
    z.infer<typeof responseFormat> | undefined
  >(undefined)

  const error = latest?.error
  const loading = latest?.loading

  const handleClick = async () => {
    const completion = await createStructuredChatCompletion({
      openai: {
        model: 'gpt-4o-mini',
        messages: [
          {
            role: 'user',
            content:
              'Please tell me the name of a famous person and their age.',
          },
        ],
      },
      responseFormat,
    })
    if (completion.type === 'success') {
      setResult(completion.data.structuredResponse)
    }
  }

  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        Get Structured Response
      </button>
      {error && <div>{error.message}</div>}
      {result && (
        <pre className="text-xs">{JSON.stringify(result, null, 2)}</pre>
      )}
      {loading && <div>Getting structured response...</div>}
    </div>
  )
}
```
Streaming Chat Completions
Use the useStreamingChatCompletions hook and its createStreamingChatCompletion method to stream chat completions in real-time.
Usage
```tsx
import { useStreamingChatCompletions } from '@fencyai/react'

export default function StreamingCompletionExample() {
  const { createStreamingChatCompletion, latest } =
    useStreamingChatCompletions()

  const response = latest?.response
  const error = latest?.error
  const loading = latest?.loading

  const handleClick = async () => {
    await createStreamingChatCompletion({
      openai: {
        model: 'gpt-4o-mini',
        messages: [
          {
            role: 'user',
            content: 'Hello, how are you?',
          },
        ],
      },
    })
  }

  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        Start Streaming
      </button>
      {error && <div>{error.message}</div>}
      {response && <div>{response}</div>}
      {loading && <div>Streaming...</div>}
    </div>
  )
}
```
File Uploads
Use the useFiles hook and its createFile method to upload files to the Fency.ai API. The code snippet below is intentionally minimal; we recommend checking out our examples or our blog post covering how to use Uppy for a complete upload flow.
Usage
```tsx
import { useFiles } from '@fencyai/react'
import { S3PostRequest } from '@fencyai/js/lib/types/S3PostRequest'

export default function UploadFileExample() {
  const { createFile } = useFiles({
    onTextContentReady: async (event) => {
      console.log(event.textContent)
    },
    onUploadCompleted: async (event) => {
      console.log(event.fileId)
    },
  })

  const handleFileChange = async (
    event: React.ChangeEvent<HTMLInputElement>
  ) => {
    const file = event.target.files?.[0]
    if (file) {
      const result = await createFile({
        fileName: file.name,
        fileType: file.type,
        fileSize: file.size,
        extractTextContent: true,
      })
      if (result.type === 'success') {
        const response = result.file.s3PostRequest
        try {
          await postFile(response, file)
          console.log('File uploaded successfully')
        } catch (error) {
          console.error('Upload failed:', error)
        }
      }
    }
  }

  // Build the multipart form POST expected by S3 from the presigned
  // request returned by createFile.
  const postFile = async (request: S3PostRequest, file: File) => {
    const formData = new FormData()
    formData.append('key', request.key)
    formData.append('policy', request.policy)
    formData.append('x-amz-algorithm', request.xAmzAlgorithm)
    formData.append('x-amz-credential', request.xAmzCredential)
    formData.append('x-amz-date', request.xAmzDate)
    formData.append('x-amz-signature', request.xAmzSignature)
    formData.append('x-amz-security-token', request.sessionToken)
    formData.append('file', file)

    const response = await fetch(request.uploadUrl, {
      method: 'POST',
      body: formData,
    })
    if (!response.ok) {
      throw new Error(
        `Upload failed: ${response.status} ${response.statusText}`
      )
    }
    return response
  }

  return (
    <div>
      <input type="file" onChange={handleFileChange} />
    </div>
  )
}
```
Website Scraping
Use the useWebsites hook and its createWebsite method to scrape websites and extract content.
Usage
```tsx
import { useWebsites } from '@fencyai/react'

export default function ScrapeWebsiteExample() {
  const { createWebsite } = useWebsites({
    onHtmlContentReady: async (event) => {
      console.log(event.htmlContent)
    },
    onTextContentReady: async (event) => {
      console.log(event.textContent)
    },
  })

  const handleClick = async () => {
    await createWebsite({
      url: 'https://www.google.com',
      extractHtmlContent: true,
      extractTextContent: true,
    })
  }

  return (
    <div>
      <button onClick={handleClick}>Scrape Website</button>
    </div>
  )
}
```