Getting started
Fency.ai provides seamless LLM integration for React webapps. Install our React SDK and start building AI-powered features in minutes.
Quick start
To get started with Fency.ai in your React application:
- Sign up: Create a Fency.ai account
- Create a publishable key: Generate your first publishable key in the dashboard
- Start building: Follow our React integration guide
What you can build
With Fency.ai, you can easily add AI capabilities to your React applications:
- Chat interfaces: Build conversational AI experiences
- Data analysis: Analyze and summarize data
- Code assistance: Help users with programming tasks
Supported models
Fency.ai supports leading models from OpenAI, Anthropic, and Google:
- OpenAI: GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, GPT-4o, GPT-4o-mini
- Anthropic: Claude Opus 4.0, Claude Sonnet 4.0
- Google: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite
Publishable keys
Publishable keys are the primary way to authenticate with Fency.ai from your React webapp. These keys are safe to expose in client-side code and provide secure access to our LLM integration services.
Creating a publishable key
To create a publishable key, navigate to your Fency.ai dashboard and go to the API Keys section. Click "Create new key" and select "Publishable key" as the key type.
Your publishable key will look something like this:
pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Using your publishable key
Once you have your publishable key, you can use it to initialize the Fency.ai client in your React webapp:
import { loadFency } from '@fencyai/js'
import { FencyProvider } from '@fencyai/react'
import { createRoot } from 'react-dom/client'
import App from './App.tsx'

const fency = loadFency({
  publishableKey:
    'pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef',
})

createRoot(document.getElementById('root')!).render(
  <FencyProvider fency={fency}>
    <App />
  </FencyProvider>
)
Spending limits
Spending limits help you control costs and prevent unexpected charges. You can set daily, weekly, or monthly spending limits for each publishable key.
Setting spending limits
In your dashboard, you can configure spending limits when creating or editing a publishable key.
When a limit is reached, all API calls using that key are rejected with a 429 status code until the corresponding period (day, week, or month) resets.
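If your app does receive a 429, back off instead of retrying in a tight loop, and consider showing users a temporary "AI features unavailable" state until the limit resets.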
Allowed origins
Allowed origins (CORS) restrict which domains can use your publishable key, reducing the risk of misuse from sites you don't control.
Configuring allowed origins
When creating or editing a publishable key, you can specify which domains are allowed to make requests:
- Development: http://localhost:3000, http://localhost:5173
- Production: https://yourdomain.com, https://app.yourdomain.com
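Requests sent from any origin not on this list are rejected, so remember to list every domain your app is served from; subdomains (such as app.yourdomain.com above) must be added separately.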
Best practices
To keep your keys safe and your costs under control:
- Use separate publishable keys for local development and production.
- Set spending limits on every key, and keep them small for development keys; a runaway loop while you are building can otherwise burn through your budget quickly.
- Only add origins that actually need access. You can add multiple origins to the same key.
- Regularly review and update your allowed origins list, especially for production.
A note of caution
Spending limits and allowed origins are not security features and do not protect against unauthorized access. They are meant to help you control costs, and they do not guarantee that unexpected charges will never happen.
Pricing and models
Fency.ai offers transparent, usage-based pricing with no hidden fees. You only pay for what you use.
Pricing structure
Our pricing is based on the number of tokens processed by the underlying models, plus a small markup to cover our own costs.
Token-based pricing
Tokens are the basic units of text that LLMs process. As a rough guide:
- 1 token ≈ 4 characters in English text
- 1 token ≈ 0.75 words in English text
- 1,000 tokens ≈ 750 words in English text
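For example, a 1,500-word article comes out to roughly 2,000 tokens (1,500 ÷ 0.75).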
LLM models
We support a wide range of models from leading AI providers, each optimized for different use cases:
Model | Input (per 1K tokens) | Output (per 1K tokens) | Context Window (tokens) | Max Output (tokens) |
---|---|---|---|---|
GPT-4.1 | $0.002 | $0.008 | 1,047,576 | 32,768 |
GPT-4.1-mini | $0.0004 | $0.0016 | 1,047,576 | 32,768 |
GPT-4.1-nano | $0.0001 | $0.0004 | 1,047,576 | 32,768 |
GPT-4o | $0.0025 | $0.01 | 128,000 | 16,384 |
GPT-4o-mini | $0.00015 | $0.0006 | 128,000 | 16,384 |
Gemini 2.5 Pro | $0.0025 | $0.015 | 1,048,576 | 65,536 |
Gemini 2.5 Flash | $0.0003 | $0.0025 | 1,048,576 | 65,536 |
Gemini 2.5 Flash Lite | $0.0001 | $0.0004 | 1,048,576 | 65,536 |
Claude Opus 4.0 | $0.015 | $0.075 | 200,000 | 32,768 |
Claude Sonnet 4.0 | $0.003 | $0.015 | 200,000 | 65,536 |
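As a quick sanity check on these numbers, here is the arithmetic for a single GPT-4o-mini request with 2,000 input tokens and 500 output tokens, using the per-1K rates from the table above:

// Worked example: cost of one GPT-4o-mini request.
const inputTokens = 2000
const outputTokens = 500
const inputRatePer1K = 0.00015 // $ per 1K input tokens
const outputRatePer1K = 0.0006 // $ per 1K output tokens
const cost =
  (inputTokens / 1000) * inputRatePer1K +
  (outputTokens / 1000) * outputRatePer1K
console.log(cost.toFixed(4)) // "0.0006" → $0.0006 for the whole request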
File uploads
File uploads and text extraction are billed separately: per file uploaded, and per file from which text is extracted.
Service | Price | Description |
---|---|---|
File upload | $0.001 | Per file uploaded |
Text extraction | $0.01 | Per file text extraction |
Website scraping
Website scraping services are charged based on the type of content extraction performed.
Service | Price | Description |
---|---|---|
HTML extraction | $0.01 | Per HTML extraction from website |
Text extraction | $0.01 | Per text extraction from website |
React Integration
Fency.ai provides a dedicated React SDK that makes it easy to integrate LLM capabilities into your React applications. Our SDK includes hooks, components, and utilities designed specifically for React developers.
Installation and setup
This section contains instructions for installing and setting up Fency.ai with React.
Before you start
We assume you have a basic understanding of React and TypeScript, and have created a publishable key in the dashboard.
Installation
Install the Fency.ai SDK for JavaScript and React:
npm install --save @fencyai/js @fencyai/react
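If you prefer another package manager, yarn add @fencyai/js @fencyai/react or pnpm add @fencyai/js @fencyai/react works just as well.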
Then provide your publishable key to loadFency and wrap your app in FencyProvider to make the Fency.ai client available to the React hooks we'll use later.
import { loadFency } from '@fencyai/js'
import { FencyProvider } from '@fencyai/react'
import { createRoot } from 'react-dom/client'

const fency = loadFency({
  publishableKey:
    'pk_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef',
})

createRoot(document.getElementById('root')!).render(
  <FencyProvider fency={fency}>
    <>{/* <App /> or similar component */}</>
  </FencyProvider>
)
Basic Chat Completions
Use the useBasicChatCompletions hook and its createBasicChatCompletion method to send chat completion requests to the Fency.ai API. This is the most common way to interact with the supported LLMs.
Usage
import { useBasicChatCompletions } from '@fencyai/react'

export default function BasicChatCompletionExample() {
  const { createBasicChatCompletion, latest } = useBasicChatCompletions()
  const response = latest?.data?.response
  const error = latest?.error
  const loading = latest?.loading
  return (
    <div>
      <button
        onClick={async () => {
          await createBasicChatCompletion({
            openai: {
              model: 'gpt-4o-mini',
              messages: [
                {
                  role: 'user',
                  content: 'Hello, how are you?',
                },
              ],
            },
          })
        }}
        disabled={loading}
      >
        Send Message
      </button>
      {error && <div>{error.message}</div>}
      {response && <div>{response}</div>}
    </div>
  )
}
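As the example shows, latest tracks the state of the most recent request: loading is true while the call is in flight, and afterwards either data.response or error is populated. Disabling the button while loading prevents overlapping requests.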
Structured Chat Completions
Use the useStructuredChatCompletions hook and its createStructuredChatCompletion method to get responses in a specific format using Zod schemas. This is perfect for extracting structured data from AI responses.
Usage
import { useStructuredChatCompletions } from '@fencyai/react'
import { useState } from 'react'
import { z } from 'zod'

const responseFormat = z.object({
  name: z.string(),
  age: z.number(),
})

export default function StructuredChatCompletionExample() {
  const { createStructuredChatCompletion, latest } =
    useStructuredChatCompletions()
  const [result, setResult] = useState<
    z.infer<typeof responseFormat> | undefined
  >(undefined)
  const error = latest?.error
  const loading = latest?.loading
  const handleClick = async () => {
    const completion = await createStructuredChatCompletion({
      openai: {
        model: 'gpt-4o-mini',
        messages: [
          {
            role: 'user',
            content:
              'Please tell me the name of a famous person and their age.',
          },
        ],
      },
      responseFormat,
    })
    if (completion.type === 'success') {
      setResult(completion.data.structuredResponse)
    }
  }
  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        Get Structured Response
      </button>
      {error && <div>{error.message}</div>}
      {result && (
        <pre className="text-xs">{JSON.stringify(result, null, 2)}</pre>
      )}
      {loading && <div>Getting structured response...</div>}
    </div>
  )
}
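Because the response is validated against responseFormat, the structuredResponse you store is typed as z.infer<typeof responseFormat> ({ name: string; age: number } here), giving you compile-time safety on the extracted fields.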
Streaming Chat Completions
Use the useStreamingChatCompletions hook and its createStreamingChatCompletion method to stream chat completions in real time.
Usage
import { useStreamingChatCompletions } from '@fencyai/react'

export default function StreamingCompletionExample() {
  const { createStreamingChatCompletion, latest } =
    useStreamingChatCompletions()
  const response = latest?.response
  const error = latest?.error
  const loading = latest?.loading
  const handleClick = async () => {
    await createStreamingChatCompletion({
      openai: {
        model: 'gpt-4o-mini',
        messages: [
          {
            role: 'user',
            content: 'Hello, how are you?',
          },
        ],
      },
    })
  }
  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        Start Streaming
      </button>
      {error && <div>{error.message}</div>}
      {response && <div>{response}</div>}
      {loading && <div>Streaming...</div>}
    </div>
  )
}
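The shape mirrors the basic hook; the difference is that latest.response grows incrementally as tokens arrive, so the component re-renders with partial text until the stream completes.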
File Uploads
Use the useFiles hook and its createFile method to upload files to the Fency.ai API. The snippet below is deliberately minimal; we recommend checking out our examples or our blog post on building a complete upload flow with Uppy.
Usage
import { useFiles } from '@fencyai/react'
import type { S3PostRequest } from '@fencyai/js'

export default function UploadFileExample() {
  const { createFile } = useFiles({
    // Fires once text extraction has finished for an uploaded file.
    onTextContentReady: async (event) => {
      console.log(event.textContent)
    },
    // Fires once the file upload itself has completed.
    onUploadCompleted: async (event) => {
      console.log(event.fileId)
    },
  })

  const handleFileChange = async (
    event: React.ChangeEvent<HTMLInputElement>
  ) => {
    const file = event.target.files?.[0]
    if (file) {
      // Register the file with Fency.ai; the response includes the
      // presigned S3 POST fields needed to upload the file itself.
      const result = await createFile({
        fileName: file.name,
        fileType: file.type,
        fileSize: file.size,
        extractTextContent: true,
      })
      if (result.type === 'success') {
        const response = result.file.s3PostRequest
        try {
          await postFile(response, file)
          console.log('File uploaded successfully')
        } catch (error) {
          console.error('Upload failed:', error)
        }
      }
    }
  }

  // POST the file straight to S3 using the presigned form fields.
  const postFile = async (request: S3PostRequest, file: File) => {
    const formData = new FormData()
    formData.append('key', request.key)
    formData.append('policy', request.policy)
    formData.append('x-amz-algorithm', request.xAmzAlgorithm)
    formData.append('x-amz-credential', request.xAmzCredential)
    formData.append('x-amz-date', request.xAmzDate)
    formData.append('x-amz-signature', request.xAmzSignature)
    formData.append('x-amz-security-token', request.sessionToken)
    formData.append('file', file) // the file must be the last form field
    const response = await fetch(request.uploadUrl, {
      method: 'POST',
      body: formData,
    })
    if (!response.ok) {
      throw new Error(
        `Upload failed: ${response.status} ${response.statusText}`
      )
    }
    return response
  }

  return (
    <div>
      <input type="file" onChange={handleFileChange} />
    </div>
  )
}
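Uploading is a two-step flow: createFile registers the file with Fency.ai and returns presigned S3 form fields, and the browser then posts the file directly to S3. The onUploadCompleted and onTextContentReady callbacks fire asynchronously once each step finishes.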
Website Scraping
Use the useWebsites hook and its createWebsite method to scrape websites and extract their content.
Usage
import { useWebsites } from '@fencyai/react'

export default function ScrapeWebsiteExample() {
  const { createWebsite } = useWebsites({
    // Fires once the scraped HTML content is available.
    onHtmlContentReady: async (event) => {
      console.log(event.htmlContent)
    },
    // Fires once the extracted plain text is available.
    onTextContentReady: async (event) => {
      console.log(event.textContent)
    },
  })
  const handleClick = async () => {
    await createWebsite({
      url: 'https://example.com',
      extractHtmlContent: true,
      extractTextContent: true,
    })
  }
  return (
    <div>
      <button onClick={handleClick}>Scrape Website</button>
    </div>
  )
}
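Each extraction you enable is billed separately (see Website scraping under Pricing and models), so only set extractHtmlContent or extractTextContent when you actually need that content.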