Generate Text

Generates text using the loaded language model.
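generateText assumes a model has already been loaded. A minimal end-to-end sketch, assuming the @browserai/browserai package and its loadModel setup step (the model id below is illustrative):

import { BrowserAI } from '@browserai/browserai';

const browserAI = new BrowserAI();

// Assumption: a model must be loaded before calling generateText;
// 'llama-3.2-1b-instruct' is an illustrative model id.
await browserAI.loadModel('llama-3.2-1b-instruct');

console.log(await browserAI.generateText('Hello!'));

With a model loaded, generateText accepts a prompt and optional GenerationOptions: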

const response = await browserAI.generateText('Write a story about space', {
  temperature: 0.7,
  max_tokens: 100,
  system_prompt: "You are a sci-fi writer."
});

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | Input text prompt |
| options | GenerationOptions | No | Generation parameters |

GenerationOptions

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| max_tokens | number | 300 | Maximum response length |
| temperature | number | 0.6 | Controls randomness (0.0-1.0) |
| top_p | number | 0.95 | Nucleus sampling threshold |
| frequency_penalty | number | 0.5 | Penalizes frequently used tokens |
| presence_penalty | number | 0.5 | Penalizes tokens already present |
| system_prompt | string | - | System context message |
| stop_tokens | string[] | [] | Tokens that stop generation |
| useWorker | boolean | false | Run generation in a Web Worker |
| json_schema | object \| string | - | JSON schema for structured output |
| response_format | object | - | Format specification for the response |
| stream | boolean | false | Enable streaming response |
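These options can be combined. A sketch using several of the parameters above (values are illustrative, and the exact effect of useWorker depends on the BrowserAI build):

const response = await browserAI.generateText('Summarize quantum computing in two sentences', {
  max_tokens: 120,
  temperature: 0.3,                        // lower temperature for more deterministic output
  system_prompt: 'You are a concise technical writer.',
  stop_tokens: ['\n\n'],                   // stop at the first blank line
  useWorker: true                          // run generation off the main thread
});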

Example with Structured Output

const response = await browserAI.generateText('List 3 colors', {
  json_schema: {
    type: "object",
    properties: {
      colors: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            hex: { type: "string" }
          }
        }
      }
    }
  },
  response_format: {
    type: "json_object"
  }
});
 
// Returns:
// {
//   "colors": [
//     { "name": "red", "hex": "#FF0000" },
//     { "name": "blue", "hex": "#0000FF" },
//     { "name": "green", "hex": "#00FF00" }
//   ]
// }
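Because generateText resolves to a string (see Returns below), a structured result may arrive as JSON text rather than a parsed object. A defensive parse, sketched under that assumption:

// Hypothetical type matching the schema above.
interface ColorList {
  colors: Array<{ name: string; hex: string }>;
}

// Assumption: depending on the build, the structured response may be a
// JSON string or an already-parsed object; normalize to a typed value.
const data: ColorList =
  typeof response === 'string' ? JSON.parse(response) : response;

console.log(data.colors.map((c) => c.name)); // ["red", "blue", "green"]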

Streaming Example

const chunks = await browserAI.generateText('Write a story', {
  max_tokens: 4096,
  temperature: 0.6,
  stream: true
});
 
let response = '';
for await (const chunk of chunks as AsyncIterable<{
  choices: Array<{ delta: { content?: string } }>,
  usage: any
}>) {
  // Get the new content from the chunk
  const newContent = chunk.choices[0]?.delta.content || '';
  response += newContent;
  
  // Access usage statistics if needed
  const usage = chunk.usage;
  
  // Update UI or process the partial response
  console.log('Partial response:', response);
}

Stream Response Format

Each chunk in the stream contains:

interface StreamChunk {
  choices: Array<{
    delta: {
      content?: string  // New content piece
    }
  }>,
  usage: {
    // Usage statistics for the generation
    completion_tokens: number,
    prompt_tokens: number,
    total_tokens: number
  }
}
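A small helper can drain a stream into the final text using the StreamChunk shape above (a sketch; error handling omitted):

// Consumes a streamed generation, accumulating the text and keeping
// the last reported token count from the usage field.
async function collectStream(
  chunks: AsyncIterable<StreamChunk>
): Promise<{ text: string; totalTokens?: number }> {
  let text = '';
  let totalTokens: number | undefined;
  for await (const chunk of chunks) {
    text += chunk.choices[0]?.delta.content ?? '';
    totalTokens = chunk.usage?.total_tokens ?? totalTokens;
  }
  return { text, totalTokens };
}

// Usage with the streaming call above:
// const { text } = await collectStream(chunks as AsyncIterable<StreamChunk>);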

Returns

Promise<string> resolving to the generated text. When stream: true is set, the call instead resolves to an async iterable of StreamChunk objects (see the streaming example above).