Advanced Text Generation Options

Model Configuration Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `quantization` | string | `q4f16_1` | Model quantization level (`'q4f16_1'`, `'q4f32_1'`, `'q0f32'`, `'q0f16'`) |
| `onProgress` | function | - | Callback for loading progress updates |
| `onComplete` | function | - | Callback when loading completes |
| `onError` | function | - | Callback for error handling |

Generation Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `temperature` | number | 0.7 | Controls randomness (0.0 - 1.0) |
| `max_tokens` | number | 100 | Maximum length of the response |
| `top_p` | number | 0.9 | Nucleus sampling parameter |
| `top_k` | number | 40 | Top-k sampling parameter |
| `system_prompt` | string | - | System prompt for context |
| `stream` | boolean | false | Enable streaming responses |
| `json_schema` | object/string | - | JSON schema for structured output |
| `response_format` | object | - | Format specification for the response |
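The defaults above combine with whatever you pass per call: any parameter you omit falls back to its default. The sketch below is illustrative only; the parameter names come from the table, but the defaults object and merge helper are not part of the BrowserAI API.

```javascript
// Illustrative only: parameter names are from the table above, but this
// defaults object and merge helper are NOT part of the BrowserAI API --
// they just show how per-call options interact with defaults.
const DEFAULT_GENERATION_PARAMS = {
  temperature: 0.7,
  max_tokens: 100,
  top_p: 0.9,
  top_k: 40,
  stream: false,
};

// Options passed per call override the defaults field by field.
function resolveParams(userParams = {}) {
  return { ...DEFAULT_GENERATION_PARAMS, ...userParams };
}

const params = resolveParams({ temperature: 0.9, max_tokens: 200 });
console.log(params.temperature); // 0.9 (overridden)
console.log(params.top_p);       // 0.9 (default)
```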

Example

```javascript
import { BrowserAI } from '@browserai/browserai';

const browserAI = new BrowserAI();

// Load model with advanced options
await browserAI.loadModel('llama-3.2-1b-instruct', {
  quantization: 'q4f16_1',
  onProgress: (progress) => {
    console.log('Loading progress:', progress.progress + '%');
    // "Loading progress: 45%"
  },
  onComplete: () => {
    console.log('Status:', 'Ready to generate!'); // "Status: Ready to generate!"
  }
});

// Generate text with custom parameters
const response = await browserAI.generateText('Write a story about AI', {
  temperature: 0.8,
  max_tokens: 200,
  system_prompt: "You are a creative storyteller specialized in science fiction.",
});

console.log('Generated Story:', response.choices[0].message.content); // "In the year 2045..."
```

Structured Output Generation

You can generate structured JSON responses by providing a JSON schema and response format. This is useful when you need the output in a specific format for programmatic use.

```javascript
const response = await browserAI.generateText('List 3 colors', {
  json_schema: {
    type: "object",
    properties: {
      colors: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            hex: { type: "string" }
          }
        }
      }
    }
  },
  response_format: {
    type: "json_object"
  }
});
```
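With `response_format` set to `json_object`, the reply arrives as a JSON string in the usual message content field, so one `JSON.parse` turns it into a plain object. The raw string below is a hypothetical stand-in for the model's actual output, which will vary:

```javascript
// Hypothetical raw content -- in practice this would come from
// response.choices[0].message.content after the call above.
const rawContent =
  '{"colors":[{"name":"red","hex":"#FF0000"},' +
  '{"name":"green","hex":"#00FF00"},{"name":"blue","hex":"#0000FF"}]}';

// Parse the JSON string into an object matching the schema.
const data = JSON.parse(rawContent);
console.log(data.colors.length);  // 3
console.log(data.colors[0].name); // "red"
```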

Web Worker Support

For better performance and to avoid blocking the main thread during text generation, you can enable web worker support:

```javascript
const response = await browserAI.generateText('Write a story about AI', {
  useWorker: true,
  // other parameters...
});
```

Streaming Responses

You can receive text generation responses in chunks using the streaming option:

```javascript
const response = await browserAI.generateText('Write a story', {
  stream: true,
  // other parameters...
});
```
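Streaming APIs of this kind typically hand back an async iterable of partial chunks that you consume with `for await`. The mock generator below stands in for the real streamed response so the consumption pattern is clear; the chunk shape (`{ delta: string }`) is an assumption for illustration, not the documented BrowserAI type:

```javascript
// Mock stream standing in for the object returned when stream: true.
// The chunk shape ({ delta: string }) is illustrative, not the real API.
async function* mockStream() {
  for (const delta of ['Once ', 'upon ', 'a ', 'time...']) {
    yield { delta };
  }
}

// Accumulate partial chunks into the full generated text.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.delta; // append each partial piece as it arrives
  }
  return text;
}

collectStream(mockStream()).then((text) => {
  console.log(text); // "Once upon a time..."
});
```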

For detailed examples of structured output, streaming responses, and web worker implementation, see the Generate Text API Reference.