Sanity Library Reference Docs

    Class ContentAgentModel

    Vercel AI SDK LanguageModelV3 implementation that communicates with the Sanity Content Agent API via a conversation thread.

    Typically you don't construct this directly — use createContentAgent().agent(threadId) instead.

    import { createContentAgent } from 'content-agent'
    import { streamText } from 'ai'

    const contentAgent = createContentAgent({
      organizationId: 'your-org-id',
      token: 'your-sanity-token',
    })

    const model = contentAgent.agent('my-thread-id', {
      application: { key: 'projectId.datasetName' },
      config: {
        capabilities: { read: true, write: false },
      },
    })

    const { textStream } = await streamText({ model, prompt: 'Summarize my content' })
    for await (const chunk of textStream) {
      process.stdout.write(chunk)
    }

    Implements

    • LanguageModelV3

    Properties

    modelId: string

    Provider-specific model ID.

    provider: string

    Provider ID.

    specificationVersion: "v3"

    The language model must specify which language model interface version it implements.

    supportedUrls: Record<string, RegExp[]> = {}

    Supported URL patterns by media type for the provider.

    The keys are media type patterns or full media types (e.g. */* for everything, audio/*, video/*, or application/pdf), and the values are arrays of regular expressions that match the URL paths.

    The matching should be against lower-case URLs.

    Matched URLs are supported natively by the model and are not downloaded.

    A map of supported URL patterns by media type (as a promise or a plain object).
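The matching behavior described above can be sketched as follows. This is an illustrative stand-in, not the library's internals: the map contents, the hosts, and the isNativelySupported helper are all hypothetical, but the lookup shows how a full media type can fall back to its wildcard pattern and why matching runs against the lower-cased URL.

```typescript
// Hypothetical supportedUrls map: PDF URLs from one CDN and all images
// from another host are declared as natively supported (not downloaded).
const supportedUrls: Record<string, RegExp[]> = {
  'application/pdf': [/^https:\/\/cdn\.example\.com\/.+\.pdf$/],
  'image/*': [/^https:\/\/images\.example\.com\//],
}

// Hypothetical helper: look up the exact media type first, then its
// wildcard form (e.g. image/png -> image/*), and test lower-cased URLs.
function isNativelySupported(mediaType: string, url: string): boolean {
  const patterns =
    supportedUrls[mediaType] ??
    supportedUrls[mediaType.replace(/\/.+$/, '/*')] ??
    []
  return patterns.some((re) => re.test(url.toLowerCase()))
}
```

A URL that matches no pattern would be downloaded and passed to the model as data instead.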

    Methods

    • doGenerate: Generates a language model output (non-streaming).

      Naming: "do" prefix to prevent accidental direct usage of the method by the user.

      Parameters

      • options: LanguageModelV3CallOptions

      Returns Promise<LanguageModelV3GenerateResult>

    • doStream: Generates a language model output (streaming).

      Naming: "do" prefix to prevent accidental direct usage of the method by the user.

      Parameters

      • options: LanguageModelV3CallOptions

      Returns Promise<LanguageModelV3StreamResult>

      A stream of higher-level language model output parts.
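The "do" prefix noted above exists so that callers go through the SDK's high-level helpers (such as streamText in the example at the top) rather than invoking the model directly. The wrapper pattern can be sketched generically; MiniModel, generate, and the stub below are hypothetical names for illustration only, not the SDK's actual types.

```typescript
// Hypothetical minimal sketch of the wrapper pattern: a user-facing helper
// delegates to the provider's "do"-prefixed method, so application code
// never calls doGenerate on the model directly.
interface MiniModel {
  doGenerate(options: { prompt: string }): Promise<{ text: string }>
}

// User-facing helper, analogous in spirit to generateText from the 'ai' package.
async function generate(model: MiniModel, prompt: string): Promise<string> {
  const result = await model.doGenerate({ prompt })
  return result.text
}

// A stub model standing in for a real implementation like ContentAgentModel.
const stub: MiniModel = {
  async doGenerate({ prompt }) {
    return { text: `echo: ${prompt}` }
  },
}

generate(stub, 'hello').then((text) => console.log(text)) // prints "echo: hello"
```

In real code you would pass the model from contentAgent.agent(...) to the SDK's own helpers instead of calling doGenerate or doStream yourself.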