For provider response parsing tests, store fixtures that contain the actual responses from the providers (unless they are too large, in which case semantics-preserving trimming is acceptable).

Fixtures live in a `__fixtures__` subfolder, e.g. `packages/openai/src/responses/__fixtures__`. See the file names in `packages/openai/src/responses/__fixtures__` for the naming conventions, and `packages/openai/src/responses/openai-responses-language-model.test.ts` for how to set up test helpers.

You can use the examples under `/examples/ai-functions` to generate test fixtures.
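When a captured response is too large to store verbatim, the trimming should preserve the JSON structure that the parser depends on. A minimal sketch of such a helper (the name `trimFixture` and the length cap are assumptions for illustration, not part of the repo):

```ts
// Hypothetical helper: shorten long string values in a captured provider
// response while preserving the JSON shape the parser depends on.
function trimFixture(value: unknown, maxLength = 80): unknown {
  if (typeof value === 'string') {
    return value.length > maxLength ? value.slice(0, maxLength) + '…' : value;
  }
  if (Array.isArray(value)) {
    return value.map(item => trimFixture(item, maxLength));
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([key, inner]) => [
        key,
        trimFixture(inner, maxLength),
      ]),
    );
  }
  // numbers, booleans, and null pass through unchanged
  return value;
}
```

For example, `trimFixture({ text: 'x'.repeat(500) })` keeps the `text` key but caps its value at 80 characters plus an ellipsis, so the fixture stays small without changing its shape.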
## generateText (doGenerate testing)
For `generateText`, log the raw response body to the console and copy it into a new test fixture:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  // Log the raw provider response body so it can be copied into a fixture:
  console.log(JSON.stringify(result.response.body, null, 2));
});
```
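The logged body is then saved as a JSON file in the `__fixtures__` folder and read back in the test. A self-contained sketch of that round-trip, using a temp directory and an invented file name (the real path and fixture contents come from the script above):

```ts
import { mkdtempSync, readFileSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Stand-in for the logged `result.response.body` (illustrative only):
const capturedBody = { id: 'resp_123', output: [] };

// Save the captured body as a pretty-printed JSON fixture:
const fixturesDir = mkdtempSync(join(tmpdir(), 'fixtures-'));
const fixturePath = join(fixturesDir, 'openai-generate-text.json');
writeFileSync(fixturePath, JSON.stringify(capturedBody, null, 2));

// In the test, load the fixture before serving it to the parser under test:
const fixture = JSON.parse(readFileSync(fixturePath, 'utf8'));
console.log(fixture.id); // → 'resp_123'
```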
## streamText (doStream testing)
For `streamText`, set `includeRawChunks` to `true` and use the `saveRawChunks` helper. Run the script from the `/examples/ai-functions` folder via `pnpm tsx src/stream-text/script-name.ts`. The result is written to `/examples/ai-functions/output`; copy it to your fixtures folder and rename it following the existing naming conventions.
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../lib/run';
import { saveRawChunks } from '../lib/save-raw-chunks';

run(async () => {
  const result = streamText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
    // Emit raw provider chunks alongside the parsed stream parts:
    includeRawChunks: true,
  });

  // Write the raw chunks to the output folder for use as a fixture:
  await saveRawChunks({ result, filename: 'openai-gpt-5-nano' });
});
```
Mirrored from https://github.com/vercel/ai — original author: vercel, license: Apache-2.0. This is an unclaimed mirror. Content and ownership transfer to the author when they claim this account.