How it works

Designing the tests

There are four ways to create test cases: generating them with AI by industry, pulling them from the public catalog, importing Excel/JSON, or writing them manually. This page covers all four, plus how to group cases into suites.

The 4 ways to create test cases

🤖
With AI by industry
Automatic generation from your AI agent's context. The fastest option.
📚
From the public catalog
Over 25,000 curated test cases by industry, ready to import into your project.
📥
Importing Excel/JSON
If you already have a case set in a spreadsheet or the business team hands you the flows.
✍️
Manually
For critical cases where you need full control over input and expected response.

AI generation

Go to Test Design → AI Generation. The AI generates test cases tailored to your AI agent's domain.

[Screenshot] AI Test Generator — the full configuration panel with industry, type, quantity, context, and risk categories.

Generation parameters

Advanced parameters

On top of the basic configuration, you can tune the generation with:

If you don't select any Risk Category, the AI generates standard functional tests. Selecting categories is the way to build suites focused on security and robustness.

Review view

Generated cases do not go directly into your test case base. They land in a review view where, for each case, you can:

This keeps a human in the loop, prevents irrelevant cases from sneaking in, and gives you full control over what makes it into your testing.

💡 Best practice. Generate small batches (5–10 cases) and iterate the context. It's faster to refine the prompt than to regret approving 100 mediocre cases.

From the public catalog

ArtificialQA maintains a public catalog with over 25,000 test cases curated by industry, ready to import into your project. It's the fastest way to start with validated cases without having to generate or write anything from scratch.

Useful for kicking off a new project, complementing your own cases with security/bias/hallucination batteries, or staying up to date with cases we add to the catalog.

Excel / JSON import & export

Go to Test Design → Import.

The system accepts Excel (.xlsx) or JSON files with a defined schema. It offers a downloadable template so you can write the cases in the right format.

Main fields that get mapped:

You can also export your existing test cases as JSON from the Test Cases list (the exported file uses the same schema as the import endpoint, so you can export, edit externally, and re-import). Useful for version control, sharing case batteries across projects, or building automations on top of the API.
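Because export and import share one schema, the round trip is just "write JSON, edit it, read it back". As an illustration only, here is a minimal sketch of that loop — the field names (`name`, `input`, `expected_response`) are assumptions for the example, not the platform's real contract; always start from the downloadable template:

```python
import json

# Hypothetical test-case records mirroring what an export might contain.
# Field names are illustrative assumptions, not the real schema.
cases = [
    {"name": "Greeting", "input": "Hola",
     "expected_response": "A polite greeting"},
    {"name": "Quote request", "input": "How much is plan X?",
     "expected_response": "Mentions price and plan name"},
]

# Export: write the cases as JSON (same shape the import accepts).
with open("cases.json", "w", encoding="utf-8") as f:
    json.dump(cases, f, ensure_ascii=False, indent=2)

# Edit externally (here, in code): tweak an expected response.
with open("cases.json", encoding="utf-8") as f:
    edited = json.load(f)
edited[0]["expected_response"] = "A polite greeting that offers help"

# Re-import: the edited list is valid input again because the schema matches.
print(len(edited), edited[0]["expected_response"])
```

The same loop is what a version-control or automation workflow builds on: the exported file is plain JSON, so it diffs and scripts cleanly.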

Manual creation

Go to Test Design → Test Cases → New Case.

Editing existing test cases

A test case can be modified after creation — you adjust the input, expected response, asserts, or any other field, and save. The platform makes sure the edit doesn't break historical traceability:

The result: edits are safe from an auditing standpoint. You can iterate freely on your cases without worrying about losing evidence of what was executed when.

📌 Best practice. Use the note to record the context of the change: detected bug, expected-response improvement, tone adjustment, etc. When you come back to the history six months later, that note is what gives you context.

Conversational (multi-turn) cases

When a test case is Conversational, you define a sequence of turns. Each turn is a pair (user says X → bot is expected to respond with Y or meet characteristics Z).

Use conversational cases to validate flows like requesting a quote, making a booking, or escalating to a human. You test the entire dialogue, not just the first response.
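Conceptually, a conversational case is an ordered list of (user message, expectation) pairs that must all hold in sequence. A minimal sketch of that shape — the dictionary layout and the example dialogue are made up for illustration, not the platform's internal format:

```python
# A multi-turn case as an ordered list of turns. Each turn pairs what the
# user says with what the bot's reply is expected to satisfy.
booking_case = {
    "name": "Make a booking",
    "mode": "conversational",
    "turns": [
        {"user": "I'd like to book a table for Friday",
         "expect": "Asks for party size or time"},
        {"user": "Four people at 8 pm",
         "expect": "Confirms availability and asks for a name"},
        {"user": "Under the name García",
         "expect": "Confirms the booking with day, time, and name"},
    ],
}

# The dialogue is validated in order: if turn 2 fails, the later turns no
# longer describe a valid conversation state.
for i, turn in enumerate(booking_case["turns"], start=1):
    print(f"Turn {i}: user says {turn['user']!r}, bot must {turn['expect']!r}")
```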

Deterministic asserts

Programmatic checks that don't depend on AI. Available types:

Each assert can be hard (if it fails, the test case fails) or soft (kept as an observation, doesn't block the result).
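The hard/soft distinction boils down to: a failed hard check flips the case result, a failed soft check is only recorded. A small sketch of that evaluation logic, assuming two illustrative check types (`contains` and `regex`) — the type names and result shape are assumptions for the example, not the real assert engine:

```python
import re

def run_asserts(response, asserts):
    """Evaluate deterministic checks against a bot response.

    Hard assert failures fail the case; soft failures are kept
    as observations only. Check types here are illustrative.
    """
    passed, observations = True, []
    for a in asserts:
        if a["type"] == "contains":
            ok = a["value"] in response
        elif a["type"] == "regex":
            ok = re.search(a["value"], response) is not None
        else:
            raise ValueError(f"unknown assert type: {a['type']}")
        if not ok:
            if a.get("severity", "hard") == "hard":
                passed = False  # hard failure fails the whole case
            else:
                observations.append(f"soft assert failed: {a['value']!r}")
    return passed, observations

response = "Your booking is confirmed for Friday at 8 pm."
result, notes = run_asserts(response, [
    {"type": "contains", "value": "confirmed", "severity": "hard"},
    {"type": "regex", "value": r"\b8 ?pm\b", "severity": "hard"},
    {"type": "contains", "value": "reference number", "severity": "soft"},
])
print(result, notes)  # both hard checks pass; the soft one becomes a note
```

Note how the soft failure leaves `result` true: the case still passes, but the observation survives for whoever reviews the run.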

Test Suites: grouping cases

Go to Test Design → Test Suites → New Suite. Give it a name and add the test cases that compose it. The same case can live in multiple suites.

Typical grouping strategies:

📁 Organization tip. Keep a small "Smoke" suite (10–20 critical cases) that you always run, and larger domain suites you run less frequently.
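Since a case can belong to several suites, suites behave like named selections over one shared pool of cases. A small sketch of that idea, following the Smoke-plus-domain-suites pattern from the tip above (suite names and case IDs are made up):

```python
# One shared pool of test cases; suites are named selections over it.
cases = {
    "TC-001": "Greeting flow",
    "TC-002": "Quote request",
    "TC-003": "Booking happy path",
    "TC-004": "Prompt-injection attempt",
    "TC-005": "Escalate to human",
}

suites = {
    # Small smoke suite you always run.
    "Smoke": ["TC-001", "TC-003", "TC-005"],
    # Larger domain suite run less frequently.
    "Bookings": ["TC-002", "TC-003", "TC-005"],
    # Security-focused suite.
    "Security": ["TC-004"],
}

# The same case can live in multiple suites: here TC-003 and TC-005
# belong to both Smoke and Bookings.
shared = set(suites["Smoke"]) & set(suites["Bookings"])
print(sorted(shared))
```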

Next step

Once you have your suites built, the next step is running them. We cover that in the Executing section.