
Security & compliance

How we protect your organization's data, what access controls we have, and where we are on the certification path.

Multi-tenant isolation

Each organization lives in a separate compartment. Your organization's data is never mixed with another organization's, either at the application level or at the database level.
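At the storage layer, a common way to enforce this kind of isolation is to scope every query by an organization identifier. The sketch below illustrates the idea; the table and column names are illustrative, not the platform's actual schema:

```python
import sqlite3

def get_runs(conn: sqlite3.Connection, org_id: str) -> list:
    # Every read is scoped by organization_id; there is no code path
    # that returns rows belonging to a different organization.
    cur = conn.execute(
        "SELECT id, score FROM runs WHERE organization_id = ?",
        (org_id,),
    )
    return cur.fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (id TEXT, organization_id TEXT, score REAL)")
conn.executemany(
    "INSERT INTO runs VALUES (?, ?, ?)",
    [("r1", "org-a", 0.9), ("r2", "org-b", 0.4)],
)
print(get_runs(conn, "org-a"))  # only org-a rows come back
```

In practice this scoping is typically enforced centrally (for example through row-level security or a query layer) rather than repeated in every query by hand.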

Authentication

Roles and permissions

Within an organization, users have roles that define what they can do. The main roles are:
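As an illustration of how role-based permissions work, a role-to-permission mapping can be checked like this. The role and permission names below are hypothetical, not the platform's actual roles:

```python
# Hypothetical roles and permissions, for illustration only.
PERMISSIONS = {
    "admin":  {"manage_members", "run_evals", "view_results"},
    "editor": {"run_evals", "view_results"},
    "viewer": {"view_results"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

print(can("editor", "run_evals"))  # True
print(can("viewer", "run_evals"))  # False
```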

Data in transit and at rest

Automatic PII detection

As part of evaluation, the platform automatically detects:

Detection is used to alert you when an AI agent exposes PII where it shouldn't.
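To illustrate the idea, the simplest form of PII detection is pattern matching over the agent's response. The pattern set below is a deliberately minimal, hypothetical example, not the platform's actual detector:

```python
import re

# Hypothetical pattern set; a real detector covers many more categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def detect_pii(text: str) -> dict:
    """Return the PII categories found in a response, with the matches."""
    return {
        kind: pattern.findall(text)
        for kind, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    }

response = "Contact Jane at jane.doe@example.com or +1 555 123 4567."
print(detect_pii(response))
```

Production detectors usually combine patterns like these with model-based entity recognition to catch names, addresses, and other context-dependent PII.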

Immutable snapshots and traceability

Every Run stores a complete snapshot: input, response, timings, logs, and scores. Snapshots are never modified after creation, which provides:
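A standard way to make such snapshots tamper-evident is to attach a content hash at creation time. The sketch below illustrates the property; it is not the platform's actual storage format:

```python
import hashlib
import json

def snapshot(run: dict) -> dict:
    """Freeze a Run record and attach a content hash.

    The hash lets anyone verify later that the stored snapshot
    was not edited after creation.
    """
    payload = json.dumps(run, sort_keys=True).encode()
    return {"data": run, "sha256": hashlib.sha256(payload).hexdigest()}

def verify(snap: dict) -> bool:
    """Recompute the hash and compare it to the one stored at creation."""
    payload = json.dumps(snap["data"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == snap["sha256"]

snap = snapshot({"input": "hi", "response": "hello", "latency_ms": 120})
print(verify(snap))  # True
snap["data"]["latency_ms"] = 1  # any edit breaks verification
print(verify(snap))  # False
```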

Evaluator calibration

The 17 LLM evaluators that score your AI agent's responses come pre-calibrated by our team. Calibration is the process by which we ensure an evaluator returns reliable, consistent scores across different domains and models.
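One simple form of calibration is to fit a rescaling from raw evaluator scores to human reference scores on an anchor set of responses scored by both. The sketch below uses a hypothetical linear fit to illustrate the idea; it is not our internal procedure:

```python
from statistics import mean

def fit_calibration(raw: list[float], human: list[float]):
    """Least-squares linear map from raw scores to human scores,
    fitted on an anchor set, clamped to the [0, 1] range."""
    rbar, hbar = mean(raw), mean(human)
    slope = (
        sum((r - rbar) * (h - hbar) for r, h in zip(raw, human))
        / sum((r - rbar) ** 2 for r in raw)
    )
    intercept = hbar - slope * rbar
    return lambda x: min(1.0, max(0.0, slope * x + intercept))

# Hypothetical anchor set: responses scored by the raw model and by humans.
raw_scores = [0.2, 0.5, 0.7, 0.9]
human_scores = [0.1, 0.4, 0.6, 0.8]
calibrate = fit_calibration(raw_scores, human_scores)
print(round(calibrate(0.6), 2))  # 0.5
```

Real calibration pipelines also check agreement across models, prompts, and temperatures on the anchor set, so that the calibrated score is stable rather than an artifact of one configuration.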

How we do it internally

What this guarantees

💡 Why does it matter? An uncalibrated LLM evaluator can give arbitrary scores — different models, prompts, and temperatures return very different scores for the same response. Calibration is what turns a "raw LLM score" into a reliable metric you can act on.

Infrastructure & stack

High-level summary of the infrastructure ArtificialQA runs on, aimed at IT and compliance teams on the customer side:

If you need a more detailed technical sheet (versions, regions, cloud-provider certifications, specific DPA), we coordinate delivery under NDA on the Enterprise plan.

External audit by Nextfense

ArtificialQA underwent an external security review by Nextfense. The recommendations that came out of that process were evaluated and implemented.

If your security team needs detail on the scope and findings, get in touch.

Compliance: where we are

To be transparent:

Best practices for your team

Reporting a vulnerability

If you find a possible vulnerability, write to us through artificialqa.com. We take reports seriously and respond within a few days.

Next step

If operational or plan questions remain, check the FAQ and the Plans & pricing page.