AiVRIC User Guide
Security policy

AI & Autonomous Technologies

Sets guardrails for developing, deploying, and monitoring AI or autonomous capabilities within AiVRIC.

Applies to AiVRIC workforce, partners, and subprocessors.

Purpose & scope

This policy guides how AiVRIC designs, operates, and validates AI & Autonomous Technologies across production, corporate, and partner environments.

It applies to employees, contractors, vendors, and any system interacting with AiVRIC data or services.

Key controls

  • Require documented business justification and risk assessment for each AI/autonomous use case.
  • Enable safety guardrails (prompt controls, rate limits, output filters) before production use.
  • Log model inputs/outputs for auditability; protect sensitive data via masking or redaction.
  • Establish human-in-the-loop approvals for high-impact or safety-critical actions.

Operating procedures

  • Complete AI risk intake and approval prior to connecting models or agents to production data.
  • Test guardrails in staging; measure false-positive and false-negative rates before enforcement.
  • Review AI performance and safety incidents quarterly and update controls accordingly.
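The staging measurement step above amounts to comparing guardrail verdicts against labeled traffic. A minimal sketch, under the assumption that `True` means "blocked" for predictions and "unsafe" for labels (the function name and data shapes are illustrative, not part of any AiVRIC tooling):

```python
def guardrail_metrics(predictions: list[bool], labels: list[bool]) -> dict:
    """Compute false-positive and false-negative rates for a guardrail.

    predictions: True if the guardrail blocked the request.
    labels:      True if the request was actually unsafe (ground truth).
    """
    # False positive: safe traffic that the guardrail blocked.
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    # False negative: unsafe traffic that the guardrail allowed through.
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)

    negatives = sum(1 for l in labels if not l)  # truly safe requests
    positives = sum(1 for l in labels if l)      # truly unsafe requests

    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }
```

Running this over a labeled staging corpus before each policy change gives the baseline numbers the quarterly review can track over time.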

Evidence & ownership

Owner: Security & Compliance. Review cadence: annually or after material changes.

Evidence: Collected via AiVRIC audit logs, ticketing systems, monitoring dashboards, and vendor records as appropriate to this policy area.

Contact: [email protected]