Customer Support Chatbot
Risk classification: Limited Risk
LLM-powered virtual assistant handling Tier-1 customer inquiries. Operates 24/7 across web and mobile channels with human escalation.
Assessment Questionnaire
Transparency
Yes — prominent disclosure at conversation start and in the UI header. Compliant with the EU AI Act Art. 52 transparency obligation for AI systems interacting with natural persons.
Escalation to Human Agents (EU AI Act)
Yes — confidence thresholds trigger an automatic escalation message. Below 0.6 confidence, users are directed to a human agent.
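The confidence-gated escalation above can be sketched as follows. The 0.6 cutoff is this assessment's stated threshold; the function names and message wording are illustrative assumptions, not the production implementation:

```python
from dataclasses import dataclass

# 0.6 is the assessment's stated cutoff; names and message wording
# are illustrative, not the production implementation.
ESCALATION_THRESHOLD = 0.6

@dataclass
class BotReply:
    text: str
    escalated: bool

def respond(answer_text: str, confidence: float) -> BotReply:
    """Return the model's answer, or an escalation message when
    confidence falls below the threshold."""
    if confidence < ESCALATION_THRESHOLD:
        return BotReply(
            text="I'm connecting you with a human agent who can help with this.",
            escalated=True,
        )
    return BotReply(text=answer_text, escalated=False)
```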
Data Governance (ISO 42001)
Yes — PII is redacted from training logs. Conversation data retained for 90 days for quality assurance, then anonymized.
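A minimal sketch of redacting PII before logs are retained, assuming regex-based detection of emails and phone numbers; real deployments typically use a dedicated PII-detection service with far broader coverage:

```python
import re

# Illustrative patterns only; production redaction would cover more PII
# types (names, addresses, payment data) via a dedicated detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before logging."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```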
Content Safety (ISO 42001)
Yes — content filtering pipeline with toxicity detection, hallucination checks, and brand-safety rules. Outputs are post-processed before delivery.
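The filter chain could look like the sketch below; the individual checks are toy stand-ins (keyword lists rather than real toxicity or hallucination classifiers), and the fallback wording is an assumption:

```python
from typing import Callable

# Each filter returns (passed, text); real checks would call toxicity,
# hallucination, and brand-safety models rather than keyword lists.
Filter = Callable[[str], tuple[bool, str]]

def toxicity_filter(text: str) -> tuple[bool, str]:
    blocked = {"idiot", "stupid"}  # toy stand-in for a toxicity classifier
    return (not any(w in text.lower() for w in blocked), text)

def brand_safety_filter(text: str) -> tuple[bool, str]:
    # toy rule: never promise refunds the published policy does not cover
    return ("guaranteed refund" not in text.lower(), text)

FALLBACK = "Let me connect you with a human agent for this one."

def post_process(text: str, filters: list[Filter]) -> str:
    """Run filters in order; any failure replaces the reply with a safe fallback."""
    for check in filters:
        ok, text = check(text)
        if not ok:
            return FALLBACK
    return text
```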
Human Oversight (EU AI Act)
Yes — flagged conversations are routed to human review queue. Corrections feed back into fine-tuning pipeline.
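One way to sketch the review-and-feedback loop, assuming flagged conversations carry an id and transcript; the in-memory queue and the fine-tuning record format are illustrative (production would use a ticketing system and a versioned dataset store):

```python
from collections import deque

# Illustrative in-memory structures; production would use a ticketing
# system and a versioned dataset store for fine-tuning examples.
review_queue: deque = deque()
finetune_examples: list = []

def flag_conversation(conv_id: str, transcript: str, reason: str) -> None:
    """Route a flagged conversation to the human review queue."""
    review_queue.append({"id": conv_id, "transcript": transcript, "reason": reason})

def submit_correction(conv_id: str, corrected_reply: str) -> None:
    """Record a reviewer's corrected reply as a future fine-tuning example."""
    finetune_examples.append({"id": conv_id, "target": corrected_reply})
```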
Accuracy (EU AI Act)
Resolution rate: 82%, CSAT: 4.2/5, hallucination rate: 1.3% — monitored daily with weekly review meetings.
Record Keeping (ISO 42001)
Yes — all conversations logged with role-based access. Retention: 90 days full, 2 years anonymized aggregates.
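The retention schedule above can be expressed as a simple age-to-stage mapping; the function name is illustrative, and 2 years is approximated as 730 days:

```python
from datetime import datetime, timedelta, timezone

# Stages from the assessment: full transcript for 90 days, anonymized
# aggregates for 2 years (approximated here as 730 days), then deletion.
FULL_RETENTION = timedelta(days=90)
AGGREGATE_RETENTION = timedelta(days=730)

def retention_action(logged_at: datetime, now: datetime) -> str:
    """Map a log record's age to its retention stage."""
    age = now - logged_at
    if age <= FULL_RETENTION:
        return "retain_full"
    if age <= AGGREGATE_RETENTION:
        return "anonymize"
    return "delete"
```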
Gap Analysis (ISO 42001)
1 gap identified
Escalation threshold (0.6 confidence) not validated against recent production data.
Recommendation: Run calibration study on last 3 months of conversations to validate or adjust escalation threshold.
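The recommended calibration study could start from a bucketed reliability table like this sketch, assuming each logged conversation records the model's confidence and whether it was resolved without human help (the record field names are assumptions):

```python
# Bucket logged conversations by confidence and compare self-serve
# resolution rates across buckets to validate (or move) the 0.6 cutoff.
def calibration_table(records, bins=(0.0, 0.2, 0.4, 0.6, 0.8, 1.01)):
    """records: iterable of {"confidence": float, "resolved": bool}.
    Returns (lo, hi, rate-or-None) per bucket, where rate is the fraction
    resolved without escalation. The top edge sits just above 1.0 so a
    confidence of exactly 1.0 lands in the last bucket."""
    table = []
    for lo, hi in zip(bins, bins[1:]):
        bucket = [r for r in records if lo <= r["confidence"] < hi]
        rate = sum(bool(r["resolved"]) for r in bucket) / len(bucket) if bucket else None
        table.append((lo, hi, rate))
    return table
```

If low-confidence buckets show resolution rates comparable to the buckets just above 0.6, the threshold may be too conservative; if buckets just above 0.6 resolve poorly, it may be too permissive.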