Note: The Portuguese version is the legally binding version.

Artificial Intelligence Policy

Last updated: April 5, 2026

Omega Capital Holding Gestão e Participações Empresariais Ltda ("ChatSense," "we," "our," or the "Platform") is committed to transparency and the responsible use of Artificial Intelligence (AI) technologies, and publishes this Policy to describe the AI systems it uses, their purposes and limitations, the security safeguards in place, and the responsibilities shared between the Platform and its customers.

1. Description of Artificial Intelligence Systems

1.1 Large Language Model (LLM) Providers

ChatSense utilizes the following large language model providers:

  • Google Gemini 2.5 Flash / Pro — Primary model for response generation, sentiment analysis, intent classification, conversation summarization, topic extraction, and image analysis (Vision)
  • OpenAI — Used exclusively for generating vector embeddings (text-embedding) for the semantic search system (RAG)

1.2 AI Features

The platform offers the following Artificial Intelligence-based features:

  • Automated responses (bot pipeline) — AI-generated responses for automated customer service
  • Response suggestions — Contextual suggestions for human agents during customer interactions
  • Conversation summarization — Automatic summaries of the content and key points of each conversation
  • Sentiment analysis — Automatic classification of customer sentiment (positive, neutral, negative)
  • Auto-tagging — Automatic categorization of conversations by relevant topics
  • Topic extraction — Identification of the main subjects addressed in each interaction
  • Audio transcription (speech-to-text, STT) — Conversion of audio messages to text for processing and display
  • Image analysis (Vision) — Interpretation of the content of images sent by customers
  • Text-to-speech (TTS) — Audio generation from text-based responses

1.3 RAG (Retrieval-Augmented Generation)

ChatSense utilizes a Retrieval-Augmented Generation (RAG) system to improve the accuracy of AI responses. The customer's knowledge base is indexed with 1536-dimensional vector embeddings using the PostgreSQL pgvector extension. During customer service interactions, the system performs semantic searches to locate relevant excerpts from the knowledge base and inject them as context into the prompts sent to the language model.
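
The retrieval step described above can be sketched as follows. This is an illustrative Python sketch only, not the platform's actual implementation: in production the similarity ranking is performed by pgvector inside PostgreSQL, and the function and variable names here are assumptions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_context(query_vec: list[float],
                     knowledge_base: list[tuple[str, list[float]]],
                     top_k: int = 3) -> list[str]:
    """Return the top_k knowledge-base excerpts most similar to the query.

    knowledge_base holds (excerpt_text, embedding) pairs; in production the
    embeddings are 1536-dimensional vectors stored via pgvector.
    """
    ranked = sorted(knowledge_base,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]
```

The selected excerpts are then injected as context into the prompt sent to the language model.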

1.4 Bot Pipeline

The processing flow for each AI interaction follows this sequence:

  • Intent classification — Identification of the customer's intent in the received message
  • RAG retrieval — Semantic search of the knowledge base for relevant content
  • Conversation history — Inclusion of context from previous messages
  • Guardrail injection — Application of the configured security safeguards
  • LLM call — Submission of the complete prompt to the language model
  • Response delivery — Delivery of the generated response to the customer via the corresponding channel
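
The six stages above can be sketched as a single function. All helper logic below is a stand-in for illustration (the keyword intent heuristic, the placeholder retrieval result, and the prompt dictionary shape are assumptions, not the platform's actual pipeline code):

```python
def classify_intent(message: str) -> str:
    """Stand-in intent classifier (simple keyword heuristic, illustration only)."""
    return "question" if "?" in message else "statement"

def retrieve_context(message: str) -> list[str]:
    """Stand-in for RAG retrieval; production queries the pgvector index."""
    return ["[knowledge-base excerpt]"]

def run_bot_pipeline(message: str, history: list[str],
                     guardrails: list[str]) -> dict:
    """Sketch of the pipeline stages in order; returns the assembled prompt."""
    intent = classify_intent(message)     # 1. intent classification
    context = retrieve_context(message)   # 2. RAG retrieval
    recent = history[-20:]                # 3. last 20 messages as context
    prompt = {                            # 4. guardrail injection during
        "system": guardrails,             #    prompt assembly
        "context": context,
        "history": recent,
        "user": message,
        "intent": intent,
    }
    # 5. the LLM call would be made here with `prompt`;
    # 6. the response is then delivered via the corresponding channel.
    return prompt
```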

1.5 Escalation System

The AI is capable of detecting when an interaction requires human intervention. The system identifies escalation keywords, excessive complexity, or customer dissatisfaction and automatically transfers the conversation to a human agent, accompanied by a configurable transition message.
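
The escalation decision can be illustrated with a minimal sketch. The keyword set and function name below are hypothetical examples; the actual keywords are configurable per agent:

```python
# Illustrative escalation keywords; in practice these are configured per agent.
ESCALATION_KEYWORDS = {"human", "agent", "complaint", "cancel"}

def should_escalate(message: str, sentiment: str) -> bool:
    """Escalate when an escalation keyword appears or sentiment is negative."""
    words = set(message.lower().split())
    return bool(words & ESCALATION_KEYWORDS) or sentiment == "negative"
```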

2. Classification Under the EU AI Act (Regulation 2024/1689)

2.1 System Classification

ChatSense is classified as a general-purpose AI (GPAI) system pursuant to Article 51 of Regulation (EU) 2024/1689 (EU AI Act). The platform provides AI capabilities that may be utilized across various customer service contexts.

2.2 Risk Assessment

The ChatSense platform is not inherently high-risk. However, we acknowledge that customers (deployers) may create use cases that fall within the high-risk category, depending on the sector and purpose of deployment (for example, healthcare, financial services, or legal services).

2.3 Deployer Responsibilities

Pursuant to Article 26 of the EU AI Act, the customer (deployer) is responsible for:

  • Informing end users about interaction with AI systems
  • Maintaining adequate human oversight over deployed AI systems
  • Ensuring the quality of data used in the training and configuration of agents
  • Assessing whether the specific use case constitutes high-risk within their regulatory context

2.4 Transparency Obligations

In compliance with Article 50, AI-generated content must be properly identified. ChatSense implements identification markers on AI-generated messages and provides information about the model, version, and execution traces to administrators.

2.5 Obligations as a GPAI Provider

ChatSense fulfills the obligations under Article 53 for GPAI system providers, including: maintenance of technical documentation, copyright policy, and transparency regarding the operation of AI systems.

3. Transparency and AI Content Identification

  • Identification markers — AI-generated messages carry system identification markers, distinguishing them from messages sent by human agents
  • Administrator visibility — Administrators have access to system prompts, model versions used, and complete execution traces
  • Audit trail — Each AI interaction is logged in the agent_execution_traces table, including: input and output tokens, latency, model used, and provider
  • Model versioning — Model versions are recorded to ensure reproducibility and traceability
  • Customer obligation — The customer must inform its end users when they are interacting with an AI system rather than a human being
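
The fields logged per interaction can be pictured as a record like the following. This is an illustrative sketch of the information listed above; the field names and class are assumptions, not the actual agent_execution_traces schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentExecutionTrace:
    """Illustrative shape of one audit-trail record (field names assumed)."""
    model: str            # model version used, for reproducibility
    provider: str         # e.g. "google" or "openai"
    input_tokens: int
    output_tokens: int
    latency_ms: int
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def total_tokens(self) -> int:
        """Total tokens consumed by this AI interaction."""
        return self.input_tokens + self.output_tokens
```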

4. Data Processing for AI

4.1 Data Sent to Language Models

The following data is included in prompts sent to LLM providers:

  • Conversation messages (last 20 messages for context)
  • System prompt configured by the administrator
  • RAG context (knowledge base excerpts retrieved via semantic search)
  • Contact name
  • Platform/channel type (WhatsApp, Instagram, Messenger, Telegram, Email, Live Chat)

4.2 Data That Is NOT Sent

The following data is never included in AI prompts:

  • Passwords and access credentials
  • Payment information and financial data
  • Internal team notes
  • System audit logs

4.3 Prohibition on Training with Customer Data

Customer data is used exclusively for inference (response generation). The LLM providers used by ChatSense (Google and OpenAI) do not use the data submitted via API to train or improve their models. This guarantee is supported by the enterprise terms of service of each provider.

4.4 Embeddings and Local Storage

Vector embeddings are generated through the OpenAI API and stored locally in the PostgreSQL database (pgvector extension). The vectors are not shared with third parties and remain under the exclusive control of the platform.

4.5 Data Minimization

Only the strictly necessary conversation context is included in AI prompts, in compliance with the data minimization principle of the LGPD (art. 6, III).

4.6 AI Data Retention

AI execution records (execution traces) are retained for 90 days for debugging, auditing, and quality improvement purposes, and are automatically deleted thereafter.

5. Guardrails System (Security Safeguards)

5.1 Security Flags

ChatSense implements 10 security flags that are enabled by default on all AI agents. Control over these flags is restricted to superadministrators:

  • no_profanity — Blocks vulgar, obscene, or offensive language
  • no_threats — Blocks intimidation, coercion, or threats
  • no_discrimination — Blocks racial, gender, religious, sexual orientation, or disability-based discrimination
  • no_sexual_content — Blocks sexual or adult content (administrator may disable upon acceptance of a liability disclaimer)
  • no_violence — Blocks glorification of or incitement to violence
  • no_illegal_activity — Blocks guidance or instructions regarding illegal activities
  • no_personal_data — Blocks the AI from sharing third-party personal data
  • no_impersonation — Blocks impersonation of real persons, authorities, or institutions
  • no_medical_diagnosis — Enforces referral to a qualified healthcare professional, preventing medical diagnoses
  • no_financial_advice — Prevents unlicensed financial, tax, or investment advice
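
The default-on flags and the superadministrator restriction described above can be sketched as follows. The dictionary and function are illustrative assumptions (including the hard lock on no_discrimination, per Section 8), not the platform's actual configuration API:

```python
# All ten flags enabled by default, as stated above.
DEFAULT_GUARDRAILS = {
    "no_profanity": True, "no_threats": True, "no_discrimination": True,
    "no_sexual_content": True, "no_violence": True,
    "no_illegal_activity": True, "no_personal_data": True,
    "no_impersonation": True, "no_medical_diagnosis": True,
    "no_financial_advice": True,
}

def set_guardrail(flags: dict, name: str, value: bool,
                  is_superadmin: bool) -> dict:
    """Change a flag; only superadministrators may do so, and
    no_discrimination can never be disabled."""
    if not is_superadmin:
        raise PermissionError("guardrail changes are restricted to superadministrators")
    if name == "no_discrimination" and value is False:
        raise ValueError("no_discrimination cannot be disabled")
    updated = dict(flags)
    updated[name] = value
    return updated
```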

5.2 Per-Agent Tone Settings

Each AI agent has 8 tone settings controllable by the administrator:

  • allow_slang — Allows use of slang and informal language
  • allow_humor — Allows responses with a humorous tone
  • allow_emojis — Allows use of emojis in responses
  • allow_price_negotiation — Allows price negotiation (requires acceptance of a liability disclaimer)
  • allow_competitor_mentions — Allows mentions of competitors
  • allow_external_links — Allows inclusion of external links in responses
  • allow_political_topics — Allows discussion of political topics
  • allow_religious_references — Allows religious references (requires acceptance of a liability disclaimer)

5.3 Liability Disclaimers

Certain settings (no_sexual_content disabled, allow_price_negotiation, allow_religious_references) require explicit acceptance of a liability disclaimer by the administrator. The acceptance is recorded with IP address, user-agent, and timestamp for auditing and legal compliance purposes.

6. Accuracy, Limitations, and Hallucinations

6.1 Probabilistic Nature

The Artificial Intelligence systems used by ChatSense are probabilistic in nature. This means that generated responses may be inaccurate, incomplete, or fabricated (a phenomenon known as "hallucination").

6.2 No Warranty

ChatSense DOES NOT guarantee the accuracy, completeness, timeliness, or suitability of AI-generated responses for any specific purpose. AI responses are provided "as is," without any express or implied warranties.

6.3 Does Not Constitute Professional Advice

AI-generated responses do not constitute professional advice of any kind, including but not limited to: medical, legal, financial, tax, accounting, or psychological advice. Customers operating in regulated sectors must ensure qualified human oversight.

6.4 Known Risks

Despite the implemented guardrails, the AI may occasionally generate responses that are:

  • Factually incorrect or outdated
  • Inconsistent with the configured knowledge base
  • Inappropriate even with active safeguards
  • Biased or unfair (algorithmic bias)

ChatSense implements guardrails and safety systems to reduce (not eliminate) these risks. The customer must maintain active and continuous human oversight.

7. Human Oversight and Right to Intervention

7.1 Oversight Mechanisms

  • Escalation system — The AI detects escalation keywords and automatically transfers the conversation to human agents
  • Per-agent deactivation — The administrator may deactivate AI for individual agents at any time
  • Organization-wide deactivation — The administrator may deactivate all AI for the entire organization
  • Per-conversation toggle — Human agents may enable or disable AI on each conversation individually

7.2 Data Subject Rights Under the LGPD

In compliance with art. 20 of the LGPD (Law No. 13,709/2018), the data subject has the right to request human review of decisions made solely on the basis of automated processing of personal data that affect their interests, including decisions intended to define their personal, professional, consumer, or credit profile.

7.3 Rights Under the GDPR

In compliance with art. 22 of the GDPR (EU Regulation 2016/679), data subjects have the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects or similarly significantly affect them.

7.4 Auditability

Each AI interaction is logged with full context (input, output, model, provider, latency, tokens used) in the audit trail, enabling subsequent review and investigation by administrators and, where applicable, by regulatory authorities.

8. Bias, Fairness, and Non-Discrimination

  • The no_discrimination guardrail is enabled by default and cannot be disabled, so this safeguard remains active in all configurations
  • We acknowledge that AI models may reflect biases present in the training data of LLM providers
  • ChatSense monitors for discriminatory patterns and implements corrective measures when identified
  • Customers who identify potentially discriminatory or biased responses should report them immediately to contato@chatsense.app
  • As we do not train models with customer data, identified biases originate from upstream provider models (Google, OpenAI), and ChatSense applies guardrails to mitigate them

9. Customer Responsibilities (Deployer)

By using ChatSense's Artificial Intelligence features, the customer assumes the following responsibilities:

  • Configuring AI agents with system prompts appropriate to their business context
  • Reviewing and maintaining guardrail settings in accordance with their industry sector
  • Keeping the escalation system enabled to ensure human intervention when necessary
  • Monitoring AI agent conversations on a regular basis
  • Training human agents on oversight and quality control of AI systems
  • Informing end users that they are interacting with an AI system rather than a human being
  • Not disabling security guardrails in regulated sectors (healthcare, financial, legal)
  • Complying with sector-specific AI regulations applicable to their industry
  • Promptly reporting AI security incidents as described in Section 11

10. Limitation of Liability and Disclaimers

10.1 ChatSense IS NOT Liable For:

  • Accuracy, completeness, or timeliness of AI-generated content
  • Decisions made by customers or end users based on AI responses
  • Losses, damages, or harm arising from AI hallucinations or incorrect responses
  • Fines, penalties, or regulatory sanctions resulting from the customer's AI deployment choices
  • End-user complaints regarding AI interactions
  • The customer's failure to properly configure guardrails
  • The customer's failure to maintain human oversight over AI agents

10.2 Assumption of Risk by the Customer

The customer fully assumes all risks related to:

  • Deployment of AI agents within their business context
  • Content generated by AI agents configured by the customer
  • Compliance with sector-specific AI regulations applicable to their industry
  • End-user interactions with AI systems

10.3 Maximum Liability Cap

In any event, ChatSense's total liability shall be limited to the fees actually paid by the customer in the 3 (three) months preceding the event giving rise to the claim.

10.4 Legal Basis

This limitation of liability is grounded in art. 14, paragraph 3 of the Brazilian Consumer Defense Code (exclusion of liability) and arts. 18 and 19 of the Marco Civil da Internet (Law No. 12,965/2014), which establish that the application provider is not liable for content generated by third parties.

11. AI Security Incidents and Reporting Channel

11.1 Reporting Channel

To report security incidents related to Artificial Intelligence, please contact:

AI Security Email: seguranca-ia@chatsense.app

11.2 What Constitutes an AI Security Incident

  • Generation of harmful, offensive, or dangerous content by the AI
  • Leakage of personal or sensitive data through AI responses
  • Discriminatory, prejudiced, or biased responses
  • Unauthorized actions executed by AI agents
  • Systematic failure of security guardrails

11.3 Response Timelines

  • Acknowledgment of receipt: within 24 hours of the report
  • Initial investigation: within 72 hours of the report

11.4 Remediation

Remediation actions may include:

  • Updating guardrails and security safeguards
  • Changing AI models or providers
  • Prompt hardening
  • Adjustments to tone and safety settings

11.5 Transparency

Material AI security incidents may be disclosed in security advisories, in accordance with the principles of transparency and accountability.

Contact

For questions regarding this Artificial Intelligence Policy:

ChatSense — Omega Capital Holding Gestão e Participações Empresariais Ltda
General email: contato@chatsense.app
Data Protection Officer (DPO): privacidade@chatsense.app
AI Security: seguranca-ia@chatsense.app