HockeyStack AI: Security, Privacy, and Responsible Use

At HockeyStack, we leverage AI to deliver go-to-market intelligence to enterprise customers while maintaining the highest standards of security, privacy, and ethical responsibility. Our AI-powered services use OpenAI’s foundation models in conjunction with our proprietary analysis and agentic execution framework, ensuring transparency, security, and accuracy.

This document outlines our AI policies, data handling practices, and the measures in place to mitigate risks associated with AI technologies.


1. Data Processing and AI Training

How We Handle Data

  • All data processing is conducted through OpenAI’s secure infrastructure (a simplified request flow is sketched after this list).

  • Customer data remains isolated and is not shared with third parties.

  • OpenAI does not use customer data to train its general AI models.

  • Data is never used to train models for other customers—each customer’s data remains completely separate.
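
To make this concrete, here is a minimal sketch in TypeScript, assuming the official openai Node SDK; the function and variable names (summarizeForCustomer, customerId) are illustrative, not our actual code. It shows a request scoped to a single customer’s data being sent through OpenAI’s API, which does not use API traffic to train OpenAI’s general models.

    import OpenAI from "openai";

    // The SDK reads the key from the environment; API traffic sent this way is
    // not used by OpenAI to train its general-purpose models.
    const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // Illustrative helper: the prompt contains one customer's data only, so
    // nothing is pooled or shared across customers.
    async function summarizeForCustomer(customerId: string, metrics: string): Promise<string> {
      const response = await client.chat.completions.create({
        model: "gpt-4o", // illustrative model name
        messages: [
          { role: "system", content: `Summarize go-to-market metrics for customer ${customerId}.` },
          { role: "user", content: metrics }, // this customer's data only
        ],
      });
      return response.choices[0].message.content ?? "";
    }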

Infrastructure & Environment

  • Our AI runs in the same secure environment as our existing product functionality.

  • No new external processing environments are introduced.

  • Customer data is not used to train or fine-tune vendor models.

Data Ownership & Control

  • Customers retain full ownership of any data submitted to AI.

  • Customers own all AI-generated outputs from their data.

  • AI functionality is opt-in, ensuring customer control (a minimal gating sketch follows this list).
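
As a rough illustration of the opt-in control, the sketch below gates every AI code path behind an explicit per-workspace flag. The WorkspaceSettings type and aiEnabled field are hypothetical names, not our actual schema.

    // Hypothetical per-workspace settings; names are illustrative, not the
    // actual HockeyStack schema.
    interface WorkspaceSettings {
      workspaceId: string;
      aiEnabled: boolean; // off unless the customer explicitly opts in
    }

    // AI code paths are only reached when the flag is on, and turning it off
    // disables them immediately.
    async function maybeRunAiInsights(
      settings: WorkspaceSettings,
      runInsights: () => Promise<string>,
    ): Promise<string | null> {
      if (!settings.aiEnabled) return null;
      return runInsights();
    }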


2. AI Risk & Security Considerations

We recognize that AI introduces new security and ethical challenges. Below are the key risks associated with AI and how HockeyStack mitigates them.

Potential AI Risks

  • Lack of transparency: it can be difficult to understand how AI generates a given output.

  • Data breaches or unauthorized access.

  • Unintended consequences of AI-driven decision-making.

  • AI hallucinations: Generation of misleading or false information.

Security Measures in Place

  • Regular AI evaluations and output testing.

  • Human-in-the-loop review for non-deterministic AI outputs (sketched after this list).

  • Strict internal documentation on AI model testing.

  • Encrypted data transmission and storage.

  • Access controls and log monitoring.
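
The human-in-the-loop step can be pictured roughly as follows; the types and review statuses are illustrative only, not our internal implementation. A non-deterministic output is held in a pending state until a human reviewer approves or rejects it.

    import { randomUUID } from "node:crypto";

    // Illustrative review-queue types; not our internal implementation.
    type ReviewStatus = "pending_review" | "approved" | "rejected";

    interface AiOutput {
      id: string;
      content: string;
      status: ReviewStatus;
      reviewedBy?: string;
    }

    // New non-deterministic outputs enter the queue as "pending_review" and are
    // not surfaced to customers in that state.
    function enqueueForReview(content: string): AiOutput {
      return { id: randomUUID(), content, status: "pending_review" };
    }

    // A human reviewer must explicitly approve or reject the output.
    function applyReview(output: AiOutput, reviewer: string, approved: boolean): AiOutput {
      return { ...output, status: approved ? "approved" : "rejected", reviewedBy: reviewer };
    }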


3. Ethical AI Usage Guidelines

We are committed to the ethical use of AI and ensuring that AI-generated insights are fair, unbiased, and transparent.

Our AI Ethics Principles

  1. Transparency: AI outputs must be explainable and traceable.

  2. Fairness: AI models must be free from bias and designed for equitable outcomes.

  3. Security: Customer data must remain protected at all times.

  4. Human Oversight: AI is never the sole decision-maker in high-stakes business scenarios.

Preventing AI Misuse

  • We do not use AI to manipulate or push users toward specific decisions.

  • No third parties are granted access to AI-generated customer data.

  • We maintain easy and open feedback channels for AI-related concerns.


4. Addressing AI Hallucinations & False Outputs

While AI models are powerful, they are not infallible. To ensure the integrity of AI-generated insights, we have built mechanisms to identify and correct false outputs.

How We Handle AI Inaccuracies

  • All AI-generated outputs must be reviewable by a human (human-in-the-loop process).

  • Customers can report hallucinations or inaccuracies directly to us.

  • Logging and tracking AI inconsistencies allows for continuous model improvement (see the sketch after this list).
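
A simplified sketch of the logging and feedback records this relies on is shown below; all field names are hypothetical. Customer reports are linked back to the logged output so recurring inconsistencies can be identified and fed into model and prompt improvements.

    // Hypothetical records for tracking AI inaccuracies; all names are illustrative.
    interface AiOutputLog {
      outputId: string;
      promptSummary: string; // what was asked, without raw customer data
      model: string;         // which model produced the output
      createdAt: Date;
    }

    interface InaccuracyReport {
      outputId: string;     // links the report back to the logged output
      reportedBy: string;   // customer or internal reviewer
      description: string;  // what was wrong, e.g. a hallucinated metric
      createdAt: Date;
    }

    // Joining reports to logs makes recurring inconsistencies visible so they can
    // feed into evaluations and prompt/model improvements.
    function linkReports(logs: AiOutputLog[], reports: InaccuracyReport[]) {
      return logs.map((log) => ({
        log,
        reports: reports.filter((r) => r.outputId === log.outputId),
      }));
    }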


5. AI Security & Adversarial Attack Prevention

We take active measures to prevent malicious attempts to exploit AI systems, such as data poisoning or model inversion.

Our Approach to AI Security

  • Data integrity monitoring to prevent adversarial poisoning.

  • Strict API key segmentation by product and feature to prevent cross-contamination (illustrated in the sketch after this list).

  • Usage of authentication layers to block unauthorized access.
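
The key-segmentation idea can be sketched as follows, assuming per-feature keys stored in separate environment variables (the variable and feature names are hypothetical). Each feature reads only its own key, so a leaked or misused key is contained to that feature.

    // Hypothetical per-feature key lookup; environment variable names are illustrative.
    const FEATURE_KEYS: Record<string, string | undefined> = {
      reporting: process.env.OPENAI_KEY_REPORTING,
      copilot: process.env.OPENAI_KEY_COPILOT,
    };

    // Each feature gets its own key, quota, and revocation path, so a leaked or
    // misused key is contained to that feature rather than the whole platform.
    function getApiKeyForFeature(feature: string): string {
      const key = FEATURE_KEYS[feature];
      if (!key) throw new Error(`No API key configured for feature: ${feature}`);
      return key;
    }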


6. Compliance & Governance

We ensure that our AI aligns with legal, ethical, and regulatory requirements.

  • Risk and Conformity Assessments: Evaluating AI against security and privacy risks.

  • Responsible AI Use Policy: Ensuring compliance with ethical AI practices.

  • AI-Specific Legal Terms: Addressed within our Data Processing Agreement (DPA).

Customer Monitoring & Ownership

  • Customers have the ability to monitor AI-generated insights.

  • Customers can choose to disable AI functionality at any time.


7. HockeyStack AI Policy

Our AI policy outlines the best practices and principles that guide our use of AI.

HockeyStack AI Policy

  1. Data Protection & Privacy: All AI interactions respect strict data privacy policies.

  2. No AI Training on Customer Data: Customer data is never used to train or improve global AI models.

  3. Transparency & Explainability: AI insights must be understandable and traceable.

  4. Fair & Unbiased Models: We actively mitigate risks of AI bias.

  5. Security-First Approach: AI implementations follow industry-leading security standards.


8. AI Sustainable Use Policy

AI should be used responsibly to support sustainable and ethical business practices.

HockeyStack AI Sustainable Use Policy

  • AI should be used to augment human decision-making, not replace it.

  • AI should be monitored continuously to ensure quality and accuracy.

  • AI should not be used for deceptive practices, including misleading data manipulation.

  • AI should follow a privacy-first approach, respecting user consent at all times.


9. Frequently Asked Questions (FAQ)

1. Does HockeyStack’s AI process customer data?

Yes, but customer data remains private, is not shared with third parties, and is never used to train global AI models.

2. Can customers opt out of AI functionality?

Yes, AI features are opt-in, and customers can choose to disable them at any time.

3. Does HockeyStack allow third-party access to AI-generated data?

No, all customer data and AI outputs are kept strictly private.

4. How does HockeyStack prevent AI hallucinations?

We implement human-in-the-loop verification, logging, and customer feedback mechanisms to catch and correct AI inaccuracies.

5. How does HockeyStack protect against AI security threats?

We use encrypted data transmission, strict access controls, authentication layers, and adversarial attack prevention strategies to secure AI functionality.

6. Does HockeyStack’s Data Processing Agreement (DPA) cover AI?

Yes, our DPA includes specific AI-related terms to ensure compliance and security.

7. How can customers report AI-related issues?

Customers can report issues via our support channels or directly through security@hockeystack.com.


For any questions regarding AI security, compliance, or usage, please contact us at security@hockeystack.com.
