At RUNSTACK Inc. (“RUNSTACK”, “we”, “us”, or “our”), we are committed to developing, deploying, and maintaining artificial intelligence (AI) systems in a manner that is ethical, transparent, and aligned with international human rights and data protection standards. This Responsible AI Policy (“Policy”) outlines our principles, governance structure, and operational commitments for ensuring that AI systems within the RUNSTACK ecosystem are used safely, fairly, and responsibly by both our organization and our users.
This Policy governs all AI-related activities within RUNSTACK, including:
Design, training, and deployment of AI models and agents;
Integration of third-party AI systems;
Processing and use of data for AI purposes; and
Use of AI outputs by end-users.
It applies to all employees, contractors, partners, affiliates, and users of RUNSTACK products and services globally, including in Canada, the United States, the European Union, Japan, Korea, and Singapore.
RUNSTACK designs and operates AI systems under the principle that AI should serve and empower people, not replace or exploit them. We believe in AI that enhances productivity, fosters creativity, and eliminates unnecessary complexity — always with accountability and human oversight. Our Responsible AI framework is built on the following pillars:
Transparency: We clearly disclose where and how AI is used within our systems. Users are informed when they interact with AI-driven agents, features, or automated decisions.
Fairness: We design and test our systems to minimize bias and ensure equitable treatment across all users, irrespective of gender, ethnicity, geography, or language.
Accountability: We maintain human oversight and responsibility over all AI operations. Decisions made by AI agents are auditable and reversible by authorized personnel.
Safety and Security: We actively test AI systems for robustness, misuse resistance, and resilience against malicious manipulation.
Privacy: We comply with global data protection laws (including PIPEDA, GDPR, CCPA, APPI, PIPA, and PDPA) and ensure that personal data used in AI processes is processed lawfully, minimized, and secured.
Human Agency: AI decisions support — but never override — human intent. AI is built to assist, not dictate.
RUNSTACK’s Responsible AI practices are aligned with recognized international frameworks, including:
OECD Principles on Artificial Intelligence (Promoting inclusive growth, sustainable development, and human-centered values)
EU Artificial Intelligence Act (2024) (Risk-based classification, transparency, and human oversight)
Canada’s Artificial Intelligence and Data Act (AIDA) (Responsible innovation and harm prevention)
U.S. NIST AI Risk Management Framework (Governance, accountability, and measurement of AI risk)
UNESCO Ethics of AI Recommendation (2021) (Human rights, transparency, and fairness)
RUNSTACK maintains an internal AI Governance Committee, responsible for overseeing ethical, technical, and compliance aspects of all AI systems. The committee includes representatives from:
Engineering and Model Development
Legal and Compliance
Data Security
User Experience and Accessibility
Ethics and Policy
The Committee's responsibilities are to:
Evaluate new AI features for ethical and legal risk;
Approve AI model deployments and major updates;
Review AI incidents, bias reports, or user complaints;
Conduct regular audits and publish transparency reports.
Data Use and Protection: User data is not used for AI training, inference, or model improvement.
All content on the Platform—including but not limited to software, code, text, graphics, images, logos, interfaces, and design—is the intellectual property of RUNSTACK or its licensors, protected by Canadian, U.S., EU, and international copyright and trademark laws. You acknowledge and agree that:
The RUNSTACK name, logo, and all related marks are trademarks owned or licensed by RUNSTACK.
You have no ownership rights in the Platform or its contents.
Any unauthorized use, reproduction, or modification of our intellectual property may result in civil or criminal penalties.
If you believe any material on the Platform infringes your intellectual property rights, please contact us immediately at legal@runstack.ai.
We adhere to the following principles during the design and training of AI models:
Training data is obtained from lawful and ethically sourced datasets. Data is assessed for potential bias, quality, and representativeness.
We strive for model interpretability and provide documentation that explains how AI agents generate outputs.
AI systems are tested for safety, bias, accuracy, and resilience before deployment. Performance is continuously monitored to detect anomalies or drift.
Models are regularly updated to enhance fairness, security, and efficiency.
RUNSTACK applies a risk-based approach to AI development, classifying systems into four tiers inspired by the EU AI Act:
Minimal Risk – Chat interfaces, workflow automation, and content summarization.
Limited Risk – Predictive suggestions, productivity enhancement, or contextual recommendations.
High Risk – AI-driven decision-support for compliance or security operations (subject to audit).
Prohibited Uses – AI applications that engage in manipulation, social scoring, surveillance, or rights violations.
RUNSTACK strictly prohibits all applications in the Prohibited Uses category. All high-risk systems undergo independent review, user impact assessment, and audit logging.
If an AI system behaves unexpectedly, produces harmful content, or poses a risk to users or third parties:
RUNSTACK will immediately suspend the affected system, investigate the cause, and document the incident.
Affected users will be notified when appropriate.
Findings will be reported to relevant authorities where legally required.
All incidents are logged and reviewed by the AI Governance Committee for corrective action and future prevention.
To the fullest extent permitted by applicable law:
RUNSTACK and its affiliates, officers, employees, and partners shall not be liable for any indirect, incidental, consequential, or punitive damages, including lost profits or data, arising from or related to your use of the Platform.
RUNSTACK’s total cumulative liability to you shall not exceed one hundred (100) U.S. dollars (USD) or the total amount you have paid to RUNSTACK in the past six months, whichever is greater.
Certain jurisdictions do not allow limitations of liability for personal injury or fraud; in such cases, these limitations apply only to the extent permitted by law.
Users have the right to:
Obtain an explanation for automated decisions that materially affect them;
Challenge or appeal AI-assisted decisions;
Opt-out of non-essential AI-driven personalization features;
Request data access, correction, or deletion; and
Report ethical or safety concerns to RUNSTACK.
Requests can be submitted to legal@runstack.ai.
RUNSTACK responds to valid requests in accordance with applicable data protection laws.
RUNSTACK implements Responsible AI by Design throughout the product lifecycle:
Integrating ethics checkpoints during design and prototyping.
Embedding compliance and fairness testing before launch.
Conducting ongoing human-in-the-loop validation.
Maintaining transparent documentation and user education materials.
This Policy is reviewed at least annually or whenever major legal or technological changes occur. RUNSTACK may publish transparency summaries detailing:
AI system categories in operation;
Data handling methods;
Model performance and fairness metrics; and
Notable improvements to AI governance and safety.
These Terms are governed by the laws of the Province of British Columbia and the federal laws of Canada, without regard to conflict-of-law principles. All disputes shall be resolved exclusively in the courts of British Columbia, unless otherwise required by applicable consumer protection laws. Users in other jurisdictions may have additional statutory rights which these Terms do not supersede.
RUNSTACK encourages users, employees, and partners to report potential violations of this Policy or unethical AI behavior. Reports may be submitted confidentially via: legal@runstack.ai. All reports are investigated promptly, with findings documented and corrective actions implemented as necessary.
This Policy is maintained by RUNSTACK’s AI Governance Committee and approved by executive management. For inquiries regarding this Policy, please contact:
RUNSTACK Inc.
British Columbia, Canada
Email: info@runstack.ai
By using RUNSTACK’s AI systems and Services, you acknowledge that you have read, understood, and agreed to comply with this Responsible AI Policy. RUNSTACK is dedicated to fostering a world where AI enhances human capability, promotes fairness, and operates with integrity — everywhere it runs.
RUNSTACK empowers users and partners to interact with and build upon AI agents within our ecosystem. To maintain responsible usage:
Users must not employ RUNSTACK AI for unlawful, harmful, or deceptive purposes.
Users must disclose when AI-generated content or recommendations are used in professional or public communications, where appropriate.
Users are prohibited from using AI outputs to infringe on privacy, intellectual property, or applicable regulations.
AI-generated code or actions must not be used to execute malicious operations or compromise systems.
RUNSTACK reserves the right to monitor, restrict, or disable AI features for accounts that engage in misuse or violate this Policy.