
Howspace and the EU AI Act: trustworthy AI by design

Learn how Howspace’s AI features align with the EU AI Act’s risk categories.


At Howspace, we believe that artificial intelligence should empower, not replace human decision-making. That’s why we’ve integrated AI into our digital facilitation platform in a way that’s responsible, transparent, and fully aligned with the European Union’s Artificial Intelligence Act (EU AI Act, Regulation (EU) 2024/1689).

We partner with Microsoft, using its Azure OpenAI Service — a secure, enterprise-grade AI platform — to deliver advanced AI capabilities while ensuring compliance and maintaining user trust.

A risk-based, responsible approach to AI

The EU AI Act introduces a risk-based regulatory framework to ensure that AI systems in the EU are safe, lawful, and respect fundamental rights.

Howspace does not use AI for high-risk purposes as defined in the EU AI Act (e.g., employment, credit scoring, biometric surveillance, or law enforcement).

Instead, our AI features, such as summarization and prompting, are designed to assist users in collaboration and sense-making. Under the Act's risk categories, these are limited-risk use cases, subject to transparency obligations rather than the high-risk regime.

As a result, Howspace is subject to the transparency and general-use obligations of the Act, not the stricter requirements reserved for high-risk systems.

Howspace as a “deployer” of AI

In regulatory terms, Howspace is a deployer of AI systems under the EU AI Act. This means we use AI services (such as Azure OpenAI) within our platform under our own brand and authority. We do not re-sell, modify, or fine-tune AI models.

Built on Microsoft Azure OpenAI

All AI functionality in Howspace is powered by Microsoft Azure OpenAI, which has adopted a layered compliance approach that directly aligns with the EU AI Act. Microsoft’s commitments include:

  • Proactive implementation of the AI Act’s requirements for general-purpose AI (GPAI) models

  • Internal Responsible AI Standard and restricted-use policies that mirror the AI Act’s prohibited practices provisions

  • Updated contracts and a Code of Conduct that prohibit unlawful uses (e.g., emotion recognition in the workplace, social scoring, or biometric surveillance)

  • Transparency Notes, documentation, and tooling that support downstream compliance

Howspace benefits from these safeguards as a downstream user and builds additional safeguards on top.

Transparency and user awareness

In line with Article 50 of the EU AI Act, Howspace ensures that:

  • Users are informed when they are interacting with AI features (e.g., AI-generated summaries or clustering).

  • AI-generated content is distinguishable and appropriately explained in context.

  • Documentation is available to help customers understand the purpose, functionality, and limitations of Howspace’s AI features.

What we don’t do

Howspace does not use AI for any of the following prohibited or high-risk purposes:

  • Emotion recognition or biometric categorization

  • Real-time or remote biometric identification

  • Predicting the risk of criminal offences or social scoring

  • Employment or credit-related decision-making

These practices are either prohibited outright or subject to strict regulation under the AI Act. We have explicitly excluded them from our product roadmap.

Contracts, controls, and commitment

We provide contractual clarity and safeguards to our enterprise customers:

  • Our use of Azure OpenAI is governed by Microsoft’s updated terms and Code of Conduct.

  • We are committed to ongoing monitoring of legal developments and updates to the AI Act.

  • We provide customers with clear data handling and security documentation, aligned with the GDPR and ISO/IEC 27001.

Responsible AI is a shared journey

Howspace embraces the EU AI Act as a forward-looking framework that supports innovation while protecting people. Together with Microsoft, we are building AI-powered collaboration tools that are safe, human-centric, and fully aligned with European values.

