Howspace & the EU AI Act

Learn how Howspace’s AI features align with the EU AI Act’s risk categories.


Summary

The EU AI Act defines several risk categories for AI systems, each with its own compliance requirements. Based on our internal assessment, Howspace’s AI features generally fall into the minimal-risk or transparency-risk categories, depending on how you use the platform.

EU AI Act Risk Categories (Brief Overview)

  • Unacceptable Risk: AI uses that violate fundamental rights or manipulate individuals. These are banned.

  • High Risk: AI that significantly affects safety or fundamental rights (e.g., healthcare, hiring, credit). These must meet strict compliance requirements.

  • Transparency Risk: AI that users might mistake for human interaction or authentic content, such as chatbots or deepfakes. Clear disclosure to users is required.

  • Minimal Risk: The majority of AI systems. They can be used under existing legislation without extra requirements.

Howspace’s AI Features

  • Minimal Risk: We consider tools such as word clouds to be “minimal risk.”

  • Transparency Risk: Certain features, such as AI-generated summaries or admin prompting (where a user interacts with AI), may fall under “transparency risk.” We ensure any AI-generated content is clearly labeled so users know when AI is involved.

  • Optional Usage: You can turn off generative AI at the workspace level, giving you full control over whether to use features that produce AI-generated content.

  • Unacceptable or High Risk: These categories do not apply to Howspace’s AI tools.

Disclaimer

This help center article is based on Howspace’s internal assessment of the EU AI Act and does not constitute legal advice. For more details about the AI Act, please refer to the European Commission’s Q&A on the AI Act: https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683
