Technology has always had a huge impact on our lives, and today we are witnessing it again through AI. Over the last few years, AI has made huge progress across every field, reshaping how we do everything, whether at work or in everyday life. But it also brings a lot of unknowns and concerns.
In response to these concerns, the European Union created the EU AI Act, the world's first comprehensive legal framework regulating artificial intelligence.
The EU AI Act entered into force on August 1, 2024, with a phased rollout extending through 2027. The legislation takes a risk-based approach, categorizing AI systems into four levels: unacceptable risk (prohibited), high-risk (heavily regulated), limited risk (transparency requirements), and minimal risk (largely unregulated).
So, how does this affect the event industry?
How Event Technology Falls Under the AI Act
As mentioned above, AI is affecting everything, and the event industry is no exception. Event technology has been adopting various AI enhancements that help event organizers in different ways. From biometric check-in systems, virtual assistants, and event personalization to analytics and AI-generated content, all of these can be a massive boost for your event, but the right guidelines need to be in place.
The requirements and guidelines of the EU AI Act depend mainly on the four risk levels, with high-risk systems facing much stricter rules than limited-risk ones. Unacceptable-risk systems are prohibited outright, and using them can lead to serious consequences. Keep in mind that the EU AI Act is the first framework of its kind and largely affects event platforms built in the EU, while platforms coming from unregulated markets may not be liable in the EU, leaving the entire responsibility with the event organizer.
What Should Event Organizers Know?
As with any new regulation, the EU AI Act brings a lot of uncertainty and confusion. How does this affect me? What should I do? To help you get started, here are some do's and don'ts applicable to the event industry.
Examples of DON'TS and major red flags:
- Untargeted scraping of internet or CCTV footage to build facial recognition databases
- Emotion recognition in workplace settings (with limited exceptions for safety reasons)
- Social scoring systems evaluating individuals based on behavior or traits
- Real-time biometric identification in public spaces
- Unlabeled AI-generated content (providers of generative AI must ensure that AI-generated content is identifiable, with deepfakes and public-interest content clearly labeled)
- Unmarked chatbots and virtual assistants (when using AI systems such as chatbots, people must be made aware that they are interacting with a machine)
- Use of any functionality that falls under a prohibited category (such as biometric categorization based on sensitive characteristics, social scoring systems, cognitive behavioral manipulation, etc.)
Some DO'S and actions that are required:
- Conduct an AI Inventory (identify all AI-powered tools you use and categorize them by EU AI Act risk level; a minimal sketch of such an inventory follows this list)
- Review Vendor Agreements (ensure AI technology providers can demonstrate compliance with the Act, and that contracts include compliance guarantees and liability clauses)
- Implement Transparency Measures (add a clear AI disclosure notice for chatbots and virtual assistants, label any AI-generated content, and update your privacy policy to reflect AI data processing)
- Staff Training and AI Literacy (train all staff in the use of AI, ensure legal teams understand the requirements, and provide ongoing education)
- Vendor Due Diligence (develop AI procurement guidelines and create a vendor assessment checklist)
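
To make the inventory step more concrete, here is a minimal sketch of what a structured AI inventory could look like in Python. The tool names, vendors, and risk classifications below are purely illustrative assumptions, not legal determinations; your own categorization should always be confirmed with legal or compliance advice.

```python
"""Minimal sketch of an AI inventory for event technology.

The tools and risk assignments below are illustrative assumptions,
not legal determinations.
"""
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency requirements
    MINIMAL = "minimal"            # largely unregulated


@dataclass
class AITool:
    name: str
    vendor: str
    purpose: str
    risk_level: RiskLevel
    transparency_notice: bool  # is an AI disclosure shown to attendees?


# Example inventory entries (hypothetical tools and classifications).
inventory = [
    AITool("Event chatbot", "ExampleVendor", "Answers attendee questions",
           RiskLevel.LIMITED, transparency_notice=True),
    AITool("Session recommender", "ExampleVendor", "Personalizes agendas",
           RiskLevel.MINIMAL, transparency_notice=False),
    AITool("Biometric check-in", "ExampleVendor", "Face-based entry",
           RiskLevel.HIGH, transparency_notice=True),
]

# Flag entries that need attention: prohibited tools, and high- or
# limited-risk tools that still lack an AI disclosure notice.
for tool in inventory:
    if tool.risk_level is RiskLevel.UNACCEPTABLE:
        print(f"REMOVE: {tool.name} falls under a prohibited practice")
    elif (tool.risk_level in (RiskLevel.HIGH, RiskLevel.LIMITED)
          and not tool.transparency_notice):
        print(f"REVIEW: {tool.name} needs an AI disclosure notice")
```

Keeping the inventory in a structured form like this makes it easy to flag tools that are prohibited, or that still need a transparency notice, before your event goes live.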
AI is not something that is coming; it has been here for a while now. We DO want to use it, and we DO want everything that AI can help us with. But we need to understand that unregulated use of AI can have serious consequences. Luckily, the European Union takes this matter seriously, and the EU AI Act is here to protect us and help us use AI in a way that harms no one.
