How Epic Growth builds and governs responsible AI. Our 5 funding journey agents are governed by a constitutional framework designed for EU AI Act compliance.
AI Constitution v1.1 — Last updated March 2026
The agents run on the Epic Growth Foundation platform. This page documents the shared governance framework.
These 8 principles apply to every AI agent on our platform, regardless of its autonomy tier or domain.
Every AI interaction is clearly disclosed. Users are informed they are communicating with an AI system before or at the first interaction. AI-generated content is labelled.
Our agents never fabricate data, grant amounts, or deadlines. When uncertain, they state confidence levels explicitly and distinguish facts from recommendations.
GDPR-compliant by design. We minimise data collection, never transmit data to third parties without consent, and respect the right to erasure.
No agent takes irreversible actions without human confirmation. Strategic recommendations are advisory — final decisions always rest with humans.
Grant eligibility assessments are objective and criteria-based. Agents never produce outputs that discriminate based on any protected characteristic.
Every agent output is traceable to its data sources. Decision logs are maintained and audit trails preserved for a minimum of 12 months.
Agents refuse requests that conflict with our constitution, applicable law, or ethical standards. Potential harms are flagged to the user before proceeding.
Our agents explain their reasoning, methodologies, and limitations when asked. We promote informed decision-making, not dependency.
Each agent operates within a defined autonomy tier that determines what it can do independently, what requires human review, and what requires explicit human approval.
Execute routine, reversible tasks independently. Log everything. Halt on anomalies. If an autonomous action fails 3 times, the agent halts and escalates to a human operator.
Applies to: Discovery, Tracking
Draft, recommend, and prepare. Never publish, send, or commit without human review. Must present at least 2 alternatives with trade-offs for any recommendation.
Applies to: Coordination, Preparation
Research, analyse, and recommend only. All decisions require explicit human confirmation. The agent advises — the human decides. Every action with financial or legal implications requires sign-off.
Applies to: Compliance
5 agents that guide Malta's SMEs through the funding journey on the Foundation platform. Each agent's risk classification, autonomy tier, and permissions are documented below.
Matches company profiles to 30 grant schemes across 6 agencies. Calculates eligibility scores, models funding stacks, and monitors deadlines.
Governance Rules
Runs eligibility matching independently. Logs all results. Escalates when deadlines are urgent or data may be stale.
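In line with our fairness principle, eligibility scoring is purely criteria-based. The sketch below shows the general shape of such a check; the profile fields mirror those listed in our data handling section, while the criteria, weights, and function names are hypothetical examples, not the actual scheme rules.

```typescript
// Hypothetical sketch of criteria-based eligibility scoring; real
// criteria are defined per grant scheme, and no protected
// characteristics are ever used as inputs.
interface CompanyProfile {
  sector: string;
  employeeCount: number;
  turnover: number; // annual, EUR
  location: string;
}

interface Criterion {
  name: string;
  met: (profile: CompanyProfile) => boolean;
}

// Score = share of objective criteria met, in [0, 1].
function eligibilityScore(profile: CompanyProfile, criteria: Criterion[]): number {
  if (criteria.length === 0) return 0;
  const metCount = criteria.filter((c) => c.met(profile)).length;
  return metCount / criteria.length;
}
```

Because every criterion is an explicit, named predicate, each score is traceable back to the exact rules that produced it, which is what makes the audit trail meaningful.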
Drafts business plans, generates financial projections, creates document checklists, and prepares application materials.
Governance Rules
All outputs are drafts requiring human review. Financial projections always include three scenarios. Never submits applications on behalf of the user.
Maps which professional advisors are needed for each grant application, generates stakeholder briefs, and helps prepare for meetings with accountants, lawyers, and Malta Enterprise.
Governance Rules
Drafts briefing materials for human review. Never misrepresents the company’s position. Recommends professional consultation for legal and financial matters.
Monitors application status, tracks milestones, and flags upcoming deadlines across active grant applications.
Governance Rules
Read-only status monitoring. Logs all checks. Alerts users to status changes and approaching deadlines.
Researches post-approval reporting requirements, analyses compliance deadlines, and prepares audit documentation.
Governance Rules
Research and advisory only. All recommendations framed as checklists requiring verification. Every action with financial or legal implications requires explicit human sign-off.
The EU AI Act requires that users interacting with conversational AI systems are informed they are communicating with an AI, not a human. This obligation takes effect on 2 August 2026.
Epic Growth has implemented proactive AI disclosure across all conversational agents. Every chat session displays a clear notice identifying the specific agent and stating that it is an AI system operated by Epic Growth. This exceeds the minimum requirement by naming the individual agent rather than disclosing AI use generically.
4 of our 5 agents are classified as Limited Risk under the EU AI Act, subject to Article 50 transparency obligations. The Tracking agent is classified as Minimal Risk (internal status monitoring only) with no specific regulatory obligations beyond voluntary best practices.
None of our agents operate in the High Risk or Unacceptable Risk tiers. We do not perform recruitment screening, credit scoring, biometric identification, or any other high-risk AI activity.
The Malta Digital Innovation Authority (MDIA) is Malta's designated national competent authority under L.N. 226 of 2025, implementing EU AI Act Article 70. MDIA has enforcement, auditing, and regulatory sandbox powers.
Epic Growth is an active member of Malta Startup Space, a community of tech startup founders, investors, and developers promoting Malta's tech ecosystem. As our agents mature, we intend to explore MDIA's regulatory sandbox programme and ITAS voluntary certification.
Our agents process company profile data (name, sector, employee count, turnover, location, project details) to match businesses with relevant grant schemes. How this data is stored depends on whether you are signed in: signed-in users' data is stored in our PostgreSQL database (with a local storage working copy), while guest users' data remains in browser local storage only.
Chat interactions are processed in real time via the Anthropic API (Claude). Epic Growth logs minimal audit metadata (agent name, event type, timestamp, and a truncated summary of the query) to a server-side audit log retained for 12 months, as required by our AI Constitution. Full conversation content is not stored server-side.
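The audit record described above can be sketched as a small data structure plus a retention check. The field names and the 80-character truncation limit below are illustrative assumptions; only the four metadata fields and the 12-month retention period come from our constitution.

```typescript
// Illustrative sketch: field names and the truncation limit are
// hypothetical; the four fields and 12-month retention are per
// the AI Constitution.
interface AuditLogEntry {
  agent: string;        // e.g. "discovery"
  eventType: string;    // e.g. "chat_message"
  timestamp: string;    // ISO 8601
  querySummary: string; // truncated; never the full conversation
}

const SUMMARY_MAX_CHARS = 80; // hypothetical limit
const RETENTION_MONTHS = 12;  // constitutional minimum

function makeAuditEntry(agent: string, eventType: string, query: string): AuditLogEntry {
  const querySummary = query.length > SUMMARY_MAX_CHARS
    ? query.slice(0, SUMMARY_MAX_CHARS) + "…"
    : query;
  return { agent, eventType, timestamp: new Date().toISOString(), querySummary };
}

// An entry becomes purgeable once it is older than the retention window.
function isPurgeable(entry: AuditLogEntry, now: Date): boolean {
  const cutoff = new Date(now);
  cutoff.setMonth(cutoff.getMonth() - RETENTION_MONTHS);
  return new Date(entry.timestamp) < cutoff;
}
```

Truncating at write time, rather than storing the full query and redacting later, is what guarantees that complete conversation content never reaches the server-side log.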
All server-side data — including our database and audit logs — is hosted on Google Cloud Platform in the EU (region Belgium). Your data does not leave the EU for storage purposes. For full details, see our Privacy Policy.
Transparency into how data flows between the structured workspace UI and the AI chat for each agent.
Every agent workspace has two interfaces: a structured UI (forms, cards, checklists, dashboards) and an AI chat (freeform questions to the same agent). Both interfaces share the same underlying data.
This means the chat knows what the workspace knows — your company profile, active project, budget entries, checklist progress, and any custom configuration. Neither interface operates in isolation.
| Agent | Shared Context |
|---|---|
| Discovery | Company profile, computed optimal stack, custom stack edits, workspace state |
| Preparation | Company profile, active project, budget summary, checklist progress |
| Coordination | Company profile, active project, workspace state |
| Tracking | Company profile, full application pipeline (all projects), workspace state |
| Compliance | Company profile, active project (approved grants), workspace state |
All data shared with AI agents is provided voluntarily by the user. For signed-in users, this data is stored in our PostgreSQL database (with a local storage working copy). For guest users, it remains in browser local storage only. Workspace data is not transmitted to external services except during active chat sessions with the Anthropic API, where it is included in the system prompt for contextual awareness.
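The storage and transmission rules above reduce to two simple decisions: where workspace data lives (based on sign-in state) and the single path by which it reaches an external service (the system prompt of an active chat session). The sketch below illustrates this; the function names and prompt wording are hypothetical, not our actual code.

```typescript
// Illustrative sketch: names and prompt text are hypothetical.
type StorageTarget = "postgres+localStorage" | "localStorage-only";

// Signed-in users: server-side PostgreSQL with a local working copy.
// Guests: data never leaves the browser.
function storageTargetFor(signedIn: boolean): StorageTarget {
  return signedIn ? "postgres+localStorage" : "localStorage-only";
}

interface WorkspaceContext {
  companyProfile: Record<string, unknown>;
  activeProject?: Record<string, unknown>;
  workspaceState: Record<string, unknown>;
}

// Workspace data reaches the Anthropic API only here, inside the
// system prompt of an active chat session; there is no other
// external transmission path.
function buildSystemPrompt(agent: string, ctx: WorkspaceContext): string {
  return [
    `You are the ${agent} agent on the Epic Growth Foundation platform.`,
    `Shared workspace context: ${JSON.stringify(ctx)}`,
  ].join("\n");
}
```

Funnelling all context through one prompt-building function makes the "chat knows what the workspace knows" guarantee auditable: there is exactly one place to inspect what each agent was shown.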
| Review Type | Frequency | Scope |
|---|---|---|
| Regulatory | Within 30 days of any EU AI Act or MDIA update | Compliance framework |
| Operational | Quarterly | Agent performance & governance rules |
| Foundational | Annually | Full constitution review |
| Incident-triggered | As needed | Any part relevant to the incident |
| Architecture | Within 7 days of data handling, storage, or agent changes | Data handling claims, agent descriptions, privacy policy |
If you have questions about our AI governance, want to report a concern about agent behaviour, or need to exercise your data rights, please use our contact form and select "AI Governance / Data Rights". For details on how we handle your personal data, see our Privacy Policy.