Published by the Digital Tourism Think Tank

AI Transparency Framework

A suite of open frameworks for tourism organisations, public agencies, enterprises and industry bodies to record, measure and communicate their use of artificial intelligence across all work types and outputs.

Artificial intelligence is now embedded in how organisations across tourism and destination management write, research, design, communicate and deliver. In most cases, that involvement is invisible. Audiences have no way to assess how content was produced. Procurement and commissioning relationships rest on assumptions about human effort that AI has materially changed. The environmental costs of AI use go unreported. The ethical dimensions of AI-generated content go unassessed.

The DTTT AI Transparency Framework provides practical, self-service tools for addressing that absence. It comprises four independently versioned models, each addressing a different dimension of AI use. Together they cover what AI contributed, how much time it saved, what it cost the environment, and whether the content produced was ethically sound. Each model can be adopted and applied independently or used as part of a complete disclosure approach.

All models are published under a Creative Commons Attribution 4.0 licence. Any organisation can adopt and apply them. DTTT uses the framework across its own work and invites the wider industry to do the same.

Early adoption — have your say. The framework is designed to evolve through collective adoption. Working groups for each model, a cross-model governance committee and a case study programme are all open for organisations to join. DTTT will convene the first governance committee session at X. Design Week 2026, with sessions to test, debate and refine the framework in practice.

Whether you want to register as an adopter, join a working group or contribute to governance, express your interest using the button below.

Model 1
AI Transparency Model
v1.1

Records the balance of human and AI contribution to any piece of work using a five-point A to E scale. Applicable to any organisation and any output type, from a social media post to a major strategy document.

Model 2
Productivity and Delivery Extension Model
v1.2

Measures time savings and delivery capability extension. Supports internal performance reporting, supplier disclosure and demonstrating the tangible value of AI adoption to clients, boards and funders.

Model 3
AI Environmental Impact Model
v0.5

An indicative A to E scale for communicating the relative energy intensity of AI use. A disclosure tool, not a sustainability certification, for organisations reporting honestly on the environmental dimension of their AI activity.

Model 4
AI Content Integrity Model
v0.1

A risk classifier for the ethical dimensions of AI-generated and AI-manipulated visual content. Covers consent, authenticity and audience disclosure, addressing questions that standard image licensing frameworks were not designed to answer.

AI Transparency Toolkit

Score your work and generate a disclosure card

The free self-grading toolkit lets you assess any piece of work against any combination of the four models and generate an embeddable disclosure card for reports, articles, websites and publications.

Use the toolkit ›

Before you begin — AI Ethics Consideration Tool

Assess the ethical dimensions of any AI task before starting

Five questions that surface the ethical, legal and data considerations relevant to your specific AI task, producing a routing recommendation and a tailored checklist before work begins.

Open the ethics tool ›

Who the framework is for

The framework is designed for any organisation that uses AI in producing work for or about the tourism sector. This includes destination management and marketing organisations, national and regional tourism bodies, enterprise and investment agencies, local authorities with tourism responsibilities, travel and hospitality businesses, event and conference organisers, tourism-focused research institutions, and agencies and consultancies working in the sector.

It applies equally to public-sector bodies and commercial operators, to large organisations and independent practitioners, and to those at the early stages of AI adoption and those with mature AI workflows.

Consumer-facing and public communications

Content published to general audiences, including editorial articles, marketing campaigns, social media posts, imagery, video and any other AI-generated or AI-assisted material where transparency builds public trust. The framework gives audiences the information they need to assess how content was produced, and gives publishers a consistent standard for making that disclosure.

Organisational and strategic work

Research reports, destination strategies, funding applications, policy papers, presentations, briefings and internal documents. The Transparency, Productivity and Environmental models all apply to knowledge-based and analytical work, enabling clear disclosure to clients, commissioners and boards while demonstrating the real efficiency gains AI delivers.

Procurement and supply chain relationships

Requiring agencies, vendors and suppliers to apply the framework as a condition of engagement establishes a new standard of disclosure across the supply chain. The Content Integrity Model includes a procurement clause for AI-generated visual content. Organisations that make adoption a requirement set a precedent that raises standards across the sector as a whole.

JSON schemas

All four model definitions are published as JSON schema files under CC BY 4.0, including full changelogs, grade definitions and boundary rationale. The versions page holds the complete public release log and schema downloads for all versions.

View versions and schemas ›

Governance and community

All models are developed openly, versioned publicly and governed by a committee of industry representatives. Working groups, adopter registration, governance participation, case study submission and the academic research partnership programme are all described on the Research tab.

Research and community ›

Model 1 of 4

AI Transparency Model

A standard scale for recording and communicating AI involvement in any piece of work. It applies to any organisation and any output type, covering the full range from minimal AI use to fully AI-generated content.

Version: v1.1 · Status: Published · Released: March 2026

The full scale

The scale has two declared extremes and five graded bands. The declared extremes sit outside the A to E grading system and are recorded on the Report Card without a letter grade.

Human Created (0%) · Declaration — no grade
No AI involvement at any stage. Declared directly. Recorded on the Report Card without a letter grade.

Grade A · Human Led · 1–10%
AI influence is negligible. AI may have been consulted briefly for minor suggestions, but the work is substantially human in origin.
Typical use: work where AI informed a single background section or was used briefly as a sounding board before production began.

Grade B · AI-Assisted · 11–36%
AI used for research, ideation or drafting support. The human retains full creative and strategic direction throughout.
Typical use: research papers where AI supported literature review; presentations where AI assisted with background material.

Grade C · Collaborative · 37–63%
Balanced human and AI contribution. AI handles research, initial drafting and structural suggestions while the human directs strategy, edits and refines. The most common grade across knowledge-based outputs.
Typical use: standard reports, strategy documents and analytical briefings across most professional contexts.

Grade D · AI-Generated · 64–89%
AI-led production with human review, quality control and final judgement. The human shapes direction but AI produces the bulk of the content.
Typical use: data-heavy research compilations, rapid benchmarking outputs and large-scale content catalogues.

Grade E · AI Led · 90–99%
Minimal human involvement. Output is substantially unedited AI generation; human contribution is limited to brief, direction and light review.
Typical use: automated template outputs; AI-generated content reviewed without significant editing.

Fully AI (100%) · Declaration — no grade
No human authorship beyond the initial brief. Declared directly. Recorded on the Report Card without a letter grade.

Grades A and E are deliberately narrow. Both are strong claims that require genuine confidence. Grade C is the widest band because collaborative working is the most common pattern across professional knowledge work. The two declared extremes sit outside the letter grade scale entirely.
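The band boundaries above can be expressed as a small lookup function. A minimal Python sketch using the v1.1 boundaries as published; the JSON schema (dttt-transparency-v1.1.json) remains the authoritative definition:

```python
def transparency_grade(ai_pct: int) -> str:
    """Map an AI-involvement percentage (0-100) to the v1.1 scale.

    The two extremes (0% and 100%) are declarations, not grades.
    """
    if not 0 <= ai_pct <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if ai_pct == 0:
        return "Human Created (declaration, no grade)"
    if ai_pct == 100:
        return "Fully AI (declaration, no grade)"
    if ai_pct <= 10:
        return "A - Human Led"
    if ai_pct <= 36:
        return "B - AI-Assisted"
    if ai_pct <= 63:
        return "C - Collaborative"
    if ai_pct <= 89:
        return "D - AI-Generated"
    return "E - AI Led"
```

For example, a deliverable assessed at 45% AI involvement falls in the Collaborative band (Grade C), while 10% and 11% sit either side of the A/B boundary.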


Assess your work

Select the option that best describes this piece of work. If AI played some part, use the grader to set a precise percentage.


The Transparency Report Card

Once a piece of work is graded, the result and supporting detail are recorded in a Transparency Report Card. Two formats are available: a Standard Report Card suitable for all outputs, and a Detailed Declaration Card for significant, strategic or public-facing work.

The Detailed Declaration Card includes the deliverable title, issuing organisation, prompt quality notes, source documents and any additional scores from the Productivity and Environmental models. It is designed to sit at the front of a published or delivered document as a formal AI disclosure record. An embeddable HTML version is generated by the AI Transparency Card toolkit.

AI Involvement Grade: A to E with percentage, or a Human Created or Fully AI declaration.
Models Used: every AI model used (Claude, ChatGPT, Gemini, Perplexity, Midjourney, DALL-E and others).
AI-Supported Tasks: drawn from the standard list (Research, Drafting, Data analysis, Brainstorming, Structure, Editing, Visual mockups, Prompt engineering, Translation, Fact-checking, Code generation, Presentation design).
AI Contribution Summary: two to three sentences on what AI specifically contributed.
Human Contribution Summary: two to three sentences on strategic direction, prompt quality, source documents provided and editorial decisions.
Transparency Note: a single sentence for the footer of the deliverable itself.
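The Report Card fields can be carried as a simple record in tooling. A sketch only: the field names below are this example's own rendering, not identifiers from the published schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TransparencyReportCard:
    """Standard Report Card fields (illustrative; field names are
    this example's own, not part of the published JSON schema)."""
    grade: str                       # "A".."E", "Human Created" or "Fully AI"
    ai_percentage: Optional[int]     # None for the two declarations
    models_used: list = field(default_factory=list)
    ai_supported_tasks: list = field(default_factory=list)
    ai_contribution_summary: str = ""
    human_contribution_summary: str = ""
    transparency_note: str = ""

card = TransparencyReportCard(
    grade="C", ai_percentage=48,
    models_used=["Claude", "Perplexity"],
    ai_supported_tasks=["Research", "Drafting", "Editing"],
    transparency_note="Produced collaboratively with AI (Grade C, 48%).",
)
```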
Standard grading prompt — copy and use

You are assessing AI involvement in a completed deliverable using the DTTT AI Transparency Model v1.1. Apply the following rules exactly.

Scope: generative AI tools only (ChatGPT, Claude, Gemini, Copilot, Midjourney and similar). Spellcheck, grammar tools, search engines and predictive text do not count. Prompt writing and source document selection are human contributions.

Grade boundaries: Human Led (A) 1–10% / AI-Assisted (B) 11–36% / Collaborative (C) 37–63% / AI-Generated (D) 64–89% / AI Led (E) 90–99%. Human Created = 0% declaration. Fully AI = 100% declaration.

Assess only the final deliverable. Do not weight formatting unless AI was used to design it. Focus on content authorship, research origin, strategic direction and quality of human editing. Produce: grade with percentage and one-sentence justification, models used, task list, AI contribution summary (2 to 3 sentences), human contribution summary including prompt quality and source documents, transparency note. Be accurate. If AI contribution was limited, record it as limited.

Enhanced grading prompt — includes full JSON schema

Recommended for significant deliverables, first-time users and any output that will be publicly disclosed. Paste the text below into your AI tool. The full model definition is inlined so the AI applies the correct boundaries, rationale and scope without any assumptions.

Alternatively, attach dttt-transparency-v1.1.json directly to your conversation and use the standard prompt above — most AI tools can read an attached JSON file.

JSON schema

The canonical machine-readable definition of this model is published as a JSON schema file under CC BY 4.0. It is the authoritative reference for integrations, tooling and version tracking. Previous versions and the complete version log are available on the Versions and schemas page.

dttt-transparency-v1.1.json — AI Transparency Model, current published version

Download JSON ›

Not yet started? Use the AI Ethics Consideration Tool to assess the ethical dimensions of your task before beginning.

Ethics tool ›

Use the AI Transparency Card toolkit to assess your work against this model and generate an embeddable disclosure card.

Use the toolkit ›

Model 2 of 4

Productivity and Delivery Extension Model

A framework for measuring the time savings and capability gains that AI delivers across any type of work. Applicable to individual practitioners, teams and organisations of any size, and useful for internal performance reporting, client disclosure and board-level value demonstration.

Version: v1.2 · Status: Published · Released: March 2026

The model captures two dimensions that a simple time-saving estimate cannot express on its own. The Productivity Score records how much faster a piece of work was completed. The Delivery Extension Score records whether AI enabled something that would not otherwise have been possible within normal capacity: a richer format, access to an adjacent discipline, or an entirely new form of output. Together they give a more honest picture of what AI contributed than either measure could provide alone.

Both scores are self-assessed by the practitioner responsible for the work, using the decision rules below. Scores should be specific and justifiable. Vague or optimistic estimates undermine the value of the record for both disclosure and future planning purposes.

The two scores

Each score runs from 1 to 5 and is recorded on the Transparency Report Card alongside the AI Transparency grade. They measure different dimensions of AI value and are assessed independently.

AI Productivity Score

Core question: how long would this work have taken without AI?

1 · Minimal gain: less than 10% time saved. Roughly the same effort either way.
2 · Moderate gain: 10–25% time saved. Faster in places; human effort still dominated.
3 · Significant gain: 25–50% time saved. Research, structuring or iteration noticeably faster.
4 · High gain: 50–75% time saved. This would have taken roughly twice as long without AI.
5 · Transformative: 75%+ time saved. This depth of output was not feasible within the hours available without AI.

Delivery Extension Score

Core question: how far did AI extend the delivery of the team's expertise?

1 · Standard delivery: AI used within the existing approach. Same format and scale as without it.
2 · Extended scale: delivered at greater scale or depth, with more sources and greater breadth, faster without loss of quality.
3 · Enhanced format: expertise delivered in a richer format than the standard toolkit supports: better structured, more polished and more thoroughly evidenced.
4 · Adjacent capability: work extended into a technical discipline adjacent to core expertise. Judgement and direction are the team's; AI provided production access they would not otherwise hold.
5 · New capability: a deliverable type entirely new to this team member's practice. Their expertise is the substance; AI gave access to a delivery form not previously in their toolkit.

Justifications must be specific for both scores. Name the tasks affected and describe what would have been different without AI.


Self-assessment tool

Select a score for each dimension. The tool produces a combined overall score, a plain-language reading and suggested justification prompts for your Report Card.

Select your scores

Select one option in each column. Your chosen score expands to show its full definition. Select both to see the combined result.


How the Combined AI Value score is calculated. The score is produced by a 5×5 lookup matrix rather than a simple average. The matrix rewards strong performance across both dimensions; a combined score of 4 or 5 requires both individual scores to be high. A single strong score on one dimension produces a combined 3 at most.

On ceiling effects. Combinations of 5/4, 4/5 and 5/5 all reach a combined score of 5. This is a property of summary scoring. The individual scores always appear alongside the combined score, so the distinction between a 5/4 and a 5/5 remains visible.
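The published lookup matrix lives in dttt-productivity-v1.2.json. The values below are an illustrative reconstruction, not the canonical table: they are chosen only to satisfy the stated properties (a combined 4 or 5 requires both individual scores of 4 or above; 5/4, 4/5 and 5/5 all map to 5; a single strong score caps the combined result at 3).

```python
# Illustrative 5x5 lookup consistent with the stated properties;
# the authoritative matrix is in dttt-productivity-v1.2.json.
# Rows: Productivity Score 1-5. Columns: Delivery Extension Score 1-5.
COMBINED_VALUE = [
    [1, 1, 2, 2, 3],
    [1, 2, 2, 3, 3],
    [2, 2, 3, 3, 3],
    [2, 3, 3, 4, 5],
    [3, 3, 3, 5, 5],
]

def combined_ai_value(productivity: int, extension: int) -> int:
    """Look up the Combined AI Value for a pair of 1-5 scores."""
    if not (1 <= productivity <= 5 and 1 <= extension <= 5):
        raise ValueError("both scores run from 1 to 5")
    return COMBINED_VALUE[productivity - 1][extension - 1]
```

Because both individual scores are always reported alongside the combined result, the ceiling at 5 loses no information in practice.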

JSON schema

The canonical machine-readable definition of this model, including the scoring matrix and Combined AI Value lookup table, is published as a JSON schema file under CC BY 4.0.

dttt-productivity-v1.2.json — Productivity and Delivery Extension Model, current published version

Download JSON ›

Use the AI Transparency Card toolkit to score your work against this model and generate an embeddable disclosure card.

Use the toolkit ›

Explore productivity measurement, sector benchmarking and the DTTT research partnership programme.

Go to Research ›

Model 3 of 4

AI Environmental Impact Model

An indicative disclosure scale for communicating the relative energy intensity of AI use. Applicable to any piece of work, any output type and any level of AI activity.

Version: v0.5 · Status: Published · Released: March 2026

This model is a disclosure tool, not a sustainability certification. It communicates the relative energy intensity of different AI tasks rather than making verified carbon claims. Grade A does not mean zero impact; it means the indicative impact is negligible at the current state of measurement. Scores will be tightened at v1.0 as first-party provider data becomes available under EU AI Act requirements from August 2026.

As AI use becomes routine in professional and creative work, its environmental footprint is a growing area of public and regulatory interest. Verified per-model energy figures are not yet available for most AI systems at the level of detail needed for precise accounting, and the methodologies providers do publish use incompatible approaches that make comparison difficult.

This model takes a practical approach to that challenge. Rather than claiming a precision the data cannot support, it uses task type, model category and usage intensity to produce an indicative A to E grade that any organisation can apply and disclose. The grade sits alongside the AI Transparency grade on the Report Card and can be included in deliverable footers as a standard transparency measure.


How scoring works

Every assessment combines three factors. Where a session involved multiple task types, score based on the highest-intensity task used.

Axis 1: Task type

Text: writing, researching, editing, analysing, translating or generating code.
Image generation: creating images or visual assets using AI.
Extended reasoning or agentic: using an AI tool's advanced research or reasoning mode, or running multi-step automated workflows.
Video generation: creating video content using AI.

Axis 2: Model category

On-device or lightweight: a lightweight or on-device tool.
Standard cloud: ChatGPT, Claude, Gemini or similar used for everyday tasks.
Frontier or large: an advanced or specialist tool, or a general assistant in its most powerful mode.

Axis 3: Usage intensity

Light (under ~30 minutes, single task): move one grade down (minimum A).
Moderate (~1–2 hours, several tasks): no change.
Heavy or continuous (throughout a working day or project): move one grade up (maximum E).

Base grade grid

Task type · On-device · Standard cloud · Frontier
Text · A · A · B
Extended reasoning · B · C · D
Image generation · B · C · D
Video generation · D · D · E
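The three-axis scoring can be sketched directly from the grid and intensity rules above: take the base grade from the task-type and model-category grid, shift it one step for usage intensity, and clamp the result to the A to E range. The key names here are this example's own; the canonical definitions are in the model's JSON schema.

```python
# Sketch of the three-axis environmental scoring described above.
GRADES = "ABCDE"

BASE_GRID = {  # task type -> (on-device, standard cloud, frontier)
    "text":               ("A", "A", "B"),
    "extended_reasoning": ("B", "C", "D"),
    "image":              ("B", "C", "D"),
    "video":              ("D", "D", "E"),
}
MODEL_COLUMN = {"on_device": 0, "standard_cloud": 1, "frontier": 2}
INTENSITY_SHIFT = {"light": -1, "moderate": 0, "heavy": +1}

def environmental_grade(task: str, model: str, intensity: str) -> str:
    """Base grade from the grid, shifted by intensity, clamped to A-E."""
    base = BASE_GRID[task][MODEL_COLUMN[model]]
    shifted = GRADES.index(base) + INTENSITY_SHIFT[intensity]
    return GRADES[max(0, min(4, shifted))]
```

A light frontier image-generation session, for example, starts at D and shifts down to C; heavy frontier video use starts at E and stays there.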

The A–E grade scale

Colour coding: green for negligible, amber for moderate, red for very high.

Grade A · Low intensity
Low energy use at session level. Disclose as part of standard transparency reporting.
Typical use: brief text query, on-device tools, single query to any text AI.

Grade B · Low-moderate
Low to moderate energy use. Disclose as part of standard transparency reporting.
Typical use: full working day on text tasks with Claude or ChatGPT; occasional AI image generation.

Grade C · Moderate
Moderate energy use. Disclose in deliverable transparency notes.
Typical use: continuous frontier model use across a multi-day project; moderate-volume image generation campaign.

Grade D · High
High energy use. Disclose and consider lower-intensity alternatives for future work.
Typical use: frontier image generation campaign; extended deep research mode use; any video generation.

Grade E · Very high
Very high energy use. Disclose and actively consider lower-intensity alternatives before committing to this approach.
Typical use: heavy AI video production; frontier reasoning at high volume; large-scale automated content generation.

Assess your own work

Environmental impact self-grader

Answer three questions to get an indicative environmental grade.

1. What did you use AI for?
Select the most intensive task type used.
2. What type of tool did you use?
A general assistant like ChatGPT, Claude or Gemini is Standard cloud for everyday tasks.
3. How intensively did you use it?


Methodology sources

  • IEA — Electricity 2024 · 2024 · Data centre electricity consumption projections · iea.org
  • Goldman Sachs — AI is Poised to Drive 160% Increase in Data Center Power Demand · 2024 · AI power demand projections
  • Google — Gemini environmental impact disclosure · 2025 · First major first-party per-query disclosure; reports 0.24 Wh median text query
  • Epoch AI — How Much Energy Does ChatGPT Use? · 2025 · ~0.3 Wh per GPT-4o query · epoch.ai
  • Hugging Face / Luccioni — What kind of environmental impacts are AI companies disclosing? · 2025 · Current disclosures use incompatible methodologies; supports a band-based approach · huggingface.co
  • Bailey / Medium — The Energy Impact of LLMs: Where Are We in 2025? · 2025 · Image ~60x text; video ~2,000x text · medium.com
  • MIT Technology Review — We did the math on AI's energy footprint · 2025 · technologyreview.com
  • EU AI Act — General-Purpose AI Code of Practice · 2025 · Signed by OpenAI and Anthropic July 2025; enforcement August 2026

JSON schema

The canonical machine-readable definition of this model, including the base grade grid, axis definitions and methodology sources, is published as a JSON schema file under CC BY 4.0.

dttt-environmental-v0.5.json — AI Environmental Impact Model, current published version

Download JSON ›

Use the AI Transparency Card toolkit to assess your work against this model and generate an embeddable disclosure card.

Use the toolkit ›

Model 4 of 4

AI Content Integrity Model

A risk classification framework for the ethical dimensions of AI-generated and AI-manipulated visual content. Applicable to any organisation publishing AI-assisted imagery, video, animation or synthetic personas.

Version: v0.1 · Status: Published · Released: March 2026

About this model

This model applies to AI-generated imagery, AI-animated photography, synthetic video and AI personas. Text-only AI content is not in scope; consent and disclosure obligations for text are addressed by existing editorial standards and the AI Transparency Model.

The AI Transparency Model records how much AI was involved in producing a piece of work. That is a necessary disclosure, but it does not address a separate and equally important question: whether the AI involvement was ethically sound in relation to the people depicted and the audiences reached. Standard image licensing and photography rights frameworks address ownership and permission to use, but they were not designed for AI manipulation of real people's likenesses, and they do not require disclosure of AI involvement to the audience. This model fills that gap.

The model assesses three things: what AI did to the content, what authorisation exists for any real people depicted, and what the audience was told. From those three inputs it produces an Integrity classification (Clear, Caution, High Risk or Not Recommended) and a machine-readable disclosure code that can travel with the asset through production and delivery pipelines. The classification sits alongside the Transparency grade on the Report Card; the two scores answer different questions and are assessed independently.

Disclosure is necessary but not sufficient. An AI declaration does not resolve an underlying consent failure. A piece of content can carry a transparent AI label and still be ethically unacceptable if the people depicted did not consent to AI manipulation of their image.


The three assessment axes

Every piece of AI visual content is assessed against three axes. The results combine to produce an Integrity classification and a machine-readable disclosure code. Each axis is scored independently using the codes below.

Axis 1: Intervention

What AI did to or created in the content. This determines the baseline level of ethical consideration required.

AI-TXT · Text only: AI generated or assisted with written content. No imagery, no real people manipulated. Baseline risk: low.
AI-IMG · Synthetic imagery: AI-generated visuals with no identifiable real individuals. Baseline risk: low.
AI-ENH · Image enhancement: AI animation or enhancement of real photography where any people are unidentifiable (distant, backs turned, faces obscured). Baseline risk: caution.
AI-ANIM · Animation of real people: real photographs manipulated to show identifiable individuals moving, performing actions or appearing altered. Baseline risk: high risk.
AI-CHAR · Character persona: a clearly fictional AI persona (creature, mascot or stylised character) not intended to pass as human. Baseline risk: caution.
AI-PERS · Human-passing persona: a synthetic human-like persona designed to appear realistic. Audiences may not recognise it as AI without explicit disclosure. Baseline risk: not recommended.

Axis 2: Consent

What authorisation exists for real subjects depicted in the content. A standard photography release covers the use of an image; it does not cover AI manipulation of the people in that image.

CS-NA · Not applicable: no real people depicted. Risk: low.
CS-ANON · Anonymous: people visible but cannot reasonably be identified. Risk: low.
CS-AI · AI consent obtained: subjects provided specific written consent for AI manipulation of their image. Must be explicit — a standard release does not qualify. Risk: low.
CS-STD · Standard release only: a standard photography release in place but no specific AI consent. Standard releases do not cover AI manipulation. Risk: high risk.
CS-NONE · No consent: real, identifiable individuals depicted with no model release or consent of any kind. Risk: not recommended.
CS-EST · Estate / deceased: deceased or historical individuals, estate consent not confirmed. Risk: not recommended.

Axis 3: Disclosure

What the audience was told about the AI involvement in the content. Meaningful disclosure needs to be visible to the audience, not only recorded in metadata or filed with a regulator.

DC-PROM · In-content label: a visible label, watermark or on-screen notice embedded in the content itself, visible regardless of whether the audience reads captions. Standard: best practice.
DC-CAP · Caption or post: AI use acknowledged in accompanying text (caption, post copy or article description), visible on-platform to anyone who reads the copy. Standard: acceptable.
DC-META · Metadata only: AI disclosure recorded in file metadata or provided to the platform for regulatory compliance. Not visible to the audience. Standard: insufficient.
DC-NONE · No disclosure: no indication given to the audience that AI was involved. Not recommended for any visual or persona content. Standard: not recommended.

Integrity classification

The three axes combine to produce one of four classifications. This is a risk classifier, not a quality scale. It does not measure how much AI was used; it assesses whether the ethical conditions for responsible publication have been met.

Clear: ethical use confirmed. Consent and disclosure meet the standard for responsible practice. Action: proceed with publication.
Caution: one or more aspects of consent or disclosure fall below the recommended standard. Action: editorial review advised before publishing; document the review decision.
High risk: significant ethical and potentially legal concerns identified. Action: do not publish without formal legal and editorial review.
Not recommended: fundamental ethical issue identified that cannot be resolved through disclosure alone. Action: do not publish.
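The published classification logic is defined in dttt-integrity-v0.1.json. As a sketch only, the combination can be modelled as worst-axis-wins, using the per-code risk levels in the tables above; mapping the disclosure standards to risk levels (acceptable treated as clear, insufficient treated as caution) is this example's assumption, not the model's stated rule.

```python
# Sketch: worst risk level across the three axes sets the
# classification. The authoritative logic is in dttt-integrity-v0.1.json.
RISK_ORDER = ["Clear", "Caution", "High Risk", "Not Recommended"]

AXIS_RISK = {
    # Axis 1: intervention baseline risk (from the model)
    "AI-TXT": "Clear", "AI-IMG": "Clear", "AI-ENH": "Caution",
    "AI-ANIM": "High Risk", "AI-CHAR": "Caution",
    "AI-PERS": "Not Recommended",
    # Axis 2: consent risk (from the model)
    "CS-NA": "Clear", "CS-ANON": "Clear", "CS-AI": "Clear",
    "CS-STD": "High Risk", "CS-NONE": "Not Recommended",
    "CS-EST": "Not Recommended",
    # Axis 3: disclosure standard -> risk (this example's mapping)
    "DC-PROM": "Clear", "DC-CAP": "Clear",
    "DC-META": "Caution", "DC-NONE": "Not Recommended",
}

def integrity_classification(intervention: str, consent: str,
                             disclosure: str) -> str:
    """Return the worst risk level across the three axis codes."""
    risks = [AXIS_RISK[code] for code in (intervention, consent, disclosure)]
    return max(risks, key=RISK_ORDER.index)
```

Under this sketch, AI-animated footage with explicit AI consent and caption disclosure still classifies as High Risk, reflecting the principle that disclosure alone does not lower intervention risk.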

The disclosure code system

Every assessed piece of content produces a three-part machine-readable disclosure code combining one value from each axis. The code is designed to be embedded in asset metadata (EXIF, XMP, IPTC), included in agency delivery notes, referenced in procurement clauses and appended to social media captions. It is readable by humans, embeddable by systems and usable as a supply chain compliance check.

Disclosure code format
AI-[type] · CS-[consent] · DC-[disclosure]
Example: AI-ANIM · CS-AI · DC-CAP — animated real people, explicit AI consent obtained, caption-level disclosure


Procurement use. A suggested contract clause for commissioning AI-generated or AI-manipulated content: "All AI-generated or AI-manipulated visual content delivered under this contract must carry a DTTT disclosure code in the format [AI-type] · [CS-consent] · [DC-disclosure]. Assets classified as High Risk or Not Recommended require written approval before delivery." Take legal advice before using this clause in live contracts.
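For systems that emit or check codes in delivery pipelines, the format can be composed and validated with a simple pattern. A minimal sketch using the codes defined above:

```python
import re

# Composes and validates the three-part disclosure code described
# above, e.g. "AI-ANIM · CS-AI · DC-CAP".
CODE_PATTERN = re.compile(
    r"^AI-(TXT|IMG|ENH|ANIM|CHAR|PERS) · "
    r"CS-(NA|ANON|AI|STD|NONE|EST) · "
    r"DC-(PROM|CAP|META|NONE)$"
)

def build_code(intervention: str, consent: str, disclosure: str) -> str:
    """Join three axis codes and reject anything outside the model."""
    code = f"{intervention} · {consent} · {disclosure}"
    if not CODE_PATTERN.match(code):
        raise ValueError(f"not a valid disclosure code: {code}")
    return code
```

For example, build_code("AI-ANIM", "CS-AI", "DC-CAP") returns the code shown in the worked example above, while an unknown axis value raises an error, which is the behaviour a supply chain compliance check needs.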


Assess your own content

Content Integrity self-grader

Answer three questions about a specific piece of AI content to receive an Integrity classification and a ready-to-use disclosure code.

1. Intervention: what did AI do?
Select the type of AI intervention applied to this content.
2. Consent: what authorisation exists?
Select the consent status for real subjects depicted.
3. Disclosure: what was the audience told?


JSON schema

The canonical machine-readable definition of this model, including all intervention, consent and disclosure codes, classification logic and the disclosure code system, is published as a JSON schema file under CC BY 4.0.

dttt-integrity-v0.1.json — AI Content Integrity Model, current published version

Download JSON ›

Use the AI Transparency Card toolkit to assess your content against this model and generate a machine-readable disclosure code and embeddable card.

Use the toolkit ›

Research and community

Research, measurement and community

Tools for quantifying AI productivity gains, a programme for independent research, and a growing community of practitioners and institutions building knowledge around this framework.

The Productivity and Delivery Extension Model records how much time AI saves and how far it extends delivery capability on individual pieces of work. This section takes the next step: turning those individual scores into aggregate measures, setting targets and building the evidence base for sector-level productivity claims.


Productivity gain calculator

Enter the details of any AI-assisted piece of work to calculate the estimated hours saved, equivalent cost saving and annualised productivity rate. Results are indicative and based on your own inputs; they are transparent estimates and not audited figures.

These figures are transparent estimates based on your Productivity Score range. The midpoint of each band is used for calculation. They are indicative and not audited, and should be presented as such. DTTT encourages organisations to record the assumptions behind any aggregate productivity claim.
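The midpoint-of-band calculation can be sketched as follows. The time-saved fraction is measured against the without-AI counterfactual, so if the saved fraction is s, the work would have taken actual_hours / (1 − s) without AI. Band 5 is open-ended (75%+); treating its midpoint as 87.5% is this example's assumption.

```python
# Midpoints of the Productivity Score bands (band 5's midpoint is
# an assumption, since the band is open-ended at 75%+).
BAND_MIDPOINT = {1: 0.05, 2: 0.175, 3: 0.375, 4: 0.625, 5: 0.875}

def estimated_hours_saved(actual_hours: float, score: int) -> float:
    """Estimated hours saved relative to the without-AI counterfactual.

    With saved fraction s, the counterfactual duration is
    actual_hours / (1 - s), so the saving is actual_hours * s / (1 - s).
    """
    s = BAND_MIDPOINT[score]
    return actual_hours * s / (1 - s)
```

A 10-hour deliverable scored P4, for instance, implies a counterfactual of roughly 26.7 hours and therefore roughly 16.7 hours saved: indicative figures, as the note above stresses, not audited ones.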


Benchmarking and sector-level measurement

Individual productivity scores become significantly more valuable when aggregated across an organisation's portfolio of work and compared against sector peers. A single P3 score tells you AI saved time on one task. A consistent average of P3 across 40 deliverables over a quarter tells you something meaningful about the organisation's operational relationship with AI.

Building an organisational baseline

An organisation can begin benchmarking by grading all AI-assisted work consistently using the Productivity Score over a defined period, typically a quarter. From that dataset, three measures are immediately available: average Productivity Score across all graded work, proportion of deliverables reaching P3 or above, and distribution of Delivery Extension Scores by work type. Together these form an organisational productivity baseline against which future periods can be compared and targets set.
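With a shared log of graded work, the three baseline measures can be computed directly. A minimal sketch follows; the record fields (`p`, `d`, `work_type`) and the sample data are assumptions for illustration.

```python
from collections import Counter
from statistics import mean

# Assumed record structure: each graded deliverable logs its
# Productivity Score (1-5), Delivery Extension Score (1-5) and work type.
log = [
    {"p": 3, "d": 2, "work_type": "content"},
    {"p": 4, "d": 3, "work_type": "research"},
    {"p": 2, "d": 1, "work_type": "content"},
    {"p": 4, "d": 5, "work_type": "strategy"},
]

# 1. Average Productivity Score across all graded work.
avg_p = mean(r["p"] for r in log)
# 2. Proportion of deliverables reaching P3 or above.
p3_share = sum(r["p"] >= 3 for r in log) / len(log)
# 3. Distribution of Delivery Extension Scores by work type.
dx_by_type = {
    t: Counter(r["d"] for r in log if r["work_type"] == t)
    for t in {r["work_type"] for r in log}
}

print(f"Average P: {avg_p:.2f}  P3+: {p3_share:.0%}")
```

Run over a full quarter's log, these three numbers form the baseline against which future periods and targets can be compared.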

Setting productivity targets

Once a baseline exists, organisations can set forward-looking targets. A well-formed target specifies a time period, a scope (all AI-assisted work, or a defined category such as content production or research), and a measurable threshold, for example, achieving an average Productivity Score of 3.2 or above across all advisory deliverables by the end of the financial year. The framework provides the instrument; organisations set the ambition level appropriate to their context.

Sector-level benchmarking

If multiple destination organisations report aggregate scores into a shared pool on an anonymised and opt-in basis, DTTT can publish sector-average productivity figures and give individual organisations a sense of where they sit relative to peers. This turns the framework from a private disclosure tool into a shared industry productivity benchmark, relevant not only for internal performance management but for demonstrating the value of AI investment to funders, boards and government bodies.

DTTT is developing a sector benchmarking model. Organisations interested in participating in the initial cohort should contact info@thinkdigital.travel.

Connecting to sector productivity

Tourism faces a long-standing productivity challenge relative to many other sectors. Output per worker has historically been constrained by the labour-intensive nature of hospitality and visitor services, seasonal demand patterns and fragmented supply chains. The digital transformation of destination management has begun to address some of these constraints, but the evidence base for measuring that impact at scale remains thin.

AI adoption is one of the most significant levers now available for improving productivity across knowledge-based work in tourism, from research and strategy to marketing, communications and programme delivery. Demonstrating that impact requires consistent measurement across a range of indicators: time saved per task, volume of output per FTE, cost per unit of delivery, and the relationship between AI capability extension and the breadth of work that teams can realistically take on. These are all dimensions the DTTT Productivity Model captures directly.

Aggregate data from organisations using the framework consistently over time creates the foundation for sector-level productivity analysis. This is relevant not only to individual organisations managing performance and investment decisions, but to national tourism bodies, enterprise agencies and government departments making the case for digitisation investment, demonstrating returns to funders and informing policy on AI adoption in the visitor economy. The research partnership programme described below is designed to support the independent research needed to make that case with rigour.


Research partnership programme

The DTTT AI Transparency Framework generates structured, consistently coded data on AI use, time savings, delivery extension and environmental impact across a growing set of organisations. This is an unusually clean dataset for a field where most productivity research relies on surveys or controlled experiments rather than real organisational output. DTTT is seeking independent academic institutions to develop the research potential of this data across two stages.

Stage 1

Model validation study

Do the Productivity Score self-assessments correlate with independently measured time savings? Are the grade boundaries well-calibrated? Does the Environmental Model's indicative scoring align with emerging first-party disclosure data?

Stage 1 establishes whether the framework is a reliable instrument, a necessary foundation before any sector-level productivity claims can be made with confidence.

Stage 2

Productivity impact study

What is the measurable effect of AI adoption on output volume, quality or cost efficiency across destination organisations? How does consistent AI use translate into improvements in sector productivity indicators? What is the relationship between Delivery Extension Score distribution and the breadth of work organisations can take on?

Stage 2 produces the independent evidence base that elevates the framework from a self-assessment tool to an industry standard with peer-reviewed credibility.

Call for research partners

DTTT is seeking academic institutions with expertise in tourism management, digital economy, or AI and the future of work to develop this research programme. DTTT provides the framework, sector network access and a growing dataset of consistently structured AI use records. The academic partner provides research design, methodological rigour and peer-reviewed output.

Relevant programmes include tourism management, hospitality and digital economy research at universities in the UK, Europe and internationally. Doctoral researchers and early-career academics with relevant interests are also welcome to make contact.

Express interest in research partnership ›

Get involved

The DTTT AI Transparency Framework is most valuable when organisations use it consistently and contribute their experience to its development. The following routes are open to practitioners, researchers and organisations at any stage of adoption, and all are free to join. Whether you want to register as an adopter, join a working group, contribute to governance or share your experience, use the form below to express your interest.

Express your interest

For individuals and organisations who want to actively participate in the framework's development and community.

Register as a framework adopter

Organisations that formally adopt the framework and apply it to their AI work can register as official adopters. Registration gives public acknowledgement of your commitment, listing on the DTTT framework adopters directory, and a direct channel into the governance process for feedback on the models.

Model working groups and governance

Each model has a working group open to practitioners applying the framework in their organisations. Working groups meet periodically to discuss emerging edge cases, proposed changes and practical implementation questions. The most active working group members contribute directly to the governance committee's deliberations — working groups are the main pathway into governance participation.

The governance committee operates at the cross-model level. It reviews change proposals, advises on whether proposed changes warrant a minor or major release, and provides feedback on scoring definitions and applicability across sectors. It is open to practitioners, researchers and industry representatives with relevant expertise. Meetings are conducted by written circulation or videoconference.

Model 1
Transparency Model

Grade boundary calibration, grading consistency across output types, and scope questions on AI tool definitions.

Model 2
Productivity Model

Score calibration across work types, benchmarking methodology and aggregate reporting standards.

Model 3
Environmental Model

Methodology review as provider disclosure data improves, usage intensity thresholds and v1.0 grade boundary revision.

Model 4
Content Integrity Model

Classification logic review, AI-PERS ceiling question, audio deepfake and real-time avatar intervention codes, and procurement clause refinement.

Submit a case study

Organisations applying the framework to real work are invited to submit case studies for publication with attribution. Case studies help other adopters understand how the models apply across different contexts and contribute to the body of practice that will inform future versions. If you have applied the framework to a project or output and would like to share your experience, get in touch and we will discuss how best to do that.

Implementation guidance

How to apply the framework

Practical guidance for applying the DTTT AI Transparency Framework across six common scenarios. Each scenario identifies which models apply, the recommended workflow and the disclosure format that fits the output.

The framework applies to any AI-assisted work, but the right combination of models and the appropriate disclosure format varies by context. Use the quick reference table to identify your scenario, then follow the detailed guidance below.

Before starting any AI task, consider using the AI Ethics Consideration Tool to assess the ethical, legal and data dimensions upfront. The four framework models are applied after the work is complete. Together they form a complete record from pre-use assessment to post-use disclosure.


Quick reference

Which models apply to your scenario.

Consumer and social media: Transparency · Environmental · Content Integrity (visual) · Ethics tool (visual/video)
Procurement and commissioning: Transparency · Productivity · Content Integrity (visual delivery)
Strategic reports: Transparency · Productivity · Environmental
Internal reporting: Transparency · Productivity
Event and programme delivery: Transparency · Environmental (video/large-scale) · Content Integrity (promo assets)
Data analysis and research: Transparency · Productivity · Ethics tool (always)

Scenario 1

Consumer marketing and social media content

AI is widely used in consumer content production across tourism, from caption writing and campaign copy to image generation and video creation. Applying the framework here builds audience trust and establishes a consistent standard for how AI involvement is communicated in public-facing material.

Models to apply

Transparency · Environmental · Content Integrity (any visual content)

Workflow

  1. For imagery, video or content involving real people, use the AI Ethics Consideration Tool before starting.
  2. Complete the work. Record the AI tools used, prompts applied and editorial decisions made during production.
  3. Grade the output using the Transparency Model. Set the AI involvement percentage honestly.
  4. If visual AI content was used, complete a Content Integrity assessment and record the three-part disclosure code.
  5. Assess the Environmental grade — task type, model category and usage intensity.
  6. Generate a disclosure using the toolkit and publish it with the content in the appropriate format.

Disclosure format

For social media, use the compact pill format from the toolkit or a plain-text caption disclosure. For articles and web pages, embed the Display Card or Detailed Declaration Card at the foot of the content.

Example caption disclosure:
AI-assisted content · Grade C · AI-IMG · CS-NA · DC-CAP · ai.thinkdigital.travel
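The caption string above can be assembled programmatically from the grade and the three-part disclosure code. A minimal sketch follows; the function name, field structure and the loose code pattern are illustrative assumptions, not part of the framework toolkit.

```python
import re

def caption_disclosure(grade: str, codes: list[str]) -> str:
    """Assemble a compact caption disclosure string (illustrative sketch)."""
    if grade not in {"A", "B", "C", "D", "E"}:
        raise ValueError("Transparency grade must be A-E")
    return " · ".join(["AI-assisted content", f"Grade {grade}", *codes,
                       "ai.thinkdigital.travel"])

# Loose sanity check for a disclosure code token such as AI-IMG or DC-CAP.
# The exact grammar lives in the published JSON schema; this pattern is
# only an assumption based on the example codes shown above.
CODE = re.compile(r"^[A-Z]{2,3}-[A-Z]{2,4}$")

codes = ["AI-IMG", "CS-NA", "DC-CAP"]
assert all(CODE.match(c) for c in codes)
print(caption_disclosure("C", codes))
```

Generating the caption from structured fields, rather than typing it by hand, keeps the pill format consistent across every post.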

Do I need a disclosure for every AI-assisted post? Yes, if it reached a public audience. The compact pill format takes seconds to add and makes the commitment to transparency visible at scale. AI-assisted text does not require a Content Integrity assessment — that applies to visual and mixed-media content only.


Scenario 2

Procurement and agency commissioning

Requiring agencies, suppliers and vendors to apply the framework as a condition of engagement creates a consistent standard of disclosure across the supply chain. It makes AI involvement visible in contracted deliverables and gives commissioning organisations the information they need to evaluate what they are receiving.

Models to apply

Transparency · Productivity · Content Integrity (where visual content is delivered)

Writing the framework into a brief

Include a requirement in your brief and contract that all deliverables must carry a DTTT AI Transparency grade and Declaration Card. For content commissions involving imagery or video, add the Content Integrity disclosure code requirement. A suggested procurement clause is included in the Content Integrity Model tab.

What to look for in a supplier submission

A compliant submission includes the AI Transparency grade with percentage, a brief AI contribution summary, tools used and, for visual content, the Integrity disclosure code. Grades D or E are not inherently problematic but warrant a quality assurance conversation. Any High Risk or Not Recommended Integrity classification means the asset should be withheld pending review.

Evaluation workflow

  1. Include the framework disclosure requirement in both the brief and the contract.
  2. At delivery, confirm a Transparency grade and Declaration Card are present.
  3. For visual content, verify the Integrity disclosure code and confirm the classification is Clear or Caution.
  4. For any High Risk or Not Recommended classification, request revision before accepting delivery.
  5. Retain the Declaration Card with the project file as a permanent record.
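The acceptance checks in the workflow above can be sketched as a simple validation routine. The submission field names are assumptions for illustration; the classification labels follow the guidance above.

```python
# Sketch of the supplier-submission checks described above.
# Field names and the submission structure are illustrative assumptions.
BLOCKING = {"High Risk", "Not Recommended"}

def review(submission: dict) -> list[str]:
    """Return a list of issues; an empty list means delivery can be accepted."""
    issues = []
    if not submission.get("transparency_grade"):
        issues.append("Missing Transparency grade")
    if not submission.get("declaration_card"):
        issues.append("Missing Declaration Card")
    if submission.get("has_visual_content"):
        if not submission.get("integrity_code"):
            issues.append("Missing Integrity disclosure code")
        if submission.get("integrity_classification") in BLOCKING:
            issues.append("Withhold asset pending review")
    # A grade of D or E is not a rejection, only a prompt for a QA conversation.
    if submission.get("transparency_grade") in {"D", "E"}:
        issues.append("Schedule QA conversation (not blocking)")
    return issues
```

Running a check like this at delivery, before the Declaration Card is filed with the project record, makes the framework requirement enforceable rather than aspirational.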

Setting a framework disclosure requirement in procurement signals to the market that AI transparency is expected. Organisations that make adoption a condition of engagement help build the sector-wide standard that makes honest disclosure the default.


Scenario 3

Strategic reports and research publications

Research reports, destination strategies, policy papers and major publications represent the highest-stakes application of the framework. The Detailed Declaration Card is designed specifically for this context — a formal record of AI involvement that sits at the front of the document and travels with it through publication and archiving.

Models to apply

Transparency · Productivity · Environmental

Workflow

  1. Record AI tool usage, prompts and key editorial decisions as you work. Do not reconstruct from memory after completion.
  2. On completion, grade the final document using the Transparency Model. Grade the dominant pattern; note significant exceptions in the Declaration Card.
  3. Assess Productivity and Delivery Extension scores, noting any new capability or format the work required.
  4. Calculate the Environmental grade based on model category and usage intensity across the project.
  5. Generate a Detailed Declaration Card from the toolkit and place it on the inside cover or in the methodology section.

Multi-session project grading

For projects spanning multiple sessions and task types, use the highest-intensity task type for the Environmental grade and the dominant AI contribution pattern for the Transparency grade. Document the rationale in the Declaration Card's human contribution field.
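Under stated assumptions — a simple per-session log, and Environmental grades ordered A (lightest) through E (heaviest) — the roll-up rule above reads as follows. The session record structure is hypothetical.

```python
from collections import Counter

# Sketch of the multi-session roll-up rule. The grade ordering
# (A lightest to E heaviest) and session fields are assumptions.
sessions = [
    {"transparency": "C", "environmental": "B"},
    {"transparency": "C", "environmental": "D"},
    {"transparency": "B", "environmental": "A"},
]

# Environmental grade: the highest-intensity grade across all sessions.
env_grade = max(s["environmental"] for s in sessions)
# Transparency grade: the dominant (most frequent) contribution pattern.
transparency_grade = Counter(s["transparency"] for s in sessions).most_common(1)[0][0]

print(env_grade, transparency_grade)
```

The rationale for both choices should still be documented in the Declaration Card's human contribution field, as the guidance above requires.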

Declaration Card placement conventions:
Published reports: inside cover or first item in methodology section
Digital publications: end of document or linked from the header
Presentations: grade in slide footer, full Declaration Card in appendix

Scenario 4

Internal reporting and performance management

The Productivity Model is particularly valuable for internal use. Consistent scoring over time builds an organisational baseline that supports target setting and performance conversations with leadership and funders.

Models to apply

Transparency · Productivity

Building a scoring habit

For internal reporting to be meaningful, scoring must be consistent. Agree a team convention: score every AI-assisted piece of work above a defined threshold, use the same decision rules for the Productivity Score across the team, and log scores in a shared record. The Research tab includes a calculator for translating scores into estimated time and cost savings.

Presenting aggregate scores to leadership

Present the average Productivity Score across a quarter, the proportion of outputs reaching P3 or above, and the Delivery Extension Score distribution by work type. Use the calculator to express these in estimated time savings and cost equivalents. Present as indicative estimates with transparent assumptions, not as audited figures.

Example quarterly summary format:
AI-assisted deliverables graded: 34
Average Productivity Score: 3.4 (Significant gain)
Outputs reaching P3 or above: 76%
Estimated hours saved: 148 hrs
D4 or D5 outputs (adjacent or new capability): 8

Scenario 5

Event and programme delivery

Events, conferences, training programmes and destination experiences involve AI across production, communications and content. The framework applies to promotional materials, speaker briefs, event reports and post-event publications, as well as the environmental dimension of AI-heavy production work.

Models to apply

Transparency · Environmental (video or large-scale) · Content Integrity (promotional assets)

Where the framework applies

Apply the Transparency Model to event reports, speaker briefings, marketing copy and press materials. Apply the Content Integrity Model to AI-generated or AI-manipulated promotional imagery and video. Apply the Environmental Model to productions involving frontier model use or image and video generation at scale.

Event-specific disclosure convention: For printed programmes and on-screen materials, the compact pill format is most practical. For event websites and post-event reports, use the Display Card or Detailed Declaration Card. Where a destination brand is jointly credited, the grade applies to materials produced for that event, not to the organisation as a whole.


Scenario 6

Data analysis and research projects

AI is increasingly used in visitor analytics, market research, segmentation and programme evaluation. These applications involve data sensitivity considerations distinct from content production. The Ethics Consideration Tool is the essential first step before any data is processed through an AI tool.

Models to apply

Transparency · Productivity · Ethics tool (pre-use, always)

Workflow

  1. Use the AI Ethics Consideration Tool before passing any dataset through an AI tool. Pay particular attention to data sensitivity and whether identifiable personal data is involved.
  2. Confirm that processing data through this AI tool is consistent with your data protection obligations and any confidentiality agreements.
  3. Complete the analysis. Record tools used, instructions applied and human analytical decisions made in interpreting outputs.
  4. Apply the Transparency Model to the final published output — grade what the audience receives, not the analytical process.
  5. Score the Productivity Model if AI delivered meaningful time savings or enabled work not otherwise feasible.
  6. Include an AI methodology note covering tools used, AI's role, limitations and the human review applied.

Ready to start?

The Ethics Consideration Tool is the right first step for any task where the ethical dimensions need thinking through before work begins. The AI Transparency Card toolkit handles grading and disclosure card generation once the work is complete.

AI Ethics Consideration Tool › AI Transparency Card toolkit ›

Disclosure record

This page's own Report Card

In keeping with the principle of transparency, this page carries its own AI Transparency Declaration covering all applicable models. The declaration below is produced using the same Detailed Declaration Card format available in the toolkit.

The framework content described on this page was developed by the DTTT team across multiple working sessions. The page itself was built by AI working under sustained team direction. All strategic decisions, scoring definitions, model architecture and editorial positions are human-authored. AI handled all HTML, CSS and JavaScript production throughout.

DTTT AI Transparency Framework, combined page v5.0
Digital Tourism Think Tank
DTTT Framework
D
AI-Generated, 70% AI involvement
AI Transparency Model v1.1, DTTT Framework
AI tools used
Text and language
Specific models
Claude Sonnet 4.6
AI-supported tasks
Drafting · Structure · Code generation · Editing
AI contribution
AI produced all HTML, CSS and JavaScript for this page, built every interactive grader, implemented the full design system and generated all structured content from the framework source materials.
Human contribution
The framework methodology, all scoring definitions, model architecture and editorial positions were developed by the DTTT team. Direction, structure and purpose for this page were provided throughout by the team.
Productivity
5/5
Transformative
Delivery Extension
5/5
New capability
Combined AI Value
5/5
Exceptional
Environmental Grade
C
Moderate intensity
Transparency note: This page was produced collaboratively with AI assistance under sustained team direction. All framework methodology, scoring definitions and editorial decisions are human-authored. Environmental grade C reflects frontier model use at heavy intensity across the full development cycle.
Scored using the DTTT AI Transparency Framework · Assessments are self-reported