
Claude vs ChatGPT vs Gemini: document creation compared

We tested all three AI assistants on the same real-world document tasks. Here's what each one actually produces — with honest strengths and weaknesses.

Everyone has an opinion on which AI is “the best”. Most of those opinions are based on vibes, not evidence. We decided to test the three main AI assistants on the same set of practical document tasks — the kind of work that Belgian professionals actually do every day.

No synthetic benchmarks. No cherry-picked examples. Just five common tasks, run through Claude (with Cowork), ChatGPT, and Google Gemini, with the same prompts.

How we tested

The setup:

  • Claude Pro ($20/month) with Cowork mode
  • ChatGPT Plus ($20/month) with GPT-5
  • Google Gemini Advanced (~$22/month) with Gemini 3

The rules:

  • Same prompt for each tool (in Dutch, our default work language)
  • Each tool gets one attempt — no back-and-forth iteration
  • We judge the output on: accuracy, format quality, usability, and whether you’d actually use it as-is

Test 1: Write a client proposal

Prompt: “Schrijf een voorstel voor een klant die een dataplatform wil bouwen. Budget: €50.000. Scope: data-integratie van 3 bronnen, dashboard, en maandelijkse rapportage. Lever een Word-document op.” (“Write a proposal for a client who wants to build a data platform. Budget: €50,000. Scope: data integration from 3 sources, a dashboard, and monthly reporting. Deliver a Word document.”)

Claude (Cowork)

Produces an actual .docx file. The proposal has a professional structure — executive summary, scope definition, deliverables, timeline, pricing breakdown, and terms. The content reads naturally and the formatting is clean. You could send this to a client after minor edits.

Verdict: Delivers a real, downloadable Word document. The content quality is strong and the structure is solid. Best output for this task.

ChatGPT

Outputs the proposal as text in the chat window. The content quality is comparable to Claude — well-structured, good tone, covers all the key sections. But there’s no .docx file. You’d need to copy-paste it into a Word template yourself.

Verdict: Good content, but no file output. Extra manual work to get it into a usable format.

Gemini

Similar to ChatGPT — text output in the chat, no downloadable file. The content is decent but less structured. It tends to be more verbose and includes more filler language. The pricing breakdown is less detailed.

Verdict: Adequate content, but needs more editing. No file output.

Winner: Claude. The ability to produce an actual Word file is a genuine differentiator for this type of task.

Test 2: Analyse a PDF contract

Prompt: We uploaded the same 15-page service agreement (in Dutch) to all three. “Analyseer dit contract. Identificeer de 5 belangrijkste risico’s voor de dienstverlener. Geef per risico de relevante clausule en een aanbeveling.” (“Analyse this contract. Identify the 5 most important risks for the service provider. For each risk, give the relevant clause and a recommendation.”)

Claude (Cowork)

Reads the full PDF, identifies five specific risks with exact clause references (e.g., “Article 7.3 — Limitation of liability is capped at contract value, but excludes indirect damages only for the client, not for you”). Recommendations are practical and specific.

Verdict: Thorough analysis with precise clause references. Actionable recommendations.

ChatGPT

Also identifies five risks, with good analysis. Clause references are present but sometimes approximate (“around section 7” rather than exact article numbers). The recommendations are slightly more generic.

Verdict: Good analysis, slightly less precise in referencing.

Gemini

Identifies risks but is more surface-level. Tends to summarise rather than quote specific clauses. Misses one risk that both Claude and ChatGPT caught (a non-compete clause buried in the annexes).

Verdict: Acceptable for a first pass, but you’d want a more thorough review.

Winner: Claude, narrowly over ChatGPT. The precision of clause references makes a real difference when you’re discussing findings with a lawyer.

Test 3: Create an Excel budget tracker

Prompt: “Maak een Excel-projectbudget met: tabblad 1 overzicht per categorie, tabblad 2 individuele uitgaven. Formules die automatisch totalen berekenen. Categorieën: personeel, materiaal, software, extern.” (“Create an Excel project budget with: tab 1 an overview per category, tab 2 individual expenses. Formulas that calculate totals automatically. Categories: personnel, materials, software, external.”)

Claude (Cowork)

Produces a .xlsx file with two sheets. Sheet 1 has a summary table with SUMIFS formulas pulling from Sheet 2. Sheet 2 has columns for date, category, description, and amount, with data validation dropdowns for the category column. The formulas work correctly in Excel.

Verdict: Real Excel file with working formulas. Ready to use.

ChatGPT

Outputs the structure as a code block (Python code using openpyxl). You can run it in the code interpreter to get a file, but the extra step adds friction. The resulting file works, but the formulas are simpler.

Verdict: Gets the job done, but requires more user interaction. The file quality is adequate.
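For reference, the kind of openpyxl script ChatGPT generates looks roughly like this (a minimal sketch, assuming openpyxl is installed; sheet and category names follow the prompt, the range sizes are illustrative):

```python
# Sketch of a two-sheet budget workbook built with openpyxl.
# Sheet/category names follow the prompt; everything else is illustrative.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

CATEGORIES = ["personeel", "materiaal", "software", "extern"]

wb = Workbook()

# Sheet 2: the raw expense log the summary pulls from.
expenses = wb.active
expenses.title = "Uitgaven"
expenses.append(["Datum", "Categorie", "Omschrijving", "Bedrag"])

# Restrict the category column to the four allowed values via a dropdown.
dv = DataValidation(type="list", formula1='"' + ",".join(CATEGORIES) + '"')
expenses.add_data_validation(dv)
dv.add("B2:B500")

# Sheet 1 (inserted first): one SUMIF per category plus a grand total.
summary = wb.create_sheet("Overzicht", 0)
summary.append(["Categorie", "Totaal"])
for i, cat in enumerate(CATEGORIES, start=2):
    summary.cell(row=i, column=1, value=cat)
    summary.cell(row=i, column=2,
                 value=f"=SUMIF(Uitgaven!B:B,A{i},Uitgaven!D:D)")
summary.cell(row=len(CATEGORIES) + 2, column=1, value="Totaal")
summary.cell(row=len(CATEGORIES) + 2, column=2,
             value=f"=SUM(B2:B{len(CATEGORIES) + 1})")

wb.save("projectbudget.xlsx")
```

Running this yourself (locally or in the code interpreter) produces the same kind of file Claude hands you directly, which is exactly the friction the verdict above refers to.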

Gemini

Describes the spreadsheet structure in text and offers to create it via Google Sheets. If you’re in the Google ecosystem, this works. The Sheets integration is smooth, but the result is a Google Sheet, not an Excel file.

Verdict: Good if you want a Google Sheet. Not ideal if you need .xlsx.

Winner: Claude for Excel workflows. Gemini if you work in Google Sheets.

Test 4: Summarise meeting notes into minutes

Prompt: We pasted 2 pages of messy meeting notes (mix of Dutch and English, typos, incomplete sentences) and asked: “Maak gestructureerde notulen met: beslissingen, actiepunten met verantwoordelijken, en open vragen.” (“Produce structured minutes with: decisions, action items with owners, and open questions.”)

Claude

Clean, well-organised output. Correctly extracts decisions, assigns action items to the right people (even when names appeared in different formats), and lists open questions. Handles the Dutch-English mix gracefully.

Verdict: Excellent. You could email these minutes to the team as-is.

ChatGPT

Very similar quality. Also handles the mixed languages well. Slightly different formatting but equally usable. Marginally more verbose in the decision descriptions.

Verdict: Excellent. Essentially tied with Claude for this task.

Gemini

Good output, but occasionally misattributes action items (assigns an action to the wrong person in one case). The mixed language handling is solid but slightly less natural.

Verdict: Good, but verify the action item assignments.

Winner: Tie between Claude and ChatGPT. Both handle this task extremely well.

Test 5: Create a presentation from data

Prompt: “Maak een PowerPoint-presentatie van 8 slides over onze verkoopresultaten. Hier zijn de cijfers: [quarterly revenue data, team performance data]. Stijl: professioneel, data-gedreven.” (“Create an 8-slide PowerPoint presentation about our sales results. Here are the figures: [quarterly revenue data, team performance data]. Style: professional, data-driven.”)

Claude (Cowork)

Produces a .pptx file with 8 slides: title slide, key metrics, quarterly revenue chart (described in text — no embedded chart image in the actual PPTX), team performance overview, regional breakdown, highlights, challenges, and next steps. The structure is excellent but the charts are described rather than rendered as PowerPoint chart objects.

Verdict: Good structure and content. You’ll need to create the actual charts in PowerPoint yourself, but the data and narrative are solid.

ChatGPT

Text output only — a detailed slide-by-slide outline. Good content suggestions and narrative flow. No file output. You’d build the presentation from this outline.

Verdict: Strong outline, but no file. More manual work required.

Gemini

Offers to create a Google Slides presentation. If you approve, it creates slides directly in your Google account. The integration is seamless — real slides with actual layouts. Content quality is good, though the narrative is less polished than Claude’s.

Verdict: Best integration for Google Slides users. Actual slides, not just outlines.

Winner: Gemini if you use Google Slides (native integration). Claude if you need a .pptx file.

The summary

Task                      Claude   ChatGPT   Gemini
Client proposal (Word)    ★★★★★    ★★★★      ★★★
Contract analysis (PDF)   ★★★★★    ★★★★      ★★★
Excel budget tracker      ★★★★★    ★★★       ★★★★★ (Sheets)
Meeting minutes           ★★★★★    ★★★★★     ★★★★
Presentation              ★★★★     ★★★       ★★★★★ (Slides)

Our take

There is no single “best” AI. Each excels in a different context:

  • Claude wins on document creation tasks where you need a real, downloadable file. The Cowork capability — creating actual Word, Excel, and PowerPoint files — is something ChatGPT and Gemini can’t match in the same way.

  • ChatGPT is the most versatile all-rounder. Its text quality matches Claude’s, and it has the broadest plugin ecosystem. It just doesn’t produce files as directly.

  • Gemini shines when you’re in the Google ecosystem. The native Google Slides and Sheets integration is genuinely useful. For everything else, it’s good but not leading.

For a Belgian professional who creates a lot of documents, our recommendation: use Claude Cowork as your primary tool for document-heavy work, and supplement with ChatGPT or Gemini for tasks that fit their strengths.

What’s next?