
AI decides what people see?

Search is turning into answers. If AI decides what people see, NGOs must design content as data and proof to stay visible, trusted and persuasive.

by Dirk Kunze


Everyone who watched Google I/O '25 knows that we are no longer shown search results. We are shown answers. That is the quiet revolution Sundar Pichai confirmed in his keynote: search is becoming generative. Instead of a list of links, we get Google's own summary from across the web. It decides what's worth knowing.

If your organisation isn’t part of what the AI sees, you’re not part of what the public sees either. You can post more and still vanish. The gap is structural, not only creative.

This article shows what to change now. It explains why machine readability and proof signals matter. It gives a plan that any civic team can run this quarter.


AI decides what people see: the new visibility game

Generative systems summarize the field and pick the next action. They surface sources that feel reliable to a machine, then present a confident path to the user. They prefer structure, clarity and corroboration over style.

The implication is simple. Your best work must be discoverable as evidence, not only as prose. Presence in the answer starts with what the model can parse and verify.

Think of visibility in two layers. The first is machine-facing structure that lets an agent understand your claim. The second is human-facing substance that people want to share and trust.


Why most NGO content is invisible to AI

Most mission content was written for clicks and shares. It was not designed as structured data that connects claims to sources and entities. Models struggle to extract a clear, verifiable answer from that format.

The language often centers internal frames. It assumes brand familiarity and asks the reader to do extra work. Assistants punish friction because they try to resolve intent fast.

Many sites also bury authorship and expertise. They hide dates and use vague titles. Systems that value recency and authority cannot reward what they cannot see.

The result is a silent penalty. Your ideas exist, but the answer layer does not trust them enough to feature them. That is a strategy flaw, not a fate.


From content to data that machines can trust

Treat every page as a data object. Name the claim, name the source, name the person who stands behind it. Use a consistent template so a model can find each element in the same place.

Use clear headings and tight summaries. Put the key statement near the top, then support it with short sections that map to common questions. Agents scan for structure far more than for rhetoric.

Add factual anchors that can be crawled. Dates, locations and explicit definitions reduce ambiguity. Link to primary evidence and to respected third parties who echo your findings.

Mark up people, places and organizations with schema that engines read. Identify authors, topics and page type. You are not gaming a system. You are describing your work in a format the system understands.
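What such markup can look like in practice: the sketch below uses Python's standard `json` module to emit a schema.org `Article` object of the kind search engines read from a page's `<head>`. Every name, date and URL here is a placeholder, not a real page, and the exact properties your pages need will depend on your content type.

```python
import json

# Hypothetical example: a schema.org "Article" object for an NGO report page.
# All names, dates and URLs below are placeholders.
page = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Clean water access rose 12% in District X in 2024",
    "datePublished": "2025-03-01",
    "dateModified": "2025-06-15",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",          # visible, named author
        "jobTitle": "Water Programs Lead",
    },
    "publisher": {"@type": "Organization", "name": "Example NGO"},
    "citation": "https://example.org/primary-evidence-report",  # link to primary evidence
}

# Embed the result in the page head inside
# <script type="application/ld+json"> ... </script>
json_ld = json.dumps(page, indent=2)
print(json_ld)
```

The point of the template is consistency: author, dates, publisher and citation always sit in the same fields, so a crawler finds each element in the same place on every page.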


Signals that raise trust in the answer layer

Trust is a pattern. Machines infer it from surrounding signals as much as from what is on your page. You need a footprint that says you are reliable to both humans and models.

Make authorship visible. Show real names and relevant expertise. Connect profiles to pages so identity is machine readable and consistent.

Show your review process. If a page was fact checked, say so in a standard spot. If an expert contributed, state the name and role.

Cite high quality sources and earn citations back. Your work should sit inside a web of references that the model already trusts. Over time that network lifts your default credibility.

Keep pages fresh. Add a visible update note when you revise facts or guidance. Stale pages degrade trust even when the prose still feels fine.


Distribution still matters in an AI-first web

AI does not float in a vacuum. It watches what people open, save and share. It notices which explanations reduce follow up queries.

Invest in engagement that signals usefulness, not just sentiment. Saves and deep reads carry more weight than quick likes. Threads that resolve confusion tend to surface more.

Be present where the questions start. Forums, creator channels and local pages feed the same discovery graph that assistants read. Participation seeds both human awareness and machine memory.

Do not abandon classic channels. Email, podcasts and events still shape what the web cites. Those citations still shape what the answer presents.


Logiq Insight: build an AI layer for traction, not vanity

Most teams still ship pages for people and hope machines will cope. We reverse the order without losing the human. We design pages that a model can parse and a person can love.

At Logiq Media we align audience design, machine readability and distribution into one loop. We call the loop audience, asset, answer. Audience defines the question, asset encodes the proof, answer measures whether we surfaced in the places that matter.

Impact Engineering is the operating method behind that loop. It replaces guesswork with weekly experiments that test which framing earns saves and which structure earns inclusion in AI answers. It keeps the focus on movement among people outside your base.

The point is not to chase every tweak. It is to institutionalize a way of working that produces content as data and proof. That is how better ideas show up where the public now decides.


A 30 day plan to get AI ready

  • Week one, audit structure. Pick your ten most important pages. Add clear summaries, explicit claims and visible authors. Mark up people, organizations and pages with standard schema.

  • Week two, audit proof. Add citations to primary evidence and to respected outlets that echo your findings. Link outward with care and invite credible partners to link back to the exact pages you want assistants to surface.

  • Week three, rewrite for questions. Collect the top questions your missing audience actually asks. Rewrite the same ten pages so each section resolves one question in clean language.

  • Week four, distribute with purpose. Seed those pages inside communities where those questions appear. Pair each page with two short formats that drive saves, replies and time on page.

  • Lock the loop. Each month, retire a weak page and promote a strong one. Each week, test one element that affects machine readability or human usefulness.


Craft that earns both inclusion and attention

Lead with the answer, then earn the read. Assistants quote what is concise, specific and verifiable. People stay for what is vivid and useful.

Use concrete language. Replace abstract nouns with scenes and numbers. State what happens in a time frame a person can feel.

Design templates that reduce friction. Predictable placement of key elements helps both agents and readers. Consistency beats cleverness when scale matters.


Targeting when platforms will not do it for you

Platform automation will overfit to your current audience. You must define the missing audience and build for it on purpose. Life stage, media habits and motivation are the anchors.

Find where their questions live and show up there first. Package your answer to match the context while keeping the claim and proof intact. Borrow trust from credible hosts.

Move one slice at a time. Expansion is a sequence, not a blast. Prove lift in one group, then clone the path for a neighbor group.


Metrics that prove visibility and influence

Track inclusion in AI answers for your core queries. Use simple prompts that mirror real questions and log where your pages appear. You do not need perfect data to see a trend.
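One low-tech way to start: keep a running log of whether your organisation shows up when you ask an assistant your core queries. The sketch below assumes you paste each answer in by hand; the query list, domain and organisation name are placeholders, and nothing here depends on any particular AI provider's API.

```python
import csv
from datetime import date

# Placeholders: swap in your real queries and your real domain/name.
QUERIES = [
    "how to support clean water projects",
    "district x water access 2024",
]
DOMAIN = "example.org"
ORG_NAME = "example ngo"

def included(answer_text: str) -> bool:
    """True if the answer cites our domain or names the organisation."""
    text = answer_text.lower()
    return DOMAIN in text or ORG_NAME in text

def log_run(path: str, query: str, answer_text: str) -> None:
    """Append one dated observation so a trend builds up over weeks."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), query, included(answer_text)]
        )

# Usage: after asking an assistant each query, record the answer it gave.
log_run("inclusion_log.csv", QUERIES[0],
        "According to example.org, donations fund local wells.")
```

A weekly pass over ten queries is enough to see whether inclusion is trending up, flat or down, which is all this metric needs to show.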

Track expansion among people who did not follow you last week. Saves, deep reads and shares from non followers are the strongest signals that you are leaving the bubble.

Track narrative lift at issue level. When your frame gains share against competitors in a defined slice, your AI and human strategies are working together.

Publish what you learn. A short playbook helps partners repeat wins and stop what fails. A field that shares evidence compounds reach.


Governance that keeps integrity intact

Do not trade accuracy for inclusion. Sequence truth for comprehension, but never bend it. Your process is a trust asset, not a burden.

Write down what you will not do. No fake authors, no bought citations, no hidden edits. Constraints let teams move faster with confidence.

Review pages on a regular cadence. Facts change, and assistants notice. Freshness is fairness to the reader and a signal to the machine.


Takeaway

AI decides what people see, so you must decide what AI can see from you. Design content as data and proof, then distribute it where real questions begin. Use Impact Engineering to turn this into a weekly habit that grows visibility with people outside your base.


One question to close your next content meeting: which single page will we make machine readable, socially useful and present in real answers before the month ends?

Get in touch


Make your organization part of the solution.


The volume of people engaging with your ideas determines how those ideas flow, grow and shape the future.


© Logiq Media, 2025 | A project of Idea Dept