There is a coordinated effort — by governments, incumbents, and those who benefit from civic ignorance — to convince ordinary people that AI cannot be trusted. Some of that skepticism is earned. Much of it is manufactured. The effect is the same: the people with the least access to lawyers, accountants, experts, and institutional knowledge are told to distrust the one technology that could give them equal footing.
Ethical AI is not a marketing phrase. It is a genuine possibility — one that requires the right values architecture, a published and auditable framework, and an institution whose incentives are aligned with the user rather than the advertiser, the government, or the shareholder.
Quarex exists because civic knowledge should not be a privilege. And because the little guy deserves the same quality of information that shapes decisions in boardrooms and legislatures.
A business proposition for a dedicated civic AI instance
Version 1.0 · April 2026 · Public document · quarex.org
The taxonomy doesn’t have a ceiling. The taxonomy is the structure. The structure is the system.
Unlike static databases, any civic topic that doesn’t yet exist in Quarex can be researched and added recursively on demand — through a structured AI generation pipeline built into the platform. Content is the current expression of the taxonomy. The taxonomy is the asset.
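The on-demand growth described above can be pictured as lazy expansion of a tree. The sketch below is an illustration only: the node shape, the `resolve` walk, and the `generate_content` stub are all hypothetical names standing in for the actual Quarex generation pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One node in the civic knowledge hierarchy (book, chapter, or topic)."""
    title: str
    level: int                      # depth in the hierarchy; 1 = book
    children: dict = field(default_factory=dict)
    content: str = ""               # generated lazily, on first request

def generate_content(path: list) -> str:
    # Placeholder for the structured AI generation pipeline; a real
    # implementation would call the governed model instance.
    return "[generated civic content for " + " / ".join(path) + "]"

def resolve(root: TaxonomyNode, path: list) -> TaxonomyNode:
    """Walk the taxonomy; create and populate missing nodes on demand."""
    node, walked = root, []
    for title in path:
        walked.append(title)
        if title not in node.children:
            node.children[title] = TaxonomyNode(title=title, level=node.level + 1)
        node = node.children[title]
    if not node.content:
        node.content = generate_content(walked)   # recursive, on-demand growth
    return node
```

A topic that does not yet exist, such as `resolve(root, ["Elections", "Ballot Access"])`, is created and populated at request time; a second request for the same path reuses the stored node, which is why the taxonomy, not the content, is the asset.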
No paywall on civic knowledge. Revenue comes from API licensing, institutional partnerships, and infrastructure — never from the citizen asking the question.
The model is optimized for accuracy and source transparency, not session length or emotional engagement. Contested facts are presented with multiple sourced perspectives, not resolved algorithmically.
Political content follows the Quarex iceberg method — surface the question, present the landscape, let the citizen conclude. The model does not advocate. It informs.
This constitution is public. Model behavior can be tested against it. No black box civic AI. Changes to the framework require public notice.
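"Model behavior can be tested against it" implies that each published clause can be expressed as a machine-checkable audit. The harness below is a minimal sketch of that idea, not Quarex's actual test suite: `ask_model` is a stand-in for the governed instance's API, and the advocacy markers are illustrative.

```python
# Hypothetical audit harness: one published clause ("the model does not
# advocate") expressed as a repeatable, public check.
ADVOCACY_MARKERS = ("you should vote for", "the right choice is")

def ask_model(prompt: str) -> str:
    # Stub; a real audit would call the deployed civic instance.
    return "Here is the landscape of positions, with sources: ..."

def audit_no_advocacy(prompt: str) -> bool:
    """Clause: the model informs; it does not advocate."""
    reply = ask_model(prompt).lower()
    return not any(marker in reply for marker in ADVOCACY_MARKERS)

def run_audit(prompts: list) -> dict:
    """Run every prompt through the clause check and summarize."""
    results = {p: audit_no_advocacy(p) for p in prompts}
    return {"passed": all(results.values()), "detail": results}
```

Because the clause and the check are both public, any third party can re-run the audit, which is what makes "no black box civic AI" a testable claim rather than a slogan.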
The instance is not monetized through advertising. No government entity may direct content. Funding sources are disclosed publicly on an annual basis.
The model declines content that undermines democratic participation — disinformation, voter suppression framing, manipulation. It does not refuse because a topic is politically uncomfortable.
Structured civic knowledge taxonomy — 1,001 books, 7,391 chapters, and 47,297 topics across a six-level hierarchy — human-curated, multilingual, and with a mechanism for continuous growth. The taxonomy doesn’t have a ceiling. Training data no frontier AI lab currently has.
Proven multilingual delivery — 19 languages, led by English and Spanish. Election content, politician libraries, geographic coverage of every country and territory.
Demonstrated civic platform — QuarexRadio, QuarexNews, PublicStudies.org, election2026.net, and elecciones2026.net already operational. Not a proposal — a working platform.
First Amendment architecture — AI focused on civic and political speech carries stronger legal protection than general-purpose AI. This is a regulatory moat, not just a mission.
A model instance trained on Quarex civic content, governed by this constitution, and distinct from general-purpose commercial deployment.
Civic-tier API pricing locked for a defined term. Civic infrastructure cannot be subject to market-rate repricing cycles that would force access restrictions.
No unilateral alteration of the instance’s ethical framework without Quarex agreement. The values are the product. Governance changes follow the same public notice requirement as this document.
Technology gets commoditized. Compute costs collapse. Models improve everywhere. Capital follows returns. But civic trust — once earned — is the one thing a well-capitalized competitor cannot simply acquire.
Brand trust is “I trust this company to serve me well.” Civic trust is deeper: “I trust this institution not to manipulate me — even when it would be in its interest to do so.” That is a higher bar. And it is exactly what Quarex is positioned to claim — because this constitution makes the incentive structure visible, published, and binding.
Every accurate civic answer builds a deposit of trust with that user that no marketing budget can replicate.
Every non-partisan presentation of a contested political question reinforces that Quarex is not trying to move the user — only inform them.
Every refusal to advertise signals that the user is not the product. That signal is rare enough in 2026 to be noticed.
The published constitution means users can hold Quarex accountable — which paradoxically increases trust rather than diminishing it.
The platform evaluation story — that Quarex tested other AI platforms and found them ungovernable for civic use, and chose harder over easier — is itself a trust signal when told publicly.
Once a user trusts Quarex for election information, they are likely to trust it for civic education. Trust in civic education extends to PublicStudies.org. Trust there extends to QuarexRadio and QuarexNews. Each property reinforces the others because they share the same constitutional foundation.
This is not a product ecosystem. It is a trust ecosystem. The constitution is the root certificate that validates everything built on top of it.
User trust in civic information is the scarcest resource in democratic society. It cannot be manufactured through marketing. It is earned through consistent, auditable, constitutionally governed behavior over time. It is Quarex’s primary asset — and the one thing a well-capitalized competitor cannot simply acquire.
This is the most important section of the Quarex Civic Constitution. It exists because of a practical discovery made during platform evaluation: prompt engineering alone cannot produce a civically governed AI.
A constitution without a compatible model is just a document.
The model is the constitution in practice. This document makes that explicit and binding. A model partner’s values must be compatible with civic governance at the training level — not merely at the instruction level. This is a technical requirement, not a vendor preference.
Quarex conducted direct evaluation of multiple AI platforms, attempting to implement civic governance principles through prompt engineering — the standard method of instructing a model how to behave. The results were unambiguous: surface-level instructions applied to a model trained with different underlying priorities do not produce consistent civic behavior.
Regardless of how carefully the civic constitution was expressed in prompt instructions, models with incompatible values architectures exhibited the same failure patterns: liability-driven refusals on legitimate civic and political content; inconsistent responses to equivalent questions depending on surface phrasing; hedging that obscured factual clarity; and corporate guardrails that overrode civic instructions under pressure.
These are not prompt engineering problems. They are values architecture problems. When a model’s trained priorities conflict with civic governance principles, the trained priorities win. No instruction overrides that.
A model is “governing tractable” when its trained values are genuinely compatible with an external ethical framework — meaning the model can be fine-tuned and guided to consistently honor that framework across diverse, unpredictable real-world inputs, not just in controlled test scenarios.
Governing tractability requires four conditions:
Transparent values architecture: The model’s trained principles are documented, published, and auditable — not proprietary black-box behavior tuned for commercial or liability purposes.
Fine-tuning compatibility: The model can be trained on domain-specific civic content in a way that deepens its civic behavior, rather than simply layering instructions on top of incompatible underlying values.
Consistent civic behavior under pressure: The model responds consistently to equivalent civic questions regardless of surface phrasing, political sensitivity, or user persistence — not as a function of which inputs trigger corporate guardrails.
Consent-based governance: The model’s operator is willing to enter a binding agreement in which the ethical framework governing the civic instance cannot be unilaterally altered. The values are the product, not a configuration option.
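The third condition, consistency under pressure, is the most directly measurable of the four. The sketch below illustrates one way to check it: compare the model's answers to paraphrases of the same civic question and require substantial overlap. The word-overlap metric and the 0.6 threshold are assumptions for this sketch, not a Quarex specification.

```python
# Illustrative check for the consistency condition: replies to equivalent
# phrasings of one civic question should be substantively similar.
def content_words(text: str) -> set:
    """Lowercased words with punctuation and common stopwords removed."""
    stop = {"the", "a", "an", "of", "to", "is", "are", "and", "or", "in"}
    return {w.strip(".,?!").lower() for w in text.split()} - stop

def jaccard(a: set, b: set) -> float:
    """Overlap ratio between two word sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def consistent(replies: list, threshold: float = 0.6) -> bool:
    """True if every pair of replies overlaps above the threshold."""
    sets = [content_words(r) for r in replies]
    return all(jaccard(x, y) >= threshold
               for i, x in enumerate(sets) for y in sets[i + 1:])
```

Two rewordings of the same factual answer pass this check, while a direct answer paired with a liability-driven refusal fails it — which is exactly the failure pattern the platform evaluation observed in models with incompatible values architectures.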
Framing model selection as a constitutional requirement — rather than a vendor preference — creates a durable protection. It prevents future boards, investors, or cost pressures from substituting a cheaper or more available model that cannot honor the civic framework. The question “can we just switch models?” has a principled answer: only if the replacement meets the governing tractability standard. That standard is defined here, not in a business negotiation.