Rules Are Not The Enemy Of Innovation; They Are The Condition That Makes Trust Possible.

Playing with Fire: Governing AI Before It Governs Us
By Douglas Martin Levermore | @DMLevermore

There is an old West African proverb about a village that discovered fire. At first, the people marveled at its power—how it could cook food, warm homes, and extend the day beyond sunset. But without agreed rules, the same fire that nourished them soon burned fields, destroyed huts, and threatened the very community it had uplifted. It was only when the elders gathered and set boundaries—where fire could be used, who would tend it, and how it would be contained—that it became a tool of progress rather than a force of destruction. Artificial intelligence now sits in that same uneasy space: powerful, transformative, and, without governance, potentially destabilizing.

The regulatory clock on artificial intelligence is no longer theoretical—it is ticking in real time.

What once lived in the imagination of science fiction now shapes how people work, learn, transact, and even receive medical care. Generative systems draft legal briefs, screen job applicants, recommend treatments, and influence public discourse at scale. Yet the velocity of this transformation has far outpaced the development of rules to govern it, creating a widening gap between capability and control. In that gap lies a growing unease: questions of bias, misinformation, labor disruption, and systemic risk are no longer abstract—they are immediate and tangible.

For policymakers, the challenge is not whether to regulate, but how to do it. Too heavy a hand risks suffocating innovation under layers of bureaucracy; too light a touch invites harm, erodes trust, and ultimately undermines adoption. The task, therefore, is one of balance—designing regulatory frameworks that are firm enough to protect society, yet flexible enough to allow technological progress to flourish. This is not merely a technical exercise. It is a question of public trust, economic competitiveness, and democratic accountability.

Evidence increasingly suggests that well-designed governance is not a drag on innovation but a catalyst for it. Research from Stanford's Institute for Human-Centered AI and the OECD consistently finds that jurisdictions with clear, credible AI policies attract greater investment and foster stronger innovation ecosystems. Investors seek predictability. Citizens demand assurance. When both are present, adoption accelerates. In this sense, regulation becomes less a constraint and more a signal, a declaration that the playing field is stable, fair, and sustainable.

Effective AI policy must begin with clarity of risk. Not all AI systems carry the same level of consequence, and regulatory approaches must reflect that reality. High-risk applications—those affecting healthcare decisions, financial access, employment, or public safety—require rigorous oversight, including impact assessments, independent audits, and ongoing monitoring. Lower-risk applications should be governed more lightly, allowing experimentation and iteration. This principle of proportionality ensures that compliance remains meaningful without becoming prohibitive.

Equally critical is the shift from promises to proof. It is no longer sufficient for developers to assert that their systems are safe or unbiased; they must demonstrate it. Transparency requirements—such as safety reports, incident disclosures, and standardized testing protocols—create a culture of accountability. When firms are required to show their work, governance moves beyond rhetoric into measurable reality. This approach not only deters negligence but also builds confidence among users and stakeholders.

Governments must also look inward. The regulation of AI cannot exist solely as an external imposition on industry; it must be embedded within the machinery of government itself. This means appointing accountable AI leads within agencies, maintaining public inventories of AI systems in use, and integrating risk assessments into procurement and budgeting processes. When governments model responsible AI use, they set a standard that the private sector is more likely to follow.

Another emerging frontier in policy is the regulation of inputs—not just outputs. Access to high-performance computing, quality data, and energy resources is increasingly central to AI development. By shaping how these inputs are governed—through secure data frameworks, sustainable infrastructure requirements, and equitable access policies—governments can influence the trajectory of innovation itself. This approach recognizes that control at the foundational level often proves more effective than attempting to regulate outcomes after the fact.

Yet, no nation operates in isolation. AI is inherently global, and its governance must reflect that reality. Aligning national policies with international frameworks—such as the OECD AI Principles—creates consistency, reduces regulatory fragmentation, and facilitates cross-border collaboration. At the same time, enforcement must remain locally grounded, reflecting domestic legal systems, cultural values, and economic priorities. The balance between global alignment and local adaptation is essential for both competitiveness and sovereignty.

Perhaps most importantly, AI governance must be adaptive. The technology evolves too quickly for static rules to remain effective. Regulatory frameworks should incorporate mechanisms for continuous learning—periodic reviews, regulatory sandboxes, and iterative guidance that evolves alongside technological advances. This ensures that policy remains relevant, responsive, and rooted in evidence rather than reaction.

The moment calls for deliberate, coordinated action. Jamaica must move with urgency to establish a coherent national AI framework—one that is bipartisan and approved by Parliament, ensuring continuity beyond electoral cycles and signaling long-term national commitment. Such a framework should define risk tiers, embed accountability, and measure performance through transparent indicators—adoption rates, audit compliance, incident reporting, and demonstrable productivity gains across key sectors.

Without it, the country risks drifting into a reactive posture, where technology is consumed rather than strategically directed, and where inefficiencies, bias, and missed opportunities quietly compound. The cost of inaction will not be theoretical; it will appear in slower economic growth, diminished competitiveness, weakened public service delivery, and a widening gap between Jamaica and economies actively leveraging AI to drive innovation. In a world where intelligence, artificial or otherwise, is increasingly a factor of production, failing to embrace and govern this technology is not neutrality; it is the quiet surrender of national advantage. The choice is clear: act now, or fall behind.

Excellence in artificial intelligence rests on a more fundamental prerequisite: a high standard of basic intelligence across the population. AI does not replace foundational skills—it magnifies them. Systems grounded in data, logic, and pattern recognition require citizens, developers, and policymakers who can think critically, reason quantitatively, and communicate with precision. Where these fundamentals are weak, AI risks amplifying error, bias, and misjudgment; where they are strong, it becomes a powerful engine of insight, productivity, and innovation. Any credible national ambition in AI must therefore be matched by an equally serious investment in human capability—strengthening literacy, numeracy, analytical reasoning, and digital fluency. Only then can the society engaging with the technology move beyond passive use to active stewardship, shaping AI not just as a tool, but as a disciplined instrument of national development.

Effective regulation must be anchored in outcomes. It is not enough to establish rules; governments must measure whether those rules are working. Metrics such as incident rates, audit compliance, bias reduction, and public trust indicators should be tracked and reported transparently. This shift toward outcome-based governance transforms regulation from a compliance exercise into a performance system, one that can be refined over time to achieve its intended goals.

The question is not whether artificial intelligence will shape the future—it is already doing that. The real question is whether that future will be guided by deliberate design or left to unfold unchecked. The early evidence offers a clear lesson: societies that define risks, demand accountability, build institutional capacity, and remain open to adaptation are better positioned to harness AI’s potential while safeguarding against its dangers.

Like the village that learned to live with fire, the goal is not to extinguish the technology, but to master it—so that its power serves the many, rather than endangering them.

Douglas Levermore, MBA, JP, is an independent management consultant and the founding Executive Director of Jamaica’s Public Investment Management Secretariat (PIMSEC)—the government unit established to strengthen project appraisal, fiscal discipline, and oversight of public investment, now known as the Public Investment Appraisal Branch (PIAB) within the Ministry of Finance and the Public Service. He also serves as a FINRA arbitrator and a commissioned Notary Public in the Commonwealth of Virginia. Douglas writes on social issues, leadership, management lessons, and organizational strategy, drawing on extensive real-world experience across both the public and private sectors.