“It depends on us.”
Those words have echoed in my mind since I first read them in Vilas Dhar’s piece for The Boston Globe.
They were spoken by a high school student in a central Illinois classroom in response to his question: “What do you think an AI-powered future will look like?”
We know AI will help cure diseases, transform scientific research, redesign algorithms, and in general improve efficiencies across organizations and around the world.
But will it extend real knowledge to more people than ever before? And will there still be sufficient jobs available for kids like the ones he interviewed?
I wrote recently on where I see the future of AI going and what it could mean for both businesses and people around the world.
Today I look at the non-technological piece of this mission: governing artificial intelligence, why it matters, frameworks already ready, and the AI policy impact on businesses.
Why AI Governance Matters Now
Whether you’re aware of all the ways or not, your company is using AI right now.
It’s almost guaranteed. With AI embedded in commercial software applications of almost every kind, SaaS systems, and even search engines, employees everywhere are using it at work, sometimes even when they’re not aware of it themselves.
And adoption is only increasing, meaning the stakes almost couldn’t be higher.
As the Martial Hebert, the dean of Carnegie Mellon University’s School of Computer Science told Bill Brink:
“We sometimes joke about really smart people, ‘They’re going to cure cancer someday.’ Well, these people working on AI are going to cure cancer. All the main human diseases, I totally believe, are going to be cured by giant computer clusters, doing large-scale machine learning and related techniques. And we’re just scratching the surface now.”
With such awesome capabilities, put to use in an almost unbelievably broad spectrum for things like pattern and image recognition, prediction and forecasting, targeting, improved decision making, translation, diagnostics, workflow and contact automation, content creation of nearly every kind, and extensive personalization, there is no doubt AI touches all of us in our daily lives, in ways both large and small.
But as of now, there is often a lack of clarity around ownership. Who is responsible?
AI comes in numerous shapes and sizes, so who owns the behavior of AI systems at these varying levels of use?
And for business, how are we aligning it with our objectives? How are we securing it and ensuring stability?
AI Ethics and Governance: The Goals
For starters, I am heavily involved in AI myself, and I do not seek to impede the freedom and power of innovation and development.
Good AI governance should improve development by increasing the spread of knowledge and thereby widening not only the decision-making capacity, but also the potential for innovation.
We’ve seen the impacts that open-source AI alone has had, not only opening the door to smaller players but also enabling better understanding of how systems work. It’s also lowered costs.
Governance should ensure safety and human rights, promote transparency, and enable broader innovation, without sacrificing oversight.
Not to be minimized in all of this is the importance of ensuring public trust.
According to figures by Deloitte, more than 90% of organizations need to improve their AI governance as of December, and yet most are unclear on what needs to be done.
And for some incentive? They also found companies that have strong governance report 5% higher revenue, 28% better staff buy-in on AI, and an improved reputation among customers.
AI Governance Challenges to Resolve
What drives the importance of AI governance in particular?
Per ISACA, an international professional organization that’s come to be known for IT governance, the unique risks from AI include:
- Bias and Discrimination: With increasing use in decision-making and built from massive, often opaque datasets, how are we sure it’s being fair?
- Lack of Explainability: Essential for accountability, how clear are companies on how their AI systems are working, and what’s gone into them?
- Data Privacy and Regulatory Exposure: The GDPR, CCPA, and the EU AI Act are out there now, as well as other US state regulations, but for the very reasons I’m writing this article, more are sure to follow.
- Cybersecurity Questions: How do we monitor against adversarial inputs and model-based attacks, such as those geared to extract training data?
- Ownership Gaps: As I mentioned above, who is ultimately responsible for the various AI outcomes within organizations?
With the global race burning red hot, and countries restricting each other on who can access and utilize what, how do we manage these borders for global companies?
And while some sectors of business (healthcare, for example) are highly regulated, others are less so, making their interaction further complicated.
Then there’s the question of resources.
I wrote recently about AI’s growing power needs (you can read how it stacks up to prior crises as well as techniques being used to improve it here), and even more recently, the MIT Technology Review ran an excellent, in-depth profile on the importance of getting a handle on AI energy consumption now, because of the steep growth rate.
A Look at Existing AI Governance Frameworks
Before I get into some of the frameworks that are out there (particularly COBIT), I want to consider additional views on what AI governance should look like.
Observations from the Empire
Author Karen Hao, who’s written for The Atlantic, Wall Street Journal, and MIT Technology Review, has been heavily featured in the tech news outlets recently for her new book The Empire of AI.
Written from her experience embedded with OpenAI during its rise to power, it takes an often pessimistic view on the overall value of GenAI, the course that we’re currently on, and how benefits are being distributed.
Yet like many with far rosier views on AI and societal impact, she also advocates for the importance of AI governance.
On The Hard Fork podcast, she told hosts Kevin Roose and Casey Newton that current governance is often too focused on just one area. To be truly effective, it must:
- Consider the Entire Supply Chain: Focus on data, compute, models, and applications, with human oversight throughout.
- Curate the Datasets: With current lawsuits fighting over training sources, she urges that people should be able to opt in or out, and that the content should be moderated and far more visible.
- Curate Compute: With data centers being positioned all over the world, what are the longer-term costs going to look like for local populations, and how are burdens being shared?
Adaptive AI Regulation and Policy
Traditionally, IT governance looks to put in place fixed systems, requirements, or guidelines for things like risk management, oversight, and disclosure.
But with AI’s incredibly fast rate of change, breadth of use, and complexity, this approach can be extremely challenging.
Research from Stanford University’s Anka Reuel and Trond Arne Undheim urges instead to make use of an adaptive approach.
With an emphasis on flexibility, feedback loops, and responsiveness, and drawing from agile methodologies, their work advocates for a dynamic, broad approach to match AI’s own fast-moving nature.
This includes involving society at large in the process, with civil society working as a watchdog, advocate for inclusion, and builder of public awareness.
For companies, it means board-level responsibility, sharing frameworks in use, and implementing strong internal governance.
ISACA COBIT’s AI Governance 2025
I mentioned ISACA above. Their COBIT framework was originally created in the 1990s to help the financial audit community and later expanded to be more broadly applicable.
Today it’s the most commonly used regulatory framework for Sarbanes-Oxley (or SOX) compliance.
They’ve recently released an effective white paper on utilizing COBIT for AI, which gives companies a very tangible place to start. It aims to help meet legal obligations foremost, but also to be more agile and effective, and help build and maintain trust.
It is based on five processes:
- Evaluate, Direct, and Monitor (EDM): for aligning AI with enterprise goals and ethical standards.
- Align, Plan, and Organize (APO): to structure teams and define processes.
- Build, Acquire, and Implement (BAI): for secure, validated implementation.
- Deliver, Service, and Support (DSS): to maintain systems and monitor performance.
- Monitor, Evaluate, and Assess (MEA): to ensure performance, compliance, and institute feedback loops.
It encourages proactive documentation, for building more traceability into your systems. This can help when trying to track down why a model provided a given output, or where and how training data was approved, for example.
It also seeks to address challenges like a lack of central ownership, communication gaps between the technical and non-technical, and the rapid evolution of AI in cybercrime.
As with all models, executive support is essential, as is cross-functional collaboration (as via IT, HR, compliance teams, data science, security) and continuous training (not only for AI literacy and updates but also for company policy).
With a capacity to take and use what you need, frameworks like COBIT can really aid with strategic alignment and mitigating risks throughout both AI use cases, and the various life cycles.
They can also reinforce the kind of monitoring and metrics that ensure ROI and consistent, quality implementations.
Deloitte’s AI Compliance for Companies
As covered in the Harvard Law School Forum on Corporate Governance, Deloitte has its own model geared for the enterprise.
With an end-to-end view, it specifies board and management level responsibilities, focusing on strategy, risk, performance, talent, and culture and integrity.
It involves being specific about the organization’s AI strategy and tolerance for risk, then builds structures, including roles and accountability, lifecycle controls, and ongoing performance monitoring.
As with COBIT, their approach isn’t a sidebar but instead is fully integrated into the organizational infrastructure.
Additional Models Abound
COBIT and Deloitte’s are just two of the frameworks out there. There are numerous public and global governance solutions being made available as a starting place for organizations.
This includes via the EU and the UN, as well as through NIST, from the US government.
The NIST AI Risk Management Framework (AI RMF) is focused on voluntary guidelines for responsible development but also can aid organizations in identifying their own exposures to risk.
It stresses the importance of continuous monitoring, human oversight, and transparency in AI use throughout the organization, and aims for more explainability, fairness, security, and focus on the impact on society at large.
How PTP Can Help Recognize the Positive Impact of AI Governance on Business
At PTP, we’re utilizing AI heavily to reinforce and scale what we already do effectively. The challenge has been to do so safely, and transparently, and to this end good governance can also be good business.
With more than 27 years of experience providing top tech talent, we can help you find the quality people you need for implementation, as well as ongoing oversight and flexibility.
We’re also ready to help organizations that could use our assistance getting a handle on AI use across the board, whether it’s practical implementation or governance.
Conclusion: Building Responsible AI Frameworks
I’ve heard many say that the most urgent question of this generation may well be how to effectively govern AI.
Because while the technology is changing fast and has truly incredibly potential, it’s likely the way we use it that will make the biggest difference in the world at large.
What that student said to Vilas Dhar is right. I believe it’s also a challenge.
It’s up to us.
Together, let’s see what we can do with the challenge.
References
Leveraging COBIT for Effective AI System Governance, and COBIT: A Practical Guide for AI Governance, ISACA
Strategic Governance of AI: A Roadmap for the Future, Harvard Law School Forum on Corporate Governance
AI is here to stay. How Do We Govern It?, Carnegie Mellon University News
AI at a crossroads: building trust as the path to scale, Deloitte
AI’s energy impact is still small—but how we handle it is huge, MIT Technology Review
Generative AI Needs Adaptive Governance, arXiv:2406.04554