The RAID Framework: How to Get Started with Deploying Responsible AI
Five steps: (1) Empower, (2) Design, (3) Resource, (4) Build and (5) Iterate
I am delighted to co-author this post with Steve Cadigan, a world-class talent advisor and future of work evangelist who was LinkedIn’s first Chief HR Officer. This collaboration emerged from a conversation about Maury’s posts on the need for specific, local actions to deliver responsible AI, and Steve’s experience as an educator fielding questions about how to get started with responsible AI.
There is so much noise in the market about AI—particularly generative AI. People and organizations want to get involved, but it’s often hard to know where to start. And particularly how to do so responsibly.
There is no widely-accepted answer, although there are plenty of frameworks, consultants[1] and others offering answers. In this post, we jump into the fray by proposing the Responsible AI Deployment (RAID) Framework. There is of course a risk that another AI framework will just add to the noise:
[XKCD cartoon][2]
But we believe there is room in the conversation for a simple, practical, open approach to getting started with AI, informed by concrete examples. Our approach is about how to begin responsibly—not by racing ahead, but by asking better questions together, and experimenting with the answers.
I had to start it somewhere, so it started there.
— Pulp, Common People
The RAID Framework
The RAID Framework has five steps. It’s aimed at organizations of all kinds, and can also be applied by individuals experimenting with AI. Before diving into the details, let’s look briefly at what we mean by the words in the framework’s name: ‘AI’, ‘responsible’ and ‘deployment’.
Artificial intelligence is evolving too fast to be precious about definitions. A definition we like is the one used by Wikipedia:
Artificial intelligence (AI) refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.
But there are many other definitions that are workable. Rather than focusing on precise definitions, we recommend thinking broadly and flexibly about AI and its capabilities. This has important practical implications. For example, depending on the project, it’s sometimes important to know what’s going on at the cutting edge of AI. But, probably more often, well-established AI techniques can do the job very well and more efficiently.
As to being ‘responsible’, there are dozens or hundreds of frameworks proposing detailed principles for responsible AI, ranging from Google to McKinsey to the EU AI Act to the OECD to many others. Again, we won’t focus on details, or attempt to synthesize these frameworks. In fact, detailed principles for responsible AI distract from the core point. For us, responsible AI is an attitude about shared humanity—using AI in a myriad of specific situations in a way that benefits individuals, society and the environment. (That’s not intended to advocate a slowdown of AI or a rejection of profit. We share the vision of many for AI to improve society, and to support many successful businesses.)
AI is not about rolling out another software upgrade, and we cannot treat it like previous technology implementations. We’re navigating a foundational shift in how we work, make decisions, and even define value—perhaps the most massive change management project of our lives. In such a fluid and transformational context, we will need to depend on the judgments of millions of individuals and organizations on how to get things right—as opposed to using the power of AI to detrimental ends. Ultimately, the best test for responsible AI may be the one that US Supreme Court Justice Potter Stewart articulated for obscenity in 1964: “I know it when I see it.”[3]
In contrast to this non-prescriptive approach to defining ‘AI’ and ‘responsible AI’, we want to be much more precise about ‘deployment’. The RAID Framework is mostly about disciplined thinking on how to get things done in practice. The five steps of the framework are:
Empower: Designate the people responsible for leading and governing your organization’s AI projects.
Design: Choose an approach for initial AI projects or programs, either top-down or bottom-up.
Resource: Ensure that the organization has the resources for the selected approach.
Build: Deliver the selected projects or programs.
Iterate: Learn from experience, and keep building in evolving ways.
1. Empower
To start with AI deployment, an organization must choose the people to lead it—and these people must be empowered by leadership to get things done.
Instead of choosing a single AI champion, we believe organizations are better served by building an AI council—a diverse group pulled from across functions, geographies, and levels of the organization (and even outside it). Why? Because this moment is too complex, too nuanced, and too consequential to place in the hands of one person or one perspective. A council can bring the kind of multi-dimensional thinking we desperately need right now: business logic, ethical reasoning, operational pragmatism, and human intuition.
This council’s job isn’t just to greenlight tools and supervise projects—it’s to help raise confidence across the organization. To build literacy, surface use cases, and ask the uncomfortable questions no vendor is incentivized to ask. The council can provide the connective tissue between exploration and governance, between innovation and trust. They can help set the tone: AI is not here to replace us—it’s here to expand what’s possible if we’re intentional.
A few examples from large companies illustrate possible models for an AI council (the examples are mostly from the financial services sector, which is a leader in risk management):
Global insurer Allianz has a Data Advisory Board, which it features as the key governing body of its combined data ethics and responsible AI program. The objective of the board “is to bundle expertise and support decision-making on Data Ethics and further Data-related topics within the Allianz Group.”
DBS Bank in Singapore applies a PURE (Purposeful, Unsurprising, Respectful, and Explainable) framework to “guide responsible AI and data use”, with a ‘PURE committee’ that “deliberates on ambiguous cases, ensuring aligned, responsible AI utilisation”[4].
Australian telco Telstra has a detailed approach to AI governance, including a Risk Council for AI & Data (RCAID), which reviews and must approve (or escalate to management) any deployment of a high-impact AI system (whether third-party or internally-developed). RCAID is “a cross-functional body with experts from across Telstra’s business, including its legal, data, cyber security, privacy, risk, digital inclusion and communications teams.”
Global bank HSBC has an AI Review Committee, “a collection of senior executives with a diverse set of expertise”, which reviews and approves novel AI use cases.
Allianz, DBS Bank and Telstra have all chosen to treat responsible AI as part of the larger problem of data governance. This makes obvious sense for data-based businesses serving millions of individuals.
A council is a flexible vehicle, and should be tailored to the needs of each organization, likely starting small (but with suitable ambition). A couple of key considerations:
An important first step in building a council is to discover where AI fluency exists in the organization’s ecosystem—employees, investors, advisors, mentors, local universities, etc.—and pull these resources closer to leverage their expertise.
The council must have clear organization and leadership, and a clear mandate. That could be an advisory mandate, as is the case at Microsoft and Allianz (probably unavoidably for such large organizations). Or to encourage rapid action in a smaller or more agile company, it may be preferable for the council to have authority to implement responsible AI programs directly.
2. Design
Every company should craft its own approach to AI deployment—because every company is different, and because AI is so diverse in its capabilities. However, there are two general models that we are seeing for successful AI deployment: bottom-up and top-down.
In a bottom-up approach, the organization provides AI tools to a large number of employees and other stakeholders, and lets a thousand flowers bloom.
In a top-down approach, the AI council or other governing body identifies potential AI use cases within the organization, and selects a limited number of promising ones to be implemented and supported. The focus on ‘a limited number’ is crucial: trying to manage too many projects from above is a recipe for confusion, with none of them being particularly successful.
It may seem contradictory to suggest ‘letting a thousand flowers bloom’ in a bottom-up approach, and focusing on a limited number of use cases in a top-down approach. However, the crucial differences are in the management approach (project management is dispersed throughout the organization in a bottom-up approach, but is led by an AI-focused organization in a top-down approach) and in resourcing (see the Resource step below).
Let’s look at examples of each type of approach. Please understand that we mean these examples as illustrations rather than recommendations. We encourage you to go forth and design the best ways to get started with responsible AI in your own organization.
Bottom-up examples
Walmart in mid-2023 provided a generative AI tool called ‘My Assistant’ to 50,000 staff, and then expanded it to another 25,000 staff in early 2024. My Assistant uses a combination of Microsoft AI models and Walmart proprietary data, supporting a wide variety of tasks. This has led to roll-outs of other specialized AI tools to large user bases within the company, including a tool for merchants and coding tools.
McKinsey has deployed the Lilli platform to its entire organization, providing access to McKinsey’s century-long knowledge base through an LLM-based interface. Lilli has seen broad adoption—at last report, 72% of the firm was using it, making more than 500,000 prompts per month—and has transformed the way that McKinsey consultants work.
For companies lacking the resources of Walmart or McKinsey, there are third-party tools that can offer similar capabilities. For example:
Numerous law firms are adopting legal-focused generative AI tools to transform knowledge-intensive legal work. Leading tools include CoCounsel from Thomson Reuters and Harvey from a start-up of the same name.
Any company can deploy the increasingly powerful AI tools that are available from leading software companies. These include Microsoft Copilot, Google Gemini and pure-play, broadly capable generative AI tools like ChatGPT, Claude and Perplexity.
Top-down examples
Because there are so many specific AI use cases across essentially every sector, we encourage you to look for inspiration in the lists that have been compiled by AI tool providers, such as these:
Google Cloud: 601 use cases
Microsoft: >700 ‘AI transformation stories’
Amazon Web Services: AI Use Case Explorer.
It is possible to combine top-down and bottom-up approaches in the same organization. For example:
Johnson & Johnson recently announced a pivot from a “thousand flowers” approach to AI to a focused set of use cases around generative AI.
Steve has built a concept for his clients that every project (bottom-up) should include a (top-down defined) ‘AI nutrition label’—like on a can of food. Such a label would measure the impact of AI on specific organizational domains, such as (1) jobs, (2) morale, (3) innovation and (4) culture and ethical values (e.g. creativity or inclusion). The right set of domains naturally differs between organizations, but over time we would expect consensus to emerge as AI deployment matures (a simple sketch of what such a label might look like follows below).
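To make the idea concrete, here is a minimal sketch of how such a label might be captured as structured data that a project team fills in and the AI council reviews. The domain names and the -2 to +2 scoring scale are our own assumptions for illustration, not an established standard.

```python
# Hypothetical sketch of an 'AI nutrition label' as structured data.
# The domain names and the -2..+2 scoring scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DomainImpact:
    score: int       # -2 (strongly negative) to +2 (strongly positive)
    rationale: str   # short justification, reviewed by the AI council

@dataclass
class AINutritionLabel:
    project: str
    jobs: DomainImpact
    morale: DomainImpact
    innovation: DomainImpact
    culture_and_ethics: DomainImpact

    def summary(self) -> str:
        domains = {
            "jobs": self.jobs,
            "morale": self.morale,
            "innovation": self.innovation,
            "culture & ethics": self.culture_and_ethics,
        }
        lines = [f"AI nutrition label: {self.project}"]
        for name, impact in domains.items():
            lines.append(f"  {name:>16}: {impact.score:+d}  ({impact.rationale})")
        return "\n".join(lines)

# Example: a hypothetical customer-support drafting assistant proposed by one team.
label = AINutritionLabel(
    project="Support reply drafting assistant",
    jobs=DomainImpact(0, "No roles removed; agents review every draft"),
    morale=DomainImpact(+1, "Less repetitive typing reported in the pilot"),
    innovation=DomainImpact(+1, "Frees time for proactive customer outreach"),
    culture_and_ethics=DomainImpact(-1, "Risk of deskilling new hires; mitigation planned"),
)
print(label.summary())
```

The particular fields and scale matter less than the discipline: the label should be lightweight enough for every project to complete, and consistent enough for the AI council to compare projects over time.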
For both bottom-up and top-down approaches, there are important constraints on AI program design:
limits of the capabilities of the technology (which are constantly expanding);
organizational capacity—when getting started with AI, it is important for organizations to keep focus, and not to bite off more than they can chew or run before they walk; and
management of risks (e.g. financial, privacy, legal), which may entail AI guardrails or techniques like AI sandboxes (see the sketch below).
The last constraint, risk management, should not be debilitating at the Design step. By choosing a realistic, limited and responsible starting approach for AI, companies can avoid taking significant risks, and can then focus more intensively on risk management in the Build and Iterate steps.
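To make the guardrail idea concrete, here is a minimal sketch, assuming a bottom-up pilot where prompts are sent to an external AI tool. The regular expressions and the blocked-topic list are simplistic placeholders we invented for illustration, not a production-grade filter.

```python
# Minimal sketch of a prompt guardrail for a bottom-up AI pilot:
# redact obvious personal data and block prompts that touch restricted topics
# before they are sent to an external AI tool.
# The patterns and topics below are illustrative placeholders only.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit-card-like numbers
BLOCKED_TOPICS = ("unreleased financials", "customer health records")

def apply_guardrails(prompt: str) -> str:
    """Return a redacted prompt, or raise if it should not be sent at all."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            raise ValueError(f"Prompt blocked: mentions restricted topic '{topic}'")
    prompt = EMAIL_PATTERN.sub("[REDACTED EMAIL]", prompt)
    prompt = CARD_PATTERN.sub("[REDACTED NUMBER]", prompt)
    return prompt

if __name__ == "__main__":
    safe = apply_guardrails("Summarize feedback from jane.doe@example.com about checkout")
    print(safe)  # the email address is redacted before the prompt leaves the organization
```

Real guardrails quickly become more sophisticated (dedicated PII-detection services, human review queues, sandboxed test environments), but even a check this simple makes the risk conversation concrete for the team running the pilot.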
Understanding what is a suitable starting approach requires some knowledge of AI, which is a key focus of the next step in the RAID Framework.
3. Resource
Once an organization has created and empowered a team to deliver responsible AI, and designed an initial plan for it to do so, the next step is to make sure that there are sufficient resources to deliver the plan.
Every AI program requires at least three types of resources: people, knowledge and tools.
People requirements depend heavily on the specifics of an AI program, with additional skills and experience needed as the program advances. And let’s also say the tough bit: although a key aspect of responsible AI is making the world better for people, it is inescapable that AI is affecting workforces in both positive and negative ways, changing or eliminating some jobs and creating new ones.
Knowledge usually means significant investment in AI literacy across the organization, plus deeper knowledge for those playing lead roles in AI deployment.
There are many companies delivering training on basic AI skills. For example, Maury has worked with start-ups Mindstone and AI Academy.
Deeper knowledge typically requires more extensive training, for example:
One of the best collections of AI courses (many with a practical focus) is Deeplearning.ai, which is led by AI pioneer and mentor Andrew Ng.
Coursera (also co-founded by Andrew Ng) has many excellent AI courses and instructors, with an extensive collection on technical aspects of machine learning.
Apart from formal training, an excellent way for those more deeply involved with AI to keep up with developments is to follow the writing and speaking of leading mentors (some others are listed by Steve in the responses to this post):
Andrew Ng (mentioned above) publishes weekly newsletters The Batch (for deeper technical analysis) and Data Points (briefer news items) through Deeplearning.ai.
Ethan Mollick is a Wharton professor focusing on practical AI use in work, education, and productivity, writing in the Substack One Useful Thing.
Azeem Azhar writes in Exponential View about AI and other key ‘exponential’ technologies, with a broad focus on trends, and is widely respected among executives and policymakers.
Connor Grennan, Chief AI Architect at the NYU Stern School of Business, focuses on how non-technical people can understand and ethically apply AI in their lives.
Tools are discussed to some extent above under the Design step. Leading multi-purpose generative AI / LLM model providers (e.g. ChatGPT, Gemini, Claude, DeepSeek, Meta) offer a wide variety of capabilities, and there are many purpose-specific tools, to name a few:
Coding, e.g. Github Copilot, Cursor, Windsurf
Image generation, e.g. Midjourney, Stability.ai
Voice generation, e.g. Eleven Labs
Specialized applications, e.g. Shopify Magic (e-commerce), Stripe (transaction optimization and fraud prevention), AutogenAI (bid writing), Suno (song generation).
People, knowledge and tools are likely to be sufficient for bottom-up AI programs, with the expectation that individual projects which emerge from the bottom-up ferment will seek additional resources as they proceed. These additional resources, which are usually crucial for top-down AI programs from the outset, include at least:
Data: Although generative AI has reduced data requirements for many AI applications, data remains the lifeblood of machine learning (as recently examined by LearnTech). Sophisticated applications often require proprietary data, and these data can be expensive to collect, clean and organize.
Compute: As AI applications become more sophisticated, they require substantial amounts of computing power, sometimes requiring specialized hardware like that produced by NVIDIA. Tools delivered as a service (like those listed above) almost always come with their own compute, but sophisticated proprietary AI applications are likely to require powerful compute resources.
All or most of these resources will cost money—which is another reason to keep an initial AI program focused (as discussed in the Design step). Your organization should look for ways to procure the resources efficiently[5].
4. Build
After ensuring an appropriate starting AI program design with sufficient resources, it’s time to go build. This is where the rubber hits the road. However, we won’t spend much time here on the Build step because, if your organization has taken the Empower, Design and Resource steps seriously and carefully, you will know what to do.
A key qualification though, as mentioned above in the Design step, is that the Build step is where you should start thinking carefully about risk management for AI projects, particularly for any projects that are launched externally to your organization.
Go for it!
5. Iterate
Unless your organization is very lucky, it is unlikely that it will get AI deployment 100% right the first time. And that’s OK.
The key thing about AI deployment is learning, much like in the Lean Startup methodology set out in Eric Ries’ excellent 2011 book:
The unit of progress for Lean Startups is validated learning—a rigorous method for demonstrating progress when one is embedded in the soil of extreme uncertainty.
We encourage you to get started, and to learn what works (building iteratively on it) and doesn’t work (learning lessons from failure). AI will transform business and society over the coming decades in hard-to-predict ways, and the way to be a leader in benefitting from that transformation is to dive in and learn.
Moving forward with deployment
We hope that the RAID Framework provides useful guidance and ideas for organizations and people who want to get started with using AI. And while our focus is on practical deployment, relevant to most starting AI projects, we hope that the framework will mostly be used to create responsible AI (in the general sense that we explain above)—that’s the major reason why ‘responsible’ is in the name.
We’d love to hear what you think, ideas for improvement, and how you use the framework. Please reach out in the comments below, or you can find either of us on LinkedIn.
[1] AI is having a large impact both on what work consulting firms do and on how they do it.
[2] This cartoon is from XKCD, licensed under the Creative Commons Attribution-NonCommercial 2.5 License.
[3] Jacobellis v. Ohio, 378 U.S. 184 (1964).
[4] DBS Bank’s AI journey and PURE framework are the subject of a Harvard Business School case study.
[5] Because resource requirements and providers are very diverse, we don’t attempt specific procurement advice here. A possible topic for a future post!