If AI Isn’t Your “Major,” What Will It Mean to “Minor” Well?


I spend a fair amount of my time these days working with businesses for which AI is the defining force: powering their strategy, disrupting it, or both. These companies are “majoring” in AI. Some of them came into “freshman year” with nothing else on their minds; others have suddenly switched majors as “juniors.” Whatever challenges these organizations might have, they have zero risk of insufficient focus.

Most organizations, however, have some other major. This piece focuses on one rather quirky organizational type that I happen to spend a good deal of time with: philanthropic foundations. No category of institution is less exposed to existential threat than an endowed foundation. An endowed foundation can persist in approximately its current form for decades while the world it was built to act upon changes beyond recognition. That durability makes visionary, risk-taking philanthropy possible. And that durability makes drift a central risk: a widening gap between the vision to which a foundation has committed and its powers to deliver on that vision. Businesses face survival risk; foundations face the risk of quiet futility.

Mike Kubzansky, the outgoing CEO of Omidyar Network, threw down a gauntlet in his recent essay in Inside Philanthropy:

AI will reshape nearly every issue philanthropy cares about: jobs, democracy, economic mobility and inequality, mental health, education, information integrity, and more. Yet as a sector, philanthropy has been slow to engage at the speed and scale this moment demands…. If the sector continues to treat AI as adjacent rather than central, or as a set of discrete applications rather than a tidal force, it will lose its ability to shape societal outcomes at the moment when the course of AI is being charted and contested.

Toward the end of the essay, Mike writes:

If you are operating a philanthropy and you think your only engagement with AI needs to be in your internal practices or in “AI for XYZ” (e.g., better farming), I ask you to consider doing more…. If you “major” in something else, I would suggest that you need to “minor” in AI at a minimum.

In my experience working with Mike over the years, I’ve often underlined his remarks in my notebook. He has the kind of mind whose few sentences can be unpacked into a whole strategy. I feel that way about his frame of an “AI minor.”

As a way of exploring this frame, the body of this piece is an imagined letter to her executive team from a foundation President who has read Mike’s essay over the weekend. While I’ve used this epistolary form as a vehicle to imagine what a good minor looks like with depth and particularity, the way of thinking here generalizes widely beyond the quirky specifics of an endowed foundation.

The President’s Letter

I read Mike Kubzansky's piece in Inside Philanthropy over the weekend. I want to share where it has left me, because I think it clarifies what this historic moment requires us to do.

I experienced Mike's argument as a powerful critique. AI isn't our mission. We have defined our major and we're keeping it. But while we haven't been idle on AI, I'm realizing that all we've really done so far is take a couple of distribution courses. We need to define a serious minor.

Here is the curriculum I think a real minor requires of us: five courses. I am not asking you to agree with all the details of the curriculum today. I am asking us to commit, together, to do the work of figuring out what each course looks like for this foundation, with the discipline that committing to a minor entails.

I know many of us have real ethical reservations regarding the harms AI is already creating – to name several: impacts on the environment, national security, livelihoods, education, and public discourse – and the greater harms that we can envision in the future. I am not asking you to set those concerns aside, only to join me in the view that these questions are best wrestled with through deep, multi-dimensional engagement. This is a moment when we must make principled decisions about how to engage, and a moment in which it would be irresponsible simply to turn away.

Course #1: Using AI to sharpen our thinking, not just lighten our load

The easiest place to start with AI is the place where the stakes are lowest: summarizing the mountain of reading we get handed each week, drafting correspondence, automating the most routine of our supporting processes. There’s nothing wrong with opening pawn to Q4. It’s simply that there’s more to chess.

We won’t learn much about AI until we invite it in as a thought partner on the judgments that are uniquely ours: how to think about where in a system there’s leverage, how to assess impact, what most limits a grantee’s capacity, what might unlock accelerated development for a promising associate. I’m not suggesting that we outsource these judgments. I’m suggesting that part of the power of using AI is to make our own thinking more dialogic, more available to itself.

As I experiment in my own work, I am finding that AI is most useful when I am most explicit about what I am actually trying to do, most open about what I know and what I don’t, and most proactive in imagining the perspectives that might improve my thinking. I’m experiencing power both in having AI help distill the patterns underneath my own best thinking and in having it identify alternate frames that help me escape limiting assumptions.

Those uses are most powerful when they connect to our human dialogue and human deliberation. Part of how we’ll know we’re using AI well in the core of our work is if it deepens and enriches how we think with one another.

Bringing AI into our core forces us to be clear about the standard that defines and infuses our best work. I suspect that for all our expertise and all our commitment, we often go only three-quarters of the way toward establishing what we mean by excellence. The best performers in any field create feedback loops to review the moves they make and evaluate alternatives. We too should use AI to sharpen what we see, not just polish what we say.

Course #2: Partnering with grantees as co-learners

There are few greater sources of peril in philanthropy than the illusion that we can learn anything important by ourselves, in isolation from the fields we support.

One of the requirements for our minor should be a shared practicum, learning together with our grantees how AI can enable their work. The grantees who stand to benefit most from AI as leverage are the ones with the thinnest staff, who are often the least equipped to invest in learning. A tempting reflex would be to assume that someone has a ready-packaged answer for grantees and to buy capacity building off the shelf.

Far more powerful would be to assemble a small cohort of grantees, support each in pursuing one focal project that pairs AI with skilled human expertise on a real mission-critical workflow, and sit in the room as co-learners. The grantee remains the protagonist of their own capability development. We get a window into the lived process, the opportunity to double down on our support, and rich learning for our own application of AI.

We’re deeply committed to the principle of learning in partnership, with sensitivity to power differentials. AI gives us a chance to apply this principle on terrain where we and our grantees can mutually acknowledge the magnitude of our shared unknowns. The experience of figuring it out together that AI naturally invites here can spill over into other dimensions of our work.

Course #3: Seeing multiple futures

Peter Schwartz, one of the fathers of the discipline of scenario planning, argued that the value of scenarios lies not in predicting the future but in rehearsing it. Scenarios are stories about how the world might unfold. They build our capacity to recognize change as it arrives and avoid being captured by a picture of the future we expect.

AI demands the discipline of scenario thinking. The range of plausible futures over the next decade is enormous. To consider these futures well requires seeing the interplay of many forces: technology; business models and financial markets; the evolution of work; political currents and regulatory regimes; conflict, security and geopolitics. Different scenarios imply major differences regarding where philanthropic leverage lies.

If we aren’t explicit about scenarios, we risk betting all our chips on the default future our existing playbook implicitly assumes. We would never countenance that with our investment portfolio. We shouldn’t be any more casual with the grantmaking our investment portfolio exists to support.

I imagine our third course includes scenario work at two levels, with each level sharpening the other. We’d begin at the broadest level, building a small set of compact scenarios for how AI develops as a societal force over the next decade: perhaps four contrasting stories, each internally coherent and divergent from the others. These would be quick sketches, not finished paintings asking to be framed. Then we’d move to the program level, and ask how these scenarios connect to the most relevant futures specific to each of our core programmatic domains. What does the labor market look like for the populations our economic mobility work serves, under each of the broader scenarios? What happens to the information environment our democracy work depends on? What are the implications for education and for the institutions we have done so much in recent years to build? Each programmatic domain has its own AI story unfolding inside it.

The two levels feed each other: the macro scenarios lessen the risk that programs will neglect the forces beyond their domain that could reshape the logic of their field; program-level scenarios will both sharpen and complicate our broader view. It would be a minor miracle if any of our scenarios came to pass as written. What we learn in this course is how to sharpen our engagement with the future as it emerges.

Course #4: Living connection to the frontier

As the science fiction writer William Gibson wrote, “The future is already here. It’s just not very evenly distributed.” We need both to learn and to act at the frontier where the future is already being shaped.

A model I’d like us to think about is Climate Breakthrough. They identify extraordinary individual leaders pursuing high-leverage strategies at the frontier, make large multi-year unrestricted bets on them, and surround them with advisory and peer-learning infrastructure. Their edge is the ability to find people working on what could be the next-decade leverage points before those leverage points are obvious to the field.

What could be the equivalent for the fields in which we operate? Should we create our own equivalent to a portfolio of AI-native leaders at the frontier, a “mini Climate Breakthrough”? Or, more likely, should we join with others and fund an organization that has as its full DNA this kind of investment in individual breakthrough leaders? How can we draw on our participation in such an initiative to generate learnings that reach far beyond the direct impact of those investment dollars? What will enable us to learn from the how of these breakthrough leaders – the way they draw upon AI, the way they build new impact models enabled by new technical capabilities – and not just the what of their interventions?

Course #5: Endowment as a lever for influence

The future of AI is playing out in the capital markets, just as centrally as in the lab. Our default setting as an investor will give us a great deal of direct and indirect financial exposure to the future of AI – like every player in the market – and absolutely no influence. Couldn’t there be a better alternative?

The climate field offers a useful contrast. The move on fossil fuels that captured philanthropy's imagination was divestment – the endowment as a moral instrument, wielded by withdrawing it. It was a powerful move, fitted to its moment. What move does this moment call for?

The questions that matter most about AI – what data trains it; what governance and regulation shape its development; how it is deployed, who benefits, who bears the cost – are being answered, explicitly or by default, every quarter, by a small set of private companies. Small as our pool of capital is relative to the oceans of capital flowing into the sector, could there be a way that we as investors can exercise outsized voice? This is certainly not work for us to do alone, but could we join with other like-minded investors to bring an enlightened perspective into the financial debate? Could the right kind of investment vehicle – return-generating, but first and foremost purpose-driven – establish expectations for AI companies that larger actors, such as pension funds and the more progressive sovereign wealth funds, would then put their weight behind? Could such an investment vehicle fund critical elements of the foundational infrastructure that establishes alignment and trust? There is a short window for effective vehicles along these lines to come together and to attract the magnitude of funding they will need – or for the most important industry of our times to be shaped without their participation.

The questions we have to ask to be thoughtful investors in the domain of AI are the same questions we have to ask of ourselves in order to be thoughtful adopters, partners, and program funders. What AI development practices do we believe in? What use cases would we refuse? What constitutes responsible deployment? We will answer these questions either by default or by design. Asking these questions both in the stewardship of our endowment and in the pursuit of our programmatic mission will enable us to be the visionary pragmatists we always aspire to be. To hold both conversations together is a test of the depth of our commitment to the minor.

Tying the courses together

Mediocre students simply fulfill requirements; the best students regard their curriculum as a springboard into original work. These five courses represent foundational bodies of knowledge and experience. Their greatest significance will be in opening up opportunities that we can’t see from here but can grow our way into, if we stretch ourselves far enough, fast enough, in this still early and formative moment.

Our work now is to design how we undertake each course, on what timeline, with whom – and how we dynamically integrate our learnings across the courses that we take. We are all experts in the evaluation of other people’s ideas. When we sit together to discuss the provocation I have offered here, let’s begin not with an evaluation of this letter but on the more fertile ground of individual introspection. What are the possibilities that inspire you? Where do you feel the ground shifting under you? What will it take for each of you and for all of us to confront our choices ahead with the best and fullest attention? How will we both meet this future and help shape it?

Reflecting on the President’s Letter

What the President's letter proposes, beneath its specifics, is a stance: that the foundation should enter into conversation with AI on every front where the institution operates – internally, with grantees, in program strategy, at the frontier, and through the endowment. In writing the President’s letter, I have imagined running through all five courses a commitment to dialogue, feedback, and learning in action. Through the minor, AI deepens how the institution thinks with itself, with its partners, and with the field. And the minor narrows the gap I pointed toward in the opening of this piece, between what an institution intends and what it has the power to deliver. The President recognizes that the institution doesn’t have the capabilities to undertake all the work that matters, and that it can only build new capabilities so fast. But while the foundation likely can’t create the equivalent of a Climate Breakthrough or the investment vehicle envisioned in course #5 by itself, it can join with others and operate as a founding investor in visionary endeavors. The limits of the President’s knowledge and capabilities don’t hold her back from thinking with a clean sheet.

As I experience it, the President’s stance is rare. Three traps, in particular, prevent leaders from arriving there.

The first is letting tactical engagement substitute for rather than expand strategic thinking. The leader takes the steps they can see — approves an AI training, makes a few grants where AI is part of the theory of change, joins a peer learning group — and lets the accumulation of moves stand in for the conceptual work the President did here. Each step has a valid tactical logic. But the steps together don’t add up to a strategic logic sufficient to meet the moment.

The second is treating AI primarily as a service function, with an IT team that takes other functions as clients. That is certainly one part of how any organization should operate, but it cannot be the primary mode of engagement.

The third is treating the leadership role as primarily to set a process into motion. The leader convenes a working group, chartered to come back with recommendations. Working groups have their place. But a working group does not have the elevation to shape the minor envisioned here, let alone to carry it through to achieve transformational impact. No one but the leader at the top occupies the position where all the perspectives a serious minor requires originate and where they come together. This is work that cannot be undertaken at the top alone, nor can it move forward decisively without deep engagement at the top.

While the specific courses I’ve framed here are described through the lens of the endowed foundation, the fundamental strategic, organizational and leadership logic of the minor can be applied to any industry. The institutions that minor well will not be defined by their engagement with AI. They will simply become more imaginative, more rigorous versions of what they already were, with new leverage on the commitments they have always held.


Niko Canner
Founder

Niko Canner founded Incandescent in 2013. His work spans the firm’s three major areas of focus: serving as a thought partner to leaders of large enterprises on strategy, organization and innovation; advising founders on the development of their ventures; and partnering with foundations and non-profits engaged in systems change.



