Career-Bot 5000

“Many young people strangely boast of being ‘motivated’; they re-request apprenticeships and permanent training. It’s up to them to discover what they’re being made to serve, just as their elders discovered, not without difficulty, the telos of the disciplines.”

— Gilles Deleuze, “Postscript on the Societies of Control”

Claude for Education as Cybernetic Control #

The partnership between Dartmouth College, Anthropic, and Amazon announced last month will make the Generative AI system “Claude for Education” available to every Dartmouth student. One aspect of this partnership really caught my eye: the use of GenAI for student career planning. Claude for Education promises personalized coaching that will help students “plan for life after graduation” including assistance with evaluating job offers, articulating “strengths, interests, and values,” refining résumés and cover letters, connecting with “applied learning events designed in collaboration with employers,” and accessing “learning and networking opportunities hosted by Anthropic.”

The framing of the announcement presents this technology as a neutral instrument that matches student interests with career opportunities and disinterestedly coaches young people through tremendously consequential decisions. However, large language models are not objective conduits of information; they are political-technical artifacts shaped by their training corpora, reward modeling, and the professional deformations of their developers. To adopt such a system as infrastructure for career advising is not simply to streamline an existing process, but to delegate to that system the very definition of what constitutes a viable career, a worthy aspiration, a successful graduate.

Career decisions are exercises in imagination—in envisioning what kinds of lives are possible, what forms of success are worth pursuing, what versions of oneself might flourish in the world. This imaginative work is difficult and uncertain and is limited by the categories and possibilities that are knowable and accessible to us. Students face genuine anxiety about making “wrong” choices, about paths not taken, about futures foreclosed. An AI system that offers personalized guidance promises relief from this anxiety: it appears to transform an overwhelming question into a manageable problem with data-driven answers. But this relief may be illusory. What the system actually does is shape the terms of the imagination itself—defining which careers appear viable, which aspirations seem realistic, which metrics of success feel authoritative. It doesn’t eliminate uncertainty; it renders certain possibilities invisible while making others seem inevitable. The student who feels guided may simply be constrained within a narrower horizon of thinkable futures—one determined not by their own wrestling with possibility or their own directed research, but by the training data, optimization functions, and commercial partnerships embedded in the system.

These aren’t paranoid hypotheticals. They’re basic questions about how influence works. If an AI system becomes the infrastructure through which students imagine their futures, then whoever shapes that system shapes those futures—quietly, at scale, and largely beyond scrutiny. The question isn’t whether Claude for Education will steer students toward certain paths, but whether anyone will notice when it does. The central questions, then, are these: Through what mechanisms does such a system establish the horizon of professional possibility? What content was the LLM trained on, and what assumptions about career success, intellectual value, and professional development are embedded within it? What are the financial arrangements between Anthropic and participating employers? And what does it mean for an institution ostensibly committed to liberal education to incorporate such a system into the formative experiences of its students? In what follows, I examine Claude for Education not as a neutral tool for career advancement, but as a cybernetic control system capable of subtly steering student aspirations.

The Theoretical Framework #

To understand this risk, we need a vocabulary for forms of power that operate without obvious coercion. Gilles Deleuze’s 1990 essay “Postscript on the Societies of Control” is a prescient vision of our current era which is characterized by big data, predictive analytics, and platform capitalism. Writing at a time when the internet was in its infancy, Deleuze foresaw how digital systems would enable new forms of governance—not through the direct surveillance and confinement he associated with earlier “disciplinary societies,” but through continuous data collection, algorithmic processing, and dynamic adaptation to individual behavior.1

Deleuze contrasted contemporary forms of power with prior “spaces of enclosure”—such as prisons, schools, or factories—where power operated by confining people within institutional walls and directly molding their behavior. Control societies work differently, operating through distributed gateways that manage access and channel movement. People pass through these gateways without friction, often without registering them as points of influence or regulation. Rather than disciplining behavior through direct corporeal control, the power deployed in such spaces “modulates” individual action at a finely detailed level through continuously adapting feedback loops, choice architectures, and predictive analytics: data-driven nudging strategies that, as James Brusseau writes, “regulate through incentives” (2).

In their impressive new work The Ordinal Society, Kieran Healy and Marion Fourcade further develop and update Deleuze’s insights, arguing that social control today “is accomplished cybernetically rather than mechanically” (27). Every interaction—clicks, purchases, health metrics, search histories, social networks, geolocation data—is transformed by scoring engines into scalar or vector ranks (credit scores, platform reputations, recommendation lists). These ranks become the currency of access: they determine eligibility for loans, jobs, housing, etc. Because the rankings are continuously updated, the system exercises power through the very choices it appears to offer. While users have the experience of autonomous decision-making, the algorithmic architecture pre-structures the set of options along a fluid hierarchy. Essentially, power is exerted through freedom—with what Nikolas Rose calls “the mechanisms of the market and the imperatives of self-realization” (87). In this way, the ordinal logic of scores and rankings constitutes the computational substrate of modern control societies, turning disciplinary enclosures into pervasive, data-driven flows that govern behavior in real time.
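To make this mechanism concrete, here is a minimal, purely hypothetical Python sketch of the kind of cybernetic loop Healy and Fourcade describe: behavioral signals are folded into a continuously updated score, and the score, rather than any explicit command, determines which options remain visible. Every signal, weight, and threshold below is invented for illustration.

```python
# Illustrative only: a toy "scoring engine" in the spirit of the ordinal society.
# All signals, weights, and thresholds are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class OrdinalProfile:
    score: float = 0.5                      # starts mid-range; continuously updated, never "finished"
    history: list = field(default_factory=list)

    def observe(self, signal: str, weight: float) -> None:
        """Fold one behavioral signal (a click, a query, a purchase) into the running score."""
        self.history.append(signal)
        # Exponential moving average: recent behavior dominates, but the past never fully fades.
        self.score = 0.9 * self.score + 0.1 * weight

def eligible_options(profile: OrdinalProfile, catalog: dict) -> list:
    """Access is granted not by command but by rank: options whose threshold exceeds your score simply disappear."""
    return [name for name, threshold in catalog.items() if profile.score >= threshold]

catalog = {"basic plan": 0.2, "premium loan": 0.6, "priority referral": 0.8}  # hypothetical gateways

user = OrdinalProfile()
for signal, weight in [("searched consulting salaries", 0.7), ("missed a payment", 0.1)]:
    user.observe(signal, weight)
    print(f"after {signal!r}: score={user.score:.2f}, options={eligible_options(user, catalog)}")
```

The arithmetic is trivial; the structure is the point. Nothing is forbidden, yet what the user can reach at any moment is a function of a rank they did not choose and may never see.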

Finally, Shoshana Zuboff’s The Age of Surveillance Capitalism provides the economic foundation for understanding how these control mechanisms have become embedded in the business models of digital platforms. She describes “surveillance capitalism” as an economic system that claims human experience as free raw material for translation into behavioral data that is then processed, packaged, and sold as predictions of future behavior. This is not simply data collection for service improvement; it is what Zuboff calls “behavioral surplus” extracted for the purpose of prediction products sold to third parties who have interests in shaping or anticipating our actions. Crucially, Zuboff argues that this process operates through what she calls “instrumentarian power”: the capability to shape behavior at scale through personalized micro-targeting and predictive modification, all while maintaining the appearance of user autonomy.

A system like Claude for Education operates at the intersection of Deleuze’s gateways, Healy and Fourcade’s ordinal rankings, and Zuboff’s surveillance capitalism. It is positioned not as a disciplinary authority that commands students toward certain careers, but as a personalized guide that modulates their aspirations through continuous feedback and curated opportunity. It generates precisely the kind of behavioral data—about professional anxieties, intellectual uncertainties, career ambitions—that constitutes valuable “surplus” data for parties interested in predicting and shaping labor-market behavior. And it does so through an interface that students experience as a helpful and empowering tool for decision-making and planning. The question is not whether Claude will explicitly forbid a user’s choice, but whether its architecture creates conditions for subtle, accreting influence that operates beneath the user’s awareness. The student remains free to choose—indeed, believes they are choosing freely—while operating within an architecture that makes certain paths feel natural and others seem risky or impractical. This framework illuminates what a system like Claude for Education might become: not a prison, but a gateway; not a prohibition, but a structure of incentives.

How Modulation Might Work #

Will Claude present graduate study in philosophy as a legitimate intellectual calling, or foreground the field’s dismal job-market statistics? Will it understand that the value of humanistic study cannot be captured in employment outcomes, or will it render such inquiry illegible as a rational choice? When a student asks whether they should join a union—a question with clear stakes for employers—will the model absorb the anti-union assumptions prevalent in corporate discourse and present collective action as risky or unnecessary?

This risk is real. Large language models are trained on corpora that retain the biases of the individuals who created them. Although students know they are using Claude, the mechanisms by which the system weights variables, the influence of training data composition, and any commercial interests or considerations at play remain opaque. A system trained predominantly on business publications, corporate career advice, and mainstream professional discourse will absorb their embedded assumptions: certain careers are “practical,” others “risky”; some industries represent “the future,” others are “declining.” These are not neutral descriptions but ideological positions laundered into common sense through promotion, repetition, and institutional capture. These viewpoints might register in recommendations without anyone deliberately biasing the results.

To be clear, I am not claiming that any such commercial influence has been documented. The partnership announcement does not detail how training data is selected or whether corporate partners receive preferential treatment. What I am describing is a structural risk inherent in the architecture. If models are trained on data that over-represents certain industries, or if commercial partnerships create incentives to foreground particular employers, their outputs will reflect those biases whether anyone intends them to or not.
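A deliberately crude sketch shows why intent is beside the point. It resembles nothing about how a production language model actually generates text, but it isolates the structural issue: if suggestions are drawn roughly in proportion to how often each path appears in the training material, over-represented industries dominate the output without anyone putting a thumb on the scale. All of the corpus counts below are invented.

```python
# Deliberately crude illustration: career suggestions sampled in proportion to how often
# each path appears in a hypothetical training corpus. Real LLMs are far more complex,
# but the structural point -- corpus composition shapes output frequency -- carries over.

import random
from collections import Counter

corpus_mentions = {          # hypothetical document counts; business and tech media over-represented
    "management consulting": 5200,
    "software engineering": 4700,
    "finance": 3900,
    "public-school teaching": 600,
    "union organizing": 90,
    "philosophy PhD": 40,
}

careers = list(corpus_mentions)
weights = list(corpus_mentions.values())

random.seed(0)
suggestions = random.choices(careers, weights=weights, k=1000)  # 1,000 simulated recommendations

for career, count in Counter(suggestions).most_common():
    print(f"{career:25s} {count / 10:.1f}% of suggestions")
```

No line of the toy code is “biased” on purpose; the skew is entirely a property of what the corpus happened to contain, which is precisely the structural risk at issue.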

Data Governance and Privacy #

The partnership places significant data in private hands. Student queries will be processed on Amazon’s AWS cloud and handled by Anthropic’s proprietary systems. The precise terms governing data retention, access, and potential sharing with other entities remain unclear from the public announcement.

This uncertainty is itself part of the problem. Students and faculty cannot evaluate risks they cannot see. If logs of student interactions with these products are aggregated or de-anonymized, they constitute a detailed record of each student’s professional anxieties, ambitions, and decision-making processes—valuable information for understanding (and potentially influencing) labor-market behavior.

Consider what such logs might reveal about individual students. Every draft of a cover letter, every revision of a personal statement, every question about salary negotiation or workplace conflict becomes part of a permanent record. The student who repeatedly asks Claude about managing difficult supervisors, or who workshops multiple versions of an essay explaining a gap in their résumé, or who discloses a health concern, or who seeks advice about workplace discrimination, creates a behavioral profile far more granular than any traditional academic transcript.
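To see how little analysis it would take to assemble such a profile, consider the following hypothetical sketch of naive keyword tagging over retained logs. Every log line, category, and keyword is invented; real analysis would be far more sophisticated, which only strengthens the point.

```python
# Hypothetical sketch: naive keyword tagging over imagined chat logs.
# Every log entry and category label here is invented for illustration.

from collections import Counter

retained_logs = [
    "How do I explain a two-year gap in my resume?",
    "My supervisor keeps criticizing me in front of others, what should I do?",
    "Can I still negotiate salary if I already said yes?",
    "Rewrite my cover letter to sound more confident.",
    "Is it risky to mention my anxiety diagnosis in an interview?",
]

categories = {
    "employment gap": ["gap"],
    "workplace conflict": ["supervisor", "criticizing"],
    "negotiation hesitancy": ["negotiate", "salary"],
    "confidence concerns": ["confident", "anxiety"],
    "health disclosure": ["diagnosis"],
}

profile = Counter()
for line in retained_logs:
    lowered = line.lower()
    for label, keywords in categories.items():
        if any(keyword in lowered for keyword in keywords):
            profile[label] += 1

print(dict(profile))  # a crude but durable behavioral portrait no transcript would contain
```

Even this trivial tagging yields a portrait of anxieties and vulnerabilities that no transcript, reference letter, or interview answer would ever surface.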

Imagine these logs becoming accessible to future employers—whether through direct partnerships, data-sharing agreements, future corporate acquisition (as in the case of Canvas LMS), hacking, or the kind of mission creep that often accompanies initially well-intentioned data collection. The standard interview question “What are your greatest strengths and weaknesses?” is rendered utterly obsolete when an employer can perform deep analysis of the patterns in a candidate’s interactions with AI tools over four years of college. The logs provide evidence that no candidate would voluntarily disclose and no traditional reference could supply.

If such logs are not retained, or if strict data-governance protections are in place that explicitly prevent commercial use and third-party access, these risks diminish considerably. The announcement does not clarify which scenario applies. Without transparency about retention policies, access controls, and the specific contractual terms between Dartmouth, Anthropic, Amazon, Instructure (Canvas), and other corporate partners, students are being asked to trust that their most vulnerable moments of intellectual struggle and professional uncertainty will not become assets in someone else’s deliberative process.

Inverting the Liberal Arts #

There are many appropriate educational uses for large language models. But is career planning one of them? Dartmouth College’s identity is profoundly rooted in the liberal arts, so any tension between algorithmic career guidance and the ethos of liberal arts education deserves our attention. A liberal arts education is explicitly designed to expose students to a variety of disciplinary frameworks, diverse ways of thinking, and intellectual traditions they might never have encountered otherwise: the philosophy major takes a biology course and discovers an interest in bioethics; the economics student stumbles into an art history seminar and begins thinking differently about the meaning of value. Encounters and discoveries such as these are not incidental to liberal education; they are its entire purpose.

Algorithmic recommendation systems operate according to a fundamentally different logic. They observe patterns in user behavior and optimize for engagement by offering more of what users have already shown an interest in. The student who asks about consulting receives more information about consulting; the student who expresses interest in technology startups receives a steady stream of technology startup opportunities. The system learns and reinforces existing preferences rather than challenging or expanding them, creating what Eli Pariser calls “a static, ever-narrowing version of yourself—an endless you-loop.”2 Anyone with a child who has contaminated the family Netflix recommendation algorithm with Thomas the Train episodes knows the dangers of which I speak.
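A toy simulation makes the narrowing dynamic visible. Assume, purely for illustration, a recommender that always surfaces the topic currently most likely to be clicked and boosts that topic’s weight after every click; within a few dozen rounds the other topics effectively vanish.

```python
# Toy "you-loop": an engagement-optimizing recommender reinforcing whatever it already showed.
# All topics and parameters are invented for illustration.

import random

random.seed(1)
interests = {"consulting": 1.0, "tech startups": 1.0, "art history": 1.0, "bioethics": 1.0}

def recommend(weights: dict) -> str:
    """Show the option the user is currently most likely to engage with."""
    return max(weights, key=weights.get)

for _ in range(30):
    shown = recommend(interests)
    clicked = random.random() < 0.8      # users mostly click what they are shown
    if clicked:
        interests[shown] *= 1.3          # engagement feeds back into future recommendations

total = sum(interests.values())
shares = {topic: round(weight / total, 3) for topic, weight in interests.items()}
print(shares)  # one topic approaches 1.0: the "endless you-loop" in miniature
```

Whichever topic the system happens to surface first ends up dominating nearly everything the user subsequently sees; the loop rewards prior engagement, not considered preference.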

An AI career-planning tool such as Claude may thus create a structural conflict within the project of liberal education. Dartmouth invests enormous resources in general education coursework, distribution requirements, and interdisciplinary curricula precisely because students benefit from exposure to ideas and possibilities they would not have chosen on their own (simply because they are unaware of them). The assumption underlying these requirements is that young people do not yet know what they might find meaningful or what unknown aptitudes lie hidden within them. The liberal arts institution’s purpose is to engineer encounters with unfamiliar ways of thinking and knowing and doing—a project dedicated to discovery and human becoming.

An AI career adviser pushes the student in the opposite direction—toward confirmation of existing interests rather than their disruption, a narrowing rather than an augmentation of the self. While liberal education values broad exploration and serendipitous discovery, the AI tool provides a frictionless wormhole to wherever the student was already heading. In the process I fear we may lose our ability to “want what we want to want,” to borrow Harry Frankfurt’s formulation; or, to adapt a phrase from James Brusseau, we may lose the ability to want differently than we want right now.3

Conclusion #

I am not arguing that Claude for Education will inevitably become an instrument of corporate control. I am arguing that its architecture creates conditions under which such control becomes possible, even likely, and that it is of a piece with the broader direction of our “ordinal society” and the educational-industrial complex.

The Claude for Education technology may offer genuine benefits: convenience, expanded access to information, scalable support for students navigating an uncertain labor market. But those benefits arrive bundled with an apparatus whose influence, however subtle, may be both tremendous and invisible. Recognizing this duality is required for any serious evaluation of what Dartmouth is gaining and what it may be giving up.

The safeguards that would address these concerns are not mysterious: transparency about training data and commercial relationships, clear data-governance policies, the right to be forgotten, and preservation of diverse human advisory relationships alongside algorithmic tools. Whether such safeguards will accompany Claude for Education’s deployment at Dartmouth remains to be seen. The questions, at least, deserve answers before the system becomes infrastructure—before the gateway becomes so familiar that we forget it is there.


Notes #


  1. Deleuze uses the term “dividual” (7) to represent the modern subject in a society of control whose existence has been broken down into data points, digital residues, and metrics that may be endlessly segmented, sifted, parsed, (re)combined. ↩︎

  2. Eli Pariser describes this concern succinctly in his 2011 work The Filter Bubble: “Ultimately, the filter bubble can affect your ability to choose how you want to live. . . . When you enter a filter bubble, you’re letting the companies that construct it choose which options you’re aware of. You may think you’re the captain of your own destiny, but personalization can lead you down a road to a kind of informational determinism in which what you’ve clicked on in the past determines what you see next—a Web history you’re doomed to repeat. You can get stuck in a static, ever-narrowing version of yourself—an endless you-loop” (16). ↩︎

  3. In a lecture concerning personal freedom within the context of widespread algorithmic recommendation, Brusseau describes what he calls the “right to discontinuity”—the “ability to sever ties with one’s digital past and reinvent one’s identity.” This right would “protect individuals from being permanently locked into algorithmically generated self-pictures.” Brusseau credits this idea to Carlo Casonato, whose broader project describes the need for new human rights stemming from the unprecedented “statistical-probabilistic approach with which AI operates” (238). He describes how “the profiling to which all of us are subjected is based on what we could call our ‘historical self’, consisting of preferences, orientations, and decisions as we have expressed them in the past. An example of this is when platforms on which we book vacations, order meals, or choose movies suggest options that correspond to what we have booked, ordered, and chosen up to that point. The risk, therefore, is to become trapped in a past that is impervious to potential new interests, curiosities, and changes.” ↩︎