Career-Bot 5000
“It’s up to them to discover what they’re being made to serve, just as their elders discovered, not without difficulty, the telos of the disciplines.”
—Gilles Deleuze, “Postscript on the Societies of Control”
Claude for Education as Cybernetic Control #
The partnership between Dartmouth College, Anthropic, and Amazon AWS announced last month will make the Generative AI system “Claude for Education” available to every Dartmouth student. One aspect of this partnership really caught my eye: the use of GenAI for student career planning. Claude for Education promises personalized coaching that will help students “plan for life after graduation” including assistance evaluating job offers, articulating “strengths, interests, and values,” refining résumés and cover letters, connecting with “applied learning events designed in collaboration with employers,” and accessing “learning and networking opportunities hosted by Anthropic.”
While this sounds impressive, the underlying architecture of such a system raises questions that the announcement fails to address. Will student interactions with Claude be preserved and shared with other parties? What content was the model trained on, and what assumptions about career success, intellectual value, and professional development are embedded within it? If an AI system becomes the infrastructure for career planning, then what sorts of influence become possible, and how visible will this influence be to the student advisees who use it? In this brief article, I examine Claude for Education not as a neutral tool for career advancement, but as a cybernetic control system capable of mapping, predicting, and subtly steering student aspirations. Drawing on scholarship about algorithmic governance, choice architecture, and societies of control, I explore how such systems create structural risks that merit scrutiny precisely because they operate beneath the threshold of obvious coercion.
The Theoretical Framework #
To understand this risk, we need a vocabulary for forms of power that operate without obvious coercion. Gilles Deleuze's 1990 essay "Postscript on the Societies of Control" offers a prescient vision of our current era of big data, predictive analytics, and platform capitalism. Writing at a time when the internet was in its infancy, Deleuze foresaw how digital systems would enable new forms of governance—not through the direct surveillance and confinement he associated with earlier "disciplinary societies," but through continuous data collection, algorithmic processing, and dynamic adaptation to individual behavior.1
Deleuze contrasted contemporary forms of power with prior “spaces of enclosure”—such as prisons, schools, or factories—where power operated by confining people within institutional walls and directly molding their behavior. Control societies work differently, by operating through distributed gateways that manage access and channel movement. People pass through these gateways without friction, often without registering them as points of influence or regulation. Rather than directly disciplining behavior, the power deployed in such spaces “modulates” individual action at a finely detailed level through continuously adapting feedback loops, choice architectures, and predictive analytics: data-driven marketing and social-media nudging strategies that, as James Brusseau writes, “regulate through incentives” (2).
In their impressive new work The Ordinal Society, Marion Fourcade and Kieran Healy further develop and update Deleuze's insights, arguing that control today "is accomplished cybernetically rather than mechanically" (27). Every interaction—clicks, rides, purchases, health metrics, geolocation—is transformed by scoring engines into scalar or vectorial ranks (credit scores, platform reputations, recommendation lists). These ranks become the currency of access: they determine eligibility for loans, jobs, housing, and even civic services. Because the rankings are continuously updated, the system exercises power through the very choices it appears to offer. Users experience an illusion of autonomous decision-making, yet the algorithmic architecture pre-structures the set of options along a fluid hierarchy. Power is exerted through freedom—within what Nikolas Rose calls "the mechanisms of the market and the imperatives of self-realization" (87). In this way, the ordinal logic of scores and rankings constitutes the computational substrate of modern control societies, turning disciplinary enclosures into pervasive, data-driven flows that govern behavior in real time.
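To make the mechanism concrete, here is a deliberately minimal sketch in Python of the kind of scoring loop Fourcade and Healy describe: every logged event nudges a running rank, and a gateway consults that rank before granting access. The events, weights, and threshold below are invented for illustration and describe no actual platform.

```python
# A minimal, invented sketch of an "ordinal" scoring loop: behavioral events
# continuously update a rank, and a gateway checks the rank before granting
# access. No real platform's logic or weights are represented here.

from dataclasses import dataclass, field

@dataclass
class Dividual:
    """A person reduced to a running score built from behavioral traces."""
    score: float = 0.5                            # neutral starting rank in [0, 1]
    history: list = field(default_factory=list)

# Hypothetical weights: which behaviors raise or lower the rank.
EVENT_WEIGHTS = {
    "on_time_payment": +0.05,
    "late_payment": -0.10,
    "five_star_review": +0.02,
    "disputed_charge": -0.08,
}

def update(person: Dividual, event: str) -> None:
    """Modulate the score continuously; there is no final verdict, only drift."""
    person.history.append(event)
    person.score = min(1.0, max(0.0, person.score + EVENT_WEIGHTS.get(event, 0.0)))

def gateway(person: Dividual, threshold: float) -> bool:
    """Access (a loan, an apartment, a premium job board) hinges on the current rank."""
    return person.score >= threshold

student = Dividual()
for event in ["on_time_payment", "five_star_review", "late_payment"]:
    update(student, event)
print(round(student.score, 2), gateway(student, threshold=0.55))   # 0.47 False
```

The point of the toy is not the arithmetic but the temporality: the rank never settles, and access is always provisional on the most recent update.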
Finally, Shoshana Zuboff's The Age of Surveillance Capitalism provides the economic foundation for understanding how these control mechanisms have become embedded in the business models of digital platforms. She describes "surveillance capitalism" as an economic system that claims human experience as free raw material for translation into behavioral data, which is then processed, packaged, and sold as predictions of future behavior. This is not simply data collection for service improvement; it is what Zuboff calls "behavioral surplus," extracted to manufacture prediction products sold to third parties with an interest in shaping or anticipating our actions. Crucially, Zuboff argues that this process operates through what she calls "instrumentarian power": the capability to shape behavior at scale through personalized micro-targeting and predictive modification, all while maintaining the appearance of user autonomy.
A system like Claude for Education operates at the intersection of Deleuze's gateways, Fourcade and Healy's ordinal rankings, and Zuboff's surveillance capitalism. It is positioned not as a disciplinary authority that commands students toward certain careers, but as a personalized guide that modulates their aspirations through continuous interaction. It generates precisely the kind of behavioral data—about professional anxieties, intellectual uncertainties, career ambitions—that constitutes valuable "surplus" for parties interested in predicting and shaping labor-market behavior. And it does so through an interface that students experience as helpful, even empowering, rather than coercive. The question is not whether Claude will explicitly forbid certain choices, but whether its architecture creates conditions for subtle, accumulating influence that operates beneath students' awareness and beyond their contestation. The student remains free to choose—indeed, believes they are choosing freely—while operating within an architecture that makes certain paths feel natural and others seem risky or impractical. This framework illuminates what a system like Claude for Education might become: not a prison, but a gateway; not a prohibition, but an incentive structure. An environment of continuous modulation that shapes aspirations while preserving the experience—perhaps increasingly the illusion—of autonomous choice.
How Modulation Might Work #
Consider the questions a student might bring to Claude for Education: “Which internship should I apply for?” “Should I join the union?” “Is graduate school in the humanities worth it?” Each query receives a response—helpful, articulate, confident. But that response reflects the data on which the model was trained, the optimization targets that shaped its design, and potentially the commercial relationships that influence it.
A system trained predominantly on business publications, corporate career advice, and mainstream professional discourse will absorb their embedded assumptions: certain careers are “practical,” others “risky”; some industries represent “the future,” others are “declining.” These are not neutral descriptions but ideological positions laundered into common sense through promotion and repetition. If Anthropic’s partnership with Amazon and Dartmouth creates pressures—explicit or implicit—to guide graduates toward certain economic sectors or actors, those pressures might register in recommendations without anyone deliberately biasing the output.
Will Claude present graduate study in philosophy as a legitimate intellectual calling, or foreground dismal job-market statistics? Will it understand that humanistic study's value cannot be captured in employment outcomes, or will it optimize for metrics that render such study illegible as a rational choice? When a student asks about unionization—a question with clear stakes for employers—will the model absorb anti-union assumptions prevalent in corporate discourse and present collective action as risky or unnecessary?
To be clear, I am not claiming that such commercial influence has been documented, or that Anthropic intends it. The partnership announcement does not detail how training data is selected or whether corporate partners receive preferential treatment. What I am describing is a structural risk inherent in the architecture: if models are trained on data that over-represents certain industries, or if commercial partnerships create incentives to foreground particular employers, recommendations will reflect those biases—whether anyone intends them to or not.
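The structural point can be illustrated with a deliberately crude sketch. A large language model is vastly more sophisticated than a frequency count, but the mechanism of inherited framing is the same: a system that learns from a corpus tends to reproduce that corpus's judgments as advice. Everything in the toy corpus below is invented.

```python
# Invented toy corpus: snippets of career "wisdom" a system might learn from.
# A real model does far more than count labels, but the inherited skew works
# the same way: frequent framings become default framings.

from collections import Counter

corpus = [
    ("consulting", "practical"), ("consulting", "practical"),
    ("consulting", "prestigious"), ("consulting", "risky"),
    ("philosophy phd", "risky"), ("philosophy phd", "risky"),
    ("philosophy phd", "risky"), ("philosophy phd", "rewarding"),
]

def dominant_framing(path: str) -> str:
    """Return whichever framing of a career path appears most often in the corpus."""
    counts = Counter(label for p, label in corpus if p == path)
    return counts.most_common(1)[0][0]

for path in ("consulting", "philosophy phd"):
    print(f"{path}: most often framed as '{dominant_framing(path)}'")

# The output mirrors the corpus, not the student. Nothing was deliberately
# biased; the skew lives in what was written down often enough to be learned.
```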
This risk is not hypothetical. Large language models learn from their training corpora, and those corpora retain the biases of the individuals who created them. Although students know they are using Claude, the mechanisms by which the system weights variables, the influence of training data composition, and any commercial considerations at play remain opaque. When career guidance is mediated through a black box, the student accepts a polished answer without access to the reasoning that produced it—behavior that is the exact opposite of what a college education should instill. The question is not whether Claude is more or less biased than human advisers. The question is what kind of bias it introduces and whether that bias is legible to the students subject to it. The shift from distributed human judgment to centralized algorithmic recommendation changes the structure of influence even if it does not obviously increase or decrease its magnitude.2
Data Governance and Privacy #
The partnership places significant infrastructure in private hands. Student queries will be processed on Amazon’s AWS cloud and handled by Anthropic’s proprietary systems. The precise terms governing data retention, access, and potential sharing with other entities remain unclear from the public announcement.
This uncertainty is itself part of the problem. Students and faculty cannot evaluate risks they cannot see. If the interaction logs are retained and aggregated, they constitute a detailed record of a student’s professional anxieties, ambitions, and decision-making processes—valuable information for understanding (and potentially influencing) labor-market behavior.
Consider what such logs might reveal about individual students. Every draft of a cover letter, every revision of a personal statement, every question about salary negotiation or workplace conflict becomes part of a permanent record. The student who repeatedly asks Claude about managing difficult supervisors, who workshops multiple versions of an essay explaining a gap in their résumé, who discloses a health concern, or who seeks advice about workplace discrimination has created a behavioral profile far more granular than any traditional academic transcript.
Imagine these logs becoming accessible to future employers—whether through direct partnerships, data-sharing agreements, corporate acquisition, or the kind of mission creep that often accompanies initially well-intentioned data collection. The standard interview question “What is your greatest strength and weakness?” becomes obsolete when an employer can analyze patterns in a candidate’s interactions with AI tools over four years of college. The logs provide evidence that no candidate would voluntarily disclose and no traditional reference could supply.
This scenario may sound speculative, but the technical infrastructure and commercial incentives not only already exist but are routine practice. Moreover, the opacity compounds at both ends of the process. A student whose job application is rejected may never know whether the decision was influenced by their Claude interaction history—or, more likely, whether that history fed into an automated screening algorithm that processed hundreds of variables through a neural network to produce a binary hire/no-hire decision. These algorithmic hiring systems, now widespread in corporate recruitment, are typically proprietary and non-auditable. The student cannot contest what they cannot see, cannot challenge criteria they don't know were applied, and cannot distinguish between a rejection based on qualifications and one shaped by behavioral profiles extracted from vectorized data drawn from educational AI tools. The power asymmetry is total: employers gain unprecedented visibility into candidates' lives and behavior, while candidates operate in the dark about how that visibility is being used against them.
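A toy sketch makes the asymmetry visible. Real screening systems are proprietary, far more complex, and typically nonlinear, so the feature names, weights, and threshold below are pure invention; the point is only that many variables, including behavioral traces the candidate never offered as job-relevant, collapse into a single yes or no that arrives without explanation.

```python
# Invented sketch of the screening asymmetry: many variables in, one opaque
# boolean out. None of these feature names, weights, or cutoffs comes from
# any actual vendor; they exist only to illustrate the structure.

candidate = {
    "gpa": 3.7,
    "keyword_match": 0.8,              # résumé keywords vs. the job posting
    "years_experience": 1.0,
    "ai_log_anxiety_score": 0.6,       # hypothetically inferred from chat logs
    "ai_log_gap_explanations": 2.0,    # drafts explaining a résumé gap
}

WEIGHTS = {
    "gpa": 0.5,
    "keyword_match": 1.0,
    "years_experience": 0.3,
    "ai_log_anxiety_score": -0.8,
    "ai_log_gap_explanations": -0.4,
}

def screen(features: dict, cutoff: float = 2.0) -> bool:
    """Collapse every variable into one score, then into hire/no-hire."""
    score = sum(WEIGHTS[key] * value for key, value in features.items())
    return score >= cutoff

# The candidate receives only this boolean, never which variables mattered
# or where they came from.
print("advance to interview:", screen(candidate))   # False in this toy case
```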
If such logs are not retained, or if strict data-governance protections are in place that explicitly prevent commercial use and third-party access, these risks diminish considerably. The announcement does not clarify which scenario applies. Without transparency about retention policies, access controls, and the specific contractual terms between Dartmouth, Anthropic, Amazon, and other corporate partners, students are being asked to trust that their most vulnerable moments of intellectual struggle and professional uncertainty will not become assets in someone else’s deliberative process.
Throwing the Liberal Arts in Reverse #
Dartmouth College's identity is profoundly rooted in the liberal arts, so any tension between algorithmic career guidance and the ethos of liberal arts education deserves particular attention. A liberal arts education is explicitly designed to expose students to multiple disciplinary frameworks, diverse ways of thinking, and intellectual traditions they might never have encountered otherwise: the philosophy major takes a biology course and discovers an interest in bioethics; the economics student stumbles into an art history seminar and begins thinking differently about the meaning of value. Such encounters and experiences are not incidental to liberal education; they are its entire purpose.
Algorithmic recommendation systems operate according to a fundamentally different logic. They observe patterns in user behavior and optimize for engagement by offering more of what users have already shown an interest in. The student who asks about consulting receives more information about consulting; the student who expresses interest in technology startups receives a feed of technology startup opportunities. The system learns and reinforces existing preferences rather than challenging or expanding them. Anyone with a child who has contaminated the Netflix recommendation algorithm with children's shows like Barney & Friends knows of what I speak.
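A small simulation captures the feedback loop. The categories, weights, and click behavior below are invented; the only claim is structural: when engagement feeds back into the weights, the slate of suggestions narrows toward what the student has already clicked.

```python
# Invented simulation of a preference-reinforcing recommender: engagement
# feeds back into the weights, so suggestions narrow toward past clicks.

import random
random.seed(0)   # fixed seed so the toy run is reproducible

interests = {"consulting": 1.0, "bioethics": 1.0, "art history": 1.0, "startups": 1.0}

def recommend(weights: dict, k: int = 2) -> list:
    """Sample k suggestions in proportion to the current interest weights."""
    return random.choices(list(weights), weights=list(weights.values()), k=k)

for round_number in range(5):
    slate = recommend(interests)
    clicked = slate[0]                # assume the student clicks the first item
    interests[clicked] *= 1.5         # the click raises that category's weight
    print(round_number, slate, {name: round(w, 2) for name, w in interests.items()})

# Within a few rounds one or two categories dominate the slate: the system has
# learned the student's past, not their possible futures.
```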
This AI tool therefore creates a structural conflict within the project of liberal education. Liberal arts institutions invest enormous resources in general education coursework, distribution requirements, and interdisciplinary curricula precisely because students benefit from exposure to ideas and possibilities they would not have chosen on their own. The assumption underlying these requirements is that young people do not yet know what they might find meaningful or what unknown aptitudes lie hidden within. The liberal arts institution’s purpose is to engineer encounters with unfamiliar ways of thinking and knowing and doing—a project dedicated to human becoming.
An AI career adviser that optimizes for user satisfaction will tend in the opposite direction—toward confirmation of existing interests rather than disruption, toward the familiar rather than a sojourn in the strange. The student who might have discovered an unexpected calling through an unlikely conversation with a faculty mentor instead receives a streamlined set of recommendations calibrated to what the algorithm already believes about them. The serendipity that liberal education depends upon gives way to a frictionless pathway toward wherever the student was already heading. In the process, we may lose our ability to "want what we want to want," as Harry Frankfurt once put it. Or, to adapt a phrase from James Brusseau, we may lose the possibility of wanting differently than we want right now.3
Conclusion #
I am not arguing that Claude for Education will inevitably become an instrument of corporate control. I am arguing that its architecture creates conditions under which such control becomes possible, even likely, and that it is of a piece with the broader direction of our culture and the educational-industrial complex.
Claude for Education may offer genuine benefits: convenience, expanded access to information, and scalable support for students navigating an uncertain labor market. But those benefits arrive bundled with a subtle apparatus capable of exercising tremendous yet largely invisible influence. Recognizing this dual nature is necessary for any serious evaluation of what the institution is gaining and what it may be ceding.
The safeguards that would address these concerns are not mysterious: transparency about training data and commercial relationships, clear data-governance policies, the right to be forgotten, and preservation of diverse human advisory relationships alongside algorithmic tools. Whether such safeguards will accompany Claude for Education’s deployment at Dartmouth remains to be seen. The questions, at least, deserve answers before the system becomes infrastructure—before the gateway becomes so familiar that we forget it is there.
Bibliography #
- Brusseau, James. "Deleuze's Postscript on the Societies of Control: Updated for Big Data and Predictive Analytics." Theoria, vol. 67, no. 164, 2020, pp. 1–25. https://doi.org/10.3167/th.2020.6716401.
- Casonato, Carlo. "Unlocking the Synergy: Artificial Intelligence and (old and new) Human Rights." BioLaw, no. 3, 2023, pp. 233–40. https://teseo.unitn.it/biolaw/article/view/2768.
- Deleuze, Gilles. "Postscript on the Societies of Control." October, vol. 59, 1992, pp. 3–7. https://www.jstor.org/stable/778828.
- Frankfurt, Harry G. "Freedom of the Will and the Concept of a Person." The Journal of Philosophy, vol. 68, no. 1, 1971, pp. 5–20. https://doi.org/10.2307/2024717.
- Fourcade, Marion, and Kieran Healy. The Ordinal Society. Harvard University Press, 2024.
- Rose, Nikolas. Powers of Freedom: Reframing Political Thought. Cambridge University Press, 1999. https://doi.org/10.1017/CBO9780511488856.
- Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
1. Deleuze uses the term "dividual" (7) to represent the modern subject in a society of control whose existence has been broken down into data points, digital residues, and metrics that may be endlessly segmented, sifted, parsed, combined. ↩︎
2. One might object by arguing that human career advising has its own hidden biases. Faculty advisers bring disciplinary preferences, personal networks, and unconscious assumptions to their recommendations. This is true, and it cuts two ways. On one hand, Claude might actually expand career horizons by exposing students to options they would not have encountered through overworked and underpaid human advisers with limited knowledge of industries outside their fields. On the other hand, human advising is distributed across multiple advisers with diverse perspectives, while Claude centralizes and standardizes influence. A student who consults three faculty members receives three different viewpoints shaped by three different sets of experiences. A student who consults Claude receives one viewpoint shaped by some sort of training process—even if that viewpoint is articulated in varied ways across multiple conversations. ↩︎
3. In a lecture concerning personal freedom within the context of widespread algorithmic recommendation, Brusseau describes what he calls the "right to discontinuity"—the "ability to sever ties with one's digital past and reinvent one's identity." This right would "protect individuals from being permanently locked into algorithmically generated self-pictures." The idea appears to originate in the thinking of Carlo Casonato, who describes the need for new human rights stemming from the unprecedented "statistical-probabilistic approach with which AI operates" (238). He describes how "the profiling to which all of us are subjected is based on what we could call our 'historical self', consisting of preferences, orientations, and decisions as we have expressed them in the past. An example of this is when platforms on which we book vacations, order meals, or choose movies suggest options that correspond to what we have booked, ordered, and chosen up to that point. The risk, therefore, is to become trapped in a past that is impervious to potential new interests, curiosities, and changes." ↩︎