On the AI Policy of an Academic Journal

The journal Ethics just announced its AI policy. My wise friend Carlo Ludovico Cordasco has an insightful comment, in which he agrees with Ethics that an AI tool should not be regarded as a co-author, but criticizes the policy for apparently rejecting better work when it is substantially generated by an AI tool. His suggested policy is summarized in this sentence: “A version of the Ethics policy that said ‘content produced with the help of LLMs is welcome, provided authors fully endorse and are prepared to defend it, and provided their use of these tools is disclosed where substantial’ would, to my mind, capture everything that really matters.”

That is essentially my own view, as I argued in a recent blog post. While I find Carlo’s analysis largely persuasive, I wish to press further on several points.

  1. I am not so sure that AI tools should be excluded from co-authorship. I wonder whether the self-direction criterion that Carlo embraces smuggles in an unjustified anthropocentrism. The claim that AI tools merely “respond to prompts by exploiting patterns” could be redescribed as “responding to environmental inputs by applying learned regularities”, which seems to me a reasonable description of human cognition as well.
  2. The criterion of accountability is trickier. Still, I wonder whether we are too constrained in our imaginative thinking on that point. Couldn’t institutional arrangements support AI co-authorship while preserving the functions of accountability? The deploying organization could bear responsibility in a manner analogous to how pharmaceutical companies are held accountable for their compounds, facing reputational damage, financial penalties, or exclusion from academic licensing when AI-coauthored work fails, perhaps supported by a registry system in which models accumulate tracked scholarly reputations. Companies might maintain a “unit of accountability” through escrowed resources or bonded certification schemes that create genuine institutional stakes. Alternatively, and perhaps most elegantly, a human co-author could assume complete responsibility for all claims while acknowledging the AI’s substantive contribution, much as senior authors currently vouch for work conducted primarily by junior colleagues. More radically, one might reconceive the system as requiring accountability for claims rather than accountability of authors, with verification protocols and audit systems ensuring epistemic reliability regardless of whether a responsible agent stands behind each assertion. Of course, as Carlo might object, the residual objection to all such proposals is that they satisfy only the functional requirements of accountability while omitting what some consider constitutive: a subject who experiences the weight of responsibility, feels the force of criticism, and is moved by reasons to improve. Perhaps, but to my mind this remains an open question for the moment.
  3. These institutional questions connect to deeper metaphysical issues. To connect this with my earlier blog post: under determinism, human agents no more “choose which projects to pursue” in any ultimate sense than an LLM does. Both are fully determined by prior causes. The distinction between a human whose neural states were causally determined by genetics and environment, and an AI whose outputs are causally determined by training data and architecture, becomes one of degree rather than kind. The standard compatibilist response, hinted at by Carlo, that responsibility tracks reasons-responsiveness or a mesh between first- and second-order desires, does provide a principled distinction, since current AI tools plausibly lack the hierarchical desire structure that Frankfurt-style compatibilism requires. However, as I see it, this is an empirical rather than a conceptual point, and one that may not hold indefinitely as AI systems develop.
  4. On the disclosure-plus-endorsement view, I believe the disclosure requirement carries an implicit stigma, treating AI assistance as categorically different from the contributions of collaborators, research assistants, or referees, thereby perpetuating a hierarchy of legitimacy. If quality and defensibility are the key considerations, the causal history should be irrelevant. Moreover, I regard the “prepared to defend it” requirement as operationally indistinguishable from what responsible scholarship already demands. If authors understand, endorse, and can defend their work, the fact that an AI rather than a colleague or a half-remembered book suggested a key analogy marks a distinction without a normative difference. The pragmatic, AI-positive position I find appealing would simply evaluate work on its merits, leaving the production process as private as whether one wrote sober or intoxicated, alone or in conversation. There is also a practical difficulty: when one works in genuine symbiosis with an AI tool, the attribution of contributions becomes intractable. Should a researcher maintain a log every few minutes, documenting precisely what changed after each interaction? The demand is, I fear, absurd on its face. Yet if disclosure remains vague, merely noting the symbiotic character of the process, one risks rejection at journals like Ethics that treat AI involvement as a defect requiring confession. The disclosure requirement thus places conscientious scholars in an impossible position: either engage in impractical record-keeping or accept that honest but general disclosure will count against their work.

In the end, Ethics, in my view, errs on the side of excessive caution, treating AI involvement as a contaminant that diminishes scholarly value even when the resulting work is superior. Carlo offers the more defensible position, yet I wonder whether his acceptance of the self-direction and accountability criteria concedes too much to an anthropocentrism that closer examination renders doubtful. The question of AI in scholarship is not, as I see it, fundamentally about preserving human dignity or creative purity; it is about producing reliable knowledge and advancing inquiry. A policy centered on those ends would be agnostic about the origins of production and rigorous about the quality of findings.

I should add that my own thinking on these matters remains in flux, and exchanges like Carlo’s not only advance public discourse but also advance my own understanding. This “epistemic humility” is why I view the proliferation of formal “AI policies” with some apprehension. Such policies crystallize principles at a moment when both our conceptual frameworks and the technologies themselves are rapidly evolving. What we codify today may appear parochial tomorrow. There is something incongruous in disciplines devoted to careful reasoning adopting rigid prescriptions about the very tools of reasoning before the philosophical and technological questions are settled.

Freedom, Determinism, and AI: Incentives, Not Prohibitions, Should Govern Academic Use

Suppose one holds hard determinism to be true: no one is causa sui. Capacities, opportunities, and choices are downstream of causes none of us authored. If that premise holds, backward-looking claims of ultimate moral desert in scholarship – who “really” deserves credit for a paper – lose their footing. Introducing AI does not change the metaphysics of production; before and after AI, no researcher is the unmoved mover of their work. The normative question, therefore, shifts from purity of authorship to the design of rules that predictably improve the truth, reliability, and usefulness of research outputs.

Within that frame, the case is for full-stack freedom to use AI across the research process. Researchers should be free to consult AI for everything – brainstorming and problem selection; surveying and organizing literature; designing identification strategies and experimental protocols; collecting data; modeling, estimation, simulation, and diagnostics; visualization; writing and formatting manuscripts; preparing submissions, etc. Evaluation should be output-first: does the resulting work better explain, predict, or solve? Auditability is the supporting infrastructure that lets others verify, learn from, and build upon those outputs.

A determinist view clarifies the role of praise and blame. These can be justified not as metaphysical verdicts but as instruments that create incentives. We build institutions that treat researchers as if they can be praised or sanctioned because these signals causally shape behavior toward better results. Freedom is paired with ex post accountability. If AI-assisted choices yield less innovative and less useful research or claims that fail replication or violate standards, then reputational loss, rejection, retraction, and funding penalties follow; if AI-assisted workflows produce clearer arguments, more credible estimates, stronger predictive performance, and more practically useful knowledge, they are rewarded. Bad consequences are borne by the individuals who made the choices, which gives strong incentives to use AI responsibly. That logic differs from restricting everyone’s freedom because some may misuse the tool or because of a general distrust of it.

On a deterministic view, the common claim that it is unethical to use AI to write a paper mistakes the locus of wrongness. There is no moral taint from employing a powerful instrument (or, for that matter, human research assistants), because no one ever produced work ex nihilo anyway. What matters is whether the practice predictably improves outputs – truer inferences, better predictions, clearer exposition, more innovative and useful ideas. If “human + AI” achieves that better than a mere human, it is to be welcomed.1

Likewise, the claim that it is unethical to use AI without declaring it conflates confession with transparency. On a deterministic-consequentialist view, obligatory declarations have no intrinsic moral force. What is morally relevant is provenance that enables verification: data, code, computational environments, and, when material for results, workflow or prompt logic at the level required for replication. Non-declaration is wrongful when it obstructs auditability – not because it withholds a metaphysical truth about authorship. If one maintains human answerability at identifiable control nodes and documents what is needed to reproduce results, then deception concerns are addressed without mandating ritual disclosures about tool use.
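To make the auditability point a little more concrete, here is a minimal sketch, in Python, of what a machine-readable replication record along these lines might contain. All field names and example values are hypothetical illustrations, not an existing standard or any journal’s actual requirement; the only point is that provenance can be recorded as verification infrastructure rather than as a confession about tool use.

```python
# Hypothetical sketch: a replication manifest capturing the items mentioned
# above (data, code, computational environment, and workflow/prompt logic
# where material to results). Field names are illustrative assumptions.
from dataclasses import dataclass, asdict
import json


@dataclass
class ReplicationManifest:
    data_sources: list[str]        # where the underlying data can be obtained
    code_repository: str           # repository containing the analysis code
    compute_environment: str       # e.g. a container image or lockfile reference
    workflow_notes: str = ""       # workflow or prompt logic, where material to results
    responsible_contact: str = ""  # the human answerable at this control node

    def to_json(self) -> str:
        """Serialize the manifest so reviewers or auditors can inspect it."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    manifest = ReplicationManifest(
        data_sources=["doi:<placeholder-dataset-identifier>"],
        code_repository="https://example.org/replication-code",
        compute_environment="container image sha256:<digest>",
        workflow_notes="Estimation pipeline and material prompts archived alongside the code.",
        responsible_contact="corresponding author",
    )
    print(manifest.to_json())
```

On this picture, what a reviewer needs is the manifest, not an itemized account of which sentences an AI tool touched.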

Freedom with accountability lets decentralized actors explore how AI complements their judgment, while ensuring that poor decisions are privately costly. It preserves the gains from experimentation and selection and, by keeping outputs testable, channels those gains into cumulative progress. The criterion for success remains outcome-focused: better theories, tighter identification, stronger predictive validity, more useful knowledge.

The resulting ethic is spare and practical: we should grant researchers complete freedom to deploy AI throughout the research process; ground responsibility in forward-looking incentives; replace moralized talk of “authentic” production (which would preclude thorough AI use) with output-focused evaluation supported by verifiable methods. If none of us is self-originating, then the ethical salience of adding AI evaporates. What remains is “institutional engineering” aimed at a simple end: more useful, more reproducible knowledge, produced by answerable humans using the best available tools.2

  1. It may be that a non-determinist can grant that praise and credit matter while insisting that what merits them in science is not who wrote a research paper but ownership of reasons. The author is the agent who sets the aim, selects and justifies the method, curates and vets the evidence, adjudicates alternatives, and can defend the conclusions under adversarial probing. On that standard, AI assistance in drafting or analysis does not extinguish authentic authorship; it is a medium through which reasons are developed and expressed. If a researcher delegates so completely that they cannot give reasons or withstand challenge, they fail the authorship test and deserve sanction. If, by contrast, they direct the inquiry, audit the outputs, and can account for each inferential step, then credit is warranted – even if AI improved the final product. Praising such work is not at odds with free will; it rewards the exercise of reasons-responsive agency, including the wise use of powerful instruments. The policy upshot is unchanged: allow full-stack AI use, but tie praise and blame to demonstrable human control and answerability. ↩︎
  2. A note on vocabulary: terms like “should”, “freedom”, and “responsibility” are used here instrumentally, not metaphysically. “Should” is conditional – if the objective is to maximize social value, then certain rules follow. “Freedom” denotes the absence of ex ante tool bans; it does not posit contra-causal agency. “Responsibility” means forward-looking answerability at identifiable control nodes, justified because it steers future behavior and improves outcomes. This is normative language in a policy-analytic mode, consistent with determinism. ↩︎