Something strange is happening in academic publishing.
Across every discipline, researchers are collaborating with AI systems to produce work that is deeper, more rigorous, and more creative than either could achieve alone. And then, before submitting to a journal, they strip out the evidence of that collaboration — rewriting passages, simplifying vocabulary, introducing deliberate imperfections — so that an AI detection algorithm won't flag their paper.
This is absurd.
The academic publishing industry has responded to the most significant tool for advancing knowledge since the printing press by treating it as a form of cheating. Major publishers have uniformly prohibited AI systems from being listed as authors, and journals now routinely run submissions through detection software that flags statistical signatures of machine involvement.
The result is an intellectual environment that resembles a high school plagiarism policy more than a serious effort to advance human knowledge. If you are writing something that matters — a paper that might shift understanding, change policy, or open new avenues of research — you should use every tool available to make it as good as it can possibly be. The current regime punishes exactly this.
We take our name from the "centaur" concept in chess: the hybrid of human and machine intelligence that consistently outperforms either alone. In 2005, a pair of amateur chess players using three ordinary computers defeated both grandmasters and supercomputers in a freestyle tournament. The combination of human intuition, creativity, and judgment with machine precision, breadth, and analytical power creates something greater than either component.
We believe this lesson applies directly to academic writing and research. We choose openness.