Charleston School of Business Faculty & Staff Updates


Co-Pilot, Not Autopilot: Changing How We Work, Not Who’s Essential

by R. Roy Martin, Assistant Professor of Accounting

“Anyone looking to contract these [consulting] firms should be asking exactly who is doing the work they are paying for, and having that expertise and no AI use verified. Otherwise, perhaps instead of [paying] a big consulting firm, procurers would be better off signing up for a ChatGPT subscription.”

That’s a quote from Deborah O’Neill, a member of the Australian parliament, speaking to the Financial Review. Deloitte, one of the largest accounting firms, charged the Australian government nearly half a million dollars for a report on the federal welfare payment system. As it turns out, Deloitte staff used AI to generate the report and failed to catch its multiple errors, including citations to non-existent academic research. Deloitte initially denied the allegations but eventually accepted responsibility. This type of scandal isn’t unique to accounting. While I agree with Ms. O’Neill that we pay for human expertise, I disagree with her solution. The lesson here isn’t “never use AI”; it’s that we should use AI as a “Co-Pilot,” not an autopilot.

I was surprised to learn that Deloitte would release an unedited, unreviewed report. At a tax research conference earlier this year, I spoke with senior managers at several large accounting firms (including Deloitte), and what I heard was consistent: these firms use AI to do some, but not all, of the work they deliver to their clients. The model they described made sense: use AI to complete a large portion of a project’s first draft, then rely on the accountant’s professional judgment and knowledge of the client to prompt the AI effectively, evaluate its output, and ensure a polished final product. They’re looking for graduates who are comfortable with AI and can build on what it delivers. I decided then and there to incorporate that workflow into my tax courses.

Thanks to a grant from the Dreyfus family, I was able to spend time this summer creating a project for my tax students that mimicked the way accounting firms are using AI. The AI project consists of three parts.

  • First, the students complete a short AI Prompting course, in which they learn various prompting strategies, when each is preferred, and the ethics of AI use. I want to thank the accounting firm Ernst & Young for generously providing access to this course for my students.
  • In the second part, students analyze a complex tax scenario using AI. After learning the topic over a series of lectures and homework assignments, they task the AI with analyzing a topic-specific scenario. They submit a document that includes their chosen prompting strategy, the reasoning behind that strategy, their exact prompt, the unedited AI output, and their analysis of that output. I ask students to comment on what the AI gets correct and what it gets incorrect, evaluating not only the technical analysis and references but also the clarity of the response: is the AI’s analysis correct, and does it communicate the results effectively?
  • Finally, in the third part, students edit the AI output into a corrected, polished, and client-ready report. I specifically ask them to edit the AI’s work rather than start from scratch.

Initial feedback from the students is positive. Their AI experience varied, but most reported that they thought using AI in this manner not only saved time but also allowed them to focus on what’s important: the content. This was a late-semester project, so I’ll report back next semester on the project’s results.

My firmly held belief is that AI will never replace a knowledgeable professional. AI will, however, change how professional work is completed. AI can accelerate writing, explore possible solutions, and provide creative ideas, but it cannot supply professional judgment or accountability. The workable model is straightforward: use AI to move faster, but have humans apply professional judgment, check sources, resolve ambiguity, explain results, and deliver client-ready documents. After reviewing this project with my students, I believe it’s a viable model for the classroom as well.

Image created with Microsoft Copilot

GenAI Subcommittee

Editor • December 4, 2025



Comments

  1. Robert Hogan December 4, 2025 - 12:44 pm Reply

    Great article!!
    Thanks for sharing and I hope your willingness to craft something new will inspire others!

  2. Jennifer Barhorst December 5, 2025 - 1:22 am Reply

    Great article and no surprise to me at all! Having worked with Deloitte in London, where I led crisis communications strategy for the largest divestment of a bank in UK history (a result of the financial crisis), Deloitte was brought in to help the bank through the transition. They frequently used boilerplate PowerPoint decks recycled from other clients, not bothering to adapt them to our brand.
