Fri, Feb 27, 2026 06:18 GMT

    You Will Comply With the AI

    AI can speed up the drudge work and allow humans to redirect effort to higher-value tasks. But if the drudge work is demonstrating compliance with regulation, will the regulators allow this?

    • Breathless commentary on the job-destroying effects of AI partly reflects how quickly the technology is changing, making its implications hard to assess. But it also misses important nuances: when assessing job impacts, it typically assumes that jobs are made up of disconnected tasks.
    • Academic research points to better ways to think about the issue. One recent paper suggests that firms will automate tasks where AI can achieve the current output quality, and not those where it cannot. Worker time is freed up as some tasks are automated, and reallocated to the remaining tasks, increasing the quality of their output. This raises the quality bar for automating those tasks later, and could make human labour more valuable, not less.
    • Some tasks will be ripe for automation, because they require significant effort just to achieve bare-minimum quality. Compliance-related tasks are a good example of this, provided the regulators allow it. The result could be a better-quality product from the perspective of customers. But new issues arise, including compliance issues raised by the technology, and the possibility that regulatory expectations rise as automation makes compliance cheaper.

    From breathless essays to suspiciously AI-style social media posts, the (potentially disruptive) effects of AI are front of mind for many people. The disruption narrative has become so much part of the zeitgeist that “software developer who lost his job to AI and now has no health insurance” was the backstory for a patient in a recent episode of Grey’s Anatomy.

    One reason for the angst is that things are moving so quickly that humans are struggling to make sense of them. Reasoning models are only a year and a half old, after all. The capabilities of the leading models leapt ahead in just the past three months. Any intuitions you had about AI’s impact based on what the models could do six months ago are obsolete. Add in the policy and geopolitical chaos of the Trump administration, and it is no wonder that investors throw up their hands and sell.

    Aside from its own speed of development, one of the things that make it hard to assess the implications of AI is that people are using a simplistic ‘bag of tasks’ view of work. If too many of your tasks are automated rather than augmented by AI, goes the thinking, then your job is at risk.

    A better approach was suggested in a recent paper by two University of Toronto professors, Joshua Gans and Avi Goldfarb. Instead of a bag of disconnected tasks, they model production as a set of tasks of varying quality. The quality of the final output is the product of the individual task qualities; if any task has zero quality, the whole output has zero quality. (These are known as “O-ring” models of production, after the part failure that was the proximate cause of the Challenger disaster.)
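    The multiplicative structure is easy to see in a toy sketch (the numbers here are illustrative, not from the paper):

```python
from math import prod

def output_quality(task_qualities):
    # O-ring production: final quality is the product of the
    # individual task qualities, each between 0 and 1.
    return prod(task_qualities)

# Three tasks done reasonably well still compound downward...
print(round(output_quality([0.9, 0.8, 0.95]), 3))  # 0.684

# ...and a single zero-quality task zeroes out the whole product.
print(output_quality([0.9, 0.0, 0.95]))  # 0.0
```

    The multiplicative form is what makes every task matter: unlike a simple sum of tasks, no amount of excellence elsewhere can compensate for one task done at zero quality.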

    In this setup, firms will automate tasks that the technology can produce to the required quality and reallocate the human effort to the remaining tasks. Worker time is reallocated rather than ‘saved’ in the form of layoffs, because there is a return to improving the quality of the non-automated tasks. It is this variable quality of tasks and output that the ‘bag of tasks’ literature ignores. A concrete example of this time reallocation effect comes from a recent speech by Fed Governor Chris Waller. He noted that AI-based coding tools reduced time spent on routine tasks. Developers at the Fed can instead focus on enhancing security and quality of the end product. This has also been our experience: faster coding means more time for thinking and writing.

    Gans and Goldfarb’s model has some interesting implications beyond the lack of layoffs. Once some tasks are automated, it becomes harder to automate the remaining tasks, because they are now being done to a higher quality than before. This means that automation will not be a smooth process but could happen in fits and starts. It could also make human labour more valuable than before, by focusing it on remaining high-value tasks. The barrier to automating everything might therefore be higher than people realise, and the question of what gets automated when depends on what was automated first.
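    The reallocation dynamic can be sketched with a made-up concave quality curve (my own illustrative assumption, not the paper's functional form):

```python
def task_quality(hours):
    # Toy concave quality curve: more effort helps, with
    # diminishing returns; quality approaches 1 asymptotically.
    return hours / (1 + hours)

# Three tasks, one hour of human effort each.
before = task_quality(1.0) ** 3              # 0.5^3 = 0.125

# AI automates one task at its current quality (0.5); the freed
# hour is split across the two remaining tasks.
automated_q = task_quality(1.0)              # 0.5
after = automated_q * task_quality(1.5) ** 2 # 0.5 * 0.6^2 = 0.18

# Output quality rises, and the remaining tasks are now done to a
# higher standard (0.6 vs 0.5) -- so the quality bar the AI must
# clear to automate them next has gone up.
assert after > before
```

    This is the fits-and-starts mechanism in miniature: each round of automation raises the standard the next round must meet.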

    The O-ring model also allows for some useful extensions that the Gans and Goldfarb paper does not mention (and for which the mathematical workings will be available on request). In the standard O-ring model, zero effort on a task translates to zero quality, but any time spent on a task produces positive quality and thus positive final output. Consider the case where significant effort must be spent on a task to reach even zero quality. An example of this might be the effort involved in achieving and demonstrating compliance with legal or regulatory requirements. Expending positive effort that still falls short of this minimum threshold results in the zero-quality disaster of a finding of non-compliance.

    If one task involves this fixed-effort ‘compliance tax’ and the others do not, it will be one of the tasks firms seek to automate first, because it frees up much more time to do other things than automating the other tasks would. Again, the ‘bag of tasks’ view misses this possibility.
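    A hedged sketch of why the threshold task gets automated first (the curve and the numbers are my own illustrative assumptions):

```python
def q(hours):
    # Ordinary task: concave quality in effort.
    return hours / (1 + hours)

def compliance_q(hours, threshold=3.0):
    # Compliance-style task: effort below the threshold is a
    # finding of non-compliance (zero quality), which zeroes the
    # whole O-ring product; only effort beyond it buys quality.
    return 0.0 if hours < threshold else q(hours - threshold)

HOURS = 5.0  # total human hours across the two tasks

# Manual baseline: 4 hours on compliance (3 swallowed just
# reaching the threshold), 1 hour on the ordinary task.
manual = compliance_q(4.0) * q(1.0)        # 0.5 * 0.5 = 0.25

# Automate the ordinary task at its current quality (0.5):
auto_ordinary = 0.5 * compliance_q(HOURS)  # 0.5 * q(2) ~ 0.333

# Automate the compliance task at its current quality (0.5): the
# 3-hour threshold is freed too, so the gain is much larger.
auto_compliance = 0.5 * q(HOURS)           # 0.5 * q(5) ~ 0.417

assert auto_compliance > auto_ordinary > manual
```

    Both automations hold the automated task at its current quality; the compliance task wins because the fixed threshold effort is released along with the variable effort.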

    The issue then becomes: will the regulators permit automation of compliance activities, or of demonstrating that compliance? Is it enough for an AI to check against a list of rules, or are costly manual sign-offs needed for accountability? Will regulators who are themselves taking a cautious approach to the new technologies be over-cautious? And will automation just shift compliance tasks into other forms, such as managing vendor risk, or ensuring model results are explainable and that the systems do not hallucinate? Another complication is that different regulators could take different stances on this issue.

    A further wrinkle arises if effort is required to meet bare compliance, but there is also a ceiling beyond which further effort on the compliance task is fruitless. Think of a five-star or ‘fully compliant’ rating, with no sixth star to shoot for. If the manual effort to comply is large enough, the firm will choose to automate even if that means going from five-star to a bare pass, provided doing so raises the quality of other activities by enough.
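    One way to make the five-star trade-off concrete (the quality levels and hours are assumptions for illustration only):

```python
def q(hours):
    # Quality of the non-compliance work as a function of effort.
    return hours / (1 + hours)

Q_FIVE_STAR = 1.0   # manual process at the 'fully compliant' ceiling
Q_BARE_PASS = 0.8   # automated process that just clears the bar
HOURS = 5.0

# Scenario A: 4 of 5 hours spent on manual five-star compliance,
# leaving 1 hour for the rest of the product.
manual = Q_FIVE_STAR * q(1.0)       # 1.0 * 0.5 = 0.5

# Scenario B: automate compliance at a bare pass; all 5 hours go
# to the rest of the product.
automated = Q_BARE_PASS * q(HOURS)  # 0.8 * 5/6 ~ 0.667

# Dropping from five-star to a bare pass still raises overall
# output quality, because the freed effort is worth more elsewhere.
assert automated > manual
```

    The comparison only goes this way because the fifth star buys nothing beyond the ceiling; if regulators reward the stars themselves, the calculation changes.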

    That choice will be most attractive if the over-compliance is insurance rather than an expression of risk appetite. If manual effort has uncertain outcomes, firms will choose to over-comply to avoid occasional non-compliant outcomes. An aspect of this uncertainty, as Westpac has previously highlighted in its submission to the productivity roundtable process, is that regulators in Australia tend not to confirm that what an entity is doing complies, but rather wait for moments of non-compliance and impose consequences. Over-compliance is the natural response to this regulatory approach; a more reliable, less manual process might not need as large a buffer. If automated outcomes are more reliable and certain, as well as faster and cheaper, then the firm may rationally choose not to over-comply the way it did when everything was done manually. Firms will need to ask themselves: are they over-complying to ensure reliability of the process, or is it part of the brand and culture that they want to retain?

    Regulators might not be pleased to see regulated entities remaining compliant, but no longer five-star. They will need to ask themselves if their assessment criteria emphasise process rather than outcome. On the other hand, once real-time automated data checking becomes possible, periodic manual sampling will seem inadequate. The bar on compliance could inexorably rise as compliance becomes faster and cheaper. The reliability and explainability of the regulated entities’ processes will also matter.

    So you probably will comply using the AI, and you will certainly want to, given the payoff of freeing up time for other things. Key questions for Australia’s future technology adoption and productivity growth are: what other issues does it raise, and will the various regulators agree to the change?

    Westpac Banking Corporation
    https://www.westpac.com.au/
    Past performance is not a reliable indicator of future performance. The forecasts given above are predictive in character. Whilst every effort has been taken to ensure that the assumptions on which the forecasts are based are reasonable, the forecasts may be affected by incorrect assumptions or by known or unknown risks and uncertainties. The results ultimately achieved may differ substantially from these forecasts.
