If a firm makes a harmful error, a role for AI/machine-learning tools in the chain of events that leads to the error should be treated as an aggravating factor rather than a mitigating one, much as drunkenness is for car accidents.

At first this seems unfair ("I wasn't myself!" or "The AI did it!"), but the point is that the responsibility is yours when you create the circumstances under which inadequate, harmful, or insufficiently accountable choices are likely to be made.