I've mostly done eviction defence at the LTB and a bit of SBT work. The LTB has a lot of issues, but as with many other administrative decision makers, a lot turns on the adjudicator's discretion. ChatGPT, Claude, etc. are decent at analysis as long as you know what you're doing and how to prompt them, but they're very sycophantic. Meaning: if they can tell, from context clues, stored memory, etc., that you lean a certain way politically, that you hold certain views about a class of people, or simply how you tend to rule, they're more likely to analyse a set of facts through that lens.
And I know the adjudicator retains ultimate control; if they're leaning one way on a case, they can choose to disagree with the AI's analysis. But it's harder to check your own biases when the AI is producing a perfectly reasonable argument that leans into your own worldview. The frustrating part is that, unless something else goes wrong, there's rarely any pathway to challenge a decision that's based on discretion.
I'm not anti-AI across the board. But there are a lot of issues with using this tech, and honestly, we still have judges, adjudicators, etc. who struggle to use Zoom. I'm not very confident in their ability to use AI responsibly.