2023 BCLP Annual Arbitration Survey on AI in IA Results

This year's annual Bryan Cave Leighton Paisner (BCLP) arbitration survey addressed the topic of AI in international arbitration.

The 221 respondents to the survey came from a variety of common and civil law backgrounds. Over half were lawyers in law firms, 33% were arbitrators and 12% were in-house counsel. Together, their responses present a useful snapshot of current attitudes to the use of AI tools across the arbitration community, as practitioners continue to grapple with the practical implications of AI usage and its impact on the arbitration process.

The survey purposely used a broad definition of ‘AI tools’ in order to cover as wide a range as possible of AI-related tools in use in the arbitration process: ‘systems using technologies such as text mining, computer vision, speech recognition, natural language generation, machine learning and deep learning to gather and/or use data to predict, recommend or decide, with varying levels of autonomy, the best action to achieve specific goals’.

The survey first canvassed views on what tasks AI tools are currently being used for, what tasks AI tools may be used for in the future, and where parties involved in the arbitral process (the survey respondents) would draw the line on the use of AI tools to perform tasks.

Interestingly, 28% of respondents to the survey indicated they had used ChatGPT in a professional context. The top three areas of current use were translation (37%), document review and production (30%), and text formatting/editing (30%). In terms of potential future use, 80% of respondents indicated they would use an AI tool to detect whether AI has been used to generate materials, 73% would use one to generate factual summaries, and 65% for document analysis. Unsurprisingly, however, respondents drew the line at the use of AI tools to generate text, whether for arbitral awards (62% were uncomfortable with this), expert reports (58%) or legal argument and submissions (53%). The key takeaway from these results is that AI is already being used in arbitration, which makes it all the more urgent to address the potential risks, an issue the survey also explored.

When asked about the benefits and risks of using AI tools, respondents overwhelmingly ranked time saving as the most important benefit of AI, but displayed a good understanding of the risks inherent in the use of such tools: cybersecurity, hallucination, confidentiality, and the risk of deepfakes and tampering with evidence all ranked as concerns for 86-88% of respondents.

The latter is particularly pertinent: while almost half of respondents to our survey were concerned about the integrity of evidence being affected by the use of AI tools, a small but significant minority (3%) had already experienced this occurring.

60% of respondents agreed that there is a need for greater transparency over the use of AI tools by parties in arbitration. However, on disclosure, responses varied depending on the nature of the task for which an AI tool is being used, with 72% agreeing that parties should be required to disclose the use of AI tools for drafting expert reports, 65% agreeing with respect to document review and production, and only 40% with respect to legal research.

This seems to underline a perception among practitioners that, while some level of disclosure may sensibly be needed for transparency, there is no one-size-fits-all answer. Perhaps it is simply that the issue has not yet arisen in practice: not one respondent had experienced a tribunal refusing to allow the use of an AI tool in arbitration.

Questions on disclosure then moved on to survey respondents’ attitudes to the regulation of AI tools in arbitration, on which the key takeaway was that there is, as yet, no clear answer. The majority of respondents (63%) were unsure whether the use of AI in arbitration should be regulated. That uncertainty is reflected in the spread of responses to the question of how the use of AI tools should be regulated, with the largest group (39%) favouring soft law guidance, with bodies such as UNCITRAL and the IBA suggested as appropriate sources. A smaller group (26%) considered that arbitration rules were the appropriate mechanism, and it remains to be seen how this will develop. As noted in the survey, defining ‘AI’ is a difficult task on which there is little consensus, but it will inevitably be of the utmost importance when crafting disclosure obligations.

Overall, it appears that AI tools are here to stay in arbitration: respondents considered that the use of such tools was inevitable, and we hope the survey contributes to highlighting the issues that may arise.

Download the results of the 2023 BCLP Annual Arbitration Survey on AI in IA here:


About the author:

Siobhan is a Senior Associate in the International Arbitration team at Bryan Cave Leighton Paisner. Her practice focuses on commercial and investment treaty arbitration, and she has acted on multiple complex cross-border disputes under a range of institutional rules and governing laws, with a particular focus on the LCIA, ICC and ICSID. She has advised clients in various sectors, including property and construction, pharmaceuticals, space technology, M&A, private equity and financial services, and regularly advises clients from a range of jurisdictions. She also sits as tribunal secretary and has experience of commercial litigation in the High Court in London.

In 2019 she completed a three-month secondment to the LCIA Secretariat’s casework team as counsel. She recently qualified as a solicitor advocate in the English courts.