Shearman & Sterling LLP | Leveraging the Broad Potential of Artificial Intelligence While Mitigating the Risks
• Surveying employees about their use of AI.
Companies should begin by understanding how
their employees are currently using AI tools
in their work. This allows them to tailor the AI
applications—and the applicable policies and
procedures—to the needs of the business. Further,
by receiving regular feedback from employees,
companies can ensure that the use of AI aligns with
the company's overarching business goals.
• Evaluating AI tools. The task force should consider
establishing processes and guidelines for identifying,
vetting, and approving AI tools and providing
recommendations regarding how the tools should
be used within the company. Keeping abreast
of developments in AI technology allows
companies to harness the latest advancements
and gain a competitive advantage.
It is equally important for the task force to assess
the numerous risks that may be associated with
AI tools. Among other things, the task force should
consider the confidentiality, data privacy, and
cybersecurity risks associated with each
tool. For example, if an AI tool is public or open-source,
rather than proprietary, then allowing it access to
company data could compromise the confidentiality
of that data. This concern is particularly acute for
AI tools that may be used within the legal function,
where it is imperative to ensure that applicable
privileges (including the attorney-client and work
product privileges) are not waived.
The task force should also consider the risk of
"hallucination," in which AI produces false
or fabricated information. Such hallucinations
can be difficult for users to detect because they
often appear plausible. Indeed, there have been
instances in which AI tools generated fake New York
Times articles to support their false assertions.
The consequences of such hallucinations can
be significant. For example, two New York attorneys
were recently sanctioned by a federal judge for
submitting a legal brief with fictitious case citations
that were generated by an AI chatbot tool.¹
Task forces should also be aware that AI tools are
prone to human failings, including implicit bias. An AI
hiring tool was found to favor male applicants
over female counterparts by penalizing resumes that
included the word "women's"—as in "women's chess
club captain"—as well as applicants who graduated
from all-women's colleges.²
• Developing training programs and best practices.
The task force can design targeted training and
educational programs to ensure efficient and
optimized use of the AI tools. It will also be beneficial
to outline best practices when using AI tools.
For example, employees should refrain from
inputting confidential company data when using
third-party AI tools. Employees should also avoid
including trademarks, logos, and brands in prompts,
to avoid generating content that could violate the
rights of third parties. Additionally, employees
should verify the accuracy of AI-generated work
to guard against hallucinations and outdated
information.
• Implementing and enforcing policy. The task
force should also evaluate existing policies
(e.g., privacy policies, employee handbook, data
use policy, confidentiality policy) to ensure they
address the use of AI. Companies should also
consider implementing new policies that are
tailored to AI usage and provide guidelines
on when and how AI technologies may be used for
work, how employees should disclose their use
of AI to their supervisors, and whether AI tools may
be used to process personal and sensitive information.
Companies can also establish a security framework
to combat threats both physical and digital. Having
robust policies can mitigate legal liabilities arising
from incidents related to the use of AI because
companies can show that they took reasonable
steps to prevent such incidents.
Companies should also consider disclosing the
use of AI to their clients and other stakeholders.
Such disclosure could take the form of a general
statement that some of the company's work
may be generated using AI tools and services;
alternatively, more specific disclaimers could be
added to any material that was partially or entirely
generated by AI.
¹ See Sara Merken, "New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief," Reuters (June 26, 2023).
² See Jeffrey Dastin, "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women," Reuters (October 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/.