Artificial Intelligence, commonly known as AI, has been all over the news recently, from goofy uses (George Washington with a mullet) to apocalyptic fears (AI systems with the power to launch a missile attack without human intervention). Forms of AI, like predictive texting, have been around for many years, but generative AI applications are expanding rapidly.
Generative AI is capable of producing text, images, and other media using models that learn patterns from existing data and generate similar results. AI can quickly process large volumes of information and handle many mundane and repetitive tasks. The potential beneficial applications in the workplace are limitless, but so are the concerns.
One major concern is that generative AI is prone to “hallucinations,” in which it produces fictitious results that appear plausible and responsive. We’ve all heard about the attorney who filed a brief written by AI that cited cases completely invented by ChatGPT. It’s not always clear how generative AI arrived at the answer to your question.
There are also concerns about the data sources incorporated into generative AI programs. First, many of these data sources include copyrighted, trademarked, or other confidential or proprietary information without attribution or payment to the owners of the information. A number of litigation matters are pending on this issue. Second, it is unclear exactly what information is incorporated into many generative AI models. If the underlying data is unreliable, inaccurate, or biased, the generated results will also be flawed. This is a prime example of the adage “garbage in, garbage out.” Furthermore, with public generative AI databases, any information submitted as part of inquiries becomes part of the model. This creates significant confidentiality concerns for attorneys and other practitioners who deal with confidential or proprietary information.
An additional concern is how AI might be used to further human actions that are already viewed as problematic and deceptive. AI tools may be especially adept at implementing dark pattern marketing and sales practices or deploying deepfake technologies that help fraudsters evade even the most sophisticated cybersecurity systems.
Legislative and regulatory bodies are racing to understand the potential applications and implications of generative AI. Congress has held hearings on AI and is considering a wide range of regulations, as are a number of state legislatures. A bevy of alphabet soup agencies, including the National Institute of Standards and Technology, Federal Trade Commission, Department of Justice, Securities and Exchange Commission, Equal Employment Opportunity Commission, Department of Labor, Consumer Financial Protection Bureau, Department of Health and Human Services, and Food and Drug Administration, have issued regulatory guidance, proposed regulations, or other statements about AI matters.
Sherman and Howard’s AI Task Force is working to stay abreast of these new developments and the implications for our clients. This is the first in a series of client advisories that will discuss the potential benefits and drawbacks of generative AI in the workplace, relevant statutory and regulatory guidance, litigation risks, and other critical issues. If you’d like to subscribe to these advisories, click HERE.