Does Your Company Need a Generative AI Usage Policy?


Employer use of artificial intelligence (AI), including generative AI, is skyrocketing: Nearly one in four organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by the Society for Human Resource Management (SHRM). Recruiting is a case in point: Companies should understand the risks of using AI in the hiring process or to write job descriptions.

As more workers enjoy the benefits of AI tools like ChatGPT, some company leaders are growing concerned about employees inputting sensitive information into the bot. That concern has led companies such as JPMorgan Chase, Accenture, and Amazon to limit or ban its use.

Samsung Electronics Co. is banning employee use of popular generative AI tools like ChatGPT after discovering that staff had uploaded sensitive code to the platform. According to an internal memo, the company is concerned that data transmitted to AI platforms, including Google Bard and Bing, is stored on external servers, making it difficult to retrieve and delete, and could end up being disclosed to other users.

Additionally, Verizon stated in a public address to employees: “ChatGPT is not accessible from our corporate systems, as that can put us at risk of losing control of customer information, source code and more…as a company, we want to safely embrace emerging technology.”

Should companies trust that their employees are using this new tool in a way that doesn’t put important information at risk? A February 2023 poll of 62 HR leaders by consulting firm Gartner found that about half were formulating guidance on employees’ use of ChatGPT, Bloomberg reported.

What Does an AI Policy Look Like?

While some companies are banning OpenAI’s ChatGPT and other generative AI tools outright, a more targeted approach is to restrict the specific kinds of sensitive information (source code, for example) that employees may enter into an AI tool.
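As one illustration of what such a restriction could look like in practice, here is a minimal sketch (in Python) of a pre-submission filter that screens prompt text for patterns a company might classify as sensitive before the text is allowed to reach an external AI service. The pattern names, regexes, and the `screen_prompt` function are hypothetical examples chosen for illustration, not a definitive implementation.

```python
import re

# Hypothetical patterns a company might classify as sensitive.
# Real deployments would tune these to their own data, tools, and risk tolerance.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "source_code": re.compile(r"(?:\bdef |\bclass |\bimport |#include\b)"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block the prompt if any pattern matches."""
    reasons = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (not reasons, reasons)

if __name__ == "__main__":
    allowed, reasons = screen_prompt("def transfer_funds(account): ...")
    if allowed:
        print("Prompt passed screening.")
    else:
        print(f"Blocked: prompt appears to contain {', '.join(reasons)}.")
```

A filter like this is deliberately conservative: it blocks and explains rather than silently redacting, which keeps employees aware of the policy rather than working around it.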

Creating a generative AI policy for a company involves establishing guidelines and principles for using these tools effectively and responsibly.

Here’s what to consider when building a policy for using AI tools:

  1. Purpose. Define the intended use of generative AI within your organization. Will it be used for customer support, internal communication, content generation, or another specific purpose? Clarify the objectives and goals so the policy can be aligned accordingly.
  2. Limitations and risks. Understand the limitations and potential risks of using generative AI, such as bias, misinformation, inappropriate content generation, or overreliance on the model. Acknowledge these challenges and outline how to mitigate them.
  3. Ethical guidelines. Define ethical guidelines that promote responsible use of generative AI tools. These may include avoiding discriminatory language, maintaining user privacy, ensuring transparency about the AI nature of the system, and adhering to legal and regulatory requirements.
  4. User access. Determine who within your organization will have access to generative AI and under what circumstances. Establish guidelines for training and knowledge sharing to ensure the system is properly understood and used.
  5. Interface and integration. Outline how generative AI will be integrated into your company's systems and processes, including user interface design, integration with existing tools or platforms, and technical support requirements.
  6. Monitoring and evaluation. Establish procedures for monitoring and evaluating the interactions and outputs generated by AI tools (see the logging sketch after this list). Regularly review the system's performance, identify areas for improvement, and implement measures to maintain quality and accuracy.
  7. User feedback. Encourage users to report on the system's performance, including problematic outputs or areas for enhancement, and incorporate that feedback into ongoing training and improvement processes.
  8. Communication and transparency. Develop guidelines for communicating the use of generative AI to users and stakeholders. Be transparent about the system's limitations, its AI nature, and how it should be used.
  9. Training and education. Train users on the appropriate use of generative AI tools, promoting an understanding of the model's capabilities and limitations to ensure responsible and effective usage.
  10. Regular policy reviews. Technology evolves rapidly, so schedule periodic reviews to assess the policy's effectiveness, update guidelines as needed, and stay aligned with changing best practices and industry standards.
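
To make the monitoring step (item 6) concrete, here is a minimal sketch of an audit-logging helper, assuming a hypothetical JSON Lines log file and a toy risk flag; the `log_interaction` function, field names, and log location are illustrative assumptions, not a prescribed design.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_usage_audit.jsonl"  # hypothetical log location

def log_interaction(user: str, tool: str, prompt: str, output: str) -> None:
    """Append a summary of one AI interaction to an audit log for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        # Store sizes rather than raw text so the audit log itself
        # does not duplicate potentially sensitive content.
        "prompt_chars": len(prompt),
        "output_chars": len(output),
        # Toy risk flag; a real system would use richer checks.
        "flagged": "confidential" in prompt.lower(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_interaction("jdoe", "ChatGPT", "Summarize this confidential memo...", "Summary: ...")
```

Recording metadata instead of raw prompts is one way to support the regular reviews described above without the audit trail becoming a second copy of the sensitive data it is meant to protect.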

Remember that developing a generative AI policy requires input from stakeholders across different departments, including legal, IT, HR, and customer support. It's important to strike a balance between leveraging the capabilities of the model and ensuring responsible and ethical use.

In addition to the areas outlined above, SHRM offers an excellent resource, “How to Create the Best ChatGPT Policies.”