Adapting corporate policies: Nurturing the next generation of AI-driven employees 

By implementing adaptable policies to safeguard their companies, organizations can harness the benefits of GenAI while mitigating risks, positioning themselves as responsible AI adopters.

As the workplace continues to evolve with advancements in artificial intelligence, it is crucial for HR, legal, security and IT departments to collaborate in adapting policies to nurture the next generation of AI-driven employees. Whether you have explicitly allowed the use of AI in your workplace or not, at least some of your employees are likely using it. A recent employee survey revealed that 43 percent of employees had used tools like ChatGPT to do their jobs, most of them secretly.

The use of generative AI technology comes with several risks companies need to be aware of, such as hallucinations, questionable veracity, security and privacy exposure, and the lack of a verifiable source of truth, all of which currently hinder enterprises from leveraging this technology in a brand-safe way. As of this writing, several lawsuits concerning the legality of these AI use cases have made their way into the courts, and their resolution could take months or even years. However, outright banning AI use within your company is impractical and will create missed opportunities for increased efficiency, creativity and more.

Knowing the risks, how can you protect your brand, your customers and your employees while enabling them to try new technologies, learn and grow? By aligning HR, legal, security and innovation (IT and/or R&D) strategies, organizations can create a robust framework that fosters responsible AI adoption while maintaining a focus on employee well-being and customer satisfaction and safeguarding the company’s brand reputation and core assets. HR, legal, security and IT departments should already possess a comprehensive understanding of how to develop tech usage and security policies. By collaborating effectively, they can proactively adapt to the evolving landscape of AI and ensure readiness for any changes it may bring to the workplace.

The risks of AI

To develop effective risk-mitigating policies, a thorough understanding of the associated risks is crucial. These risks primarily revolve around valid concerns such as:

Data privacy: Information put into AI tools is shared with one or more third-party AI providers and, depending on the tool, may be used to “train” the AI or even surface in the output of someone else’s future query. Such transfer and use may violate the legal purposes for which the data was originally collected or the instructions provided by relevant stakeholders.

Ethics: Bias in AI is still an issue experts are wrestling with today and will continue to explore for some time.

Information integrity: AI is known to hallucinate and fabricate false information. Without proper safeguards for verifying AI output, you could disseminate harmful or false information, which can lead to legal consequences or damage to your brand’s reputation.

Intellectual property: When sharing ideas, brand logos and proprietary or confidential information within AI tools, there is a risk of diminishing the original value of your intellectual property and its associated protection. Moreover, considerable uncertainty remains across jurisdictions regarding the copyright rules governing AI-generated content and the ownership of AI output.

Creating AI workplace policy

The size and industry of your company can impact the obligations and restrictions that should be incorporated in your company’s AI policy. However, certain best practices can be universally implemented by all organizations.

1. Establish ownership

While HR, legal, security and IT will need to work together to create a corporate policy, designate a responsible team or individual early on to assess risks as the technology evolves. Forming an ethical AI or security committee ensures you have the right expertise to guide your teams in their use and procurement of AI solutions.

2. Define boundaries

Mitigating risk entails careful consideration of the data types employees are permitted to use with AI. It is important to consider both the data used as input and the utilization of the output generated by the AI tool. Certain categories of data, like financial information or protected health information, may pose too high a risk when input into an AI model. Conversely, the risk associated with AI-generated content depends on the specific context of its use. For instance, public-facing websites and product development require a higher level of caution and scrutiny, while activities like translating internal documentation or providing email drafting assistance may involve less risk. The delineation of these boundaries should be guided by your company’s ethics, objectives and industry, taking into account existing regulations.
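To make such boundaries concrete, some teams also encode them in a machine-readable form that internal tooling can consult. Below is a minimal sketch in Python; the category names, output contexts and decision tiers are hypothetical placeholders for illustration, and your own tiers should come out of legal, security and compliance review rather than this example.

    # Minimal sketch of machine-readable AI data boundaries.
    # All category names and decision tiers below are hypothetical
    # placeholders, not a recommended taxonomy.

    INPUT_POLICY = {
        "public": "allow",       # e.g., published marketing copy
        "internal": "review",    # e.g., internal documentation
        "financial": "deny",     # e.g., unreleased financial data
        "health": "deny",        # e.g., protected health information
    }

    OUTPUT_POLICY = {
        "email_draft": "low_scrutiny",
        "internal_translation": "low_scrutiny",
        "public_website": "high_scrutiny",
        "product_code": "high_scrutiny",
    }

    def check_use_case(input_category, output_context):
        """Return a policy decision for a proposed AI use case."""
        # Unknown data categories default to "deny" so new data types
        # are reviewed before anyone sends them to an AI tool.
        input_rule = INPUT_POLICY.get(input_category, "deny")
        if input_rule == "deny":
            return "denied: this data category may not be sent to AI tools"
        # Unknown output contexts default to the higher scrutiny tier.
        scrutiny = OUTPUT_POLICY.get(output_context, "high_scrutiny")
        if input_rule == "review" or scrutiny == "high_scrutiny":
            return "allowed with mandatory human review"
        return "allowed"

    print(check_use_case("public", "email_draft"))       # allowed
    print(check_use_case("internal", "public_website"))  # allowed with review
    print(check_use_case("financial", "email_draft"))    # denied

Defaulting unknown categories to the most restrictive tier is a deliberate design choice here: it forces a conversation with the owning team before a new data type ever reaches an AI tool.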

3. Create buy-in

HR and compliance teams play a significant role in educating employees on how to use technology, protect the company against cyber-attacks and address security concerns. Integrating AI into workflows will require additional training and upskilling opportunities. This presents not only a chance to educate but also to foster employee buy-in, ensuring they comprehend and align with your organization’s stance on AI usage. Employees may be unaware of how their AI prompts can potentially jeopardize data security and privacy. Take the initiative to explain the associated risks and emphasize the importance of individual actions. By promoting understanding and awareness, employees can actively contribute to safeguarding data rather than simply doing the bare minimum to comply with rules.

4. Audit policy

Once a tool has received approval, it is important to monitor and conduct audits to ensure compliance with the established guidelines. This is an opportunity to assess the practicality, customization and enforceability of the guidelines in the teams’ daily operations. If they are found to be impractical or difficult to implement, adjustments should be made to enhance their effectiveness.   
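As one hedged illustration of what such monitoring might look like, the short sketch below tallies AI tool usage from a hypothetical log and flags anything outside the approved list. The log structure and tool names are assumptions for illustration only; in practice this data would come from your SSO, proxy or procurement records.

    from collections import Counter

    # Hypothetical audit sketch: tally AI tool usage and flag
    # anything outside the approved list. The log structure and
    # tool names are illustrative assumptions only.

    APPROVED_TOOLS = {"enterprise-assistant"}

    usage_log = [
        {"user": "alice", "tool": "enterprise-assistant"},
        {"user": "bob", "tool": "random-chatbot"},
        {"user": "bob", "tool": "enterprise-assistant"},
    ]

    tool_counts = Counter(entry["tool"] for entry in usage_log)
    for tool, count in tool_counts.items():
        status = "approved" if tool in APPROVED_TOOLS else "NOT approved - follow up"
        print(f"{tool}: {count} use(s), {status}")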

5. Adjust as needed

Implementing workplace policy has never been a one-and-done exercise, and that is especially true in the evolving landscape of AI. Stay abreast of the latest trends in the field and maintain close collaboration with various business units to proactively understand their requirements and anticipate the adoption of upcoming tools. Adjustments may be required, such as limiting or expanding AI tool usage or aligning policies with the training materials provided by an enterprise AI solution.

AI workplace policy suggestions

There’s a delicate balance between promoting employee growth and mitigating risk. With that challenge in mind, here are some sample points you may want to adapt for your employee handbooks and AI policy:

1. Require employees to use their personal email address, not company email, when logging into non-corporate AI tools.

2. Caution employees against inputting personal information, customer data, confidential information or any other sensitive information into a GenAI tool. Limit usage to publicly available information. (One way to automate such a screen is sketched after this list.)

3. Protect all forms of proprietary information and assets. Avoid using company-owned text, audio and visual graphics in AI prompts. Preserve intellectual property rights by refraining from incorporating AI-generated answers into core assets, such as product or source code.

4. Maintain human oversight. Always be mindful that ChatGPT and similar tools can produce incorrect answers and make up facts. Therefore, review and edit all answers to ensure accuracy and quality before use.

5. Consider pre-approving trusted applications and use cases that have undergone appropriate checks and balances, tailored to your specific situation.
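To illustrate how points 2, 3 and 5 might be automated, here is a minimal pre-submission screen in Python. The patterns and tool identifiers are deliberately simple, hypothetical examples; a production screen would rely on vetted data-loss-prevention tooling and your own classification rules rather than a handful of regular expressions.

    import re

    # Illustrative prompt screen: block unapproved tools and prompts
    # that appear to contain obviously sensitive data. The patterns
    # and tool IDs are simplified, hypothetical examples.

    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    APPROVED_TOOLS = {"enterprise-assistant"}  # hypothetical pre-approved tool IDs

    def screen_prompt(tool_id, prompt):
        """Return reasons to block the prompt; an empty list means OK to send."""
        problems = []
        if tool_id not in APPROVED_TOOLS:
            problems.append(f"tool '{tool_id}' is not pre-approved")
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                problems.append(f"prompt appears to contain a {label}")
        return problems

    print(screen_prompt("enterprise-assistant", "Summarize our public press release."))
    print(screen_prompt("random-chatbot", "Email jane.doe@example.com the draft."))

A screen like this is best treated as a prompt for a conversation with the employee, not a silent block: surfacing why a prompt was flagged reinforces the training described above.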

Moving to an enterprise AI solution may help set clear boundaries for employees. Many enterprise vendors have been exploring innovative ways to leverage AI for years, integrating it into the workflows your employees already know to offer a strong user experience and drive adoption. When seeking a technology partner, look at the tools early adopters are leveraging.

With adaptable policies in place to safeguard the company, organizations can harness the benefits of GenAI while keeping its risks in check, positioning themselves as responsible AI adopters.