Using ChatGPT in the Workplace – Issues to Consider

March 07, 2023

By now, almost everyone has probably heard of ChatGPT, and many have certainly used it. It is no exaggeration to say that, as we entered 2023, ChatGPT has been an Internet phenomenon. It is also said that ChatGPT can accomplish in a few seconds many things that would take a human non-trivial effort, often hours or even days, to finish. Examples include:

  • turning a few bullet points into a well-written and comprehensive e-mail message
  • providing quick answers to simple questions, such as the not-to-miss places to visit in a tourist city
  • creating software code for specified problems
  • troubleshooting (i.e., debugging) software code
  • summarizing a long article
  • providing in-depth explanations of complex issues, with citations of information sources

It can also be asked to create new materials, such as building a website, generating promotional brochures or marketing presentations, or even writing a poem. For example, when asked to write a Tang-dynasty-style poem in Chinese, modelled after the style of the famous poet Wang Wei, it produced the following:

夕阳斜照树阴斜,水面微波归鸟啼。

一片秋风吹落叶,谁家砧杵震寒枝。

(Roughly: "The setting sun slants through the slanting shade of the trees; over rippling water, returning birds cry. A gust of autumn wind sends the leaves falling; from whose home does the pounding of the washing mallet shake the cold branches?")

This is actually not a bad poem!

Aside from recreational and personal uses, ChatGPT certainly can be a good tool in the workplace, for tasks such as preparing drafts, summarizing articles, and conducting research. It is very tempting for a business to take advantage of ChatGPT's capabilities to improve productivity and workplace efficiency.

However, if any business (or anyone) plans to use ChatGPT in the workplace, it is important to understand its limitations and potential risks before taking the plunge.

Much has been said about ChatGPT from many different perspectives, including its inherent bias, its lack of current or complete knowledge, cybersecurity risks, privacy concerns, academic integrity, and computational cost and power consumption, among others. This blog will discuss three selected issues related to using ChatGPT in the workplace: confidentiality and trade secrets, improper reliance on ChatGPT output, and intellectual property concerns.

Confidentiality and Trade Secrets

So, what is ChatGPT? According to ChatGPT itself:

“ChatGPT is a chatbot, also known as a conversational agent or virtual assistant, that is powered by OpenAI's large language model. I am designed to understand natural language and generate human-like responses to help answer questions, provide information, and engage in conversations with users like you. My programming allows me to learn and adapt over time, so the more I interact with people, the more I can improve my responses and accuracy.”

The chatbot takes input (or a prompt) from a user in natural language and generates a textual output written in a human-like manner. For example, the input can be a few bullet points for ChatGPT to turn into a client memo or draft letter; a piece of software code for ChatGPT to analyze and troubleshoot; or a long settlement agreement for ChatGPT to condense into a concise summary for inclusion in a report to the client.

What can go wrong in the examples above? Plenty. For example, there is a potential loss of a client's confidential, or even privileged, information when a settlement agreement is fed into ChatGPT, or a potential leak of trade secrets or proprietary information when software code segments are fed into ChatGPT. Not to mention that, for a lawyer, doing so also may fall short of a lawyer's professional and ethical standards.

Why? According to OpenAI's own terms and policies, any inputs or outputs (defined as "Content" by OpenAI) in a conversation with ChatGPT may be used by OpenAI for model training.[1] Worse, the Content may even be reviewed by OpenAI personnel to improve its models.[2]

OpenAI did state that any such use or human review would be subject to its confidentiality and security requirements. However, there is a lack of transparency as to its contractual arrangements with its employees and contractors, and there is no disclosure of the actual steps OpenAI takes to protect confidentiality and security. Even if there were perfect transparency, it is nonetheless doubtful that OpenAI's confidentiality and security requirements would meet the legal tests for such obligations in all situations and in all jurisdictions of the world. Further, how the "click-wrap" agreement between a ChatGPT user and OpenAI would be interpreted, and whether its confidentiality provisions would meet the legal requirements in a user's own jurisdiction, also would require careful analysis.

Therefore, care must be taken before any confidential or proprietary information is provided to ChatGPT. This may limit the use of ChatGPT to certain tasks, or may place restrictions on the textual inputs that may be supplied to it. However, ChatGPT still may be used for tasks not involving confidential or proprietary information, and the improvement in productivity could still be substantial even with appropriate restrictions in place.

Reliability of ChatGPT Output

Output from ChatGPT is known to be incorrect at times.[3] This is acknowledged by ChatGPT itself, which warns users on its start-up screen:

Limitations
May occasionally generate incorrect information

If a business relies on such incorrect information in its operations, or if a legal professional passes such incorrect information on to clients, the results could be far worse than some embarrassment. Here is an actual example of a dialogue with ChatGPT, illustrating how one may be misled and "induced" to infringe another's copyright:

Human: What do you know about internet downloading and copyright infringement in Canada?

AI: In Canada, downloading copyrighted content without permission or payment is generally considered a violation of copyright law. However, the exact legality of downloading copyrighted content from the internet is somewhat unclear.

In 2012, Canada passed new copyright legislation that included provisions for downloading copyrighted material for personal use, as long as no compensation was involved. This means that individuals can legally download copyrighted content for personal use, such as music or movies, as long as they do not share or distribute the content. [emphasis added]

This advice is simply wrong. Just imagine what would happen to an innocent individual who followed this advice from ChatGPT without consulting a lawyer or doing further research. When specifically asked about this "legally download[ing] copyrighted content for personal use" advice, ChatGPT immediately corrected itself and stated: "I apologize for any confusion that may have caused. My previous response was incorrect. In Canada, it is illegal to download copyrighted content without permission or payment, regardless of whether it is for personal use or not."

What is even more problematic is that ChatGPT not only can produce incorrect information, as acknowledged by ChatGPT itself and demonstrated by the example above, but it also may fabricate information. Such fabricated information may be presented plausibly alongside other information that is correct, making the fabrication very difficult to detect unless one is sufficiently knowledgeable in the subject area.

For example, when asked about Canadian cases involving the use of public domain works and copyright law, ChatGPT provided the following three examples:

"Society of Composers, Authors and Music Publishers of Canada v. Bell Canada: In this case, …. [summary omitted] .

Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright): In this case, … [summary omitted]

Ontario (Ministry of Government Services) v. Bell Canada: In this case, the Ontario government sought to obtain copies of public domain government documents from Bell Canada under freedom of information laws. Bell Canada claimed that it owned the copyright in the documents and refused to release them. The court ruled that the documents were in the public domain and that Bell Canada did not have the right to control their use or distribution.”

The citation ChatGPT provided for this third case was: Ontario (Ministry of Government Services) v. Bell Canada, 2011 ONSC 1171.

Of course, the problem is that this third case does not exist. The case actually reported at 2011 ONSC 1171 (ignoring the parties for now) is Yan v. Todrov, 2011 ONSC 1171. As is clear from the decision,[4] it has nothing to do with copyright. Nor does it have anything to do with the purported parties, Ontario or Bell Canada; the term "Bell Canada" does not even appear in the decision![5] If any reliance were placed on this fabricated case in a legal argument, clearly a judge would not be persuaded (nor pleased).

The example above highlights the importance of "checking your sources". ChatGPT can be a good tool for finding and collecting information, and its "research" summaries may be a good starting point for identifying areas or issues for further investigation. Used appropriately, it can be helpful in information gathering and research. However, when using any information generated by ChatGPT, one must exercise extreme caution: what is presented plausibly as "useful" information could have no factual basis whatsoever. It should be mandatory that such information be confirmed through independent research and verification, so that inaccuracies in the ChatGPT-generated information can be caught and corrected.

Intellectual Property Concerns

There are also intellectual property (IP) concerns when ChatGPT is used in the workplace. These concerns include ownership of generated content, loss of IP protection for user-provided content, and (unknowing) infringement of third-party IP, among others.

In general, the author of an original work (such as a text) owns the copyright in the work (or the author's employer does, if the work was produced in the course of employment). There has been debate over whether an AI may be an author or is capable of owning copyright. In a recent decision by the United States Copyright Office, authorship was recognized only in a limited scope for work produced with AI assistance, and only for the portions involving human post-processing (i.e., human authorship only). Even though OpenAI purportedly assigns all of its right, title and interest in and to the AI-generated content to the user whose prompts caused the generation, such assignment is only "to the extent permitted by applicable law". Therefore, what OpenAI may actually be able to assign to the user is very uncertain, to say the least. Any claim to ownership of purely AI-generated work likely will be challenged on the basis that such work lacks (human) originality, a threshold requirement for copyright authorship.

Additionally, ChatGPT does not promise that it will not generate identical (or at least very similar) content in response to identical or similar user prompts; it may produce the same or very similar content for different users. In the traditional situation, if identical or similar content is used by others, proving infringement may be quite straightforward. Not so if the content is ChatGPT-generated: the same content may have been provided by ChatGPT to different users, and one could lose control over the use of ChatGPT-generated content for lack of IP ownership. To preserve IP protection (and therefore control over your content), it may be necessary to rewrite (or at least revise) anything generated by ChatGPT, to add the human touch.

Another IP concern arises from providing user content to ChatGPT. In general, a user would have IP rights in the user's own content, such as copyright or trade secrets, or both. Yet, when a user provides that content to ChatGPT, the user will have granted OpenAI a license to use it to improve OpenAI's models. In other words, such user content may find its way into text generated by ChatGPT in the future. For example, a user may provide some software code to ChatGPT for debugging; segments of this code may later appear in code examples provided by ChatGPT to other parties. The confidentiality of such user content also may be lost. A business will need to ask itself whether these are intended and acceptable consequences of providing its content to ChatGPT.

Finally, because ChatGPT relies on its training data to generate its outputs, there is a small, but still real, possibility that a generated output could be too close to a source text. If a human produced text substantially taken from source texts, there likely would be copyright infringement; would it be a defensible excuse simply because the text was generated by a machine? Probably not. The potential IP risk may well outweigh any efficiency gained from using such generated text. For this reason, i.e., to avoid infringement risk, it also would be prudent for a business enterprise to rewrite (or revise) anything generated by ChatGPT before using it in its business.

To conclude, ChatGPT (or any similar AI chatbot) can be a very good tool in the workplace if used appropriately. A business enterprise (or anyone) using it in the workplace will need to know when, and how, not to use it in order to avoid legal liabilities or losses, while still leveraging its capabilities as fully as possible to improve productivity and efficiency.


[1] Information on how "Content" will be used by OpenAI is scattered in various documents provided by OpenAI. Here are some examples:

  • In its latest "Terms of use" dated March 1, 2023, it states that "You may provide input to the Services (“Input”), and receive output generated and returned by the Services based on the Input (“Output”). Input and Output are collectively “Content.” ... OpenAI may use Content as necessary to provide and maintain the Services".
  • In an article "How your data is used to improve model performance", incorporated by reference in the Terms of use, it states that "When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models."
  • In its "Data Usage Policies" dated March 1, 2023, it references the article "Data usage for consumer services FAQ" (https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq), which explains: "Does OpenAI train on my content to improve model performance? For non-API consumer products like ChatGPT and DALL-E, we may use content such as prompts, responses, uploaded images, and generated images to improve our services".

[2] According to the article "Data usage for consumer services FAQ" cited above: "Do humans view my content? A limited number of authorized OpenAI personnel may view and access user content only as needed for these reasons: ... or (4) when we use de-identified content to improve our models and services".

[3] NBC recently reported several incidents of incorrect information provided by ChatGPT to its reporter, in which ChatGPT was told it needed "a refresher on the lessons of Journalism 101." See "Fake News? ChatGPT Has a Knack for Making Up Phony Anonymous Sources".

[4] As per Perell J. in Yan v. Todrov, 2011 ONSC 1171, at [1]-[4]:

“[1] This is a continuation of a motion for a default judgment. 

[2] The Defendant, Evgueni Ivanov Todorov, did not defend this action and was noted in default, and on November 12, 2010, Justice Allen granted a default judgment against him for breach of contract. Justice Allen directed that the Plaintiff Guotai Yan’s claim for fraud and the quantification of his damages be heard by further motion.

[3] By way of summary, the factual background to Mr. Yan’s further motion for a default judgment is as follows.

[4] Mr. Todorov was an unregistered investment advisor and the principal of 1045742 Ontario Ltd., which operated an investment club known as the Dow Jones Industrial Average Club. The Club traded in securities, and Mr. Yan signed four investment agreements, the pertinent terms of which I will set out below.”

[5] Although the first two cited cases do exist, they also were not about the use of works in the public domain, and the summaries provided for them also were incorrect.

Tags: AI, AI authorship, artificial intelligence, chatGPT, confidential information, copyright, Intellectual Property, IP, proprietary information, software, technology, trade secrets