Exploring the Business Risks and Challenges of ChatGPT

May 23, 2023

You’ve undoubtedly heard about the growing number of commercial use cases for AI tools like ChatGPT as eager organizations seek productivity gains across functions ranging from IT and security to marketing and HR. However, for business and security leaders, these innovative, perhaps even revolutionary, technologies also pose new risks and challenges.

This piece examines the legal risks, along with the financial, ethical and practical challenges, that ChatGPT and AI present for organizations leveraging these new tools.

Legal Risks of ChatGPT 

At a legal level, there is an open question about who owns the code ChatGPT generates. Its terms and conditions are at play, but just as it is often not sufficient to claim ownership of an employee’s work without having them specifically assign it as a work for hire, it may not be sufficient to assume code generated by an employee for a company is owned by that company.

Also, it is common for services to change their terms and conditions on a regular basis. What happens to code ownership if the terms and conditions change? Is it possible for the terms and conditions to change in a way that negatively impacts already-generated code? A license change by any major company can affect a significant number of existing businesses, and even seemingly minor changes to ChatGPT’s terms and conditions could have a significant business impact.

Financial Challenges of ChatGPT 

As of this writing, ChatGPT can still be used for free to generate content for commercial purposes (albeit with limitations based on application usage). However, the obvious direct financial threat is ChatGPT changing its pricing model, something already introduced by way of the ChatGPT Plus option, which offers enhanced benefits to paid subscribers. While it is impossible to know how the pricing model will change over time, as the usage cost increases, it becomes less advantageous to use ChatGPT vs. hiring entry-level personnel. There is also the concern of valuation: if a startup chooses to leverage AI-generated code, will that negatively impact its valuation in the same way using third-party code and, to a lesser extent, open source code does?

We will not know the answers to these questions for quite some time.

Ethical Challenges of ChatGPT 

AI systems are built on training data. We are already seeing a large public discussion of the ethics of training AI models on the work of artists, usually without direct compensation and almost always without any possibility of royalties, even though the use of such AI can take commissions away from those artists. However, we haven’t even begun to scratch the surface of the ethical issues ChatGPT raises in a business context.

If ChatGPT is used to generate policy and procedure documents, do we know the policies and procedures it uses as “inspiration” are properly licensed? What happens if a key phrase is replicated? Can that be used as evidence of copyright violation?

What about the ethics of using AI-generated code? If the code generated includes implicit bias, who is at fault? ChatGPT provides an understandable, but not fully satisfactory, answer on this topic:

Figure: Tricking ChatGPT to Write Phishing Emails


When prompted with broader questions, ChatGPT is a bit clearer, but it is obvious this view barely scratches the surface of the issues widespread use of this technology will raise.

Figure: 4 Core Functions of the NIST AI Risk Management Framework
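For organizations that want to put structure around these questions, the NIST AI Risk Management Framework’s four core functions (Govern, Map, Measure and Manage) offer one frame. The following is a minimal, hypothetical sketch in Python, not an official NIST artifact, showing how a team might track the kinds of risks discussed in this post against those functions; the entries are illustrative only.

# Hypothetical risk-register sketch organized around the NIST AI RMF's
# four core functions: Govern, Map, Measure, Manage.
# The entries are illustrative examples drawn from this post, not an
# official NIST artifact or a complete register.

from dataclasses import dataclass, field

@dataclass
class AIRiskRegister:
    govern: list[str] = field(default_factory=list)   # policies and accountability
    map: list[str] = field(default_factory=list)      # where and how AI is used
    measure: list[str] = field(default_factory=list)  # metrics and testing
    manage: list[str] = field(default_factory=list)   # response and mitigation

register = AIRiskRegister(
    govern=["Define who owns AI-generated code", "Review ChatGPT terms-of-service changes"],
    map=["Inventory teams using ChatGPT for code or policy documents"],
    measure=["Track review and rework hours for generated code", "Test generated code for bias"],
    manage=["Restrict high-risk use cases (e.g., loan origination)"],
)

# Print a simple summary of how many items are tracked under each function.
for function, items in vars(register).items():
    print(f"{function.title()}: {len(items)} tracked item(s)")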


Practical Business Challenges of ChatGPT 

At a practical level, it is important to consider the fundamental tradeoffs of any new technology. ChatGPT can generate a significant amount of code, albeit at varying levels of quality. While quality will improve over time, there is still a question of whether it can meet the quality level of a person (or dedicated AI system) who understands the business context in which it operates. It is significantly cheaper to use than hiring human developers, but the cost-benefit analysis depends entirely on how long it takes your people to review, adjust, test and vet the generated code, as the rough sketch below illustrates.
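As a rough illustration of that tradeoff, the break-even point can be framed as a comparison of loaded developer cost against the hours spent reviewing and reworking generated code. The rates, hours and names below are illustrative assumptions, not benchmarks; this is a minimal Python sketch, not a full cost model.

# Rough break-even sketch: is AI-assisted code generation cheaper once
# review, adjustment, testing and vetting time is accounted for?
# All figures below are illustrative placeholders, not benchmarks.

HOURLY_RATE = 75.0             # assumed loaded cost of a developer, USD/hour
HOURS_TO_WRITE_BY_HAND = 10.0  # assumed time to write the feature manually
SUBSCRIPTION_COST = 20.0       # assumed per-month tool cost (e.g., a paid tier)

def ai_assisted_cost(review_hours: float) -> float:
    """Cost of generating the code plus the human time to review, adjust,
    test and vet it."""
    return SUBSCRIPTION_COST + review_hours * HOURLY_RATE

def manual_cost() -> float:
    """Cost of writing the same code by hand."""
    return HOURS_TO_WRITE_BY_HAND * HOURLY_RATE

if __name__ == "__main__":
    for review_hours in (2.0, 5.0, 8.0, 12.0):
        ai = ai_assisted_cost(review_hours)
        manual = manual_cost()
        verdict = "cheaper" if ai < manual else "more expensive"
        print(f"{review_hours:>4.1f}h of review: AI-assisted ${ai:,.2f} "
              f"vs. manual ${manual:,.2f} -> AI-assisted is {verdict}")

In this toy model, once review and rework time approaches the time it would have taken to write the code by hand, the savings largely disappear.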

Much like we saw with the outsourcing of development to offshore resources many years ago, cheaper development does not always mean the entire program is less expensive or more efficient. The hidden costs are often significantly higher than even experienced professionals with significant knowledge of this space expect.

Navigating Business Risks and Challenges of ChatGPT 

Looking a bit further ahead, constantly hiring and training entry-level programmers or security engineers can be a drain, and the appeal of replacing that business process with automatic code generation is understandable. However, that is also the process that creates experienced security professionals with enough cyber- and company-specific knowledge to troubleshoot problems and develop new features and functionality. How a CISO would develop and retain top cybersecurity talent without that feeder pool remains to be seen.

To avoid legal and ethical concerns, do not use the technology for anything that could be problematic in the near future (e.g., loan origination). Wait for the courts to figure out the bigger issues before taking that risk.
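One way to operationalize that guidance is a simple internal guardrail that blocks ChatGPT use for declared high-risk business processes before a prompt is ever sent. The categories and function below are hypothetical and purely illustrative; each organization should define its own list with counsel.

# Hypothetical guardrail: block ChatGPT use for business processes that
# carry elevated legal or ethical risk. The categories are illustrative
# only; an organization should define its own list with legal counsel.

HIGH_RISK_USE_CASES = {
    "loan origination",
    "hiring decisions",
    "medical advice",
}

def is_use_case_allowed(use_case: str) -> bool:
    """Return False when the declared use case is on the high-risk list."""
    return use_case.strip().lower() not in HIGH_RISK_USE_CASES

if __name__ == "__main__":
    for case in ("marketing copy", "loan origination"):
        status = "allowed" if is_use_case_allowed(case) else "blocked pending legal review"
        print(f"{case}: {status}")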


