ChatGPT and Getting our Minds Around Artificial Intelligence

With so many competing headlines proclaiming AI as humanity's next great technological savior while others prophesy it will be our doom, it can be hard to get a sense of where we actually stand.

Here’s what we know.

By now, most people have heard of leading-edge AI technologies, most notably OpenAI's ChatGPT.

In November 2022, OpenAI released ChatGPT with a splash: multiple headlines, news stories debating the technology's emotional competence, and influencers taking to popular social streams to share their opinions.

ChatGPT has been marketed for a variety of purposes, from automating contract creation in the legal field to aiding in medical diagnosis. It's been used by professionals, students, and curious individuals to answer questions, solve problems, generate code, have conversations and more. But, as with most technologies, people can and do find ways to exploit it, which can have damaging consequences.

Recently, more than 1,000 academics and industry leaders have called for a pause on the development of more powerful AI technologies like ChatGPT, as questions circulate about the impact these new technologies may have on the economy and society.

So how should organizations and colleagues adapt to this new AI?

In this analysis, we’ll explain what organizations and colleagues need to know about ChatGPT and provide some recommendations for how to proceed with this new technology.

How ChatGPT Works

GPT-3, the model family underlying the original ChatGPT, was first released in 2020 and was trained on content taken from across the Internet that could be publicly accessed, i.e., wasn't password protected. This included news sites, academic websites, commercial websites, public social media and more. The Internet offers an almost infinite supply of content, and with that comes harmful material and hurtful biases.

It included the best of what the Internet has to offer, as well as the worst.

Since ChatGPT isn't human, it has no way to distinguish right from wrong and has an extremely limited grasp of context. Put another way, it has no way of telling how it should respond to different people with different perspectives, opinions or beliefs posing it questions; it simply produces a response based on mathematical predictions that each sequence of words is probably correct.

ChatGPT can only create responses based on the information in its training data and what it picks up from the questions it is asked. That means the more people misuse the AI, the better it becomes at producing the kinds of responses cybercriminals want.

ChatGPT is not intelligent in the way humans are. This kind of artificial intelligence doesn't have true cognition or self-awareness. There's no "I think, therefore I am."

As Bender, Gebru, McMillan-Major and Shmitchell describe in their ground-breaking 2021 paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", generative AI built on large language models (LLMs) is a:

“ …system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
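
To make the "stochastic parrot" idea concrete, here is a minimal toy sketch in Python. This is not how ChatGPT is actually implemented (real LLMs use neural networks with billions of parameters operating on tokens), but it shows the core mechanic the authors describe: choosing each next word purely from probabilities observed in training text, with no reference to meaning. The training sentence is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy training text -- invented for illustration only.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed
    `prev` in the training text -- pure statistics, no meaning."""
    counts = bigrams[prev]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate a short sequence starting from "the".
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output reads as plausible word sequences precisely because each step follows the statistics of the training text, yet the program has no idea what a cat or a mat is, which is the parrot in "stochastic parrot."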

Is ChatGPT Accurate?

ChatGPT can only provide answers based on the information in its training data. And while that data covers a wide array of sources, ChatGPT may not produce the correct answer (or what the person posing the question would interpret as the correct answer).

When challenged, ChatGPT will often hold its ground and defend the answer it provided. Some people may interpret this as being stubborn or argumentative, but it could also mean that, when faced with confrontation, ChatGPT draws on the many examples of confrontation in its training data and reproduces those patterns.

There is also the tricky concern of plagiarism. Some of the content collected to train these models may not have been obtained with proper consent under copyright law, and its collection could violate various countries' privacy laws.

A research team from Penn State University reviewed the training corpus fed into GPT-2, some eight million documents, and found that the model's outputs exhibited known forms of plagiarism. They also found that the larger the language model, the more likely plagiarism was to occur.

Both GPT-3 and GPT-4 are significantly larger than GPT-2, so we can expect plagiarism to remain an issue with the later models.

ChatGPT and Organizational Concerns

There's also the risk of sharing confidential information, as some companies have already discovered when employees pasted confidential software code into ChatGPT to have it find errors, and even loaded an entire meeting transcript into the system to have it write a summary.

The issue with uploading confidential company or personal information into ChatGPT is that there are no clear guidelines on how that information will be stored or re-used. For example, if you ask ChatGPT to find errors in your software code and someone else later asks it a similar question, ChatGPT may draw on the information you provided to answer them.

ChatGPT continues to learn from the content it ingests and the questions people ask it. Unfortunately, cybercriminals have already discovered this and are exploiting the technology to create malicious software code. Additionally, ChatGPT's fluent, high-quality writing will no doubt improve the effectiveness of phishing emails, particularly those written by criminals working in languages in which they weren't previously proficient.

How Organizations Can Manage AI Risk

First, organizations need to assess the pros and cons of this technology carefully. Can they use it in a responsible, helpful, and ethical way that adds value to customers?

Second, organizations need to develop policies and educate employees about what is and isn't appropriate to input into AI technologies like ChatGPT. Your organization may also want to designate authorized AI providers, such as existing trusted cloud services like Microsoft's Azure OpenAI Service, which can help address data privacy concerns.
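
As a rough illustration, here is what routing requests through an organization-approved Azure OpenAI deployment might look like using the official openai Python library. The endpoint, environment variable names and deployment name below are placeholders, not real values; your organization's configuration will differ.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Sketch only: the endpoint, API version, and deployment name are
# placeholders -- substitute your organization's approved values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # org-controlled endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-org-gpt-deployment",  # hypothetical approved deployment name
    messages=[{"role": "user", "content": "Summarize our AI usage policy."}],
)
print(response.choices[0].message.content)
```

Centralizing access this way lets the organization control where prompts are sent and rely on the provider's contractual data-handling commitments rather than each employee's personal account.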

What Can Colleagues Do to Manage AI Risks?

First, be mindful of the information you're sharing. Whatever you ask or input into ChatGPT or other AI services may not be private and could be accessed or used in ways you didn't intend. Don't share personal, work-related or any other confidential information, and make sure you understand the terms of service of any AI provider you use.
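
As one illustrative precaution (a sketch, not a complete safeguard), a simple pre-flight check can flag obviously sensitive patterns before a prompt is sent to an AI service. The patterns below are invented examples; real data classification rules vary by organization, and regex screening will never catch everything.

```python
import re

# Illustrative patterns only -- adapt to your organization's data
# classification rules; regex screening is a backstop, not a guarantee.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                  # card-number-like digit runs
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\b"),   # credential keywords
]

def check_prompt(prompt: str) -> list[str]:
    """Return any sensitive-looking fragments found in a prompt."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(pattern.findall(prompt))
    return hits

prompt = "Please review this config: password=hunter2, contact ops@example.com"
findings = check_prompt(prompt)
if findings:
    print("Do not send -- possible confidential data:", findings)
```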

Second, if you're not sure whether something is confidential, ask your manager or IT team. They can guide you on best practices and explain what information is and isn't allowed to be shared under your organization's policies.
