Artificial Intelligence and Human Writing and Thinking

by John Bandler

Artificial intelligence, machine learning, and other software programs and algorithms are powerful tools that are here to stay.

Like any tool, they can be used for good, for bad, or just misused.

Every person writes, some write more than others, and for some writing is their livelihood (the traditional "writer"). But for the purposes of this article, "writer" means every person who writes, and that includes most students, most employees, and nearly everyone else.

The evolution to generative AI

It is always helpful to think about the latest technological tool or advancement within the context of what has come before, even long ago. I do this with cybercrime and virtual currency, and it fits here too.

Humans have been able to improve themselves thanks to advancements over time in the way we communicate, think, and store and convey knowledge. Once upon a time we did not even know how to speak, so consider these milestones:

  • Non-verbal communication
  • Verbal communication
  • Cave writings and stone writings
  • Writing on papyrus and paper
  • Hand copying to reproduce written work
  • The printing press
  • Typewriters
  • Newspapers
  • Early computers and early word processing
  • The internet and search engines ("Google")
  • Internet instant communication (news, Twitter, YouTube, etc.)
  • Today's word processing tools
  • ChatGPT, Artificial Intelligence (AI), and "Generative AI"

At each stage there was the opportunity for great advances and good, but also risks and negative consequences.

As computers and software have gotten more and more sophisticated, the hope is they will do things better and make life and business work better.

As a prosecutor, I remember the software tools that were touted to manage case information, legal documents, analyze voluminous data, make cases, and handle complex exhibits for trial. The marketing materials sure made it seem like there was an "easy button" you could push, once the software was installed.

The Internet

For as long as people have been doing research, there have been at least one or two who do sloppy research, or even copy the material of others without attribution. This copying might constitute plagiarism, copyright violation, or violation of other rules.

With computers and the internet, vast troves of materials became available for all, from the comfort of their home or office.

Good researchers and investigators now had a new ability to access information; it was then up to them to synthesize it, form their own thoughts and words, and cite appropriately.

For those less diligent, this was a new opportunity to copy and paste.

Then there was the need for software tools to detect that copying and analyze it (e.g., plagiarism checking software). That in turn created a need for tools to help evade plagiarism detection software -- to check a paper against it, or maybe even affirmatively change enough words to escape detection. A simplified sketch of how detection works appears below.
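
Commercial plagiarism checkers are proprietary and far more sophisticated, but the core idea can be illustrated simply: measure how much word-for-word material two documents share, for example by comparing overlapping word sequences ("n-grams"). Here is a minimal Python sketch of that idea; the function names and sample sentences are invented for illustration and are not drawn from any real product.

    def ngrams(text, n=3):
        """Return the set of overlapping n-word sequences in a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(doc_a, doc_b, n=3):
        """Jaccard similarity of two documents' n-gram sets, from 0.0 to 1.0."""
        a, b = ngrams(doc_a, n), ngrams(doc_b, n)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    original = "writing done properly is a process valuable for the writer"
    suspect = "writing done properly is a process that rewards shortcuts"
    print(overlap_score(original, suspect))  # higher score suggests more copying

An evasion tool would simply work in reverse: swap enough words that the score drops below whatever threshold a detector uses.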

AI to write? (generative AI)

Eventually, software developers, with the help of AI, wondered if the software itself could "write" on its own. What if it could digest all of the writing available on the internet (or on a particular platform) and write based upon that?

Instead of just copying someone else's writing, or rearranging a few words, what if it could "learn" enough to be an expert in any field, analyze any document or group of documents, and write like an expert?
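
Modern generative AI is vastly more complex, but the basic notion (ingest a body of text, learn its statistical patterns, then emit new text from those patterns) can be shown with a toy word-level Markov chain. This is a simplified sketch on an invented one-line corpus, not how large language models actually work.

    import random
    from collections import defaultdict

    def train(text):
        """Record which words were observed to follow each word."""
        words = text.split()
        model = defaultdict(list)
        for current, following in zip(words, words[1:]):
            model[current].append(following)
        return model

    def generate(model, start, length=12):
        """Walk the model, picking each next word at random from observed followers."""
        out = [start]
        for _ in range(length - 1):
            followers = model.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train(corpus), "the"))  # e.g., "the dog sat on the mat and the cat sat on"

The output reads like its training text, yet the tool has no idea what it is saying, which is the heart of the reliability concerns discussed below.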

Writing is a process, not just a destination

Writing, done properly, is a process which is valuable for creating an excellent final product and also for the writer's personal growth.

The problem with shortcuts, whether copying or relying upon generative AI like ChatGPT, is that they can eliminate the process of researching, writing, editing, and thinking.

A person who takes these shortcuts may never develop skills and confidence in the subject matter, or in their own writing and thought process.

The student example

Imagine a student in high school, college, graduate school, or law school. The course requires a paper, which requires research, thought, writing, editing and submitting a final product.

A student who does all the right things will expend effort and time to arrive at a result, the final paper. Throughout the process the student will experience some frustration, some exasperation, and spend many hours working on the task.

There may occasionally be a student who looks for shortcuts. They could do any number of things that have traditionally been available to students (and which violate school rules and other rules), including copying, having someone else write the paper, or buying or obtaining a paper from elsewhere. But now ChatGPT and other similar tools provide another shortcut.

The main problem with any of these shortcuts is that the student will not have put in the effort, and will not learn. The student will not have improved their abilities, and will not gain any confidence in their skills. They may even come to think of themselves as someone who is not good at doing their own research and writing, and that they need to rely on someone else -- or some other tool -- to perform the task. And they will not be able to judge whether the task they outsourced (to a tool or another person) was done well or poorly.

That is why the final paper project assignment in my formal teaching emphasizes the process. This better ensures students will put in effort throughout the semester, because only with that effort can they learn and build the important skills they will need in life. And they will need those skills no matter how good AI software tools become.

The organization policy example

Now imagine an organization that needs to create or update its own internal policies, standards, or procedures. Policies for cybersecurity, physical security, human resources, or anything else.

That organization has employees, and at least one of them needs to work on this task of creating and updating policies. If the employee copies something from the internet, or uses ChatGPT to create the policy, they might not know why that particular document was written that way, who it was written for, or whether it is of good quality.

Imagine an employee who decided to copy a policy from who-knows-where on the internet. That became their organization's policy, without proper vetting and thought.

There will be instances in the future where an employee does the same with ChatGPT, creating their organization's new policy, potentially without knowing the ramifications.

That is why my policy-building concept emphasizes a process that assesses five important components to build and improve organization management documents.

The AI black box

When one uses AI to generate something, one rarely knows what sources the tool used and how it weighted them to arrive at the "result". That is one layer of uncertainty and unknowability.

Further, some who use AI to generate content will not admit it, and will lead others to believe that they crafted it. They will pretend it is their own creation after diligent research and much effort. That further obfuscates the connection to AI and the original sources.

Does AI cite its sources?

If AI does not quote or cite its sources, is it doing something improper?

Separately, suppose an individual copies from AI output that failed to quote or cite sources. That person would be carrying on the failure to cite original sources, and would probably be doing something improper. Their defense might be: "I didn't mean to plagiarize from X, I was just copying from ChatGPT," which is not a good defense.

Similarly, suppose a student copied another student's paper, and that copied paper contained its own failure to quote and cite (e.g., plagiarism). That student's defense would not be good either: "I didn't mean to plagiarize from X, I was just submitting a copied/purchased paper."

Even if AI cites sources, are those citations reliable? Recent examples indicate not.

Should individuals cite to AI?

If individuals use AI, yes, they should cite to it. After all, it is not their own work; it is the work of a tool, a tool created by someone else, that has incorporated the work of others. That should be properly noted. For example, a note such as "Drafted with the assistance of ChatGPT (OpenAI); reviewed and edited by the author" makes the tool's role clear.

OpenAI acknowledges its output might not be reliable

OpenAI includes a disclaimer about accuracy, or the lack thereof. See their terms of use, section 3(d):

(d) Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.
OpenAI terms of use, https://openai.com/policies/terms-of-use

Whether this warning is designed to protect the user from incorporating inaccurate material, or to insulate OpenAI from liability for providing false or defamatory information, does not matter.

How do we know what is reliable in life anyway?

When one researches, one is investigating: reviewing references and sources, and assessing their reliability, credibility, and weight.

It is always challenging to know what is reliable. We look to the source of the information: their credibility, their reliability. We can look to the statement itself: does it seem to make sense, and can it be corroborated? And we analyze any conclusions or opinions to see if they make sense.

Perhaps the main lesson is we still need to think and assess. We cannot blindly accept statements, including from AI tools.

AI will get "better", but the risks will still remain

Generative AI has had a rough start.

  • Some attorneys have improperly relied upon it and gotten into hot water with the judge, citing fictitious cases and more.
  • Generative AI tools have been sued for defamation, for making things up that allegedly harmed people.

AI tools will get better, but the risks will remain.

We humans need to keep our thinking caps on, much as we would like computers to do our thinking for us.

Conclusion

This short article is not tailored to your circumstances and is not legal or consulting advice.

If you want to learn more about writing, whether for your academic institution or for your organization's policies, there is plenty of material on this site; see below.

If your organization needs help with improving its internal documentation and compliance with laws and regulations, including regarding cybersecurity and protecting from cybercrime, let me know.

Additional reading

This article is hosted at https://johnbandler.com/artificial-intelligence-writing-thinking/, copyright John Bandler, all rights reserved.

Originally posted 9/29/2023, updated 3/16/2024.