Thursday, January 04, 2024

Musing on Generative AI (ChatGPT, Bard) - 4 January 2024

Generative Artificial Intelligence (Generative AI) has hit the big time.  As evidenced by ChatGPT from OpenAI and Bard from Google, the massive growth in available compute resources has allowed large language models (LLMs) to start working their way into daily use.  Some people use LLMs to generate product descriptions or speeches, some publications use them to generate articles, and others use related generative models to create novel images and movies.  The Internet is being slowly poisoned by the creeping influence of LLM-generated data.

Leading commentators are claiming this is the end of work.  Everyone from journalists to artists to writers to physicians will supposedly be replaced by LLM-driven AI.  In my professional area, people will simply describe what they want and AI will write the code, displacing millions of programmers.  Poof!  On the dole they go by the millions.

This is silly, and people should know better.  To borrow a concept from (I think) the Gartner Group (GG) consulting firm, LLMs are approaching the Peak of Inflated Expectations on the GG Hype Cycle, and we will soon see the Trough of Disillusionment.  (For details, see the Gartner Group Hype Cycle chart.)  How can I, a mere pleb, a common person, make this claim and contradict the great Gartner Group geniuses and the wisdom of the great commentators of our time?

One word:  Hallucinations.  The LLMs we use today routinely inject massive errors into their results, errors that are called hallucinations in polite company, and 🐄💩 in more direct company.  You can feed good stuff into an LLM and get garbage out (as an aside, the new meaning of GIGO in the AI era is Good Input, Garbage Output).  So hiring poor programmers, or non-programmers, to run your great LLM Programming Machines will tend to generate garbage that only experienced programmers will be able to detect and fix.  Programmers are not going away, at least not because of LLMs.
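To make the point concrete, here is a hypothetical sketch (my own invention, not actual ChatGPT or Bard output) of the kind of plausible-looking code an LLM might hand you.  It runs without error and even produces partially correct answers, which is exactly why a non-programmer would ship it; an experienced programmer would spot the off-by-one bug.

```python
# Hypothetical "LLM answer" to: "write a function that returns the
# moving averages of a list".  It looks fine and runs cleanly,
# but silently drops the final window -- a classic subtle hallucination.

def moving_average_buggy(values, window):
    # Bug: range() stops one window too early, so the last average is missing.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

def moving_average_fixed(values, window):
    # Correct: there are len(values) - window + 1 full windows.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average_buggy([1, 2, 3, 4], 2))  # [1.5, 2.5] -- where did 3.5 go?
print(moving_average_fixed([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

Nothing crashes, no exception is raised, and the output looks reasonable at a glance.  That is the danger: detecting this class of garbage requires exactly the experienced programmers the commentators say we no longer need.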


