Throughout history, man has tended to be fascinated by what he fears, and several ancient cultures venerated the very things that haunted them. The king cobra, which preys chiefly on other snakes, is one of the most dangerously beautiful serpents in the world, and its place in our imagination has plenty of company, from the Gorgons of Greek mythology to cobra worship in India. This uneasy relationship between man and snake invites an analogy with our relationship to technology: we tend to fear whatever we cannot wrap our heads around, until incremental adjustment gives way to acceptance. The analogy ends there, however. Snakes have no bearing on our social development, war strategy or quality of life, while technology is a foundational, readily observable measure of human progress. The king cobra, meanwhile, simply kept on eating other snakes and biting people.
Man has always exerted himself intellectually to understand the world around him and harness it, first as a necessity for survival. As that paid off, capitalism pushed us towards the pursuit of comfort and pleasure. We went from sharpened stone tools to steel swords over millennia and happily gored each other for differing reasons, dictated by primal greed, hubris, religion, environmental pressure or a mixture of these.
During the Industrial Age, women would go down to the river or the communal laundry and exchange local gossip to distract from the drudgery. The spread of the electric washing machine in the early twentieth century not only eased that laborious existence; it freed time for productive use in study and business, giving the women's rights movement fresh impetus. It also shifted the patterns of socialization.
In the Information Age, the 80s and 90s signalled the real advent of the Internet and the democratization of computing through the personal computer, amplifying the hue and cry over people losing jobs to computers. At the turn of the millennium, with the panic of the Y2K bug over, attention turned to the effect of nascent social media platforms on society, led by a pimply-faced Mark Zuckerberg in 2004 with the launch of "The Facebook".
Fast forward to November 2022 and the introduction of OpenAI's ChatGPT and DALL-E, artificial intelligence (AI) systems built on large models, and the hubbub approaches a shriek. I want to plug my ears for a moment against the cacophony and concentrate instead on the impact of AI, as manifested in ChatGPT in particular, on the originality of human thought and creativity.
This article isn't exclusively about ChatGPT but more about its parent field – artificial intelligence. For more than a century, writers have conjectured about man-made machines overtaking humanity in intelligence, from Talos of Greek mythology – an automaton created by the gods to protect the island of Crete – to Isaac Asimov's I, Robot and the more contemporary Terminator movies by James Cameron. AI is portrayed either as utopian – good for humanity, in short – or as dystopian, where a super-intelligent artificial intelligence decides humans are too puny to continue messing up the earth and occupying space. In the dystopian telling, it is simply natural evolution that machine intelligence should take over, enslaving or eliminating the unworthy human race.
AI today is quite sophisticated. Six months ago Blake Lemoine, a Google engineer working on its chatbot LaMDA, claimed in an interview with the Washington Post that the natural language processor was sentient. He was promptly fired by the 1.2 trillion-dollar company. If a software engineer can be fooled by a machine whose innards he had access to, what about us ordinary souls?
What is Artificial Intelligence for the Rest of Us?
Without getting into a philosophical argument about what intelligence itself is, artificial intelligence combines fields that attempt to mimic the human brain's ability to learn and to form an assessment or judgement from information. Researchers follow several paths to achieve this. One model employs brute force: the computer system exploits its speed and greater calculating power to cycle through all probable outcomes of a problem until it arrives at a verified correct answer. Another, more in line with how human brains work, is reinforcement learning, whereby the system iterates through a problem millions of times and converges on an optimal strategy guided by points (rewards) awarded for good outcomes.
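The reinforcement-learning loop described above can be sketched with a toy example. Everything here – the five-cell "corridor" environment, the parameters, the tabular Q-learning variant – is an illustrative assumption for this sketch, not a description of any particular AI system:

```python
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends an episode
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] estimates
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore (ties broken at random).
            if rng.random() < EPSILON or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # "points" only at the goal
            # Q-learning update: nudge the estimate toward the reward plus
            # the discounted value of the best action from the next state.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After many iterations, every non-terminal cell should prefer stepping right.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The system is never told that "right" is correct; the strategy emerges purely from repeated trials and the points awarded on success, which is the essence of the approach.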
Artificial Intelligence is finally having the impact foreseen by the likes of Alan Turing, the father of modern computing and AI. In 1950 he proposed a procedure, later christened the Turing Test, in which a machine is asked a set of questions to determine whether it can think – whether it has intelligence. The test has been employed for more than 70 years, even though detractors point out that it rests on deceiving the interrogator into believing the machine is human, and is not a measure of true intelligence.
Besides, AI is not only about natural language processing. Nonetheless, the test offered a simple way to evaluate AI systems.
ChatGPT and DALL-E 2
Both ChatGPT and DALL-E 2, from OpenAI of San Francisco, California, build on what is called natural language processing: the former produces comprehensive textual responses to queries, while the latter generates realistic images from straightforward textual descriptions.
Originality
The meeting of different peoples, whether amicable or not, has always resulted in an exchange of knowledge. In all of Man's travails towards progression and development, he has copied and shared knowledge, constantly adding to and improving upon it. Muslim conquerors learnt from Greek and Roman architecture and created their own distinctive domes, now recognized as Islamic architecture. Universities award PhDs, or Doctor of Philosophy titles, to acknowledge specific cerebral effort that adds to the compendium of knowledge. The point of all this: man being inspired by the sweat of another is still, strictly, man's originality. All that is about to change.
The Challenge to Journalism
As if Russia's reported manipulation of the 2016 US elections in favour of Donald Trump, which gave rise to the term 'fake news', wasn't enough of a jolt, journalism is probably among the first victims of the natural language processing AI systems now heralded by ChatGPT. Automation had already come knocking: in 2013, Yahoo purchased a news-summarizing app from 17-year-old Nick D'Aloisio for a whopping $30 million. Nick had developed Summly to condense news into snapshots for easy sharing. That may seem a modest example of AI today, when we mostly take Apple's Siri and Amazon's Alexa for granted, but ten years ago it was big news – besides making D'Aloisio a millionaire with a job offer from Yahoo. Today I get content crafted to my likes – mostly: after Google News filched my offhand search on water pumps for a proposed home project, it inundated me with adverts for the latest industrial water treatment systems for a whole month.
The Challenge to Education
A few decades ago, during my primary education at an American school run by the US Embassy in a West African country, I recall winning the annual spelling bee thrice in a row. Now, writing on an iPad, I lazily allow the word processor to complete words like "peripheral" for me because I cannot be bothered to kickstart the neurons responsible for spelling recall. I just misspelt "neurons", which was promptly autocorrected, by the way.
The journalist Walter Cronkite is quoted as saying, "Whatever the cost of our libraries, the price is cheap compared to that of an ignorant nation." It now seems our libraries – basically large databases residing on the Internet, accessible to AI systems as machine-learning fodder – are turning Cronkite's saying on its head. While journalism worries about the facts of the news, the education sector worries about plagiarism and AI-induced stupidity. With ChatGPT and its siblings, educators' dystopian leanings are showing. Some Australian universities are rethinking their assessment tests after discovering that students were crafting dissertations with the assistance of AI platforms, while the New York school system banned access to ChatGPT on public-school IT infrastructure and devices. The sector's concerns are not misplaced, though: the simple database-based autocorrect, invented in the 90s by a Microsoft engineer, which let us stop flexing our spelling muscles, was apparently just the tip of the iceberg. Students using AI systems like ChatGPT to 'write' homework and dissertations are drop-kicking research and attempting to redefine erudition.
To quote the Guardian: "In London, one academic tested it against a 2022 exam question and said the AI's answer was 'coherent, comprehensive and sticks to the points, something students often fail to do', adding he would have to 'set a different kind of exam' or deprive students of internet access for future exams."
What to do about this? Throw more AI at it, of course. A plethora of AI-content detectors is already springing up, with varying degrees of success.
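To give a flavour of how such detectors work, here is a toy sketch of one signal some of them reportedly use: "burstiness", the variation in sentence length. Human prose tends to mix short and long sentences, while model output is often more uniform. Real detectors combine far stronger signals (such as how predictable the text is to a language model); this heuristic, and the sample sentences, are illustrative assumptions only, not a working detector:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Perfectly uniform sentences score zero; varied prose scores higher.
uniform = "The cat sat down. The dog ran off. The sun came up. The day went by."
bursty = ("Stop. The meeting, which had dragged on for three exhausting "
          "hours without resolution, finally ended. We left.")
print(burstiness(uniform), burstiness(bursty))
```

A single weak signal like this is trivially fooled, which is one reason the detectors' results vary so widely in practice.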
As the artificial intelligence debate rages on, this will not be the last time we express fear of a new technology, but it certainly is the first time the subject of our trepidation may come to understand that fear, and possibly have the ability to use it. Recently Demis Hassabis, the CEO of DeepMind, warned in an interview for CNET, "I would advocate not moving fast and breaking things," referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems later, in true disruptive fashion. DeepMind is a foremost AI company whose system AlphaGo famously beat Lee Sedol, a South Korean international Go champion, at a game far more complex and demanding than chess. Now under Google's wing, it has accelerated several fields of research by giving scientists free access to its systems.
Man's cerebral hubris creates a blind spot that, in hindsight, exposes the limitations of his one-and-a-half-kilo brain. To demonstrate the premise: in 1996 Intel proposed the now ubiquitous Universal Serial Bus, or USB, as an open connection standard to replace the plethora of ports found at the back of personal computers, including the ever-present serial and parallel ports. It quickly became a standard, with the likes of IBM and Microsoft jumping on board, and now we connect almost all consumer hardware with a USB cable for both power and communication. Yet in the late 90s it took some time before design engineers wised up to placing the USB port in more accessible positions on PCs and other devices – the front, for instance. Initially the ports were placed at the back, like their predecessors, and you had to get on hands and knees to plug or unplug the office printer or portable scanner. The simple act of placing the USB port ergonomically spared us those inconvenient calisthenics. It demonstrates man's slow, incremental, empirical progress towards hard-earned knowledge that we almost always take for granted, unapologetically, only after the fact.
Journalism and other content sectors will have to come up with new standards for disclaimers to accommodate the new reality of AI shoving its way into the field. I have decided to dip my toes – er – fingers into that by issuing one of my own at the end of this article.
The impressive performance of AI systems in the news should not distract from the volume of factual errors and training-data biases these systems produce. That alone gives grounds for throwing in one's lot with the Dystopians.
As I hammered out the draft for this article, a headline popped up from forbes.com declaring "Jobs In Artificial Intelligence – How to Make A Career In AI", and I thought to myself: how much artificial intelligence was behind the unlikely randomness of receiving that particular post, given my recent research? I hope we won't, in the near future, be getting on our hands and knees to serve robot overlords because we allowed them to write their own algorithms.
Artificial Intelligence Content Disclaimer:
No portion of this article was written by ChatGPT or any other AI platform, and it conforms to the platform's content policy. The image used for this article was originally generated using DALL-E before editing (Content policy | DALL·E, openai.com).
– Mahmud, a technical adviser to the Hon. Minister of Education, Abuja, Nigeria.