Full disclosure: This article was not written by AI, and not for lack of trying. Several attempts were dispatched, unsuccessfully, and the output kept reading as inhuman, and that's before mentioning the false assertions, the interesting parts of the original story inexplicably left out, and the lack of descriptive ingenuity, literary nuance and human conceptualization. You are left with an empty husk of a story that feels like it was rushed through by a person too busy TikToking to pay attention.
That said, rumors through the grapevine claim that Google's new AI tool, called Genesis (also the name of the first book of the Hebrew Bible, which may not be a coincidence), can write this article far more eloquently and cogently. According to the New York Times, Genesis has already been unveiled to the publication's senior staff, as well as to several others, as a technological tool that does not simply conjure theories out of thin air (looking at you, ChatGPT), but rather examines the latest news and global events, processes them and produces a more robust, well-rounded output.
In order to quell the predictable wave of concerns and apprehensions, Google has dismissed the idea that Genesis is here to replace human content writers, stressing that a journalist's role in delivering content, reporting in real time and checking facts will not be affected. Rather, it is a tool to aid humans with literary elements like style, pacing and headline creation.
One of those who saw Genesis up close and personal has backed up Google's claim, saying it could be very useful as a journalist's personal assistant, if you will, automating certain rudimentary tasks to give the journalist time to focus on the bigger picture. Additionally, it has been claimed that Genesis is a "responsible" automation tool, meant to keep generative AI from replacing genuine human output in the industry at large.
On the other hand, several officials described the demonstration as worrisome, saying Google takes for granted the human effort required to create an accurate, dependable article.
Google officials told Ynet they are in the initial stages of examining ideas to support journalists in their work, and that their goal is to give journalists the option of using emerging technology to facilitate that work. They further claimed Genesis is neither designed nor able to replace a human's vital role in accurately reporting, conceptualizing and writing their own articles, and equated Genesis' role to that of Gmail and Google Docs.
Regardless of Genesis' precise functions, it does reinvigorate the debate over AI's role in content creation, whether fictional or journalistic. Publications such as NPR, Insider and others have said they intend to examine how AI can be implemented responsibly in a field where every second counts.
The Associated Press has acknowledged using AI to write certain articles in more technical areas, such as financial reports and corporate profits, though these still remain merely a fraction of its total output.
Is this the dawn of a new age of fake news?
There are notable concerns as to what would happen if unvetted articles flood the web, containing what could be an avalanche of inaccurate assertions people would take as fact. Popular tech hub CNET experienced exactly that: it let AI write some of its articles, producing multiple stories that were egregiously erroneous. CNET's editor acknowledged the problem, and every article went through a human audit, if you will, to set the record straight.
Through the mishap it became obvious that correcting all of the mistakes ended up costing more than it would have if humans had been solely responsible for creating the articles in the first place. Despite pulling back from letting AI write its articles, CNET is still using it on a smaller scale, and some of its articles now carry disclaimers noting that AI aided in their creation.
BuzzFeed, which in its heyday was among the most happening websites on the internet, has also slowly begun producing AI-made articles. Lo and behold, those articles, too, were plagued with inaccuracies and full-on falsehoods. Last April, BuzzFeed announced it was shutting down its news division. This could fuel the assumption that while AI doesn't necessarily kill journalism outright, it does act as a catalyst for those already fading out of the profession.
Early grim and dark assumptions notwithstanding, it seems AI is not going to kill off journalism anytime soon, since journalism is still invested in sniffing out a story before writing it, and there's no algorithm to replace that.
Not yet, anyway.