The rapid advancement of artificial intelligence capabilities raises questions about the risks of using such technology and the red lines that need to be drawn. Many states around the world are already considering restricting the use of OpenAI’s chatbot, while others prohibit it outright. It seems that only in Israel, the Start-Up Nation, is nobody concerned.
Artificial intelligence has been part of our lives for quite some time, integrated into many of the systems we use daily. Until 2021, however, AI was predominantly the domain of a handful of technology companies and skilled developers who knew how to apply it to specific tasks; today it has become a popular and widely discussed field.
In 2021, OpenAI released a tool called DALL-E, which interprets text entered by the user and generates images from it, and the company rose to prominence in the field of artificial intelligence. ChatGPT, the chatbot OpenAI launched at the end of 2022, boosted that prominence even further and even posed a significant threat to Google’s search engine, helping establish OpenAI as a major player in the industry.
In January, the number of ChatGPT users surpassed 100 million, and the tool’s growth rate exceeded that of Google and Facebook. Last week, Google unveiled a range of AI-based products, including its self-developed chatbot named “Bard AI”, which is expected to be the main competitor of ChatGPT.
Sharing artificial intelligence engines with the general public allows the system to learn from a wide array of users, improving its capabilities daily. Since ChatGPT’s launch, millions of users have applied the tool in a variety of fields: general searches, coding, data analysis, and many others.
Among the system’s users were researchers at leading universities testing the tool’s ability to pass exams in the exact sciences, writers who used it to generate content, and there was even an attempt to create a full ‘South Park’ episode.
Alongside the incredible advancement of the AI field, many questions have arisen. Some are concerned that AI could take over their jobs, while others wonder what the technology’s red lines should be, to the extent such red lines even exist. Initial regulations dealing with these questions have been proposed in the European Union, addressing privacy concerns, the use of AI systems by minors, and data protection.
To our surprise, in the Start-Up Nation, few voices have grasped, or even acknowledged, the problematic aspects of using ChatGPT and transferring sensitive data to OpenAI – whether medical, personal, or even military or government-related data. This is despite cases documented over the past six months in which sensitive data was entered into the ChatGPT system.
Who is in Charge?
When it comes to responsibility for the use of AI systems, at least in Israel, no one is in charge of one of humanity’s most pressing concerns today. The main question is who should be responsible for monitoring and addressing the risks posed by artificial intelligence. Who should provide proper solutions and, most importantly, reassure citizens and employees in light of the dramatic changes artificial intelligence is bringing about? It is unclear whether that authority lies with the National Cyber Directorate and the Prime Minister’s Office, the Ministry of Innovation, Technology, and Science, or perhaps the Ministry of Defense or other government ministries.
In Italy, the Data Protection Authority (Garante) recently blocked the use of ChatGPT and then reinstated it after its requirements were met. The Italian regulatory investigation began after a security breach allowed certain users to see other users' ChatGPT conversations, which may have included financial information. In Germany, the regulatory body responsible for data protection has voiced its opinion on the inherent risks of the AI service, while in Sweden the privacy regulator has announced that it has no plans to prohibit ChatGPT’s activities. Ireland’s data protection authority is examining the matter and following the Italian regulator’s activity. Similar statements have been made by French officials.
In contrast, in places like China, Hong Kong, Iran, Russia, and parts of Africa, there is no access to ChatGPT, and the residents of these countries can’t open OpenAI accounts. It seems that only Israel has no official opinion on the matter, and there is no official Israeli governmental body dealing with it.
The rapid advancement of technology is leaving archaic regulatory bodies and bureaucratic processes behind. Life-changing technologies such as artificial intelligence should be addressed as soon as possible, so that clear boundaries and red lines are established before it’s too late.
Osher Assor is the CEO of the Cyber Security & Technology Division at Auren Israel and a cyber consultant for the Ministry of Defense.