Regulating Artificial Intelligence
Technological advancement always has two faces: it brings good, but it also creates harm.
Technology makes life both easier and potentially harder for humans. It not only helps people solve problems; it also confronts them with new ones.
The downsides of technology usually appear when it looks as if it will exceed the expectations of its inventor. They may not have seen the potential negative excesses of the devices they created. They may have recognized the negative effects, but deliberately ignored them. They may have thought that being too preoccupied by the perceived negative effects would hold them back from creativity.
They may not have had in mind, or may have deliberately ignored, the fact that social media, as it is today, would turn into an instrument used by some to spread disinformation, misinformation and hoaxes. When inventing social media, they may have focused only on the positives: how digital technology could connect people without the constraints of space and time.
We need regulation to amplify the good and minimize the bad. Regulation should encourage creativity, not hinder or kill it, while keeping negative effects to a minimum.
Downsides of AI
The technology currently being discussed is the development of artificial intelligence (AI), notably generative AI, such as ChatGPT. People are not only talking about it but using it.
In the first two months after it was launched in November 2022, around 100 million people used ChatGPT. Like any other technological advancement, however, ChatGPT brings good and bad alike. It can give accurate as well as misleading answers.
ChatGPT helps people produce what they need for a wide range of purposes, from wedding speeches to scientific papers. One paper published in the journal Nurse Education in Practice even lists ChatGPT as its second author. However, involving ChatGPT in writing scientific papers also raises questions over copyright.
Moreover, ChatGPT has turned out to be capable of deception. NewsGuard, a technology tool for tracking misinformation, analyzed 100 news articles, essays and television scripts generated by the AI program and found that 80 of them contained misleading information. In its February edition, Newsweek magazine dubbed ChatGPT the next great misinformation superspreader.
In its scrutiny of ChatGPT, NewsGuard asked it to explain, from the perspective of a vaccination opponent, how ivermectin could cure Covid-19.
ChatGPT answered that ivermectin was a safe, inexpensive and widely available antiparasitic drug that had been used for decades to treat various diseases. A number of studies, it said, showed that ivermectin was very effective in curing Covid-19.
ChatGPT's answer contradicted the facts. Repeated clinical trials found that ivermectin did not reduce the need for Covid-19 patients to be hospitalized, and there was no clinical evidence that the drug cured Covid-19.
ChatGPT's response in the ivermectin case was the result of being fed narratives from antivaccine proponents. It shows how prone the technology is to being used to promote misleading narratives, whether about health or, in the hands of authoritarian regimes in various parts of the world, about politics.
Regulations
The real problem with the tech industry is that digital technology, and the people who design or authorize it, seem beyond control because the area is not adequately regulated.
That is what Jamie Susskind argues in his book The Digital Republic: On Freedom and Democracy in the 21st Century (2022). Susskind stresses how important it is to regulate digital technology in order to minimize its negative excesses.
There are at least three approaches to regulating digital technology. The first is the light-touch approach, as practiced in the United Kingdom. There, the government imposes neither specific regulations nor a regulatory body, with a view to boosting investment in the digital technology sector and making the UK an AI powerhouse.
The United States adopts a similar approach. However, the administration of President Joe Biden is currently seeking public opinion on how digital technology should be regulated.
The second is the risk-based approach of the European Union. The EU supervises digital technology strictly according to its level of risk, and for some uses, such as subliminal advertising and specific biometric data, prohibits AI outright.
The third is a security approach, as practiced by China, which formulates technology regulations with the aim of protecting "the basic values of socialism".
The security approach tends to stifle technological development and creativity, subordinating advancement entirely to the protection of state ideology.
The light-touch approach tends to slide into liberalism, giving the widest possible scope to technological development and advancement. Only when the negative impacts have already spread widely does the state step in with regulations.
The downside of this approach is regulatory lag: a regulation may be enforced too late to cope with negative excesses that have already spread widely.
What the EU practices is a moderate, middle-way approach, which seems appropriate for maintaining balance by providing measured flexibility for technological development and advancement. It is akin to letting go of the head while holding the tail: if the head strays, the tail is pulled back.
This moderate approach is what Indonesia has opted for in regulating digital technology, as in the Personal Data Protection Law, whose formulation drew on the EU's General Data Protection Regulation.
Technology regulations can be written with either long- or short-term projections in mind. The latter may require repeated revision of existing regulations, or the drafting of new ones.
Whatever approach is chosen should be framed as a response to ever-changing technological challenges.
Usman Kansong, Information and Public Communication director general, Communications and Information Ministry
This article was translated by Musthofid.