Undoubtedly, people have been overwhelmed as new AI-based applications hit the market on a mass scale. This story shows that it’s only a tiny step from anxiety to taking on the role of the housekeeper of the old world order.
Without question, artificial intelligence is making its way into our homes. People like Stephen Hawking warned of the hazards of developing AI long before it became tangible. He shared his fears about how it would affect our lives and how it could end up being the worst thing ever to happen to us. Possibly even the last one.
In recent weeks, there has been much discussion in the media about the petition to suspend the training of powerful AI systems, which has been signed by almost 19,000 people, including notable individuals such as Elon Musk, Steve Wozniak, and Yuval Noah Harari, along with business and academic authorities around the world. In truth, most of the concerns about developing AI far more complex than the well-known ChatGPT are very reasonable.
The pace of the commercial race for the top spot on the podium may be alarming, and the history of mankind has shown that progress is widely perceived not only as a great chance to improve our lives but also as a significant threat to many.
AI development may impact many consumers and professions and, in the worst-case scenario, take total control of our lives. Each layer of this problem is complex and needs to be discussed separately. Undoubtedly, many new risks arise with technological progress, particularly when people are so immersed in digital life and so much of how we live is governed by software. With the expansion of Google and social media, consumer data has become a commodity, and there is a real danger that deploying AI could make users even more vulnerable. For professionals across many fields, the AI revolution could replace some individuals with machines. Worse, AI tools may do certain jobs faster and more cheaply. If you sell high-priced services that can be substituted by an imperfect tool delivering comparable results for a fraction of the price, that prospect is understandably frustrating.
It is only a small step from being cautious for good reason to becoming antagonistic for ideological purposes. On the World Economic Forum website, Dambisa Moyo notes, “Our ability to address the world’s challenges and ultimately create economic growth to advance human progress crucially rests on recognizing this fact; that ideology is the enemy of growth.” Unfortunately, in the case of AI development, critics frequently become anti-progress ideologues desperate to preserve the status quo.
The story of a startup called SyntheticUsers is just one example of how skepticism can turn into offensive speech. The company is building AI-based software intended to imitate real users and generate research outcomes comparable to human audience research. It sounds like a grand idea, and it is now undergoing beta testing. At this stage, it is simply one of many emerging technologies on its development path. It aspires to be a business-oriented product; eventually, the market will decide whether it achieves commercial success.
As SyntheticUsers attempts to break into UX, user research, and product development, Twitter users’ reactions have been harsh. Many arguments about the product’s actual usefulness at this stage of its development are fair and well reasoned. But at the opposing extreme, significantly more comments reflect prejudices about what an AI solution should or should not do and why it should not even be created. Consequently, many experts simply resort to insults and emotional language or dismiss any potential value in SyntheticUsers. And all of this is happening while the software is still in testing and not yet commercially available.
But the icing on the cake is a petition, allegedly created by a group of researchers, that went viral. It illustrates how easily any form of progress can be branded as ideologically dangerous.
Anyone who has read any of the historically noteworthy manifestos will recognize the language and tone: “Today, we find ourselves grappling with a force that threatens to undermine the very essence of our work: the rise of synthetic users.” The goal of the initiators is to “oppose the insidious spread of virtual humans that imitate the rich tapestry of human experience” and, therefore, to “not be blinded by the shiny veneer of technological innovation, for it is evident that their true aim is to supplant the beating heart of our profession.”
It is telling that technological advancement is characterized above all as a threat to someone’s profession. The motives of whoever invents this particular technology are explained, ridiculously simply, as “sinister.” “For it is only through genuine human connection and understanding that we can create a future that honors the richness and complexity of the human experience,” the petition concludes. This reasoning could hardly better exemplify how the battle to defend the so-called human experience can itself become less than humane. Are organizing crusades, hunting witches, or walling off tribes to shield them from hostile novelties really worthy ways of defending humankind’s legacy?
“Now, class, can anyone tell me why this might be a bad idea?” one researcher posted on Twitter, referring to SyntheticUsers. She clarified her reasoning in bullet points and started a one-sided discourse. When it comes to AI in general, anyone can claim that it offers only harm; the opposite camp can just as easily declare that AI brings only good. But such polarization is wrong, and it does not make the professionals voicing these public opinions look reliable. In the case of SyntheticUsers, whether human research feedback can be imitated closely enough is still an open question. Such software may be helpful in some instances and entirely irrelevant in others. And that is also fine. But the story of this startup demonstrates how controversial AI is and how intense the emotions it provokes can be.
When the social order is threatened, policymakers have an opportunity to intervene. Yet when politics gets involved, viewpoints are likely to clash, and in the most probable scenario, ideological beliefs and biases will affect the future direction of AI development. In the current effort to establish a new tech world order, all stakeholders should realize that the AI revolution cannot be stopped and that societies should accept it as quickly as possible; diminishing or erasing it is not a viable solution. A balance of power may emerge when all stakeholders start talking about how to employ AI effectively rather than how to defend against it.