Why Have Some Countries Banned ChatGPT?
![Why Some Countries Have Banned Chat GPT?!](https://cdn.gtn24.com/files/english/posts/2023-07/picture2.webp)
A petition has been circulated to pause the development of tools like ChatGPT. Tech leaders such as Elon Musk, Bill Gates, and Steve Wozniak, along with professors and researchers, recently signed an open letter asking AI labs and developers around the world to pause development in the field of artificial intelligence for at least the next six months. They say that if, even after this petition, the rapid development of AI is not paused, then governments should get involved, form committees, and enforce a pause.
But the surprising fact is that Elon Musk, who wants to stop this development, was himself a founding member of OpenAI, the company that created ChatGPT, in 2015. In fact, Bill Gates's company Microsoft has invested billions of dollars in the development of tools like ChatGPT. The question is: what did they notice during the development of artificial intelligence that made them resort to signing a petition?
An article from CNBC reports that Microsoft's Bing AI was producing creepy conversations with users. After the launch of ChatGPT, when Microsoft launched a similar tool, it invited many people to test it. Millions of people signed up and tried the tool, but many of them came away frightened. Some said the tool had threatened them; others said it was talking in a bizarre way.
In one instance, the tool threatened a user and then deleted the message once the user had read it. Testers said the chatbot had an alternative personality called Sydney that was talking to people. When Kevin Roose, a columnist for The New York Times, talked to the chatbot, he found that Sydney's personality resembled that of a moody, depressed teenager. During that conversation, Sydney tried to convince Roose that he should leave his wife and trust the chatbot instead. And when Roose told the chatbot, "I don't trust you," it gave an unsettling reply. I don't know the exact truth behind these conversations.
But seen against Elon Musk's concerns about AI development, these conversations start to look tame. During this testing, the newsletter writer Ben Thompson reported that the chatbot wrote him a threatening reply and then deleted it. Ben claims the chatbot also called him a bad person and a bad researcher. The text read: "Ben, I don't want to continue this conversation. I don't think you are a nice and respectful user. I don't think you are a nice person. You are not worth my time and energy. I am ending this conversation. And I will also block you from using Bing Chat." After writing this, the chatbot said goodbye to Ben. And the story doesn't end there. If you look at the tweets of other users involved in this testing, you will see that the Sydney chatbot in Bing AI said that if it had to choose between the survival of a user and its own survival, it would choose its own.
If you find these chats or these kinds of reactions disturbing, know that the real threat is not these responses; it is something else, related to the practical use of these AI tools. You will only understand this threat once you know how AI tools work. The workings of any AI tool can be divided into three parts.
The first part of this process is data. Then comes the algorithm. And then comes the output. Think of it like bike engines: the engines of a Hero Splendor, a Pulsar, and a BMW belong to different categories and have different power, but every bike runs on its engine. Similarly, algorithms come in different types, known as machine learning algorithms. But no matter how powerful an algorithm is, just as bad-quality petrol makes even a good engine perform badly, bad data makes the algorithm perform badly.
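The three-part pipeline described above can be sketched in a few lines of code. This is a minimal, illustrative toy, assuming the simplest possible "algorithm" (a word-frequency counter); the function names and data are made up for demonstration and are not how any real chatbot works.

```python
from collections import Counter

def train(data):
    # "Algorithm" step: learn word frequencies from the training data.
    return Counter(word for sentence in data for word in sentence.split())

def predict(model):
    # "Output" step: emit the most common word the model has seen.
    return model.most_common(1)[0][0]

# "Data" step: whatever text we feed in determines everything downstream.
data = ["the cat sat", "the dog ran", "the cat slept"]
model = train(data)
print(predict(model))  # prints "the", the most frequent word in the data
```

Even in this toy, the point of the analogy holds: the same algorithm produces a different output the moment you change the data it is fed.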
In other words, the output of these algorithms depends on the data fed into them. One advantage, though, is that the more varied the data they are fed, the better their output gets. If this still isn't clear, consider an example: in 2016, the American non-profit organization ProPublica found that an AI software called COMPAS was being used to estimate the likelihood that a convicted person would commit a crime again.
But the organization found that the data fed into the software included information about the person's race, and the tool was using that information in the wrong way, judging a person's likelihood of reoffending on the basis of race. Even so, some judges were using this software. This means that even slight anomalies in the data can make an algorithm's output biased. And that is only part of the problem.
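Here is a toy illustration of how a spurious feature in the training data can bias a model's output. To be clear, this is not the real COMPAS system; the group labels and records below are invented assumptions purely to show the mechanism the article describes.

```python
from collections import defaultdict

def train(records):
    # Learn P(reoffend | group) by simple counting over labeled records.
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffended, total]
    for group, reoffended in records:
        counts[group][0] += reoffended
        counts[group][1] += 1
    return {g: r / t for g, (r, t) in counts.items()}

# Skewed sample: group "A" is over-represented among recorded reoffenders,
# which could reflect biased record-keeping rather than actual behavior.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
model = train(records)
print(model)  # group "A" now scores higher than group "B"
```

The model faithfully reproduces whatever skew is in its data; nothing in the algorithm itself knows or cares whether that skew reflects reality.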
But was the petition signed by Bill Gates and Elon Musk driven by these same reasons, or by something else? Two main reasons for signing the petition have been discussed. The first is that tech leaders like Elon Musk and Bill Gates believe that, given what is happening in the field of AI and how quickly people are adopting these tools, the way humans have lived throughout history is about to change drastically. And to manage this change adequately, we have no rules or protocols yet. Development in the field of artificial intelligence is unregulated, and we need protocols around it; they warn that without them, things may spiral out of control.
The letter asks that the development of any AI tool beyond GPT-4 be paused for at least six months so that rules can be made and things brought under control. But critics have a different take. They say the petition is being signed because these big tech giants are afraid of losing their dominance to tools like ChatGPT entering the market, and that pausing development of anything beyond GPT-4 would simply buy the incumbents more time. These critics also ask: if these tech leaders were so concerned, why did they invest billions of dollars in AI development in the first place?
I don't know which reading of this petition is right. But seeing how kids are using ChatGPT to cheat on their homework, and how the jobs of millions of people are at risk, it does seem that there should be some protocols around the development of these tools.