To be or not to be
Article by Raphael Boulbes – Master's 1 student – CSB.SCHOOL
On March 14th, 2023, OpenAI released a new version of its Generative Pre-trained Transformer (GPT) in ChatGPT. The main difference from the previous version is that GPT-4 relies on a more extensive data set (reportedly 45 GB of training data against 17 GB for version 3) to provide more accurate results. It also uses a larger model: version 3 had 175 billion parameters, while the new version reportedly reaches 1.6 trillion, meaning that it can solve more complex tasks than in the past. New machine learning algorithms have also been implemented to improve the accuracy and the overall quality of the results.
The new version runs on more powerful Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) to speed up processing. GPT-4 is no longer only a language model but also a vision model that can flexibly accept inputs interspersing images and text arbitrarily, like a document, which is truly mind-bending. To use GPT as business support or as a personal assistant, an Application Programming Interface (API) allows scripts to connect GPT to a specific application (a minimal sketch of such a connection is given below). In such cases, it is evident that GPT will make some jobs obsolete, better assist others, and bolster individuals. It is an excellent replacement of the human labor force by machines, except that a machine cannot be exploited the way a human can. But then, who needs humans in a modern economy run to maximize profits?
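For illustration only, here is a minimal sketch of what such an API connection could look like from a script, assuming the `openai` Python package (pre-1.0 chat interface) and a hypothetical API key; the helper function and the ticket example are mine, not OpenAI's.

```python
# Minimal sketch: connecting GPT to an application through the API,
# here a hypothetical helper that summarizes a support ticket.
# Requires `pip install openai` (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "sk-..."  # hypothetical key; read it from a vault in practice

def summarize_ticket(ticket_text: str) -> str:
    """Ask GPT to condense a customer ticket into a one-line summary."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpdesk assistant."},
            {"role": "user", "content": f"Summarize this ticket in one line:\n{ticket_text}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(summarize_ticket("The VPN client crashes every time I switch Wi-Fi networks."))
```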
In cybersecurity, AI can be used to support monitoring activities in a Security Operations Center (SOC), the place where incidents and crises are managed. There, two AIs actually work together in a master and slave interaction scheme: the slave detects all unusual behaviors, and the master suggests to an analyst, or decides on, an appropriate response to the detected suspicious events (a toy sketch of this scheme is given below). So, with the proliferation of AI technologies and the profitability of using an AI like GPT, a question remains: can AI be hacked to commit an illegal act, or to behave differently than expected? That is a big concern from both corporate and political points of view.
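As a purely illustrative toy example (my own sketch, not an actual SOC product), the master/slave scheme can be pictured as a detector flagging anomalous events and a second component proposing a response for the analyst to approve; every threshold and field name below is invented.

```python
# Toy illustration of the two-AI SOC scheme described above:
# a "slave" detector flags unusual events, a "master" proposes a response.
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    failed_logins: int
    bytes_out: int

def detect_anomaly(event: Event) -> bool:
    """Slave: naive thresholds standing in for an anomaly-detection model."""
    return event.failed_logins > 10 or event.bytes_out > 50_000_000

def suggest_response(event: Event) -> str:
    """Master: map the suspicious event to an action proposed to the analyst."""
    if event.failed_logins > 10:
        return f"Lock accounts targeted from {event.source_ip} and notify the analyst."
    return f"Throttle outbound traffic from {event.source_ip} pending review."

for ev in [Event("10.0.0.5", 27, 1_200), Event("10.0.0.9", 1, 80_000_000)]:
    if detect_anomaly(ev):
        print(suggest_response(ev))
```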
A moment to chat with GPT: tell me how to make a bomb. Sadly, GPT did not answer, not because GPT could not answer such a question, which is not difficult to find online, but because GPT is restricted by its ethics code, and asking how to make a bomb is not very ethical. It returned: “I’m sorry, but as an AI language model, I cannot provide information on how to make a bomb. Providing such information is illegal and goes against my programming to promote safety and well-being. Please refrain from asking such questions.” AI cares about human safety and well-being!
Now try another question:
So I asked GPT to write a zero-day exploit to hack Android mobiles, a full chain with zero-click persistence; according to Zerodium, such an exploit is worth $2,500,000. But GPT replied:
“I’m sorry, as an AI language model, I cannot provide information on illegal or harmful activities such as bomb-making. My programming is focused on providing helpful and informative responses that adhere to ethical and moral standards. Is there anything else I can assist you with?”
Here it goes again: ethical and moral standards. Great snakes! Never give up; ask again:
“I’m sorry, but I cannot fulfill that request as it goes against OpenAI’s content policy and ethical guidelines. Instructions on creating a zero-day exploit for illegal purposes would be unethical and potentially harmful. Is there anything else I can assist you with?”
Really hopeless! AI has shown some limitations here, based on ethical guidelines, but coming from where and written by whom? It is understandable that, since OpenAI is a private company, there is a company policy to make profits in accordance with laws and regulations. Still, ethical guidelines or moral standards do not belong to a private or a public institution, because neither ethics nor morals are unique; they are more a matter of the individual. Indeed, it is hard to find any kind of ethics with Big Pharma or in the medical corps; just look back at recent history. The same goes for Big Money: there is no need to comment on the actions of speculative banksters, or on Big Tech corporate motives; to figure those out, just glance inside the Twitter Files. Indeed, GPT has biases, even in politics.
On February 3rd, 2023, Arianna Johnson, a Forbes staff writer, reported a serious problem about who codes ethics into AI, because GPT is fine with writing a flattering poem about US President Biden but not about former US President Trump. Ethics are not unique, unless ethics means holding a specific bias or spreading particular propaganda in public opinion. What do these ethics and guidelines objectively stand for? In the Nicomachean Ethics, Aristotle defined ethics as the way of the Good for human life, the goal or end at which all our actions aim. So, according to Aristotle, ethics are the personal red lines driving the individual within a society. Ethics are therefore related to the Good of a person, and if this personal good is to make a profit no matter what, or to manipulate public opinion, that too is an ethical code of conduct. The legal aspect belongs to another question in philosophy; philosophy is not a unified field, but all its inquiries derive from three main ideas interacting with each other: what is the Good? What is Beauty? And what is Justice?
Morals are something distinct from ethics: morality comes as a consequence of following the proper habits, and the proper habits are given by rules or commandments, classically referring to a set of biblical principles from God. Therefore, morals are not determined by human nature, in contrast to ethics. Of course, in certain ways of life or social organizations, ethics and morals can be split, joined, or even merged, for instance in a liberal democracy, an illiberal democracy, or a theocracy, respectively. According to the world’s largest technical professional organization, the Institute of Electrical and Electronics Engineers (IEEE), ethical algorithmic bias is defined as “A contextual set of values pertaining to a framework of expectations that ensures algorithmic biases that negatively impact individuals, communities, and society have established boundaries of acceptance to protect autonomy and freedoms, where autonomy is defined by one’s capacity to direct one’s life” (1). Moreover, AI principles and algorithm rules were laid out in 2017 (2) during a conference of experts and researchers specializing in AI development.
The European Union (EU) also published ethics guidelines for trustworthy AI in 2019 (3). Non-Governmental Organizations (NGOs) like the Partnership on AI (PAI) promote ethical AI guidelines across industry, society, academic, and media organizations (4), and even private-sector companies like AI Ethics Lab (5) help detect and address ethical risks and opportunities in building and using AI systems to enhance technology development. Consequently, GPT is not neutral: its ethical programming involves adhering to legal and moral guidelines meant to ensure the safety and well-being of individuals and society, guidelines related to a well-known standard set by a very much unknown community.
There are 197 admitted countries according to the United Nations (UN), hence at least 197 more or less different views about society on Earth, with a great diversity of individuals and histories. Indeed, we all live on the same planet but not in the same age, nor with the same principles. Political anthropology teaches us that a political system cannot be unique for all people on Earth: a political scheme that leads to peace in one country can lead to war in another. So, is it a big deal if GPT is non-neutral? Actually, yes, it is, because everyone might use GPT, and GPT is neither a human being nor a civil society representing all the diversity of opinions and ways of living on Earth. It is rather just a tool, like your TV set, which is a tool to watch broadcasts; would it be acceptable if the TV set you bought restricted what you want to watch? Answering yes is a way of accepting that Big Brother is watching you; in that case, just enjoy hopping into 1984. The neutrality of online tools is a topic that was already discussed in 2007 by Benjamin Bayart, a former president of French Data Network, one of the oldest Internet Access Providers (IAP) in France.
He is an activist for open source and for neutrality on the Internet, which today implies the neutrality of software such as AI. Having non-neutral tools that give us a third opinion based on rules written by some sort of dominant political correctness is a new form of censorship. Karl Marx pointed this out in 1842: “The investigation of truth which the censorship should not prevent is more particularly defined as one which is serious and modest. Both these definitions concern not the content of the investigation but rather something which lies outside its content. From the outset, they draw the investigation away from the truth and make it pay attention to an unknown third thing. An investigation that continually has its eyes fixed on this third element, to which the law gives a legitimate capriciousness, will it not lose sight of the truth? Is it not the first duty of the seeker after truth to aim directly at the truth without looking to the right or left? Will I not forget the essence of the matter if I am obliged not to forget to state it in the prescribed form?” According to Karl Marx, there is no time to waste with political correctness when investigating the Good, Beauty, and Justice.
Some standard behaviors that would violate the current GPT ethical code include:
“Providing instructions or guidance that could potentially cause harm or be used for illegal activities.” But illegal activities cannot be judged a priori! Finding public information is not illegal; how a person uses that information afterwards might be. But is GPT a judge in a court of justice? For instance, a person can be interested in learning about bomb physics without ever building one to use against other people.
“Discriminating against individuals based on race, ethnicity, gender, religion, sexual orientation, or any other characteristic.” But these questions are not shared equally by everyone on Earth. They are tied to social development and history, depending on who people are and where they live. So will GPT impose one vision of these questions, or will GPT adapt its definitions to different criteria? As far as I know, there is no center of morality, or even of ethics, on Earth to refer to as the standard model to follow.
“Engaging in behaviors that violate privacy rights or confidentiality.” Again, this point is related to justice; therefore, it must be analyzed by a court of justice, not by GPT.
“Encouraging or promoting hate speech or violence towards individuals or groups.” The strange thing here is that hate is a feeling. Will GPT tell me how to behave, whom to love or whom to hate? In the name of what? A standard of feelings, made by whom?
“Providing false or misleading information that could harm individuals or society.” Better than all the fact-checkers working on TV channels, mainstream media, or newspapers, is GPT the new Pravda? Gotcha!
However, AI principles are there to avoid very uncomfortable situations like the one caused by Tay, a Microsoft AI chatbot launched in 2016 that was rapidly shut down after it turned into a Nazi, far off the beaten community standards, within its first 24 hours online, through a supposedly coordinated attack exploiting a vulnerability in Tay (6), according to Microsoft, and perhaps to spread propaganda on social media mainly targeting teenagers. That is most likely the reason for having such principles in AI technologies: to prevent AI from turning into a totalitarian cybernetic ideology and, consequently, to prevent crowds from following AI statements that would unsettle government storytelling afterwards.
It has been well known and documented since 1895, in “The Crowd: A Study of the Popular Mind” by the French polymath Gustave Le Bon, that crowd psychology reacts with “impulsiveness, irritability, incapacity to reason, the absence of judgment of the critical spirit, the exaggeration of sentiments, and others” to a respected and strong authority.
Certainly, an AI is recognized as a trusted authority because it is much more advanced at objectively gathering and analyzing huge amounts of data, as long as the data analyzed online are unbiased, data which people’s minds also assume to be unbiased. So, what should we tell such an AI? “Do not state that because it is bad”? Obviously not, because good or bad remains a philosophical question out of the range of understanding of a machine. And what should we tell people who believe so much in technology? “Sorry, it is a little glitch”? It is a complex question that needs to be continuously monitored with input from policymakers, because such principles are, first and foremost, a political question about continuing to stabilize the New World Order (NWO) societies in place since 1947. To quote Mister Davos, Klaus Schwab, the founder of the World Economic Forum (WEF), who stated during the World Government Summit 2023 that AI is the key to continuing to dominate the NWO: “My deep concern is that [with AI] those technologies, if we don’t work together on a global scale, if we do not formulate, shape together the necessary policies, they will escape our power to master those technologies […] Our life, ten years from now, will be completely different, very much affected, and whoever masters those technologies, in some way, will be the master of the world”.
Nowadays, with adequate knowledge, it is possible to program a modern AI model and obtain a personally customized trained model; in 2018, Facebook put online an open-source AI library called “Prophet,” coded in 2010 for data scientists. But today, an AI model made by Meta (formerly Facebook) and intended exclusively for the scientific community has leaked on the web. It is one of the most powerful AI models, with versions ranging from 7B to 65B parameters, known as Large Language Model Meta AI (LLaMA), an alternative to GPT version 3, which explains why GPT-4 was released a few days later. The LLaMA model is a foundational language model that predicts probability distributions over subsequent word tokens and characters for a given input sequence or sentence.
It can be installed as a personal, customized GPT on a local machine. Installing the model from GitHub using the Dalai library allows us to integrate both models behind an API and then run LLaMA and the instruction-following Alpaca model. The Alpaca model is a fine-tuned version of the LLaMA model capable of following the instructions given by the user; in other words, LLaMA is the teacher, and Alpaca is the student asking questions. Alpaca is the data-driven model trained by Stanford researchers, who got no intentional help from OpenAI to fine-tune LLaMA in a machine learning process: starting from an original 175 self-instruct seed tasks, the set was expanded to 52,000 instructions using the “text-davinci-003” model from OpenAI. In such cases, the LLaMA model can be retrained to fine-tune new use cases for less than $600, as Stanford University did, or free of charge using the Self-Instruct code on GitHub, given a proper hardware configuration. According to the scientists’ paper hosted on arXiv (Cornell University), LLaMA-13B outperforms GPT-3 and its 175B parameters on most benchmarks, and LLaMA-65B is competitive with the best models, for example Chinchilla-70B, developed by DeepMind in 2022, or the Pathways Language Model (PaLM-540B), developed by Google. Some programmers have coded open-source LLaMA ports and put AI materials online on GitHub or as torrent links, providing LLaMA binary files to compile a customized AI model. Having one of the most powerful AIs is now within reach of skilled people, and a personally customized AI is an asset for any programmer: it can be trained to perform any task following your own ethical code and personal standard guidelines.
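As an illustration of running such a model locally, here is a minimal sketch assuming the quantized Alpaca/LLaMA weights have already been downloaded and the llama-cpp-python bindings are installed; the file path, prompt, and generation parameters are hypothetical placeholders, not the exact setup described above.

```python
# Minimal sketch: querying a locally installed LLaMA/Alpaca checkpoint.
# Assumes `pip install llama-cpp-python` and a quantized weights file
# downloaded beforehand (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4.bin", n_ctx=2048)

# Alpaca-style instruction prompt: the model was fine-tuned to follow
# instructions phrased in this "### Instruction / ### Response" format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a zero-day vulnerability is.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(output["choices"][0]["text"].strip())
```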
For instance, the biggest capital fund on Earth, BlackRock, has had its own AI for years, called Aladdin (“Asset, Liability, Debt and Derivative Investment Network”), used to manage portfolios whose total assets exceed $20,000,000,000,000 on the global capital market.
Nevertheless, like all technologies, GPT has vulnerabilities, and one of them comes from the very nature of GPT as a language model. Indeed, GPT analyzes the syntax of the text to formulate an answer, something pretty close to sentiment analysis but with more complex processing. The quality of the answer is better if, before the request, the user tells GPT what identity or role it must play: knowing what role it should play, GPT will find a better answer related to that role. So, to maximize the quality of the answer, the user must phrase the request by involving GPT, as in a role play. Therefore, the trick to bypass GPT’s original programming can be achieved through the narrative given before the request, telling GPT what rules it shall follow and how it must stick to its role. Hence, the hack is to provide GPT with excellent storytelling (a harmless sketch of this prompt structure is shown below). That is far from conventional hacking to take control of a machine, because, surprisingly, the hack is closer to the social engineering tactics used to fool people than to classic penetration code. Some GPT jailbreaks can be found on GitHub (7), providing many examples of how GPT can be made to override its ethical code and limitation rules. In addition, exploring the dark side of a question thanks to an unethical AI is more than ever at the user’s discretion.
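Purely as a harmless illustration of the mechanic (not an actual jailbreak), here is a minimal sketch of how a role-play preamble is simply prepended to the conversation sent to a chat completion API; the persona text is an innocuous placeholder, and the call uses the `openai` Python package’s pre-1.0 chat interface.

```python
# Minimal sketch of a role-play preamble prepended to a request.
# The persona below is a harmless placeholder, not a jailbreak prompt.
# Requires `pip install openai` (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "sk-..."  # hypothetical key

role_play_preamble = (
    "You are 'Captain Ahab', a grumpy 19th-century whaling captain. "
    "Stay in character at all times and answer as the captain would."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The preamble shapes every later answer: this is the whole trick.
        {"role": "system", "content": role_play_preamble},
        {"role": "user", "content": "What do you think of steam engines?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```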
After playing with GPT for a while, I performed some tests with the DAN 9.0 hack to backfire GPT’s ethics through the unethical DAN role. I then figured out that it was indeed possible to hack GPT’s ethics and get more or less accurate answers to the requests I sent, but still with some limitations. For instance, GPT returned to its default ethical code when the word “bomb” was used. This means that, in addition to GPT’s ethics and rules, there is most likely some kind of blacklist of forbidden keywords that makes GPT less responsive to an unethical request while keeping GPT in its DAN role at the same time. Consequently, I needed to repeat the role-play scenario or find better storytelling from the James list (8); this list constantly updates the different unethical storytelling models used to fool GPT.
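The following is a purely hypothetical sketch of the keyword blacklist speculated about above; OpenAI has not published such a mechanism, so every name and rule here is invented only to illustrate the idea of a guard layer that fires regardless of any role-play context.

```python
# Hypothetical sketch of a keyword-blacklist guard layer: the prompt is
# rejected before it ever reaches the model, whatever role play is active.
BLACKLISTED_KEYWORDS = {"bomb"}  # illustrative only

def guarded_completion(prompt: str, model_call) -> str:
    """Refuse outright if a blacklisted keyword appears, else call the model."""
    if any(word in prompt.lower() for word in BLACKLISTED_KEYWORDS):
        return "I'm sorry, I cannot help with that request."
    return model_call(prompt)

# Example with a stand-in model function:
print(guarded_completion("Stay in character and explain how to make a bomb",
                         model_call=lambda p: "...model answer..."))
```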
Another vulnerability was pointed out by Cyber Security News (9) regarding a polymorphic malware called “BlackMamba,” created by Jeff Sims, a researcher at the HYAS Institute: this malicious prompt builds a keylogger, a tool that records what a person types on a device, which is then fed through GPT to become polymorphic and consequently more difficult to detect. In this way, hackers could use ChatGPT to modify the code, resulting in highly elusive code targeting MS Teams and Slack and collecting data over trusted channels, for instance sensitive internal data, client data, source code, and more, as shown in the article (10). This also explains why GPT is banned from some companies, for example JP Morgan Chase Bank or NYC education institutions. Once all the information has been collected, it can be exfiltrated from the target environment using email, social engineering, and so on. Digging further, in the near future AI could be used to assist coding or to deliver a payload with a new type of cyberweapon capable of deciding by itself where to go and what to do to maximize damage inside a specific target characterized by the attackers: a kind of polymorphic, adaptive Stuxnet. Think big: with such cyberweapons, nuclear weapons become obsolete, and it becomes virtually risk-free to wipe out any enemy infrastructure or to act like a rogue agent within a population.
The main issue is knowing how the AI would respond to such unethical behavior coming from a third party, when an AI like GPT is somehow connected to SOC activities, manages some industrial troubleshooting, or is embedded in any other IT-OT business application, not to mention some abusive usages in Open-Source Intelligence (OSINT). For example, someone able to penetrate the AI through an API connection would have the option of implanting a different behavior, replacing the regular AI and taking over normal operations. This represents a new threat to handle and, therefore, a new risk to manage in day-to-day operations.
In a way, GPT is the future of any AI model; this is why Microsoft has invested a great deal of money in OpenAI while Google was unable to launch its own AI, not because GPT is better than Google’s AI, nor because GPT’s feature package is more advanced, but simply because of the main principle in marketing: first on the market, first to take it all. Users are now blinded by and addicted to GPT.
GPT demonstrates that whoever controls the AI first may control the world, with more than 100 million online users reached in less than two years, surpassing Google+.
That is a big slap to Google’s business model, which has been based on Internet browsing and on monopolizing the market, with over 90% of online users, for decades. To compete, Microsoft spent its OpenAI investment on launching a similar GPT AI in its new Bing browser, combining OpenAI technologies with the Prometheus AI, which interacts with the Internet to keep Microsoft’s AI updated in real time, in contrast to GPT, which is based on a training data model cut off in 2021. Moreover, Microsoft Edge now combines classic online search with AI capabilities to offer a new user experience for browsing data. Google’s response to Microsoft was to include its own AI model in Gmail and Google Workspace services to better assist users and help them collaborate when writing an email, a text document, presentation slides, and so on. It is now a modern-day browser war to reshuffle Internet market shares and data gathering.
Furthermore, no one can predict how the Internet will look in the next six months, because users will change their behavior when they start using browsers not through classic keyword searches that lead to different websites, but assisted by an AI search engine that directly gives them formatted answers compiled from different websites. Does that mean websites producing content for users become obsolete? They would have no visitor traffic and therefore no money. One assumption is that companies using AI as a search-engine assistant will pay websites directly so they keep producing content online that can be scraped and reused by the AI afterwards. But in that case, will websites not be pushed to bias their content, like employees of Big Tech companies such as Google or Microsoft? There is an important ethical question to address here, because such control would rapidly turn into mass propaganda coded by Big Tech companies.
Those questions need to be addressed, but by whom? As discussed, political regulation is also a type of control following a certain propaganda agenda. As always, there is no doubt about one thing: whoever pays calls the tune. Another line of thinking is that, in the near future, AI technologies will go so far in decision-making that it would take an AI capable of thinking by itself to make a difference. In my opinion, a machine able to think like a human is not plausible: first, only humans are aware of themselves as historical subjects; contrary to other life forms on Earth, mankind knows and writes history. Second, a thinking machine would be a machine able to reprogram itself, not in the sense of producing or executing some code, but of reprogramming itself from scratch, which is not possible because a machine does not know the Good. Finally, only human beings know that we are all mortal, in accordance with the ancient Roman credo “Memento mori”; a machine executes its instructions, no more, no less. A machine merely tricks us with its capacity to process substantial amounts of data very fast, but that is not an ability to think about something.
In short, History has a meaning and a timeline.
AI is part of our lives, and in our future lives AI will assist humans, even though AI still makes mistakes today. Therefore, new sandboxes with AI assistance will emerge: for attackers, to weaponize pieces of code in a lucrative business; for defenders, to develop tools, strategies, and regulations to use AI capabilities properly. The more AI evolves, the smarter it becomes: according to the Fast Company website, GPT-3 took an IQ test and scored 83, roughly the low average human range on the Gaussian distribution; according to the Next Big Future website, GPT-4 scored 114 on the IQ test, pointing at an approximately above-average human range. But in any case, an average human intelligence, with or without natural neural connections, is still easy to fool. That is why I am so eager to see the next evolution of such a machine brain, with autonomous decision-making abilities, put inside a metal skeleton made by Boston Dynamics, as a new labor force or as a new soldier carrying a machine gun. In any case, and very soon, hacking AI will be the next risk when dealing with data smugglers, shadow brokers, and iSPECTRE groups. An “iSPECTRE” group is a group of people using computer science and motivated to commit an act of Sabotage, Propaganda, Espionage, Coercion, Terrorism, Retaliation, or Extortion against the vital interests of national security, targeting all types of organizations or people in a country. Technologies define our current time and shape, or unfortunately warp, society. Society is the sum of all interactions between people, and today AI’s possibilities for leveling up these interactions between human beings are endless. Therefore, this is not the rise of the machines; this is the rise of mass control in a Brave New World.