OpenAI CEO says AI to reshape society, acknowledges risks

    WORLD  19 March 2023 - 15:46

    The CEO of the company that created ChatGPT believes artificial intelligence technology will reshape society as we know it.

    He believes it comes with real dangers, but can also be "the greatest technology humanity has yet developed" to drastically improve our lives, ABC News reports.

    "We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this."

    Altman sat down for an exclusive interview with ABC News' chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 -- the latest iteration of the AI language model.

    In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help deter the potential negative consequences the technology could have on humanity. He added that he is in "regular contact" with government officials.

    ChatGPT is an AI language model; the "GPT" stands for Generative Pre-trained Transformer.

    Released only a few months ago, it is already considered the fastest-growing consumer application in history, having hit 100 million monthly active users. By comparison, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.

    Though "not perfect," per Altman, GPT-4 scored in the 90th percentile on the Uniform Bar Exam. It also scored a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.

    GPT-4 is just one step toward OpenAI's goal of eventually building Artificial General Intelligence: AI systems that are generally smarter than humans.

    Though he celebrates the success of his product, Altman acknowledged the possible dangerous applications of AI that keep him up at night.

    "I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks."

    A common sci-fi fear that Altman doesn't share: AI models that don't need humans, that make their own decisions and plot world domination.

    "It waits for someone to give it an input," Altman said. "This is a tool that is very much in human control."

    However, he said he does fear which humans could be in control. "There will be other people who don't put some of the safety limits that we put on," he added. "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

    Russian President Vladimir Putin is quoted telling Russian students on their first day of school in 2017 that whoever leads the AI race would likely "rule the world."

    "So that's a chilling statement for sure," Altman said. "What I hope, instead, is that we successively develop more and more powerful systems that we can all use in different ways that integrate it into our daily lives, into the economy, and become an amplifier of human will."

    Concerns about misinformation

    According to OpenAI, GPT-4 brings massive improvements over the previous iteration, including the ability to understand images as input. Demos show GPT-4 describing what's in someone's fridge, solving puzzles, and even articulating the meaning behind an internet meme.

    This feature is currently only accessible to a small set of users, including a group of visually impaired users who are part of its beta testing.

    But a consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: The program can give users factually inaccurate information.

    "The thing that I try to caution people the most is what we call the 'hallucinations problem,'" Altman said. "The model will confidently state things as if they were facts that are entirely made up."

    The model has this issue, in part, because it uses deductive reasoning rather than memorization, according to OpenAI.

    "One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better," Mira Murati, OpenAI's Chief Technology Officer, told ABC News.

    "The goal is to predict the next word – and with that, we're seeing that there is this understanding of language," Murati said. "We want these models to see and understand the world more like we do."

    "The right way to think of the models that we create is a reasoning engine, not a fact database," Altman said. "They can also act as a fact database, but that's not really what's special about them – what we want them to do is something closer to the ability to reason, not to memorize."

    Altman and his team hope "the model will become this reasoning engine over time," he said, eventually being able to use the internet and its own deductive reasoning to separate fact from fiction. GPT-4 is 40% more likely to produce accurate information than its previous version, according to OpenAI. Still, Altman said relying on the system as a primary source of accurate information "is something you should not use it for," and encourages users to double-check the program's results.
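    Murati's point about the training objective, predicting the next word, can be illustrated with a toy sketch. The corpus and bigram-counting approach below are purely illustrative assumptions for the example; a real model like GPT-4 uses a large neural network, not word counts.

    ```python
    from collections import Counter, defaultdict

    # Toy next-word prediction: count which word follows which in a tiny
    # made-up corpus, then predict the most frequently observed successor.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often observed after `word`, or None."""
        counts = bigrams.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("on"))  # prints "the": both occurrences of "on" precede "the"
    ```

    Scaled up to billions of parameters and trained on vast text, this same objective is what gives rise to the "understanding of language" Murati describes.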

    Precautions against bad actors

    The type of information ChatGPT and other AI language models contain has also been a point of concern: for instance, whether ChatGPT could tell a user how to make a bomb. The answer is no, per Altman, because of the safety measures coded into ChatGPT.

    "A thing that I do worry about is ... we're not going to be the only creator of this technology," Altman said. "There will be other people who don't put some of the safety limits that we put on it."

    There are a few solutions and safeguards to all of these potential hazards with AI, per Altman. One of them: Let society toy with ChatGPT while the stakes are low, and learn from how people use it.

    Right now, ChatGPT is available to the public primarily because "we're gathering a lot of feedback," according to Murati.

    As the public continues to test OpenAI's applications, Murati says it becomes easier to identify where safeguards are needed.

    "What are people using them for, but also what are the issues with it, what are the downfalls, and being able to step in [and] make improvements to the technology," says Murati. Altman says it's important that the public gets to interact with each version of ChatGPT.

    "If we just developed this in secret -- in our little lab here -- and made GPT-7 and then dropped it on the world all at once ... That, I think, is a situation with a lot more downside," Altman said. "People need time to update, to react, to get used to this technology [and] to understand where the downsides are and what the mitigations can be."

    Regarding illegal or morally objectionable content, Altman said they have a team of policymakers at OpenAI who decide what information goes into ChatGPT, and what ChatGPT is allowed to share with users.

    "[We're] talking to various policy and safety experts, getting audits of the system to try to address these issues and put something out that we think is safe and good," Altman added. "And again, we won't get it perfect the first time, but it's so important to learn the lessons and find the edges while the stakes are relatively low."

    Will AI replace jobs?

    Among the concerns about the destructive capabilities of this technology is the replacement of jobs. Altman says AI will likely replace some jobs in the near future, and he worries about how quickly that could happen.

    "I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts," Altman said. "But if this happens in a single-digit number of years, some of these shifts ... That is the part I worry about the most."

    But he encourages people to look at ChatGPT as more of a tool, not as a replacement. He added that "human creativity is limitless, and we find new jobs. We find new things to do."

    The ways ChatGPT can be used as a tool for humanity outweigh the risks, according to Altman.

    "We can all have an incredible educator in our pocket that's customized for us, that helps us learn," Altman said. "We can have medical advice for everybody that is beyond what we can get today."

    ChatGPT as "co-pilot"

    In education, ChatGPT has become controversial, as some students have used it to cheat on assignments. Educators are torn on whether it could serve as an extension of themselves or whether it deters students' motivation to learn for themselves.

    "Education is going to have to change, but it's happened many other times with technology," said Altman, adding that students will be able to have a sort of teacher that goes beyond the classroom. "One of the ones that I'm most excited about is the ability to provide individual learning -- great individual learning for each student."

    In any field, Altman and his team want users to think of ChatGPT as a "co-pilot": an assistant that could help you write extensive computer code or solve problems.

    "We can have that for every profession, and we can have a much higher quality of life, like standard of living," Altman said. "But we can also have new things we can't even imagine today -- so that's the promise."

