OpenAI CEO says AI to reshape society, acknowledges risks

    WORLD  19 March 2023 - 15:46

    The CEO of the company that created ChatGPT believes artificial intelligence technology will reshape society as we know it.

    He believes it comes with real dangers, but that it can also be "the greatest technology humanity has yet developed" to drastically improve our lives, ABC News reports.

    "We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this."

    Altman sat down for an exclusive interview with ABC News' chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 -- the latest iteration of the AI language model.

    In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help mitigate the potential negative consequences the technology could have on humanity. He added that he is in "regular contact" with government officials.

    ChatGPT is an AI language model; the GPT stands for Generative Pre-trained Transformer.

    Released only a few months ago, it is already considered the fastest-growing consumer application in history, reaching 100 million monthly active users within months of launch. In comparison, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.

    Though "not perfect," per Altman, GPT-4 scored in the 90th percentile on the Uniform Bar Exam. It also scored a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.

    GPT-4 is just one step toward OpenAI's goal of eventually building Artificial General Intelligence, the point at which AI systems become generally smarter than humans.

    Though he celebrates the success of his product, Altman acknowledged the potentially dangerous applications of AI that keep him up at night.

    "I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks."

    A common sci-fi fear that Altman doesn't share: AI models that don't need humans, that make their own decisions and plot world domination.

    "It waits for someone to give it an input," Altman said. "This is a tool that is very much in human control."

    However, he said he does fear which humans could be in control. "There will be other people who don't put some of the safety limits that we put on," he added. "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

    Russian President Vladimir Putin was quoted as telling Russian students on their first day of school in 2017 that whoever leads the AI race would likely "rule the world."

    "So that's a chilling statement for sure," Altman said. "What I hope, instead, is that we successively develop more and more powerful systems that we can all use in different ways that integrate it into our daily lives, into the economy, and become an amplifier of human will."

    Concerns about misinformation

    According to OpenAI, GPT-4 has massive improvements over the previous iteration, including the ability to understand images as input. Demos show GPT-4 describing what's in someone's fridge, solving puzzles, and even articulating the meaning behind an internet meme.

    This feature is currently only accessible to a small set of users, including a group of visually impaired users who are part of its beta testing.

    But a consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: The program can give users factually inaccurate information.

    "The thing that I try to caution people the most is what we call the 'hallucinations problem,'" Altman said. "The model will confidently state things as if they were facts that are entirely made up."

    The model has this issue, in part, because it uses deductive reasoning rather than memorization, according to OpenAI.

    "One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better," Mira Murati, OpenAI's Chief Technology Officer, told ABC News.

    "The goal is to predict the next word – and with that, we're seeing that there is this understanding of language," Murati said. "We want these models to see and understand the world more like we do."

    "The right way to think of the models that we create is a reasoning engine, not a fact database," Altman said. "They can also act as a fact database, but that's not really what's special about them – what we want them to do is something closer to the ability to reason, not to memorize."

    Altman and his team hope "the model will become this reasoning engine over time," he said, eventually being able to use the internet and its own deductive reasoning to separate fact from fiction. GPT-4 is 40% more likely to produce accurate information than its previous version, according to OpenAI. Still, Altman said relying on the system as a primary source of accurate information "is something you should not use it for," and encourages users to double-check the program's results.
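
    To illustrate what "predicting the next word" means in practice, here is a minimal, hypothetical sketch using the openly available GPT-2 model via the Hugging Face transformers library as a stand-in. It is not OpenAI's GPT-4 code, only a demonstration of the autoregressive idea Murati describes:

        # Illustrative sketch only: GPT-2 stands in for the kind of autoregressive
        # language model Murati describes; this is not OpenAI's GPT-4 code.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "Artificial intelligence will reshape"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits          # a score for every token in the vocabulary
        next_token_id = int(logits[0, -1].argmax())  # greedily pick the most likely next token
        print(prompt + tokenizer.decode([next_token_id]))

    Repeating that single step, feeding each predicted word back in as input, is how models in this family generate whole passages of text.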

    Precautions against bad actors

    The type of information ChatGPT and other AI language models contain has also been a point of concern, such as whether ChatGPT could tell a user how to make a bomb. The answer is no, per Altman, because of the safety measures coded into ChatGPT.

    "A thing that I do worry about is ... we're not going to be the only creator of this technology," Altman said. "There will be other people who don't put some of the safety limits that we put on it."

    There are a few solutions and safeguards to all of these potential hazards with AI, per Altman. One of them: Let society toy with ChatGPT while the stakes are low, and learn from how people use it.

    Right now, ChatGPT is available to the public primarily because "we're gathering a lot of feedback," according to Murati.

    As the public continues to test OpenAI's applications, Murati says it becomes easier to identify where safeguards are needed.

    "What are people using them for, but also what are the issues with it, what are the downfalls, and being able to step in [and] make improvements to the technology," says Murati. Altman says it's important that the public gets to interact with each version of ChatGPT.

    "If we just developed this in secret -- in our little lab here -- and made GPT-7 and then dropped it on the world all at once ... That, I think, is a situation with a lot more downside," Altman said. "People need time to update, to react, to get used to this technology [and] to understand where the downsides are and what the mitigations can be."

    Regarding illegal or morally objectionable content, Altman said they have a team of policymakers at OpenAI who decide what information goes into ChatGPT, and what ChatGPT is allowed to share with users.

    "[We're] talking to various policy and safety experts, getting audits of the system to try to address these issues and put something out that we think is safe and good," Altman added. "And again, we won't get it perfect the first time, but it's so important to learn the lessons and find the edges while the stakes are relatively low."

    Will AI replace jobs?

    Among the concerns about the destructive capabilities of this technology is the replacement of jobs. Altman says AI will likely replace some jobs in the near future, and worries about how quickly that could happen.

    "I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts," Altman said. "But if this happens in a single-digit number of years, some of these shifts ... That is the part I worry about the most."

    But he encourages people to look at ChatGPT as more of a tool, not as a replacement. He added that "human creativity is limitless, and we find new jobs. We find new things to do."

    The ways ChatGPT can be used as a tool for humanity outweigh the risks, according to Altman.

    "We can all have an incredible educator in our pocket that's customized for us, that helps us learn," Altman said. "We can have medical advice for everybody that is beyond what we can get today."

    ChatGPT as "co-pilot"

    In education, ChatGPT has become controversial, as some students have used it to cheat on assignments. Educators are torn on whether this could be used as an extension of themselves, or if it deters students' motivation to learn for themselves.

    "Education is going to have to change, but it's happened many other times with technology," said Altman, adding that students will be able to have a sort of teacher that goes beyond the classroom. "One of the ones that I'm most excited about is the ability to provide individual learning -- great individual learning for each student."

    In any field, Altman and his team want users to think of ChatGPT as a "co-pilot" that could help them write extensive computer code or solve problems.

    "We can have that for every profession, and we can have a much higher quality of life, like standard of living," Altman said. "But we can also have new things we can't even imagine today -- so that's the promise."

    Caliber.Az
