
    Sam Altman Warns AI Could Kill Us All. But He Still Wants The World To Use It

By admin@primenews | October 31, 2023

    Sam Altman thinks the technology underpinning his company’s most famous product could bring about the end of human civilization.

    In May, OpenAI CEO Sam Altman filed into a Senate subcommittee hearing room in Washington, DC, with an urgent plea to lawmakers: Create thoughtful regulations that embrace the powerful promise of artificial intelligence – while mitigating the risk that it overpowers humanity. It was a defining moment for him and for the future of AI.

    With the launch of OpenAI’s ChatGPT late last year, Altman, 38, emerged overnight as the poster child for a new crop of AI tools that can generate images and text in response to user prompts, a technology called generative AI. Not long after its release, ChatGPT became a household name almost synonymous with AI itself. CEOs used it to draft emails, people built websites with no prior coding experience, and it passed exams from law and business schools. It has the potential to revolutionize nearly every industry, including education, finance, agriculture and healthcare – from surgery to vaccine development.

    But those same tools have raised concerns about everything from cheating in schools to displacing human workers – and even posing an existential threat to humanity. The rise of AI, for example, has led economists to warn of a massive shift in the labor market. As many as 300 million full-time jobs around the world could eventually be automated in some way by generative AI, according to Goldman Sachs estimates. Some 14 million positions could disappear in the next five years alone, according to an April report by the World Economic Forum.

    In his testimony before Congress, Altman said the potential for AI to be used to manipulate voters and target disinformation was among “my areas of greatest concern.”

    Two weeks after the hearing, Altman joined hundreds of top AI scientists, researchers and business leaders in signing a one-sentence letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. It also highlighted an important dynamic in Silicon Valley: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    ‘Kevin Bacon of Silicon Valley’

    Although Altman, a longtime entrepreneur and Silicon Valley investor, largely stayed out of the spotlight in prior years, he has drawn intense attention in recent months as the poster child for the AI revolution. That attention has also exposed him to litigation, regulatory scrutiny and both praise and condemnation around the world.

    That day in front of the Senate subcommittee, however, Altman described the technology’s current boom as a pivotal moment.

    “Is [AI] gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered ordinary, everyday individuals that led to greater flourishing, that led above all to greater liberty?” he said. “Or is it gonna be more like the atom bomb – huge technological breakthrough, but the consequences (severe, terrible) continue to haunt us to this day?”

    Altman has long presented himself as someone who is mindful of the risks posed by AI, and he has pledged to move forward responsibly. He is one of several tech CEOs to meet with White House leaders, including Vice President Kamala Harris and President Joe Biden, to emphasize the importance of ethical and responsible AI development.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, along with dozens of tech leaders, professors and researchers, urged artificial intelligence labs like OpenAI to pause the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” (At the same time, some experts questioned whether those who signed the letter sought to maintain their competitive edge over other companies.)

    Altman said he agreed with parts of the letter, including that “the safety bar has got to increase,” but said a pause would not be an “optimal way” to address the challenges.

    Still, OpenAI has its foot placed firmly on the gas pedal. Most recently, OpenAI and iPhone designer Jony Ive have reportedly been in talks to raise $1 billion from Japanese conglomerate SoftBank for an AI device to replace the smartphone.

    Those who know Altman have described him as someone who makes prescient bets; he has even been called “a startup Yoda” and the “Kevin Bacon of Silicon Valley,” having worked with seemingly everyone in the industry. Aaron Levie, the CEO of enterprise cloud company Box and a longtime friend of Altman who came up with him in the startup world, told CNN that Altman is “introspective,” wants to debate ideas and get different points of view, and endlessly encourages feedback on whatever he’s working on.

    “I’ve always found him to be incredibly self-critical on ideas and willing to take any kind of feedback on any topic that he’s been involved with over the years,” Levie said. But Bern Elliot, an analyst at Gartner Research, noted the famous cliché: There’s a risk to putting all your eggs in one basket, no matter how much trust you may place in it.

    “Many things can happen to one basket,” he added.

    Challenges ahead

    When starting OpenAI, Altman told CNN in 2015 he wanted to steer the path of AI, rather than worrying about the potential harms and doing nothing. “I sleep better knowing I can have some influence now,” he said.

    Despite his leadership status, Altman says he remains concerned about the technology. “I prep for survival,” he said in a 2016 profile in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to,” he said.

    Some AI industry experts say, however, that focusing attention on far-off apocalyptic scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities. Rowan Curan, an analyst at market research firm Forrester, acknowledged the legitimate concerns around making sure training data, particularly for enormous models, has minimal bias – or has a bias that is understood and can be mitigated.

    “The idea of an ‘AI apocalypse’ as a realistic scenario that presents any kind of danger to humanity – particularly in the short and medium term, is just speculative techno-mythology,” he said. “The continued focus on this as one of the big risks that comes along with advancement of AI distracts from the very real challenges we have today to reduce current and future harms from data and models being applied unjustly by human actors.”

    In perhaps the most sweeping effort to date, President Biden unveiled an executive order earlier this week that will require developers of powerful AI systems to share the results of their safety tests with the federal government before they are released to the public, if they pose national security, economic or health risks.

    Following the Senate hearing, Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, expressed concerns over what a future looks like with AI even if it’s heavily regulated. “If they honestly believe that this could be bringing about human extinction, then why not just stop?” she said.

    Margaret O’Mara, a tech historian and professor at the University of Washington, said good policymaking should be informed by multiple perspectives and interests, not just by one or few people, and shaped with the public interest in mind.

    “The challenge with AI is that only a very few people and firms really understand how it works and what the implications are of its use,” said O’Mara, noting similarities to the world of nuclear physics before and during the Manhattan Project’s development of the atomic bomb.

    Still, O’Mara said many people across the tech industry are rooting for Altman to be the force to revolutionize society with AI but make it safe.

    “This time is akin to what Gates and Jobs did for the personal computing moment of the early 1980s, and the software moment of the 1990s,” she said. “There’s a real hope that we can have tech that makes things better, if the people who are making it are good people, smart and care about the right things. Sam embodies that for AI right now.”

    The world is counting on Altman to act in the best interest of humanity with a technology that, by his own admission, could be a weapon of mass destruction. Although he may be a smart and qualified leader, he’s still just that: one person.

    — Source: cnn.com
