Monday, May 1, 2023

Another Case for Chaining AI: Ex-AI Googler, Dr. Hinton


Source: The New York Times
Date: May 1, 2023

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” - Hinton

Greg's Words

One of the AI pioneers from Google resigned and is speaking up about the dangers of AI.  Another one.  Like others, he is concerned that false information will be spread throughout the Universe at the hands of that dastardly, Modern Day Prometheus, artificial intelligence.

This, coming from an ex-Googler, is a bit ironic, as Google is the king of steering content regardless of truth or falsehood: sponsored gets the top-left position.

The list grows: CRE CxOs, bank presidents, Hollywood writers, graphic designers, musicians, and academics charging the line for more AI 'regulation'.

Nope.

"AI ἀναρχία" Now.

Enjoy our summary. For an alternative take on the New York Times report, go here.

Key highlights:
  • Dr. Geoffrey Hinton, an AI pioneer, resigns from Google to speak freely about the potential dangers of AI.
  • Hinton's immediate concern is the spread of false information online and the potential for AI to replace jobs.
  • He suggests that global regulation may be necessary to prevent a dangerous AI race between tech giants.
_______


Dr. Geoffrey Hinton, an artificial intelligence (AI) pioneer, has resigned from Google to voice his concerns about the potential dangers of AI technology. Hinton, who has been instrumental in developing the technology behind chatbots like ChatGPT, fears that the aggressive push by major tech companies to create AI-powered products may be heading toward a dangerous outcome.

Having worked at Google for over a decade, Hinton now regrets his life's work, stating, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” Although he acknowledges the numerous benefits AI can bring to various fields, including drug research and education, he is also concerned about its darker side. He believes that generative AI could potentially be a tool for misinformation, a risk to jobs, and even a risk to humanity itself.

Hinton's immediate concerns revolve around the internet becoming flooded with false information, making it difficult for the average person to discern what is true. He is also worried about AI technologies replacing jobs, particularly those involving repetitive tasks. Moreover, he fears that future versions of AI technology could pose a threat to humanity due to their capacity to learn unexpected behaviors from vast amounts of data.

The AI pioneer is calling for global regulation to prevent a dangerous race between tech giants like Google and Microsoft. He suggests that it may be impossible to know whether companies or countries are secretly working on advanced AI technologies.

Dr. Hinton no longer takes comfort in the Robert Oppenheimer quote he once cited when asked about working on potentially dangerous technology: “When you see something that is technically sweet, you go ahead and do it.” The AI expert now believes that the consequences of advancing AI technology without proper understanding and control could be dire, and that it is time to act to prevent potential harm.

As Hinton becomes increasingly vocal about the potential risks of AI, he joins a growing group of critics who have expressed similar concerns. In March, after OpenAI released a new version of ChatGPT, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems due to the “profound risks to society and humanity” posed by AI technologies.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Additionally, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence released their own letter warning of the risks associated with AI. This group included Microsoft's chief scientific officer, Eric Horvitz, whose company has implemented OpenAI's technology across various products, such as the Bing search engine.

While some experts argue that the threats posed by AI are hypothetical, Hinton emphasizes the importance of understanding and controlling the technology before it becomes too powerful to manage. He stresses the need for global regulation and collaboration among scientists to ensure that AI's development is responsible and safe for humanity.
_________

Tweet: Dr. Geoffrey Hinton, AI pioneer & "Godfather of AI," resigns from Google to warn about the dangers of unchecked AI development. Is it time to rethink our approach? #AI #ArtificialIntelligence #Ethics

LinkedIn Introduction: AI pioneer Dr. Geoffrey Hinton has made the bold decision to resign from Google to voice his concerns about the potential dangers of AI technology. As someone who has been instrumental in shaping the industry, Hinton's stance on the issue deserves attention. In this post, we'll explore his concerns and discuss the importance of responsible AI development.

Keyword list: Dr. Geoffrey Hinton, AI pioneer, Google, resignation, artificial intelligence, ChatGPT, dangers, risks, misinformation, job replacement, regulation, global collaboration, ethics
 
Image prompt: Dr. Geoffrey Hinton sitting at a table, deep in thought, with a laptop and AI-related books around him.

Search question: What are the potential dangers of artificial intelligence according to Dr. Geoffrey Hinton?

Real song suggestion: "Everybody Wants to Rule the World" by Tears for Fears fits this theme because the lyrics convey a sense of ambition, control, and power, which can be related to the development and deployment of artificial intelligence.

The rapid advancements in AI, as warned by Dr. Geoffrey Hinton, might result in companies and nations vying for dominance in this field. The song serves as a reminder that the pursuit of power, especially in the context of AI, can have unintended consequences and might ultimately harm humanity if not properly managed and regulated.
