SoftBank’s AI Vision Faces Scrutiny as Shareholder Confidence Wanes

Quiver Editor

Shareholder support for the reappointment of SoftBank CEO Masayoshi Son has dropped to 79.22%, down from 95.93% a year ago, following a recommendation from proxy advisor Institutional Shareholder Services (ISS) against his reappointment. ISS's opposition stemmed from SoftBank's average return on equity of less than 5% over the past five years. Among SoftBank’s other directors, support for external director Kenneth Siegel remained low at 68.46%, only a slight increase from last year's 66.9%. This decline in support underscores growing concerns among shareholders regarding the company’s recent performance.

At SoftBank’s annual meeting in Tokyo, Son presented a bold vision for the future of artificial intelligence (AI), predicting that artificial superintelligence (ASI) could become a reality within the next decade. He claimed that by 2030 AI could be up to ten times smarter than humans, and by 2035 potentially 10,000 times smarter. Son distinguished artificial general intelligence (AGI) from ASI, describing AGI as roughly equivalent to a human genius and ASI as far surpassing human capabilities.

Market Overview:
  • Shareholder support for Masayoshi Son's reappointment falls to 79.22%.
  • ISS opposed Son's reappointment due to SoftBank’s low return on equity.
  • SoftBank founder predicts significant advancements in AI and ASI.
Key Points:
  • SoftBank’s average return on equity over the past five years was below 5%.
  • External director Kenneth Siegel's support remains low.
  • Son envisions AI becoming exponentially smarter than humans by 2035.
Looking Ahead:
  • SoftBank’s pivot towards AI and ASI development raises questions about future impacts.
  • Formation of Safe Superintelligence Inc. (SSI) emphasizes safety in AI advancements.
  • Scientific community remains divided on the feasibility of achieving AGI or ASI.

Son's ambitious AI predictions align with the goals of Safe Superintelligence Inc. (SSI), founded by former OpenAI chief scientist Ilya Sutskever, Daniel Levy, and Daniel Gross. SSI aims to advance AI capabilities while prioritizing safety, reflecting a growing focus within the tech industry on developing superintelligent AI responsibly. However, the scientific community has yet to reach a consensus on the feasibility of achieving AGI or ASI, highlighting the uncertainty surrounding these advancements.

Son’s speech took a personal turn when he linked the development of ASI to his own purpose and legacy, expressing his belief that he was born to realize ASI. This vision, while captivating, raises significant questions about the potential societal impacts of superintelligent AI, including ethical considerations, job displacement, and risks associated with creating intelligence far beyond human control. As the race towards ASI intensifies, major players in the tech industry, including SoftBank, are positioning themselves at the forefront of this transformative technology.

About the Author

David Love is an editor at Quiver Quantitative, with a focus on global markets and breaking news. Prior to joining Quiver, David was the CEO of Winter Haven Capital.