Google CEO Adds His Voice to AI Regulation Debate

Sundar Pichai, CEO of Google and parent company Alphabet, on Monday called for government regulation of artificial intelligence technology in a speech at Bruegel, a think tank in Brussels, and in an op-ed in the Financial Times.

There is no question in Pichai’s mind that artificial intelligence should be regulated, he reportedly said in Brussels; the question is what the best approach will be.

Sensible regulation should take a proportionate approach, balancing potential harm with potential good, he added, and it could incorporate existing rules, such as the EU’s General Data Protection Regulation.

“We need to be clear-eyed about what could go wrong,” Pichai wrote in his FT column. “There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition.”

He pledged to be a helpful and engaged partner to regulators and offered them Google’s expertise, experience, and tools to navigate the issues surrounding AI.

“AI has the potential to improve billions of lives, and the biggest risk may be failing to do so,” he wrote. “By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.”

‘Pretty Weak Sauce’

Pichai’s editorial is “pretty weak sauce,” wrote entrepreneur, journalist, and author John Battelle in Searchblog, but he did find one of Pichai’s statements worthy of note: “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.”

Pichai is late in coming to that realization, Battelle suggested.

“I wish Google, Facebook, Amazon, and Apple had that point of view before they built the AI-driven system we now all live with, known as surveillance capitalism,” he wrote.

Pichai is correct when he says AI must be regulated, observed Greg Sterling, vice president of market insights at Berlin-based Uberall, a maker of location marketing solutions.

“Society cannot allow technology companies to self-regulate with technology that can be easily abused,” Sterling told the E-Commerce Times. “It’s already happening in China and elsewhere.”

That said, what form the specific regulations will take, and how global the consensus will be, remain open to question, he noted.

“In the U.S., at least, the government needs to consult with a wide array of experts and then come up with legislation and regulations that permit innovation but don’t allow these technologies to be used for discriminatory purposes,” Sterling said.

“Decisions about hiring, healthcare, insurance, and so on should not be made by AI, which has no morality, no ethics, and no sense of social good,” he continued.

“Machine learning and AI will do whatever they’re programmed to do — whatever the algorithms dictate,” added Sterling. “Humans must set limits on the application of these technologies and absolutely draw bright lines around certain use cases to prevent their abuse by unscrupulous actors.”

Building Blocks of Sensible Regulation

“What I most appreciated about Sundar Pichai’s piece is his acknowledgment that AI principles documents will be an important source of norm building and facilitate the creation of sensible regulation,” Jessica Fjeld, assistant director of the Cyberlaw Clinic at Harvard’s Berkman Klein Center for Internet & Society, told the E-Commerce Times.

Three dozen prominent AI principles documents formed the basis for a report on AI ethics that Fjeld, Nele Achten, Hannah Hilligoss, Adam Christopher Nagy, and Madhulika Srikumar released last week.

In the documents, the researchers found eight common themes that could form the core of any principle-based approach to AI ethics and governance:

  • Privacy — AI systems should respect the privacy of individuals.
  • Accountability — Mechanisms must be in place to ensure AI systems are accountable, and remedies must be available to fix problems when they are not.
  • Safety and Security — AI systems should perform as intended and be secure from compromise.
  • Transparency and Explainability — AI systems should be designed and implemented to allow oversight.
  • Fairness and Nondiscrimination — AI systems should be designed to maximize fairness and inclusivity.
  • Human Control of Technology — Important decisions should remain under human review.
  • Professional Responsibility — Developers of AI systems should make sure to consult all stakeholders in the system and plan for long-term effects.
  • Promotion of Human Values — The ends to which AI is devoted and the means by which it is implemented should promote humanity’s well-being.

Together, the eight themes bring an ethical, human-rights-respecting perspective to the foundational requirements for AI, the researchers noted.

“However, there’s a wide and thorny gap between the articulation of these high-level concepts and their actual achievement in the real world,” they added.

Difficult to Regulate

As determined as regulators may be to keep AI on a short leash, they may find the task a daunting one.

“The reality is that once technology is introduced, people are going to experiment with it,” observed Jim McGregor, principal analyst at Tirias Research, a high-tech research and advisory firm based in Phoenix.

“The whole idea of regulation is foolish. AI is going to be used in good ways and bad ways,” he told the E-Commerce Times.

“You hope that you can limit the bad and that people have respect and integrity in using and developing the technology — but mistakes will be made, and some people will use it nefariously,” McGregor said. “Just look at what Google, Facebook, and other companies have done with their technologies for tracking people, monitoring information, and sharing data.”

Regulating AI will become increasingly difficult as it spreads, he added.

“By 2025, you’re not going to be able to buy a single electronic platform that doesn’t use artificial intelligence for something, whether it’s local, in the cloud, or a hybrid solution,” McGregor predicted.

“It could be for something as simple as managing battery power to as complex as operating an autonomous vehicle,” he said. “Whenever a new technology comes out, there’s always this knee-jerk reaction from some segments of the population that says, ‘It’s bad. Everything new is bad.’ In general, though, technology has benefited mankind.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
