Kadiri Praveen

Mass Communication: Calls for AI Regulation Gain Steam - B-AIM PICK SELECTS


Should restrictions be placed on the use of artificial intelligence? Google CEO Sundar Pichai certainly thinks so, and so do a host of other business leaders, including the CEOs of IBM and H2O.ai, as the chorus of calls to put limits on the spread of the rapidly evolving technology grows louder.

Pichai made his case in an op-ed published Monday in the Financial Times, titled “Why Google thinks we need to regulate AI” (the story is behind a paywall).

In the story, Pichai, who is also CEO of Google’s parent company, Alphabet, shared his lifelong love of technology, as well as the breakthroughs that his company is making in using AI to fight breast cancer, improve weather forecasts, and reduce flight delays.

As virtuous as these AI-powered accomplishments are, they don’t account for the negative impacts that AI also can have, Pichai wrote. “There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition,” he wrote.

As a developer of AI technology, Google has a responsibility not just to “build promising new technology” and then let the market decide how best to use it, Pichai wrote. Instead, it is Google’s responsibility to “make sure that technology is harnessed for good and available to everyone.”

“Now there is no question in my mind that artificial intelligence needs to be regulated,” he continued. “It is too important not to. The only question is how to approach it.”

He cited the European Union’s General Data Protection Regulation (GDPR) as a “strong foundation” for a regulatory framework. He also cited Google’s own AI principles, published in 2018, which provide guidelines for dealing with the safety, security, and privacy aspects of AI, as well as a call to never use AI for mass surveillance or to violate human rights.

Ginni Rometty, the CEO of IBM, also came out this week in support of greater regulation of the use of AI, if not AI technology itself.

Speaking from Davos, Switzerland, where the World Economic Forum is taking place, Rometty told CNBC that “Precision regulation is what I think is needed because…we’ve got to compete in this world against every country. You want to have innovation flourish and you’ve got to balance that with security.”

H2O.ai’s CEO Sri Ambati agrees that there needs to be regulation of AI. “Everything needs regulation,” he says in an interview with Datanami this week. “It’s a question of how we build a balance between innovation and regulation.”

The potential good of AI must be weighed against the potential harm that it can do if it’s used indiscriminately, says Ambati, who is one of the more socially conscious tech CEOs and has backed H2O’s AI4Good program.

“AI is a superior form of technology that we have unleashed into the new world,” says Ambati, one of Datanami’s 2019 People to Watch. “H2O has famously walked away from AI weaponry, for example, so there are things you’ll end up seeing in the marketplace which will use AI for destructive purposes as well as constructive purposes.”

One of the ways to tilt the scales in favor of good uses of AI is to make it more broadly available, Ambati says. That means lowering barriers not only to the machine learning technology itself, but also to the large sets of data used to train the algorithms.

“You want to democratize AI, which means AI is open, and AI without borders, even,” he says. “We definitely want to make sure it’s not math that only the largest tech giants have and everyone else is struggling to get.”

At the end of the day, however, you don’t want to go too far with regulation, which would stifle innovation and creativity. “In the classic sense of regulation, it definitely hampers innovation,” he says. “So it’s a very important balance between the two.”

Some jurisdictions have already taken steps to push back against some of the potentially negative uses of AI. For example, in May 2019, the City of San Francisco banned the use of facial recognition by city agencies. Police in London, however, are currently pushing forward with plans to use facial recognition in cameras mounted around the city.

“The public rightly expect us to use widely available technology to stop criminals,” Nick Ephgrave, an assistant commissioner with London’s Metropolitan Police Service, said in a statement.

But there’s one AI technology that some believe has no redeeming qualities: deepfakes.

“Deepfakes are going to be a greater challenge in 2020, and it will have a menacing effect on everything — be it entertainment, or politics, or business,” says Sandeep Dutta, Fractal Analytics chief practice officer for APAC. “I am in the technology industry and generally believe that the benefits of a new tech revolution normally outweigh the negatives. [But] deep fake is one such technology that I fear has more of a downside than the potential to create any big and lasting beneficial impacts on society.”

Unfortunately, it will be exceedingly difficult for government to effectively regulate the AI technology that powers deepfakes, Dutta tells Datanami.

“Legislation will help. But, in the tech category, regulations lag behind,” he says. “The solution will lie in more self-governance by the tech and media industry and in the development of AI-based tools to detect and remove deepfakes.”
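
To make the detection side of that concrete, the simplest shape such a tool takes is a frame-level binary classifier scored on frames sampled from a video. The sketch below is a hypothetical illustration of that idea, not any vendor’s product; the ResNet backbone, the input size, and the 0.5 threshold are all assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    class FrameDetector(nn.Module):
        """Toy frame-level deepfake detector: images in, fake-probability out."""
        def __init__(self):
            super().__init__()
            # Backbone choice is an assumption; any image classifier would do.
            self.backbone = models.resnet18(weights=None)
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

        def forward(self, frames):  # frames: (N, 3, H, W) tensor
            return torch.sigmoid(self.backbone(frames)).squeeze(1)

    detector = FrameDetector()
    frames = torch.rand(4, 3, 224, 224)  # stand-in for frames sampled from a clip
    scores = detector(frames)            # per-frame probability of "fake"
    print((scores > 0.5).tolist())       # untrained weights, so output is arbitrary

A real detector would be trained on labeled real and fake footage and would aggregate per-frame scores across a whole clip, which is exactly the ongoing cat-and-mouse work Dutta expects the tech and media industry to self-govern.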

Matt Sanchez, the CTO at CognitiveScale (which bills itself as “the trusted AI company”), also recognizes the need for regulation to curb the negative aspects of emerging AI technology. But he also argues that a broad coalition of groups must work together to set the rules of the road.

“It’s important that CEOs across the globe call for AI regulation, but it can’t be tech companies alone,” Sanchez tells Datanami via email. “Government organizations, academic institutions and the public must all put overarching standards in place that prioritize transparency and trust.”

Specifically, the community needs better ways of measuring the risks of AI through its less visible elements, such as bias and the explainability of models, Sanchez says.

“When it comes to AI, it’s the invisible things that need to be considered with any legislation – and if we do nothing, hidden biases will manifest as unforgiving disparities,” he says. “But by implementing regulations across industries that build trusted and ethical standards for AI, we can unlock a wealth of outcomes that work for everyone.”
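
As one concrete example of measuring an “invisible” element like bias, the short sketch below computes the demographic parity difference, a standard group-fairness metric that compares positive-prediction rates across two groups. The predictions and group labels here are invented for illustration.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between two groups."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Hypothetical model decisions (1 = approved) for two demographic groups.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(preds, groups))  # 0.5, a large disparity

A metric like this is only one lens on bias, but it illustrates the kind of measurable, auditable standard that Sanchez argues governments, academia, and industry could agree on.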

Watch this video: https://www.youtube.com/watch?v=Z5vxRC8dMvs
