73% of organizations use or plan to use gen AI, but far fewer assess the risks
According to a new PwC survey of 1,001 U.S. business and technology executives, 73% of respondents are currently using or planning to use generative AI in their organizations.
However, only 58% of respondents have started assessing AI risks. For PwC, responsible AI is about value, safety and trust, and it should be part of a company’s risk management processes.
Jenn Kosar, US assurance leader at PwC, told VentureBeat that six months ago it would have been acceptable for companies to start implementing AI projects without thinking about responsible AI strategies, but that is no longer the case today.
“We are now further along in the cycle, so now is the time to build on responsible AI,” Kosar said. “Previous projects were internal and limited to small teams, but now we are seeing large-scale adoption of generative AI.”
She added that new AI pilot projects actually go a long way toward a responsible AI strategy, as they help companies determine what works best for their teams and how they use AI systems.
Responsible AI and risk assessment have come to the forefront of the news in recent days after Elon Musk’s xAI deployed a new image generation service on social network X (formerly Twitter) via its Grok-2 model. Early users report that the model appears to be largely unrestricted, allowing users to create all sorts of controversial and inflammatory content, including deepfakes of politicians and pop stars committing acts of violence or in overtly sexual situations.
Priorities to build on
Survey respondents were asked about 11 capabilities that PwC identified as “a subset of the capabilities companies most commonly prioritize today.” These include:
- Upskilling
- Embedded AI risk specialists
- Periodic training
- Data privacy
- Data governance
- Cybersecurity
- Model testing
- Model management
- Third-party risk management
- Specialized software for AI risk management
- Monitoring and auditing
According to the PwC survey, more than 80% of respondents reported progress on these capabilities, but only 11% claimed to have implemented all 11. Even so, PwC said, “We suspect that many of them are overestimating progress.”
It added that some of these markers of responsible AI can be difficult to manage, which may be why organizations struggle to implement them fully. PwC pointed to data governance, which must define AI models’ access to internal data and put guardrails around that access. It also noted that “legacy” cybersecurity methods may be insufficient to protect the model itself from attacks such as model poisoning.
Accountability and responsible AI go hand in hand
To help companies in their AI transformation, PwC suggests ways to build a comprehensive responsible AI strategy.
One of them is creating accountability, which Kosar said was one of the challenges for respondents. It is important, she said, that accountability for the responsible use of AI can be traced to a single leader. This means thinking about AI safety as something that goes beyond technology, and appointing either a chief AI officer or a responsible AI lead who works with stakeholders across the company to understand business processes.
“Perhaps AI is the catalyst that brings technology and operational risk together,” Kosar said.
PwC also suggests thinking through the entire lifecycle of AI systems, going beyond theory and implementing security and trust policies across the organization. It also recommends preparing for any future regulations by focusing on responsible AI practices and developing a plan that is transparent to stakeholders.
What surprised her most about the survey, Kosar said, were the comments from respondents who believed that responsible AI added commercial value to their companies. She believes this will encourage more companies to think more deeply about it.
“Responsible AI as a concept is not just about risk, it should also provide value. Companies said they see responsible AI as a competitive advantage that allows them to build their services on trust,” she said.