Explainable AI might overcome the distrust that enterprise network engineers have for AI/ML management tools that have the potential to streamline network operations.

IT organizations that apply artificial intelligence and machine learning (AI/ML) technology to network management are finding that AI/ML can make mistakes, but most organizations believe that AI-driven network management will improve their network operations. To realize these benefits, network managers must find a way to trust these AI solutions despite their foibles. Explainable AI tools could hold the key.

A survey finds network engineers are skeptical

In an Enterprise Management Associates (EMA) survey of 250 IT professionals who use AI/ML technology for network management, 96% said those solutions have produced false or mistaken insights and recommendations. Nearly 65% described these mistakes as somewhat to very rare, according to the recent EMA report "AI-Driven Networks: Leveling Up Network Management."

Overall, 44% of respondents said they have strong trust in their AI-driven network-management tools, and another 42% slightly trust these tools. But members of network-engineering teams reported more skepticism than other groups, such as IT tool engineers, cloud engineers, and members of CIO suites, suggesting that the people with the deepest networking expertise were the least convinced.

In fact, 20% of respondents said that cultural resistance and distrust from the network team was one of the biggest roadblocks to successful use of AI-driven networking. Respondents who work within a network engineering team were twice as likely (40%) to cite this challenge.

Given the prevalence of errors and the lukewarm acceptance from high-level networking experts, how are organizations building trust in these solutions?

What is explainable AI, and how can it help?

Explainable AI is an academic concept embraced by a growing number of providers of commercial AI solutions. It is a subdiscipline of AI research that emphasizes the development of tools that spell out how AI/ML technology makes decisions and discovers insights. Researchers argue that explainable AI tools pave the way for human acceptance of AI technology and can also address concerns about ethics and compliance.

EMA's research validated this notion. More than 50% of research participants said explainable AI tools are very important to building trust in the AI/ML technology they apply to network management. Another 41% said they are somewhat important.

Majorities of participants pointed to three explainable AI tools and techniques that best help with building trust:

Visualizations of how insights were discovered (72%): Some vendors embed visual elements that guide humans through the paths AI/ML algorithms take to develop insights. These include decision trees, branching visual elements that display how the technology works with and interprets network data.

Natural language explanations (66%): These explanations can be static phrases pinned to outputs from an AI/ML tool, or they can come in the form of a chatbot or virtual assistant that provides a conversational interface. Users with varying levels of technical expertise can understand these explanations.

Probability scores (57%): Some AI/ML solutions present insights without context about how confident they are in their own conclusions. A probability score takes a different tack, pairing each insight or recommendation with a score that tells how confident the system is in its output. This helps the user determine whether to act on the information, take a wait-and-see approach, or ignore it altogether (a simple sketch of the idea follows this list).
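To make the probability-score idea concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. It is not drawn from the EMA report or any vendor's product; the interface metrics, model choice, and wording of the explanation are illustrative assumptions. It simply shows how a model's network-health insight can be paired with a confidence score and a canned natural-language explanation.

```python
# Hypothetical illustration (not any specific vendor's product): pair an
# AI-driven network insight with a probability score and a plain-language
# note, so an operator can decide whether to act, wait, or ignore it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic interface telemetry: [utilization %, error rate %, latency ms]
healthy = rng.normal(loc=[40, 0.1, 10], scale=[10, 0.05, 3], size=(200, 3))
degraded = rng.normal(loc=[85, 2.0, 45], scale=[8, 0.5, 10], size=(200, 3))
X = np.vstack([healthy, degraded])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = degraded

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# New observation from a hypothetical interface
sample = np.array([[88.0, 1.7, 50.0]])
confidence = model.predict_proba(sample)[0][1]  # probability of "degraded"

# Static natural-language explanation template keyed to the model's output
print(
    f"Insight: interface eth0 appears degraded (confidence {confidence:.0%}). "
    "Drivers: high utilization, elevated error rate, rising latency."
)
```

In this framing, an operator seeing a 95% confidence score might open a ticket immediately, while a 55% score might prompt the wait-and-see approach the survey respondents described.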
Respondents who reported the most overall success with AI-driven networking solutions were more likely to see value in all three of these capabilities.

There may be other ways to build trust in AI-driven networking, but explainable AI may be one of the most effective and efficient. It offers some transparency into AI/ML systems that might otherwise be opaque. When evaluating AI-driven networking, IT buyers should ask vendors how they use explainable AI to help operators develop trust in these systems.