Why is Trust in AI Falling?

by Drew Robb | Aug 21, 2024 | Blog

The most recent Uptime Institute Data Center Survey found that trust in AI for use in data center operations is on the decline.

“Most operators recognize the benefits of AI and its potential,” said Andy Lawrence, Executive Director of Research at Uptime Institute. “Despite many operators planning to host the technology, trust in AI for use in data center operations has declined for the third year in a row.”

He believes that a certain degree of healthy skepticism may have set in. After all, the history of IT is full of overhyped technologies that promised to revolutionize data center management. These included Windows NT in the 1990s, information lifecycle management and deduplication in the early 2000s, and other would-be technology saviors such as Data Center Infrastructure Management (DCIM), the Internet of Things (IoT), digital twins, and augmented reality. Some delivered on their promises, while others produced only modest results. And even successes like the cloud and virtualization were sometimes marketed over-aggressively.

Data center managers were more positive about AI three years ago. In 2021, Uptime Institute found that 76% of respondents would trust an adequately trained AI model to make operational decisions in the data center. By 2024, only 58% answered the same question positively.

Some of the negative aspects of AI cited in the study included:

  • Lack of decision-making transparency and accountability
  • Cybersecurity risks introduced by additional network connections
  • AI creating additional points of failure

Security Rears Its Head

Most new technologies go through an initial broad deployment cycle followed by a slowdown due to security concerns. A couple of decades ago, Windows PCs were all the rage. Then viruses appeared on the scene and threw a wrench in the works, and it took a year or two of security work to come to terms with the setback. Similar things happen with new software. It’s great for a while – until hackers find a backdoor or spot serious flaws they can exploit. A rash of patches and updates follows. If the software is patched up sufficiently, it then heads into the mainstream.

A similar thing may be happening with Generative AI (GenAI). In the weeks following its release, ChatGPT peaked at 1.5 to 2 billion visits per month, but traffic has since waned to around 600 million monthly visits. Why the slowdown? A great many people grabbed ChatGPT and similar tools to help them write emails, create web content, and produce sales collateral, feeding millions of prompts and queries into GenAI engines.

Stories began to appear about the perils of ChatGPT. For instance, sensitive data was being provided to GenAI tools – CRM databases, personally identifiable information, healthcare data, and proprietary information – behavior that could expose an organization to a data breach or cyberattack. Further, the answers and conclusions spat out by ChatGPT could sometimes contain bias, hallucinations, proprietary or sensitive information, and, at times, plagiarized content. Lawsuits began to mount from creative professionals alleging that their work had been stolen by GenAI tools and reproduced in responses, or used as the basis for these engines to “create new content.”

What wasn’t initially understood is that these tools run in the public cloud and need to be treated with the same security standards as any other public cloud service. To make matters worse, a host of vendors have come up with third-party add-ons to enhance the user experience, simplify operation, or add new features to ChatGPT and other popular GenAI tools. While tools from OpenAI, Google, and Microsoft are subjected to a fair amount of security rigor, those from some smaller vendors may have escaped attention. Data, it emerged, was being fed through some of these third parties insecurely or in ways that potentially violated privacy rules. No wonder 80% of respondents in the GenAI Market Report from SAS raised serious data privacy and security concerns about their use of GenAI.

What we are seeing now is the formulation and standardization of GenAI security best practices. Further, the vendor community is developing tools to safeguard users of GenAI, as well as ways to monitor risk and police proper usage by employees.
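To illustrate the kind of safeguard these emerging tools provide (this is a simplified sketch, not any specific vendor’s product), a pre-submission filter might redact obvious PII patterns from a prompt before it ever reaches a public GenAI service. The patterns and function names below are assumptions for demonstration only:

```python
import re

# Hypothetical illustration: redact common PII patterns from a prompt
# before it is sent to a public GenAI service. Real safeguarding tools
# use far more sophisticated detection; these regexes are simplified.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US Social Security number
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact known PII patterns and report which kinds were found,
    so a monitoring tool could also log the policy violation."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings
```

The returned list of findings is what lets such a filter double as a risk monitor: security teams can see *which* categories of sensitive data employees are attempting to submit, without logging the data itself.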

Onward and Upward

Despite this setback, don’t expect GenAI to falter. 91% of respondents still think it likely that AI will become broadly used over the next five years as a way to improve operational efficiency and availability.

Further, hype will continue to thrive in such a vibrant area of technology. Some of the tools being developed will add real value while others will be found to have more sizzle than substance.

“Some AI products will deliver value for money, but others will inevitably be promoted to ride the AI wave, confusing customers with machine learning terminology, yet offering no substantial benefit,” said Lawrence. “With highly publicized failures of generative AI systems throughout 2023 and continuing into 2024, the impressive results produced by simple, well-proven AI models in industrial settings are often ignored.”

Thus, those in the data center should keep their feet on the ground and channel their GenAI aspirations through well-defined use cases that are likely to add real value.

According to McKinsey, those that don’t get carried away with the hype and deploy GenAI sensibly can win big: businesses can add anywhere from 4% to 20% in revenue, depending on the industry they serve.

Real-time monitoring, data-driven optimization.

Immersive software, innovative sensors and expert thermal services to monitor, manage, and maximize the power and cooling infrastructure for critical data center environments.


Drew Robb

Writing and Editing Consultant and Contractor

Drew Robb has been a full-time professional writer and editor for more than twenty years. He currently works freelance for a number of IT publications, including eSecurity Planet and CIO Insight. He is also the editor-in-chief of an international engineering magazine.
