Overcoming AI Obstacles: Lessons from AI Practitioners in the Enterprise
3 min read

In our continuous effort to better understand and support our customers, we regularly conduct end-user surveys. We recently sponsored a study with IDC* that surveyed teams of data scientists, data engineers, developers and IT professionals working on AI projects in enterprises around the world. Our goal was to identify the biggest challenges they face and best practices for more mature AI teams. The study revealed that companies’ AI challenges vary with the maturity level of teams as they begin to test and operationalize AI and GenAI. However, several key findings emerged as persistent issues at all maturity levels, which now inform our strategy for developing intelligent data infrastructure to support AI.

One of the top findings was that 63% of respondents said right-sizing storage for AI needs major improvements or a complete overhaul, with storage bottlenecks persistently slowing AI model development. To address this, we are working closely with NVIDIA to qualify solutions for model training and have optimized ONTAP for SuperPOD qualification, which is currently being tested. Our CEO recently announced a new data infrastructure designed for the AI era, which will independently scale capacity and performance to handle the needs of the largest foundation model development and scale down for inference. This infrastructure will run both on-premises and as software in the world's largest public clouds, offering the ONTAP data management services critical to effective and accountable AI.

Another key finding was that respondents cited limited data access caused by infrastructure constraints as the number one reason AI projects failed. To address this challenge, we are focused on simplifying and unifying data storage to support better data access. We provide a single data and control plane across NetApp's data landscape, spanning the edge, the data center, and the public cloud. This approach natively supports all data formats and efficient data movement, bringing data to the right resources, whether on-premises or in one of the hyperscalers, for each stage of the data pipeline. Additionally, we expose our capabilities to the tools that data teams use, such as AWS SageMaker, Google Vertex, and Azure ML Studio.

The third key finding was that only 20% of respondents have mature, centralized data governance and security policies for AI. This is an area where we will place great emphasis going forward. NetApp differentiates itself with policy-driven security at the storage layer and uses continuously updated AI/ML models to detect and respond to threats in real time with 99%+ accuracy. This can be used to protect both data and models within the AI workflow. We are also developing tools that let data scientists get their work done more safely and quickly, in compliance with privacy laws. These tools will accelerate and simplify data discovery and curation, provide assurance of safe and compliant AI, guarantee accuracy and traceability, and integrate with data science workflow tools.

In conclusion, the insights from the IDC survey have significantly influenced our planning and approach to developing AI-ready intelligent data infrastructure. By addressing storage bottlenecks, improving data access, and strengthening data governance and security, we are better positioned to help our customers leverage AI and GenAI effectively. For more information, you can:

  • Watch this conversation between NetApp's Gabie Boko, CMO, and Hoseb Dermanilian, Global Head of AI GTM & Sales
  • Get access to the IDC study to self-assess your organization's AI maturity and learn best practices
  • Register for the NetApp-IDC-NVIDIA webinar to dive deeper into the survey results
  • Explore our CEO's AI vision from the last INSIGHT conference
*N = 1,220 – IDC Transformation Study, March 2024