Report: 37% of ML leaders say they don’t have the data needed to improve model performance

A new report by Scale AI uncovers what is and isn't working in AI implementation, and the best practices ML teams can use to move from testing to real-world deployment. The report explores every stage of the ML lifecycle — from data collection and annotation to model development, deployment, and monitoring — to understand where AI innovation is being bottlenecked, where breakdowns occur, and which approaches are helping companies succeed.

The report's goal is to shed light on what it really takes to unlock the full potential of AI, and to help organizations and ML practitioners clear their current hurdles, adopt best practices, and ultimately use AI as a strategic advantage.

For ML practitioners, data quality is one of the most important factors in their success, and according to respondents, it is also the most difficult challenge to overcome. In the study, more than one-third (37%) of all respondents said they do not have the variety of data they need to improve model performance. Beyond the lack of variety, quality is also a problem: only 9% of respondents indicated their training data is free from noise, bias, and gaps.

The majority of respondents report problems with their training data. The top three issues are data noise (67%), data bias (47%), and domain gaps (47%).

Most teams, regardless of industry or level of AI maturity, face similar challenges with data quality and variety. Scale's data suggests that working closely with annotation partners helps ML teams overcome challenges in data curation and annotation quality, accelerating model deployment. ML teams with no engagement with annotation partners are the most likely to take more than three months to obtain annotated data.

The survey was conducted online within the United States by Scale AI from March 31 to April 12, 2022. More than 1,300 ML practitioners, including employees of Meta, Amazon, Spotify, and other companies, were surveyed for the report.

Read the full report by Scale AI.
