Researchers Reduce Bias in AI Models While Maintaining or Improving Accuracy


Machine-learning models can fail when they make predictions for people who were underrepresented in the datasets they were trained on.

For example, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model might then make inaccurate predictions for female patients when deployed in a hospital.

To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
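As a rough illustration of that trade-off (not the MIT method), here is a minimal sketch of balancing by subsampling, assuming a pandas DataFrame with a hypothetical subgroup column; every subgroup is cut down to the size of the smallest one, which is exactly why so much data can be lost:

```python
import pandas as pd

def balance_by_subsampling(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Downsample every subgroup to the size of the smallest subgroup."""
    min_size = df[group_col].value_counts().min()
    return (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_size, random_state=seed))
          .reset_index(drop=True)
    )

# Example: if 90 percent of patients in `df` belong to one group, balancing on
# that column discards most of their rows, which is the data loss described above.
```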

MIT researchers developed a new technique that identifies and removes specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer data points than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.

In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.

This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For instance, it might someday help ensure that underrepresented patients aren't misdiagnosed due to a biased AI model.

"Many other algorithms that attempt to resolve this issue presume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are adding to this predisposition, and we can find those information points, eliminate them, and get better efficiency," states Kimia Hamidieh, an electrical engineering and computer system science (EECS) graduate trainee at MIT and co-lead author of a paper on this method.

She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev; Andrew Ilyas MEng '18, PhD '23, a Stein Fellow at Stanford University; and senior authors Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and Aleksander Madry, the Cadence Design Systems Professor at MIT. The research will be presented at the Conference on Neural Information Processing Systems.

Removing bad examples

Often, machine-learning models are trained using huge datasets gathered from many sources across the internet. These datasets are far too large to be carefully curated by hand, so they may contain bad examples that hurt model performance.

Scientists also know that some data points affect a model's performance on certain downstream tasks more than others.

The MIT researchers combined these two ideas into an approach that identifies and removes these problematic datapoints. They seek to solve a problem known as worst-group error, which occurs when a model underperforms on minority subgroups in a training dataset.
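For concreteness, worst-group error is simply the error rate of the subgroup the model handles worst. A minimal sketch, assuming arrays of labels, predictions, and subgroup ids (the function name and layout are illustrative, not from the paper):

```python
import numpy as np

def worst_group_error(y_true, y_pred, groups):
    """Return the highest per-subgroup error rate (worst-group error)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return max(
        np.mean(y_pred[groups == g] != y_true[groups == g])
        for g in np.unique(groups)
    )

# Overall error can look fine while one small subgroup fails badly:
print(worst_group_error(
    y_true=[1, 1, 0, 0, 1, 1],
    y_pred=[1, 1, 0, 0, 0, 0],
    groups=["a", "a", "a", "a", "b", "b"],
))  # group "a" error is 0.0, group "b" error is 1.0 -> prints 1.0
```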

The researchers' new technique is driven by prior work in which they introduced a method, called TRAK, that identifies the most important training examples for a specific model output.

For this new technique, they take incorrect predictions the model made about minority subgroups and use TRAK to determine which training examples contributed the most to that incorrect prediction.

"By aggregating this details throughout bad test predictions in the proper way, we are able to discover the specific parts of the training that are driving worst-group precision down overall," Ilyas explains.

Then they remove those specific samples and retrain the model on the remaining data.

Since having more data usually yields better overall performance, removing only the samples that drive worst-group failures maintains the model's overall accuracy while boosting its performance on minority subgroups.
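The sketch below illustrates this remove-and-retrain loop under stated assumptions: the hypothetical `attribution_scores` function stands in for a TRAK-style data-attribution method and its interface here is an assumption, not the actual TRAK library API; `train_fn` is any retraining routine the practitioner already has.

```python
import numpy as np

def remove_worst_group_drivers(train_set, bad_test_examples, attribution_scores,
                               num_to_remove, train_fn):
    """Drop the training points that most drive failures on a minority subgroup,
    then retrain on the rest (a sketch, not the authors' released code)."""
    total_influence = np.zeros(len(train_set))

    # Aggregate per-training-example influence across all bad test predictions
    # made on the minority subgroup.
    for example in bad_test_examples:
        total_influence += attribution_scores(train_set, example)

    # Remove only the top contributors; most of the dataset stays intact,
    # which is why overall accuracy is largely preserved.
    drop_idx = set(np.argsort(total_influence)[-num_to_remove:])
    cleaned = [ex for i, ex in enumerate(train_set) if i not in drop_idx]

    return train_fn(cleaned)
```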

A more accessible technique

Across three machine-learning datasets, their method outperformed multiple techniques. In one instance, it boosted worst-group accuracy while removing about 20,000 fewer training samples than a conventional data balancing method. Their technique also achieved higher accuracy than methods that require making changes to the inner workings of a model.

Because the MIT method involves changing a dataset instead, it would be easier for a practitioner to use and can be applied to many types of models.

It can also be used when bias is unknown because subgroups in a training dataset are not labeled. By identifying datapoints that contribute most to a feature the model is learning, practitioners can understand the variables it is using to make a prediction.

"This is a tool anyone can utilize when they are training a machine-learning model. They can look at those datapoints and see whether they are aligned with the capability they are trying to teach the model," states Hamidieh.

Using the technique to detect unknown subgroup bias would require intuition about which groups to look for, so the researchers hope to validate it and explore it more fully through future human studies.

They also want to improve the performance and reliability of their technique and ensure the method is accessible and easy to use for practitioners who could someday deploy it in real-world environments.

"When you have tools that let you critically take a look at the data and determine which datapoints are going to lead to predisposition or other unwanted behavior, it gives you a primary step toward building designs that are going to be more fair and more trustworthy," Ilyas states.

This work is funded, in part, by the National Science Foundation and the U.S. Defense Advanced Research Projects Agency.
