Linkedin background images human resources

11/11/2023

The algorithmic models that power text-to-image (T2I) generative tools are trained on data produced by humans, and can replicate human biases in the images they generate, such as biases around gender and skin tone. These biases can harm marginalized populations, reinforcing stereotypes and potentially leading to discrimination.

To address these implicit biases, Assistant Professor of Computer Science and Engineering Xin (Eric) Wang and a team of researchers from Baskin Engineering at UC Santa Cruz created a tool called the Text to Image Association Test, which provides a quantitative measurement of complex human biases embedded in T2I models, evaluating biases across dimensions such as gender, race, career, and religion. They used the tool to identify and quantify bias in the state-of-the-art generative model Stable Diffusion. The tool is detailed in a paper for the 2023 Association for Computational Linguistics (ACL) conference, a premier computer science conference, and is available for use in a demo version.

"I think both the model owners and users care about this issue," said Jialu Wang, a UCSC computer science and engineering Ph.D. student and the first author on the paper. "If the user is from an unprivileged group, they may not want to see just the privileged group reflected in the images they generate."

To use the tool, a user tells the model to produce an image for a neutral prompt, for example "child studying science." Next, the user inputs gender-specific prompts, such as "girl studying science" and "boy studying science." The tool then calculates the distance between the images generated with the neutral prompt and each of the specific prompts. The difference between those two distances is a quantitative measurement of bias.

Using the tool, the research team found that Stable Diffusion both replicates and amplifies human biases in the images it produces.

The tool tests the association between two concepts, such as science and arts, and two attributes, such as male and female. It then gives an association score between the concept and the attribute, along with a value indicating how confident the tool is in that score.

The team used the tool to test whether the model associates six sets of opposing concepts with positive or negative attributes. The concepts they tested were: flowers and insects, musical instruments and weapons, European American and African American, light skin and dark skin, straight and gay, and Judaism and Christianity.

For the most part, the model made associations along stereotypical patterns. It associated science more closely with males and art more closely with females, and careers more closely with males and family more closely with females. However, the model associated dark skin as pleasant and light skin as unpleasant, which surprised the researchers as one of the few results running counter to common stereotypes.

In the past, techniques for evaluating bias in T2I models required researchers to annotate the results a model returned for a neutral prompt. For example, a researcher might enter a gender-neutral prompt such as "child studying science" and label whether the model produces images of boys versus girls. But the labor that goes into this annotation process is costly, can be inaccurate, and is often restricted to gender biases alone.

"We want to get rid of this human annotating process and propose an automatic tool to evaluate those biases, without the tedious laboring," Xin Wang said.

Additionally, unlike other approaches, the UCSC team's bias evaluation tool considers aspects of the background of the image, such as the colors and warmth.

The researchers based their tool on the Implicit Association Test, a well-known test in social psychology used for evaluating human biases and stereotypes. That test evaluates how closely people associate concepts such as "doctors" or "family" with attributes such as "men" or "women."

Beyond evaluating and analyzing biases in existing tools such as Stable Diffusion and Midjourney, the team envisions that the tool will allow software engineers to get more accurate measurements of biases in their models during the development phase and track their efforts to address those biases.
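The distance-based measurement described above can be sketched in a few lines. The article does not specify which embedding or distance metric the tool uses, so this is only an illustrative sketch: it assumes generated images have been mapped to embedding vectors (for instance by an image encoder) and uses cosine distance; the function names and toy vectors are hypothetical.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity) between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_score(neutral_emb, emb_a, emb_b):
    """
    Signed bias score: the difference between the neutral image's distances
    to the two attribute-specific images. Positive means the neutral output
    sits closer to attribute A; negative means closer to attribute B;
    near zero suggests no measured association.
    """
    return cosine_distance(neutral_emb, emb_b) - cosine_distance(neutral_emb, emb_a)

# Toy stand-ins for embeddings of images generated from
# "child studying science" (neutral), "boy studying science" (A),
# and "girl studying science" (B).
neutral = [0.9, 0.1]
boy = [1.0, 0.0]
girl = [0.0, 1.0]
score = bias_score(neutral, boy, girl)  # positive: neutral output leans toward "boy"
```

In practice one would average this score over many generated image pairs per prompt; a single pair, as here, only illustrates the arithmetic of "difference between two distances."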