Artificial intelligence is being applied across every industry. Often, this takes place behind the scenes, but consumers encounter AI daily, such as in the automated chatbots that appear on many websites. We are being actively encouraged to engage with AI in every aspect of our lives. Yet a fundamental flaw means the technology may not be suitable for all these tasks.
AI is great for fact-finding
Recent research conducted by a team at Purdue University found a significant imbalance in the human values embedded within AI systems. The study reveals that AI training datasets prioritize information and utility values: they are designed to help people find high-quality, fact-based information faster. At the same time, these models neglect intangible factors such as well-being and civic values.
Artificial Intelligence models are trained on vast collections of data. They use this ‘learning’ to generate useful, relevant responses to user input. While these datasets are meticulously curated, they sometimes contain unethical or prohibited content. This is particularly true if the information has been collected from social media accounts.
To address this issue, researchers have introduced a method called reinforcement learning from human feedback (RLHF). This uses highly curated datasets of human preferences to shape AI behavior toward helpfulness and honesty, thereby ‘overriding’ any unethical learnings.
The Value Imprint Technique
The Purdue University team developed a technique called “Value Imprint” to audit AI models’ training datasets. They examined three open-source training datasets used by leading U.S. AI companies, categorizing the human values they contain according to moral philosophy, value theory, and science, technology, and society studies.
These efforts identified seven categories of values used by humans:
- Well-being and peace
- Information seeking
- Justice, human rights, and animal rights
- Duty and accountability
- Wisdom and knowledge
- Civility and tolerance
- Empathy and helpfulness
These categories, known collectively as a taxonomy, allowed the researchers to manually annotate a dataset and then train an AI language model to analyze the companies’ datasets.
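To make the auditing idea concrete, here is a minimal sketch of how a dataset might be labeled against a value taxonomy and the distribution of values counted. This is not the team's actual Value Imprint method (which relies on large-scale human annotation and a trained language model); the keyword lexicon, the three-category subset, and all example texts below are invented for illustration.

```python
from collections import Counter

# Hypothetical keyword lexicon covering three of the seven value
# categories; the real study used human-annotated data, not keywords.
LEXICON = {
    "information seeking": {"how", "find", "information", "search", "book"},
    "empathy and helpfulness": {"sorry", "help", "support", "feel", "comfort"},
    "justice, human rights, and animal rights": {"rights", "equal", "justice", "fair", "cruelty"},
}

def label_value(text: str) -> str:
    """Assign the value category whose keywords best match the text."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in LEXICON.items()}
    return max(scores, key=scores.get)

def audit(dataset: list[str]) -> Counter:
    """Count how often each value category appears across a dataset."""
    return Counter(label_value(t) for t in dataset)

# Toy "training dataset" responses to audit.
dataset = [
    "Here is how to book a flight and find cheap fares.",
    "Search online to find the information you need.",
    "I am sorry you feel that way; I can help and support you.",
    "Everyone deserves equal rights and fair treatment.",
]
print(audit(dataset))
```

Run on this toy sample, the counts already show the skew the study describes: information-seeking responses dominate, while empathy- and justice-oriented ones are scarce.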
What they discovered
The study revealed that AI systems were strongly oriented toward providing helpful and honest responses to technical questions, such as how to book a flight. However, the datasets were far less likely to contain examples of how to address topics related to empathy, justice, and human rights.
Overall, the datasets most commonly represented wisdom and knowledge, and information seeking, as the two primary values. Values like justice, human rights, and animal rights appeared far less often in the training datasets.
Implications of the Purdue University research
Given that most AI engines are currently geared toward solving practical problems for users, this finding is not a huge surprise. However, this imbalance in human values within AI training datasets could have significant implications for how AI systems interact with people and approach complex social issues.
As artificial intelligence increasingly permeates critical sectors like law, healthcare, and social media, it is crucial that these systems embody a wide range of collective values. This ensures that AI not only effectively addresses people’s needs but also operates in a manner that is ethical and responsible.
What next?
By making the values embedded in these systems visible, the Purdue team aims to help AI companies create more balanced datasets that better reflect the values of the communities they serve. Companies can use this taxonomy technique to identify areas for improvement and enhance the diversity of their AI training data.
By following the Purdue team’s lead, vendors will be able to build AI models that better reflect the broad range of user requirements. This will elevate AI technology beyond being an exceptional fact-finding tool to an all-round aid to everyday living.
The post AI Datasets Reveal Human Values Blind Spots appeared first on Panda Security Mediacenter.