Ethical Limitations of AI

Artificial intelligence has transformed design and content creation, enabling designers to produce images, wireframes, copy, and much more at unprecedented scale. Tools such as ChatGPT, DALL·E, Midjourney, and Adobe Firefly can generate images in countless styles from almost any prompt. However, AI has its flaws: AI-generated content can be biased, nonsensical, ethically questionable, or simply inaccurate. Because AI use is now so widespread, these flaws matter. As AI continues to evolve, designers must understand its limitations; left unchecked, AI can spread misinformation and reinforce societal biases. By focusing on diversity, transparency, human oversight, fact-checking, and improved AI training, designers can help shape AI into a more ethical and reliable tool for the future.

First, AI can be biased. AI models are trained on data collected from across the Internet, which means they inevitably absorb the biases present in that data. If training datasets contain historical inequalities or stereotypes, the AI will reflect and can even amplify those biases. For example, studies have shown that some AI hiring algorithms discriminated against women because of biased training data that favored male candidates (Dastin, 2018). Similarly, facial recognition AI has been criticized for higher error rates when identifying people of color (Buolamwini & Gebru, 2018). These biases also appear in AI-generated images. Some AI art tools have been criticized for underrepresenting certain ethnicities or genders in their outputs: an image generator asked to depict a "doctor," for instance, may primarily produce images of white men, reflecting historical biases in its training data. AI beauty filters, likewise, often reinforce Eurocentric beauty standards, perpetuating unrealistic ideals.

Furthermore, AI generators do not "understand" information the way humans do. They have no awareness; they generate words from statistical patterns learned during training, a far simpler process than human reasoning. As a result, AI can give nonsensical or contextually incorrect answers, and it often delivers them with complete confidence. In image design, AI also struggles to maintain consistency across a series of images: a person's face or outfit may change unpredictably between images in a generated set, making it unreliable for comics or animation.

One of the biggest dangers of AI-generated content is its potential to spread misinformation. AI can produce false but convincing narratives, making it easier to fabricate fake news, deepfake videos, and misleading social media posts. The problem worsens when AI creates content at massive scale, flooding channels with misinformation that is difficult to detect. For example, AI-generated deepfake videos have been used to impersonate politicians, causing public confusion (Chesney & Citron, 2019). Similarly, AI-generated text has been used to create fake reviews and bot comment spam on social media. Without proper safeguards, AI becomes a tool for deception rather than innovation.

So how can designers solve, or at least mitigate, these problems? To start, they must prioritize diverse and representative datasets: AI models should be trained on data that includes perspectives from many cultures. Companies can also implement bias detection tools that flag discriminatory patterns, and run fairness audits to detect and correct biases in their models before deployment. Regular testing and user feedback can surface biases that were overlooked during training. For AI image generators, designers should test outputs for bias and give users controls to specify diversity in generated results; tools that let users adjust variables like ethnicity and gender representation can help address representation issues.
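The fairness audit described above can be sketched in a few lines. This is a minimal illustration, not a production tool: the `audit_representation` function, the dict schema for generated outputs, and the parity tolerance are all hypothetical, and real audits use richer fairness metrics.

```python
from collections import Counter

def audit_representation(outputs, attribute, tolerance=0.15):
    """Flag demographic groups whose share of generated outputs
    deviates from equal representation by more than `tolerance`.

    `outputs` is a list of dicts with demographic labels (hypothetical
    schema); `attribute` names the label to audit, e.g. "gender".
    """
    counts = Counter(item[attribute] for item in outputs)
    total = sum(counts.values())
    expected = 1 / len(counts)  # equal share under strict parity
    flagged = {}
    for group, n in counts.items():
        share = n / total
        if abs(share - expected) > tolerance:
            flagged[group] = round(share, 2)
    return flagged

# Example: 100 images generated from the prompt "doctor"
sample = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(audit_representation(sample, "gender"))
# {'male': 0.8, 'female': 0.2}
```

A report like this would trigger further testing or dataset rebalancing before the model ships; a balanced output set returns an empty dict and passes the audit.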

Another way to build trust in AI is to make its decision-making processes more transparent. Designers should ensure that AI content is clearly labeled, typically with a watermark or disclaimer, so users can distinguish it from human-made work. Platforms like YouTube and Instagram, for example, have started labeling posts and videos that were, or may have been, made with AI to prevent misinformation. AI explainability tools can also help users understand how a system reaches its conclusions, whether through a short written explanation or a simple diagram of how the model works.
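The labeling idea can be sketched as a small wrapper that attaches both a machine-readable provenance record and a human-readable disclaimer. The function name and dict format here are hypothetical; real platforms use provenance standards such as C2PA content credentials rather than an ad-hoc structure like this.

```python
def label_ai_content(content, model_name, status="generated"):
    """Attach a provenance record and a visible disclaimer to AI content.

    A minimal sketch under assumed conventions: `status` is either
    "generated" or "possibly_generated", mirroring platforms that label
    content as made or potentially made by AI.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "status": status,
        },
        "disclaimer": f"This content was created or altered with AI ({model_name}).",
    }

post = label_ai_content("sunset over a neon city", "example-image-model")
print(post["disclaimer"])
```

The machine-readable part lets platforms filter or flag AI content automatically, while the disclaimer serves the human reader directly.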

Rather than replacing humans, AI should be treated as an assistant that enhances human creativity. Designers can integrate AI in ways that require human oversight, ensuring that final outputs are reviewed before publication. AI concept art can serve as inspiration for artists rather than as finished pieces, and graphic designers can use AI to speed up the creative process while refining and modifying outputs as needed.

To reduce misinformation, AI models should be designed with built-in fact-checking mechanisms. Developers can integrate AI with trusted knowledge bases, such as scientific journals or reputable news sources, to produce more accurate outputs. AI systems should also let users report inaccuracies, improving reliability over time. Some AI tools already display warnings when generating uncertain information, and expanding this feature can further reduce errors.
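The uncertainty warning mentioned above might look like the sketch below. Everything here is an assumption for illustration: the `confidence` score (0 to 1) would in practice come from the generation pipeline, e.g. from token probabilities or from agreement with a trusted knowledge base, and the threshold is arbitrary.

```python
def attach_uncertainty_warning(answer, confidence, threshold=0.7):
    """Prepend a caution to answers the system is not confident about.

    `confidence` is a hypothetical 0-1 score supplied by the generation
    pipeline; answers below `threshold` get a visible warning.
    """
    if confidence < threshold:
        return ("Warning: this answer may be inaccurate; please verify "
                "it against a trusted source.\n" + answer)
    return answer

print(attach_uncertainty_warning("The treaty was signed in 1887.", 0.35))
```

Pairing this with a user-facing "report an inaccuracy" button closes the loop: low-confidence outputs are flagged up front, and user reports feed back into model improvement.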

Finally, AI should incorporate ethical guidelines that respect artists' rights. This could include opt-in training databases, allowing artists to choose whether their work is used for AI training. Companies should also explore compensation models for artists whose work contributes to training datasets.
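An opt-in training database reduces, at its core, to a consent filter applied before training. This sketch assumes a hypothetical consent registry mapping artist IDs to opt-in flags; anyone absent from the registry is excluded by default, which is the conservative choice for an opt-in (rather than opt-out) scheme.

```python
def filter_opted_in(works, consent_registry):
    """Keep only works whose artists have explicitly opted in to AI training.

    `works` is a list of dicts with an "artist_id" key (hypothetical
    schema); `consent_registry` maps artist IDs to a boolean flag.
    Missing artists default to excluded.
    """
    return [w for w in works if consent_registry.get(w["artist_id"], False)]

works = [
    {"artist_id": "a1", "title": "Dawn"},
    {"artist_id": "a2", "title": "Dusk"},
    {"artist_id": "a3", "title": "Noon"},  # not in the registry
]
registry = {"a1": True, "a2": False}
print(filter_opted_in(works, registry))
# [{'artist_id': 'a1', 'title': 'Dawn'}]
```

The same registry could drive a compensation model: logging which opted-in works actually entered a training run gives a basis for paying the artists involved.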

Ultimately, AI presents exciting opportunities alongside serious challenges. Bias, misinformation, ethical concerns, and creative limitations all highlight the need for responsible AI design. By prioritizing diversity, transparency, human–AI collaboration, and ethical AI training, designers can create a future where AI serves as a valuable tool rather than a disruptive force. AI is not a replacement for human creativity and judgment, but with responsible design it can become a powerful ally in innovation. As AI continues to evolve, designers and developers must actively shape its ethics, its creativity, and its boundaries.

References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the Conference on Fairness, Accountability, and Transparency, 77-91.
Chesney, R., & Citron, D. (2019). Deepfakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753-1819.
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
OpenAI. (2025). ChatGPT (GPT-4.5) [Large language model]. https://chat.openai.com/chat