Sales 3.0 Labs Ethics Series - OpenAI
Every day, ChatGPT processes more than 1 billion queries. Most of the people making those queries would consider themselves ethical, and supporters of ethics in a vague, general sense of the term. But what happens when one is actually confronted with specific ethical facts about the solutions and tools they use?
Take social media, for example. It is well documented in the public domain that TikTok is used by the totalitarian government of China not only to spy on US citizens but also to manipulate their sentiment. We also know that Meta’s practices have wreaked immeasurable harm on youth, with more evidence and examples continually being uncovered.
One can argue “they all do it” or “the government will handle it,” but the only true way to change the behavior of such companies is to stop using their products. And yet there is little to no appetite among users to change their behavior in the name of ethics. In other words, ethics are great as long as individuals don’t have to exert any effort or sacrifice anything.
Disturbingly, it is the same with AI, only the existential risk of ignoring ethics is exponentially greater. Too often, when I begin to explain the ethical issues with OpenAI, for example, to regular ChatGPT users, they go quiet and change the subject, frequently deflecting to a similar issue at another company.
No doubt, all current AI chatbot providers have ethical issues, but that is no excuse to ignore the subject, particularly when it comes to the largest and worst offenders. Before deciding which chatbot(s) to continue using, you should first take a look at the Sales 3.0 Labs EQI (Ethical Quality Index).
Regarding OpenAI (ChatGPT), here are just some of the significant issues found in the Sales 3.0 Labs research:
- Documented cases of users developing unhealthy emotional dependencies.
- No built-in safeguards to ensure human oversight in critical decisions.
- Inadequate diversity testing across different demographics and regions.
- Documented bias issues that shift with model updates.
- Inadequate disclosure of specific training datasets and sources.
- Inadequate explanations of how the AI generates responses.
- Training data includes scraped internet content without user consent.
- Default opt-in rather than opt-out for consumer data usage.
- €15 million fine from Italy's data protection authority (Garante per la Protezione dei Dati Personali) in 2024. Violations included: failing to establish lawful basis for processing personal data for model training, inadequate age verification, and insufficient user notification.
- A court order now requires OpenAI to preserve all ChatGPT logs indefinitely, including deleted conversations.
- Austrian Data Protection Authority complaint filed in 2024 highlighting inability to confirm data origins or prevent inaccurate outputs.
- Various lawsuits from content creators, authors, and publishers claiming unauthorized use of copyrighted material in training data.
- Questions about AI systems mimicking human voices or characteristics including controversy over the "Sky" voice allegedly resembling Scarlett Johansson without permission.
- Lack of transparency about harmful emissions.
- Exploitation of Global South workers.

Sales 3.0 Labs
Through research and education, we strive to elevate ethics and transparency in both AI and the Sales Industry. Our pledge to you: we will continually provide unbiased information to help you cut through the current fog of self-proclaimed AI "experts" and hucksters.