OpenAI releases more Closed AI innovations

The OpenAI logo on a black rectangle against a blue gradient background
Photo by Andrew Neel / Unsplash

Today, the company with perhaps one of the most ironic names in history has unveiled new additions to its artificial intelligence arsenal.

The most notable announcements are the new GPT-4 Turbo model, an API for DALL-E 3, and a store where users can distribute customized versions of the models. Like previous models, these were trained on data that included vast amounts of copyrighted material and other questionably sourced works. Let's talk about it.

These announcements from OpenAI come at a time of heated debate between industry and regulatory bodies over whether applications of AI such as GPT or DALL-E violate copyright law. Very recently, the US Copyright Office opened a request for public comment to gather people's thoughts on new AI regulations.

To no one's surprise, the massive companies partaking in the AI race – like Microsoft, Meta, Google, and Anthropic – pitched in with absolutely unhinged takes on the matter. Their comments, summarized, include "regulating AI's use of copyrighted material hurts small companies" and "copyright holders wouldn't get much money even if payment were required".

Companies like these never cease to amaze me with the sheer amount of horse shit they distribute on a weekly basis. Here's my take on it: building your product off the works of others without their permission or an appropriate license, especially when you profit off of it, is theft. It's not a complex concept. And no, I don't think FOSS alternative frontends are an example of this. Why? Because the content is unmodified, the link to the original is right there, and the poster/author is credited on every page.

But Is Training on That Data Actually Unethical?

Allow me to write up a scenario: imagine you write a book and publish it. A library sees it online and, instead of buying a copy, paraphrases the entire thing, prints it, and adds it to its shelves. That's messed up, right?
Now flip the script to a scenario that isn't purely metaphorical. You're making a song and want to take close inspiration from another song you heard, whether in melody or lyrics. It's not copying, so it seems fair, right? No. You can't do that: you'd need to credit the sample and/or artists, and it often comes with a fee or royalty paid to the original label.

Huh, so why is it that when we take inspiration, it's a copyright violation, but when OpenAI is out here scraping millions upon millions of copyrighted works, it's suddenly perfectly fine? Do the rules not apply when you're rich? "They do, probably," thinks Universal, suing Anthropic while presumably waiting for AI to get good enough to replace artists. And music is just one of the thousand other categories of work AI companies are leeching off – same story everywhere, while only average people face consequences for using it.

Here's a more general "why" question: why is it that we're continuously on the losing side of fights with huge companies? It's as if only their word ever matters, and it's really exhausting.

What Now?

Here's the issue: generative AI can be useful. Not for everyone, and certainly not in every case (looking at you, lawyers), but it can be useful. In fact, this entire blog post was written by ChatGPT. No, just kidding – but that would've been kind of funny. Jokes aside, though, I personally use Claude occasionally to explain concepts to me, summarize certain PDFs I'm reading, or get debugging steps for an issue I can't find on the internet. But I think I can recognize that it's ethically wrong while still using it for personal purposes. What do I mean by "personal"? That I won't distribute anything I'd profit from, and I won't use it to replace for-hire work, such as logo design or writing. Am I in the wrong for using it at all? Yeah, probably.

To answer the "what now?" question, I have absolutely no idea. I don't think many people do. On one hand, it's helpful when used appropriately; on the other, it's harmful in the wrong hands. Do we slow its advancement? If we do, will other countries follow suit? Unlikely. So… do we let the AI companies go wild? It's all uncharted territory.

A Film Recommendation

A still featuring Ken Watanabe. Credit: 20th Century, New Regency, Entertainment One

To close on a more relaxed note, I wanna recommend a film called The Creator. It's a film I saw a few weeks ago in theatres and thoroughly enjoyed. It's a retrofuturistic humanity-vs.-AI film, covering the aftermath of the Western world banning the development of AI while the East pushed forward with it. I usually dislike humanity-vs.-AI films because I'm a bit childish and want us to get a clear win every time. But I gotta say, the story in this one was enough to keep me on the edge of my seat. The VFX are also among the best I've seen anywhere, period. It's a fun sci-fi watch, and I doubt you'll regret checking it out.