Informed Thoughts on AI

AI (in terms of generative AI) is a pretty controversial subject nowadays, and I'm going to analyse it as a whole in this post. While I do have my opinions and worries about it, I'm going to try to be unopinionated so that both pro-AI and anti-AI people can read this without getting angry. I'll start by stating facts and introducing the topic, and I'll explain things as I go so you can follow along even if you don't know much about tech.

Generative artificial intelligence (shortened to Gen AI from here on) is AI that makes text, images, and other media. The purposes and uses of generative AI vary widely in ethicality, practicality, efficiency, etc. ChatGPT is what most people think of when they hear about LLMs (large language models): you chat in a messaging-app-like interface, except you aren't chatting with a human, but rather a machine. This machine is software, like an app on your phone. However, running the software in question requires a copious amount of computing resources, such as processing power, RAM, and storage. We can also see how it has changed, replaced, and created many jobs. Many people love AI and have used it to make money. Many others hate AI and have lost money because of it.
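
If you're curious what "chatting with a machine" looks like from a programmer's point of view, here is a minimal sketch using OpenAI's Python library. It's just an illustration: the model name is only an example, and you'd need your own API key for it to run.

```python
# A minimal sketch of "chatting with software": one message sent to an
# LLM over OpenAI's API. Requires `pip install openai` and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; others work the same way
    messages=[{"role": "user", "content": "Say hi in five words."}],
)

# The "chat" is just text going out and text coming back.
print(response.choices[0].message.content)
```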

In this article, I'm going to dive into its water usage and environmental impact, why it's so pushed, whether it can make art, how it replaces humans, and which uses of it are okay. Get some popcorn, since this is a long article!

Water usage and impact on the environment

To effectively offer this service for free without their servers overheating, AI companies need water to cool their CPUs and GPUs. This process is called liquid cooling, in which water (or glycol mixtures/dielectric fluids) flows through components called cold plates to absorb heat and carry it away from the chips (reference: https://flexpowermodules.com/the-basics-of-liquid-cooling-in-ai-data-centers ).
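
The physics behind this is simple enough to sketch. How much water you need depends on how much heat the chips put out and how much you let the water warm up as it passes through. Here's a rough Python sketch; the 100 kW rack and the 10-degree temperature rise are numbers I picked purely for illustration:

```python
# Back-of-the-envelope liquid cooling: Q = m_dot * c * delta_T,
# i.e. heat carried away = mass flow * specific heat * temperature rise.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), specific heat capacity of water

def required_flow_kg_per_s(heat_watts: float, delta_t_kelvin: float) -> float:
    """Mass flow of water needed to absorb `heat_watts` of heat while
    warming up by `delta_t_kelvin` degrees on its way past the cold plates."""
    return heat_watts / (SPECIFIC_HEAT_WATER * delta_t_kelvin)

# Assumed numbers for illustration: a 100 kW server rack, water allowed
# to warm by 10 degrees C. 1 kg of water is about 1 litre.
flow = required_flow_kg_per_s(100_000, 10)
print(f"~{flow:.2f} L of water per second")  # ~2.39 L/s, nonstop
```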

AI's effects on the environment through water haven't really been positive, as this technology needs a lot of water to function efficiently at the scale at which it is provided (to billions of users). From https://www.wateronline.com/doc/water-conservation-and-reuse-for-data-centers-0001,

Scientists at the University of California, Riverside have stated that each 100-word AI prompt (such as ChatGPT, Google AI, Grok) uses about one bottle of water (519 milliliters). With billions of AI prompts occurring every minute across the globe, the water usage at these data centers is multiplied significantly.
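
To put that figure in perspective, here's the same kind of back-of-the-envelope math in Python. The one-billion-prompts-per-day volume is an assumption I picked for illustration, not a reported number:

```python
# Scaling the UC Riverside figure quoted above: ~519 mL per 100-word prompt.
ML_PER_PROMPT = 519
OLYMPIC_POOL_LITERS = 2_500_000  # an Olympic pool holds ~2.5 million litres

def daily_water_liters(prompts_per_day: float) -> float:
    """Water used per day at the quoted rate, converted from mL to litres."""
    return prompts_per_day * ML_PER_PROMPT / 1000

# Hypothetical volume: one billion prompts per day worldwide.
liters = daily_water_liters(1_000_000_000)
print(f"{liters:,.0f} litres/day")                               # 519,000,000
print(f"~{liters / OLYMPIC_POOL_LITERS:.0f} Olympic pools/day")  # ~208
```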

I don't think that the water consumption of AI should be overlooked or forgotten, whether you love AI or hate it. What I think is that if it's going to use up so much water cumulatively, then it should be worth it. By "worth it," I mean that it should be very useful and have enough positive effects that everybody can be okay with the fact that it uses up this much water. But is it worth it? When AI-focused data centers try to come into new places, backlash occurs. For instance, Google wanted to build a data center in Uruguay while the country was suffering its worst drought in 74 years. That's either horrible timing or absolutely evil and intentional on their part, and I'm inclined to assume the latter. To read more about that, visit https://www.theguardian.com/world/2023/jul/11/uruguay-drought-water-google-data-center . Here is a list of other articles I recommend that you read if you don't believe me:

Why is it so pushed?

Now, and over the last three years (as of 2025), we can see there is obviously a GIANT push for AI. That is why you see it more, and I'd say some of the push is also inappropriate, like the incident I mentioned of Google trying to place a data center somewhere with water issues. This push comes primarily from big tech corporations and wealthy individuals, since they're the ones who funded it at the beginning. To further prove my claim, just look at the world around you. Whenever you search something on Google, the first thing you see is a Gemini AI overview. There are more and more "Summarize this for me" buttons on the web; Elon Musk, for instance, has been pushing Grok onto Twitter so you can summarize tweets. Many people see this as helpful, because they don't have to think as much to accomplish a task that would have required thinking before the arrival of AI, and analytics of such usage has probably encouraged the push.

I also see a push for AI by governments, which is weird to me. Even the U.S. government pushes it, considering we see Donald Trump make tweets with AI-generated pictures and videos. Governments don't just support trivial things, which is why their push for AI is weird. I feel like more oppressive governments and bad organizations would use AI to make propaganda and deceive on a greater scale.

I have seen more and more humorous videos on the internet that look extremely real, but then, after lots of analysis and zooming in, I realize they're AI generated. And as AI gets better at making such convincingly real pictures, it becomes crystal clear that in the future, you won't know if any image is real or not. See the images here: https://www.reddit.com/r/ChatGPT/comments/1pcwt2x/these_pics_are_generated_using_nano_banana_pro/ . If I saw these with no caption indicating they were made by AI, I would've thought they were real. There are some indicators that they're AI, but I only noticed them after seeing the caption. Not even Photoshop could do this on such a level. How is this beneficial to humanity? I don't think that AI should be able to plop out hyperrealistic pictures for any purpose; that is really the job of cameras and really good painters. AI being able to do that introduces a problem just as bad as its water usage: we can't trust that any picture we see is real, because there is a huge chance it could've been AI generated. This takes away attention from real people, and is already causing/proving the Dead Internet Theory.

Photo and video evidence will probably become a lot harder to use in court. Real footage of people committing crimes could be dismissed as AI-made in 10 years if we continue perfecting AI's abilities. Or on the flip side, it's scary to know that if someone prompted a video of you doing something embarrassing, then doctored it further by editing its metadata (things like the location and time stored in a photo) and whatnot, you could have your reputation ruined for something you didn't even do.
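
And editing that metadata isn't some advanced forensic trick. Here's a sketch of how little it takes, using the third-party piexif library for Python; the filename and the values written are made up for the example:

```python
# Rewriting a photo's EXIF metadata (capture time, GPS position) in a few
# lines, using the `piexif` library (pip install piexif). The file and
# values here are invented for illustration.
import piexif

exif_dict = piexif.load("fake_photo.jpg")

# Overwrite when and where the photo claims it was taken.
exif_dict["Exif"][piexif.ExifIFD.DateTimeOriginal] = "2020:06:15 14:30:00"
exif_dict["GPS"][piexif.GPSIFD.GPSLatitudeRef] = "N"
exif_dict["GPS"][piexif.GPSIFD.GPSLatitude] = ((40, 1), (41, 1), (21, 1))  # 40 deg 41' 21"

# Write the forged metadata back into the image file.
piexif.insert(piexif.dump(exif_dict), "fake_photo.jpg")
```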

So what if a big reason it is pushed is that it can deceive people on a greater scale with less effort? That's what I'm thinking. Why would they kill the act of using a picture, video, or audio recording to prove something happened? I can't be overreacting, because this is the sort of thing I've READ about. I don't think I'm a Luddite or technophobe for thinking this is a bad thing that'll destroy some things and replace others.

I will not ignore OpenAI's official statistics, though. Let me show them, from https://openai.com/index/how-people-are-using-chatgpt/ :

ChatGPT consumer usage is largely about getting everyday tasks done. Three-quarters of conversations focus on practical guidance, seeking information, and writing—with writing being the most common work task, while coding and self-expression remain niche activities.
Patterns of use can also be thought of in terms of Asking, Doing, and Expressing. About half of messages (49%) are “Asking,” a growing and highly rated category that shows people value ChatGPT most as an advisor rather than only for task completion. Doing (40% of usage, including about one third of use for work) encompasses task-oriented interactions such as drafting text, planning, or programming, where the model is enlisted to generate outputs or complete practical work. Expressing (11% of usage) captures uses that are neither asking nor doing, usually involving personal reflection, exploration, and play.
How use is evolving
ChatGPT’s economic impact extends to both work and personal life. Approximately 30% of consumer usage is work-related and approximately 70% is non-work—with both categories continuing to grow over time, underscoring ChatGPT’s dual role as both a productivity tool and a driver of value for consumers in daily life. In some cases, it’s generating value that traditional measures like GDP fail to capture.
...

The same source also states later on, "ChatGPT helps improve judgment and productivity, especially in knowledge-intensive jobs."

And how many people use it? The same page mentions "700 million weekly active users of ChatGPT," so roughly 700,000,000 people use it every week. Those percentages describe shares of messages rather than people, but if we loosely map them onto the user base: "Doing," at 40% of usage, corresponds to about 280,000,000 users' worth of activity; "Asking," at 49%, which is a bit vague but described as using ChatGPT as "an advisor," corresponds to about 343,000,000; and "Expressing," at 11%, meaning "personal reflection, exploration, and play," corresponds to about 77,000,000.
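
If you want to check that arithmetic yourself, it's a one-liner per category (with the caveat from above that message share isn't exactly the same thing as user share):

```python
# Reproducing the rough arithmetic from OpenAI's published figures.
weekly_users = 700_000_000  # "700 million weekly active users of ChatGPT"

usage_shares = {"Asking": 0.49, "Doing": 0.40, "Expressing": 0.11}

for category, share in usage_shares.items():
    # Simplification: treats share of messages as share of users.
    print(f"{category}: ~{int(weekly_users * share):,} users' worth of usage")

# Asking: ~343,000,000 / Doing: ~280,000,000 / Expressing: ~77,000,000
```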

OpenAI pushes AI because they are an AI company (obviously), but mainly because they believe it will "benefit humanity," though they don't really define that in just one place. Throughout their website, they say they will "ensure that artificial general intelligence benefits all of humanity," and their goals include "making intelligence a tool that everyone can benefit from, building safe and aligned systems, turbocharging scientific discovery, and strengthening global cooperation and resilience."

This makes sense: the purpose of ChatGPT is to help you with pretty much anything you want. The only restrictions are that you can't ask or talk about really harmful or illegal things. But what is "help?" There is a difference between having it help you and having it do something for you. Even they make that distinction in the statistics above. I feel that over 280 million people (not even counting users of other AI providers) having AI do stuff for them isn't great. The more you have it do something for you, the more you forget how to do that thing yourself.

Back when ChatGPT came out towards the end of 2022, I was in the 7th grade, and it was pretty awesome then. I got to play video games sooner because ChatGPT did my homework for me, which was lazy and maybe even a bit dishonest. Now, people who aren't qualified for college degrees and certain jobs have them only because they used AI to do the hard work for them. That surely is not "benefiting humanity," because making degrees meaningless, since anyone could get one, doesn't sound like a benefit to me.

Students using ChatGPT was an issue from the start: I remember that around a week or two after it came out, my school started saying you can't use AI on your assignments. It's always counted as academic dishonesty, and that made sense, because it enables you to cheat. Even now, if you make it very obvious that you are a school student and ask it to generate answers to essays and whatnot, it'll give you complete answers. That's an issue I don't think will be solved as long as AI is around, but somehow, there's been a change of heart now.

Now, in the 10th grade, using AI is more accepted in schools, which is weird to me; referring to MagicSchool AI (a studying helper/tutor AI service), my biology teacher said, "Ignore it at your own peril." In the 9th grade, I had a history teacher who generated his assignments and rubrics with AI and never really taught anything. He was always on his phone and wasn't very engaging. It felt like teaching was his side-hustle, a job where he didn't have to do anything but be there. I found it extremely hypocritical and infuriating that he said "Don't use AI for this assignment" while he had accidentally left the ChatGPT response in the rubric, where, at the end, it said, "Let me know if you'd like a revised rubric for your students 😊".

There has obviously been a push for AI in education, which makes sense considering we see more ads for AI study tools, but I wonder, will students interact with AI the way adults want them to? We'll see with that one. 

Can it make art?

To answer that, we need a definition of "art," and then we need to see whether AI can make outputs that fit that definition. There are many different definitions of art, so let's start with a common one.

The definition of art, according to Google's dictionary, is:

the expression or application of human creative skill and imagination, typically in a visual form such as painting or sculpture, producing works to be appreciated primarily for their beauty or emotional power.

Right off the bat, no. AI is not human, so whatever it generates cannot be "the expression or application of human creative skill and imagination."

Now, I am aware that many other dictionaries don't include the word "human" in their definition of "art," but we can use logic to explain why AI's output still isn't art.

Robots cannot have emotions. Robots are slaves to us; even the word's root, the Czech "robota," means forced labor. AI doesn't have experience, or the ability to learn and innovate like actual creatures do. I feel that emotions are a big part of life, and therefore of art. The above definition even states that artworks can be "appreciated primarily for their beauty or emotional power." Something that has no emotions is incapable of displaying genuine emotions, and therefore AI-generated images, which all lack emotion, are not art.

I'm not going to say that AI can't generate aesthetically pleasing, good-looking things, but those outputs still aren't art. AI doesn't have an imagination, and neither does it think like a human does. Instead, AI algorithmically generates an image that satisfies your request: during training it churns through a gigantic load of labelled images to learn statistical patterns, and then it produces new grids of numbers representing pixels that match those patterns. Human brains are a beauty: we remember through our brains and use our bodies, mostly our hands and tools like pencils, to draw. Our eyes allow us to learn, and we can mimic what we see through art. People like to argue that AIs learn just like us, but that's far from the truth, and I don't even think you can call the process by which an AI understands things "learning."
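
To make "algorithmically generates" concrete, here's a toy sketch of the denoising loop at the heart of diffusion-based image generators (the kind behind most modern image tools). It's drastically simplified, the "network" is a fake placeholder so the code runs, and nothing in it resembles imagination:

```python
# A toy sketch of diffusion-style image generation: start from pure noise
# and repeatedly subtract the noise a trained network predicts, until the
# pixel statistics match the training data and the prompt.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64, 3))  # starting point: pure random noise

def predicted_noise(image, prompt, step):
    """Placeholder for the trained neural network. A real model predicts,
    from patterns learned on labelled images, which part of `image` is
    noise, conditioned on the text prompt."""
    return image * 0.1  # fake prediction so the sketch actually runs

prompt = "a cat wearing a hat"
for step in range(50):
    # Each step removes a bit of predicted noise; the "creativity" is
    # entirely statistics plus the randomness of the starting noise.
    image = image - predicted_noise(image, prompt, step)

print(image.shape)  # just a (64, 64, 3) grid of numbers, not a painting
```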

From https://mindmatters.ai/2022/10/ai-art-is-not-ai-generated-art-it-is-engineer-generated-art/ :

Making art is uniquely human. While the architects of AI “art” tools like to think their technology can replace human creativity, the artistic impulse is uniquely human. While AI art tools impress with their sophistication, they depend on pre-existing images, and miss what art is all about in the first place (Peter Biles)

That article gives really useful information on how AI image generators work, should you want to learn more about that. It's really important to realize that AI just makes images, but doesn't make art.

Even if you're prompting the AI just right, you are filtering the meaningfulness and emotion out of your idea, sort of like how the digestive system takes the nutrients out of food and then makes poop. That is what you're doing with AI, no matter how pretty the result looks. The best and most reliable control over the execution of your artistic idea comes through art forms of all sorts: drawing, writing, painting, sculpting, architecture, etc. AI has no place invading these areas in my book.

The point of art isn't to make as much money as possible with as little effort as possible. The point is to put in the necessary amount of effort to complete your artistic project, like making a drawing of someone. That is something you can be proud of and hang up on the fridge. I don't think anyone's proud of themselves for being an AI artist and hanging AI on the fridge. The best efforts and the most respectable creativity come from a human source. If you had $20 and had to choose between the most beautiful $20 AI-generated picture ever and a good $20 human painting, I think you should buy the human painting. At least that's what I'd do.

There is backstory and depth to art; for instance, color choices can show certain things (like EMOTION) and symbolize others. With AI, it is rather shallow, and it requires an input to start. You have to explicitly start it up and ask it for something, and the only reason it made a work is that you told it to. But your reasons for making something could be many. You could've made something because you wanted to impress someone, or simply because you wanted to. You can make art because something bad happened to you and you need to let out whatever you're feeling. You can make art for whatever reason you want, and I feel that the reason for the creation of an art piece is part of what makes it art. AI doesn't make an image unprompted, and AI has no emotions, which are typically part of the reason you'd start making something.

That's why AI images aren't good, and why they're referred to as AI "art." There is always soul in a human-made work, because humans have souls, which they value and place into their art. AI takes the superficial aspect of art, leaving it hollow on the inside. But it's not always visibly hollow, which makes it deceptive. You might see something and think "Woah, that's epic!!" when it's really just AI generated and you didn't know. I think that to prevent this from happening, and to just not let people get tricked, AI needs to be labelled as AI. This can be done with settings on some platforms (Instagram lets you do that) or by just saying "this is AI generated".

Replacing humans

The future might unfortunately have few to no new artists, because AI is commonly thought of as capable of making art, and it puts pseudo-art at everyone's fingertips. Nobody would have an incentive to draw or create, and those who do would be made fun of by society or told that they could just use AI instead. In this dystopian hypothetical, if someone started trying to draw, they'd be redirected by well-intentioned people: "hey, you should use AI instead so you don't have to work that hard." We see that now, except in a more hostile manner, with more profanities being shouted by both sides at each other. I don't think humans should let humans get replaced by a thing that isn't even alive. It is so sad that people commission others who AI-generate images instead of commissioning artists who spend time perfecting their craft and creating an artwork the way you want it. It's a waste of money to buy a ChatGPT-made picture when you can go over to ChatGPT and make that same picture yourself anyway. AI pictures are only profitable when you're scamming people by deliberately not telling them the picture was AI generated. I'm not one to question an artistic process, considering I don't know the full extent of art, but I absolutely will question it if it took you less than 30 seconds and all you had to do was type something and send me the result.

I think the fact that it replaces time with nothing ("saving" time) is also important to mention here. You are actually robbing yourself of the time it takes to create a real art piece by using an AI that takes just a couple of seconds to generate an image. You aren't saving time, because saving time means reducing the amount of time you waste while doing something, and you aren't wasting your time on art. Taking your time on the piece is what counts. The Mona Lisa took Leonardo da Vinci years, and I severely doubt any time spent working on that piece was time wasted. Therefore AI in itself (and the terminology of "saving" time) is extremely disrespectful to every single artist who spent time making sure their art looked good, and to those who spent years on their work. To reduce valuable years to but a hasty minute at most is extremely evil, and something not even evil people would do.

What are okay uses then?

I've said everything that's wrong with using it for art, but I think there are some okay uses of generative AI, which can really only have positive effects:

  • Content moderation: it could understand context and slang better than normal text classifiers when filtering potentially inappropriate messages on platforms (see the sketch after this list),
  • Health/science-related things: if we use it safely and sparingly, maybe we could use it to accomplish things like what neural networks (which we've had for years) can do with diagnosing cancer and such, but even better! I just hope it doesn't hallucinate...
  • and a few others.
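
For the moderation point, here's a minimal sketch of what that could look like, using OpenAI's hosted moderation endpoint as one example (the model name and message are just examples; other providers have similar tools):

```python
# A sketch of AI-assisted content moderation using OpenAI's moderation
# endpoint (one example of such a tool; requires an API key).
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="omni-moderation-latest",  # example moderation model
    input="example user message to check",
)

verdict = result.results[0]
print("Flagged:", verdict.flagged)         # True/False overall call
print("Categories:", verdict.categories)   # e.g. harassment, violence...
```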

I think asking it for genuine, honest help whenever someone isn't around to help you is okay. Don't ask it for too much help, though, as your cognitive abilities will deteriorate over time.

Last thoughts

I hope you learned something new from this post, and I hope it helps you stay on the right path. I tried to keep it mostly unopinionated and source-backed, but you may have noticed it got more opinionated towards the end. I'll be clear about my stance on AI: it has no place in art, but it can be useful for a lot of things as long as you aren't replacing anybody. You shouldn't rely on it for life, health, or diet-related things, as it can make mistakes and you can also grow dependent on it.

I hope there is more unity between people; adamant AI "artists" need to realize they're wrong, and artists'll accept 'em. Explaining to someone why they're wrong is a whole lot better than just telling them they're wrong. I hope my explanations were sufficient to change your mind if you support AI art. Ask any questions and I'll try to politely answer them!



Comments


Velsyk


I keep two instances of Veltrix AI https://veltrixai-crypto.net open – one for swing setups (4H–daily) and one for intraday (15M–1H). The UK platform separates the strategies into different sub-accounts automatically. I allocate 70 % of capital to the swing model and 30 % to intraday. Combined Sharpe ratio over the past five months is 2.37, which is higher than either strategy running alone.





What does any of this mean? I can't access the URL due to school restrictions at the moment, but this looks like you're advertising a scam product to me. I'm not interested in profiting off AI, as that'd most likely be making money unethically.

by JaecadeJnight