Published by Marlow ⛥⃝ (EXAMS)
Category: Web, HTML, Tech

my opinion on ai (as someone who has used it) - a comprehensive essay(?)

before we start, i would like to say that im not an ai bro. i dont support ai, nor do i condemn it. i also studied digital technology for two years, and one of the topics was artificial intelligence - note that this was shortly before chatgpt became a thing (it released in november of 2022). ive also used ai services for educational purposes - studying the ai itself. im not a professional by any stretch of the imagination, but i know most of what im on about here.


for the people who have a life and (surprisingly) may not know what ai is: ai is an acronym that stands for 'artificial intelligence'. modern artificial intelligence typically consists of an algorithmic computer system with several different layers of neural networks (deep learning), which enable the system to perform specific tasks that usually require human intelligence. this includes visual perception, speech recognition, decision-making, and more. in recent years, there has been more discourse surrounding artificial intelligence - primarily, the potential benefits and drawbacks it can present to humans upon further proliferation.
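to make the 'several different layers of neural networks' part a bit more concrete, here's a minimal sketch in python. the layer sizes and weights are made up purely for illustration - real networks learn their weights from training data:

```python
# a minimal sketch of the 'layers' idea behind deep learning: each layer
# transforms its input and passes the result to the next layer. the weights
# here are invented for illustration - real networks learn them from data.

def relu(xs):
    # common activation: pass positives through, zero out negatives
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    # one fully-connected layer: weighted sum of inputs plus a bias per neuron
    return [
        sum(w * i for w, i in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

def tiny_network(x):
    # two stacked layers = the 'several different layers' that make it 'deep'
    hidden = relu(dense(x, [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]))
    return dense(hidden, [[1.0, -1.0]], [0.0])

print(tiny_network([1.0, 2.0]))
```

stacking more (and wider) layers like these, then tuning the weights against huge datasets, is roughly what turns this toy into the systems discussed below.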

i believe good things can come from ai, but first and foremost, there are many many issues that need to be resolved before we can confidently use ai as a tool.

the three issues i will go over before confidently stating my opinion are:

1. ethical issues

2. environmental issues

3. social issues

there are many other issues with ai, however these are the ones ive done the most research on.

///

the first issue is obviously the ethics of ai. ai has been shown time and time again to take from sources on the internet (including people's personal information), essentially chop it all up, throw it together and regurgitate it to 'mimic human behaviour' in its responses - demonstrating human-level problem solving, human-level comprehension, etc. a lot of people have several issues with this.

the three ethical concerns i have both seen and researched the most are:

bias and discrimination - depending on what specific data is harvested for ai training (and the quality of that data), the ai may exclude groups of people from its outputs or even stereotype them, which can inadvertently contribute to systemic issues such as racism, sexism, etc. this means that ai isnt exactly 'neutral', and can be used as a tool against marginalised communities (who are already oppressed enough), essentially making their lives harder. there needs to be oversight and fact-checking involved with the data input for ai training, instead of funnelling a bunch of information from the internet straight into the ai for it to be trained on.

privacy and consent - personal data from the internet (which usually makes up most of an ai's training data) could be harvested for the sake of training the ai, and the people whose data has been taken dont know what specifically is being used to train it, let alone whether their data is being protected. most of the data gathered was not gathered with established consent.

human judgement - basically, whatever human oversight currently exists when ai makes decisions, there should be more of it. ai cannot make decisions akin to those made with human intellect, as it lacks the fundamental requirements to do so: critical thinking, social awareness, ethics, human morals, empathy, etc. ai has also been found to be much harsher than humans when making judgements. simply put: ai lacks the nuance needed for decision making and judgement; therefore, human oversight is required if ai is to make any impactful judgements or changes that could potentially harm people.
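the bias and discrimination point above is partly a data problem, and part of the 'oversight and fact-checking' could start as simply as auditing how groups are represented in the training data before training ever happens. a toy sketch in python (the records here are made up):

```python
# a tiny sketch of a pre-training audit: count how each group is represented
# in the training data so obvious imbalances are caught before the model is
# trained. the records below are invented placeholders.
from collections import Counter

records = [
    {"text": "...", "group": "a"},
    {"text": "...", "group": "a"},
    {"text": "...", "group": "a"},
    {"text": "...", "group": "b"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    print(f"group {group}: {n}/{total} records ({n / total:.0%})")
```

real bias audits are far more involved than counting rows, but even a check this crude would flag that group 'b' is badly under-represented here.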

a lot of ai bros have openly complained that limiting ai's data intake (e.g. making sure its training data isnt scraped from the internet) will limit the proliferation of ai. my answer to that is that ai won't be truly accepted by society until it is made as ethical as it possibly can be. and if we cant make it ethical, then we shouldnt have made it in the first place.

///

next up is environmental impact. this one goes hand-in-hand with the last issue, as it is also an ethical concern to some extent - however, ive seen fewer and fewer people speak on it. ai's environmental impact is one that i personally have gripes with.

some examples of ai's impact on the environment include:

cooling systems - data centres hosting ai systems need to be cooled constantly to prevent overheating. they are typically cooled by liquid-based systems such as direct-to-chip cooling, where a cold liquid (usually water) is circulated right against the ai chip to dissipate its heat directly - currently the most common way to support higher rack densities in a data centre. constant cooling is needed because the use of ai is continuous, so the electricity usage (and the heat it produces) is also continuous (see below). it is also rumoured that ai chat models such as chatgpt use up 500ml of water per 5-50 interactions - note that chatgpt has about 200-300 million active users and accounts for about 62.5% of the market share for ai tools. (source - backlinko)
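a quick back-of-envelope version of that water rumour: ~500ml per 5-50 interactions, with ~200-300 million active users. the '1 interaction per user per day' rate below is my own placeholder assumption, not a sourced figure:

```python
# rough bounds on daily water use from the rumoured figures in the post:
# ~500 ml of water per batch of 5-50 interactions, 200-300 million users.
# the interaction rate per user is a made-up assumption for illustration.

ML_PER_BATCH = 500  # rumoured millilitres of water per batch of interactions

def litres_per_day(users, interactions_per_user, interactions_per_batch):
    batches = (users * interactions_per_user) / interactions_per_batch
    return batches * ML_PER_BATCH / 1000  # millilitres -> litres

low = litres_per_day(200e6, 1, 50)   # fewest users, most interactions per 500ml
high = litres_per_day(300e6, 1, 5)   # most users, fewest interactions per 500ml
print(f"roughly {low:,.0f} to {high:,.0f} litres per day")
```

even at a single interaction per user per day, the rumoured figures span roughly 2 to 30 million litres daily - which is why the 5-50 range makes the claim so hard to pin down.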

electricity consumption - as of november 2024, ai uses about 33 times the energy needed for regular, task-specific software (source - arxiv.org) and will require more in the future as it continues to develop and expand. even during 2022, about 2% of the global demand for electricity came from ai (alongside cryptocurrencies and the running of data centres) - this has very likely grown since then with the proliferation of chatgpt and various other ai services, with the average daily energy needed estimated at about 19.99 million kilowatt-hours (source - business energy uk).
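to put that 19.99 million kWh/day estimate in perspective, here's a small sanity check. the daily total comes from the post's source (business energy uk); the per-user breakdown is my own illustrative assumption, not a sourced figure:

```python
# sanity-checking the scale of the daily energy estimate above.
# DAILY_KWH is the post's sourced figure; the even per-user split
# below is an illustrative assumption only.

DAILY_KWH = 19.99e6  # estimated daily energy use, in kilowatt-hours

def average_power_mw(daily_kwh):
    # a daily energy total spread over 24 hours = average continuous power
    return daily_kwh / 24 / 1000  # kW -> MW

def wh_per_user_per_day(daily_kwh, users):
    # naive even split across active users (kWh -> Wh)
    return daily_kwh * 1000 / users

print(f"~{average_power_mw(DAILY_KWH):,.0f} MW of continuous draw")
for users in (200e6, 300e6):
    wh = wh_per_user_per_day(DAILY_KWH, users)
    print(f"{users / 1e6:.0f}M users -> ~{wh:.1f} Wh per user per day")
```

the headline total works out to roughly 830 MW of continuous draw - a small per-user figure that only looks scary because of how many users there are, which is exactly the scaling problem.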

extraction of materials - fundamentally, all technology - not just ai - requires valuable materials to both be made and to function optimally. for example: materials such as lithium and copper are needed to build devices, but extracting them via mining can be extremely dangerous if done incorrectly. lithium toxicity and copper poisoning, while rare, can happen if an individual is exposed to dangerously high levels of these materials. on top of that, some of these materials are mined under exploitative and dangerous conditions in developing countries, sometimes by children. im sure you can see the problem from this explanation alone.

///

thirdly, social impacts. this is the issue ive seen discussed the most, and the one ive done the most of my prior research on. it also goes hand in hand with ethics, as things like bias and discrimination can be considered social issues surrounding ai too.

technological dependence and isolation - as technology continues to advance and more ai services become available, people's problematic use of time on their phones and devices should be called into question. people may become increasingly dependent on ai for even the simplest of tasks, leading to an 'ai addiction' and an inability to make one's own decisions. this can also lead to mental health problems: as we live in an increasingly technology-based era, people may end up speaking to ai chatbots over real people. this isolation can leave many people deeply depressed and make them even more reliant on ai systems.

unemployment and replacement - a large number of people are afraid that ai may take over jobs that require either a lot of creativity and passion (art, music) or routine, repetitive work. these worries have only become more plausible with the appearance of ai-generated artwork in advertisements (e.g. coca cola's christmas advert, alongside actual advertisements ive seen at my local bus stop promoting christmas products) and instances of people using ai in creative spaces (e.g. the colorado state fair fine arts competition, where an ai-generated artwork won first place - further context on cnn). if jobs in these sectors are taken over by ai, it could leave many people unemployed or create a 'useless class' of citizens.

///

personally, i think ai can be moderately beneficial for a range of job sectors: medicine (e.g. diagnostics - analysing patients' medical histories and identifying symptoms that point towards a more accurate diagnosis), retail (tracking orders, stock analysis, etc), computing (helping with code correction), and even criminal justice (e.g. assistance with forensic analysis of evidence and identification of potential criminals). however, i feel that the issues ive listed here should have been addressed while ai was still in its beginning stages - about 2-3 years ago - before we started to implement it into everything. because right now, it really isnt looking great.

not to mention, ive seen lots of people using ai for very... questionable reasons. ive had ai cat gore recommended to me via youtube (and worse, it gets pushed to children) for some weird reason. im not sure whether the problem is the ai itself or the people actually churning this content out with it. either way, im still concerned.

i absolutely hate how ai bros try to defend and justify the flaws in ai. even ai-generated nfts (which is double the amount of environmental damage, iykyk). it feels almost dystopian seeing the mental gymnastics they endure to defend their sweet sweet ai baby nft cryptocurrencies 😭😭 respectfully, pick up a pencil my dudes.


sorry if my writing is sporadic at all, i tried to structure my thoughts here a bit better and this took me two hours to write. i also ran out of brain juice towards the end of the post so uhmmmm... ignore that

also please correct me if im wrong and feel free to critique me!



Comments

Displaying 2 of 2 comments

maciel


for some reason, people like to use ai to replace stuff that was cool because it was human from the start instead of focusing on helping make more human stuff possible



Lakes


yeah i think ppl would have less of a problem with ai if it wasn't used for art, was used ethically & was actually eco-friendly

