Geez, so, hot topic, right? It seems AI came in like a wrecking ball a few years ago and shook the world with its uncanny-valley art and people-pleasing chatbots. I want to start by saying that I understand this is a polarizing subject. On one end, the technology can be seen as unethical, since it's built by pulling from a vast pool of existing creative work. On the other end, I don't believe it's going away any time soon. The technology is definitely here to stay, though I suspect it will change and evolve into something a little more subtle. I mean, just look at how the internet shook the world when it first arrived, and now it's involved in nearly everything we do.
The thing about the current AI out there (and please take this with a grain of salt, as I'm in no way an expert in the field) is that it's based on pattern recognition. Both the language models and the art models are essentially trying to fill in gaps from provided data. The model takes input, attempts to match it against whatever is in its library, then uses prediction (think of the autocomplete suggestions your messenger app offers while you're writing a text) to fill in any missing information. The problem with this, however, is that it doesn't know what to do when there's no information to draw from. Some models have been known, on occasion, to make up facts in order to accomplish their goal. The technology is sophisticated, don't get me wrong, but it's not a person and therefore can't think or reason like one. At least, not yet anyway…
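If you want a feel for the "autocomplete" idea, here's a deliberately tiny toy sketch: a bigram model that predicts the next word purely from what it has seen before, and has nothing to say about anything outside its training data. This is my own simplified illustration, not how real language models actually work (those use neural networks trained on enormous datasets), but the fill-in-the-gap spirit is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only knows these two sentences.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Suggest the most likely next word, like a phone's autocomplete."""
    counts = following.get(word)
    if counts is None:
        return None  # no data: a real model might "make something up" here
    return counts.most_common(1)[0][0]

print(predict("the"))  # "cat" -- follows "the" more often than "mat" or "fish"
print(predict("sat"))  # "on"
print(predict("dog"))  # None -- "dog" never appeared in the training data
```

Notice the last case: when the pattern isn't in the library, this toy predictor just gives up. The commercial models don't have that luxury, which is part of why they sometimes confidently invent an answer instead.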
Lots of companies saw this technology and, in true consumerist fashion, immediately thought to save money by replacing people with AI models or agents. Positions were cut, a lot of people were thrust into an unforgiving job market, and chaos ensued.
Ok, some of that might be a bit exaggerated, but not by much. Beyond the job market, there's also an environmental concern with AI: the enormous server capacity needed to keep these models running means ever-larger data center facilities, which amplify environmental problems that already existed. This is yet another controversial point, as there is information on both sides inflating or minimizing the effect.
But none of those reasons are why I have issues with AI. My concerns stem from a deeper, more complex place, one that concerns humanity as a whole. Before I jump into how this relates to AI, I'm going to hop back a few technological advancements to really explain my point. To preface, this concept is not original; it was first presented to me in the book Stolen Focus by Johann Hari (a great read, by the way, if you're interested).
In his book, Hari explains that certain advancements in technology (along with many other factors) have contributed to a massive attention problem in our population. This traces back to the creation of smartphones and social media apps, which try to capture as much of the user's attention as possible. I know this sounds like some grumpy "back in my day" kind of thinking, but stay with me, because it's not the advancement in technology that's the issue; it's these companies' incentives to grow that cause harm. Or, more precisely, the lack of any disincentive to do harm that really led to our current situation.

After interviewing experts involved in the creation of many popular social media apps, Hari discovered something unsettling: the improvement in UI functionality on these apps made it too easy to consume information. I know, that sounds like a ridiculous problem to have, but it's a real issue when you consider the human brain's limited capacity to digest information. Instantly knowing when a news story breaks, when someone likes a post, when someone wants to talk: it's overwhelming our brains. It makes sense; how are we supposed to actually focus on what we're doing when something buzzing in our pockets is constantly fighting for our attention? It's not just the notifications, either; it's the small things built into the apps that encourage us to stay on longer. Social media algorithms are designed to feed us whatever we're most likely to react to (which unfortunately means negative information gets pushed over positive), infinite scrolls make it difficult to pull yourself away from your feed, and short-format videos and posts turn your phone into a bona fide dopamine mine.
As a designer, I understand the importance of effective UI. In fact, I like to live by the philosophy that truly great design blends so seamlessly into a user’s life that it’s hardly appreciated. All that being said, it never occurred to me that it could be too effective until I read Hari’s work.
So instant access to a massive library of information (not all of it factual), constant notifications, and UIs that turn your phone into a time suck have left people less able to process information than we used to be. Even now, I can see the shift in TV shows and movies: scriptwriters in Hollywood are being encouraged to write scenes so that the viewer can follow what's happening even while on their phone!
So if it's such a problem, why doesn't someone do something about it? Well, lots of someones already tried. Whistleblowers inside these companies attempted to change things. Some organizations even created entire roles focused on reducing the mental harm their apps cause, but those roles didn't last long. The problem is that no company has any reason to make its product function worse; the concept goes directly against its goals! The only real fix is government regulation, but between special interests and a lack of technological fluency among politicians, it seems highly unlikely anything will happen fast enough to prevent permanent damage to society.
Ok, that's all great, but how does it relate to current advances in AI? I'm so glad you asked! I see the same dynamics Hari wrote about in his book put into overdrive by AI chatbots and image generation. When entire books or paintings can be created with a single prompt, something major falls through the cracks. These things, valuable pieces of human connection, lose much of their value and intention. AI-generated content misses the purpose of art: the process of creation.
It's not just purpose and intention that are lost when these tools are misused; they're also making us dumber. When we write or create artwork, we're using the part of our brain that thinks critically. I, for one, find that writing lets my brain lay out all the facts and organize my thoughts so I can go into a problem with a clear and objective mind. Without that essential practice, we see a breakdown of reasoning in other parts of our lives. I see it in so many ways: adults who give up at the first roadblock because they aren't used to solving problems anymore, and even children, the famed iPad kids who throw massive tantrums at any point of resistance.
I recently finished the book Ikigai by Hector Garcia and Francesc Miralles (another great read if you have the time). In it, the authors explain that in order to live a long life you need, among other things, to keep your brain active: introduce new experiences and problems to solve. A stagnant brain leads to an early grave. Beyond that, people need a purpose, something that strikes passion into their lives, an "ikigai". The authors go on to explain that the struggle itself is a reason for living; it's how we give our lives purpose and value. So removing the human element from art removes that struggle and that critical thinking. The technology is still new, but my concern is that if widespread misuse of these tools continues, we're going to see a whole generation lose its ability to think critically, and perhaps more widespread mental-health struggles as efficiency is valued above intention. My greatest hope in all of this is to be proven wrong.






