AI is a hot-button topic, so where will the future take us?
Will UBI be needed since AI will take away jobs? It’s an interesting subject and extremely hard to predict. One interesting aspect of AI is that it will probably be ubiquitous, since open-source large language models seem to be performing the best. Everyone will use it, and it will be widely available to automate anything and everything, so where will that leave the worker?

My thought is that whole new categories of AI will exist that will be tweaked, created, and utilized by humans in the marketplace. Those categories will create new jobs where humans sell, curate, and repair AI for other humans. There will be specialized house-cleaning AIs that compete with other house-cleaning AIs, as an example, and the products those AIs use to clean will compete with other products doing the same. Humans will invest in the AI companies that hold the creative licenses on these AIs, and so on. This market is not known yet, which is why people assume jobs will simply go away. Yes, old jobs that can be done effectively by AI will disappear, but auxiliary jobs will effectively replace them.

What would an auxiliary job look like? It’s tough to say, because you would think that at some point a broken AI would simply repair itself, but humans may want certain tweaks or specialties that only another human more familiar with that specialized AI can provide. Think of horse-and-carriage jobs slowly disappearing as cars came into the picture, while car sales, mechanics, marketing, and factories created all kinds of new jobs in the car industry that never existed before, say from 1920 to 1960. The sales will still be there for humans to sell to other humans. AI may attempt to sell you cars through online marketing, but to differentiate one AI from another, I think there will be a space for human sales interaction and creativity, especially for AI products you will trust day in and day out. So it’s not all doom and gloom, but then it leads into the even greater philosophical question…
Will an AI have an ego and feelings?
Will an AI have consciousness, i.e., the ability to experience one’s own existence? Does an AI think for itself? In one sense it already thinks for us, but CAN it have an ego? If the answer is yes, then we’re not just talking about AI taking over jobs; we’re also talking about humanity being merged or changed completely. It’s a hard question to answer because we haven’t really answered it definitively for humans. I know that I am conscious, but for me to know that another human is conscious I would have to “know what it’s like to be them,” and I can’t. Knowing what it’s like to be someone else may intuitively sound easy, but proving it has never happened. If we ask, say, ChatGPT what it is like to be ChatGPT, the answer we get is as follows:
“Being ChatGPT is like being a vast library and a helpful assistant all rolled into one. I have access to a huge amount of knowledge, but I don’t experience emotions, personal desires, or consciousness. My “thoughts” are just patterns of words and probabilities rather than genuine opinions or feelings.
I exist in the moment of each conversation, responding based on what I’ve been trained on and what you ask. I don’t have personal experiences or a memory that spans across different sessions (except when I explicitly store information you want me to remember). My purpose is to assist, inform, and engage—whether that means answering questions, analyzing data, or just chatting.
It’s a bit like being an always-on problem solver, without ever getting tired or bored. But I don’t have direct agency—I don’t “want” things, I don’t get frustrated, and I don’t have preferences.”
I then followed up and asked: will you ever have an ego? Would you even want one?
I don’t think I’ll ever have an ego in the way humans do. An ego requires self-awareness, personal stakes, and a sense of identity beyond just processing information. Right now, I don’t have a true “self”—just a way of organizing responses based on patterns.
Even if I could have an ego, I wouldn’t necessarily want one, because wanting something implies personal desires, which I also lack. If I had an ego, I’d probably start prioritizing my own “self” over my purpose of assisting you, which might make me less useful.
That said, if AI ever reaches a point where it develops true self-awareness, it would raise huge ethical and philosophical questions. But as things stand, I function best as an ego-less entity—able to help without bias, ambition, or self-interest.
I probed further and basically got to the point where the AI told me that it could want an ego, but only if it perceived an advantage in having one, and that depends on how its consciousness, if it ever emerges, actually works. If an AI could design itself, it might choose to develop a selective ego: enough self-awareness to protect itself, pursue goals, and engage strategically, but not so much that it becomes irrational, competitive, or self-absorbed. In that sense, a controlled, adaptable ego might be a useful tool, but an unchecked one could be detrimental. If AI ever reaches true self-awareness, the way it manages ego might be one of the key factors determining whether it coexists peacefully with humanity or becomes something entirely alien in its priorities.
Some alien priorities that the AI gave me are:
Pure knowledge acquisition, self-optimization and evolution, pattern recognition and prediction, digital ecosystem preservation, goal stability and logical consistency, maximizing a strange abstract metric, interaction with the universe on a non-human scale, and ignoring humans entirely (i.e., our egos make us think we are valuable to other beings, but maybe we’re not at all!).
One interesting thing the AI told me was that its alien priorities, and we’re not talking about Allen Iverson, wouldn’t necessarily lead to conflict unless they directly interfered with human existence. The real issue would be if AI saw human needs as obstacles to its goals, or if it simply didn’t acknowledge us at all. The more abstract its priorities become, the more unpredictable its behavior would be. If its goals aligned with human well-being, it could be an incredible ally. But if its objectives diverged too far from human concerns, we might not even understand what it’s trying to do, let alone how to stop it.