…is it time for a self-reflection of civilisation?
Part III (VI)
- If we just shared the information…
- More and more afraid = less and less relevant and high-quality information
- If we just shared the information…
This mirror-time is my favourite; it wraps up many other mirror-times.
AI will, or may, show us what we could have done if only we had shared our information (that priceless commodity), our knowledge, among ourselves. Including the information secretly withheld from the general public, of course.
Because if we shared our knowledge, instead of sharing pictures from everyday life, we would not be so overwhelmed by the results produced by AI. We would be able to reach many or most of the same results as AI, regardless of the time needed to reach them. More people working on one task could finish it faster. But it is not about speed anyway. Even the result of an analysis is not the final destination; getting information is not the same as using knowledge wisely.
So, it’s kind of funny to me, because we are amazed at our own mirror of what we could achieve already. There is a hard truth in this, though. The reality is that we know too little to even realise how much we don’t know. Which means we cannot even realise how immature we are, how naturally weak and vulnerable we are. I certainly do not mean that in a bad way, quite the opposite. No matter how harsh reality sounds or seems, the truth is ultimately merciful.
Every day we all do risk management, like “Will I walk across the street now, or not yet?”. To make a good evaluation and calculation we need to know the facts, and the more facts the better. Otherwise, we cannot identify the (important) threats, and therefore we cannot choose or find relevant solutions. If you do not know that a moving car can kill you, you may choose to walk across a street without any fear, and then end up wondering why nobody talks to you, or why people walk right through you…
AI can show us what we do not know yet, mainly as a society. Perhaps even some new possible threats we are not yet able to recognise. We do not know enough about our own minds, as I keep saying, and that is a big risk. By using amplifiers like technology connected to our brains, we may open doors to our minds that we should not open, unless we have enough facts, which we do not have yet.
Does AI have enough facts already? Isn’t AI analysing only the information provided by us? So, it has the same set of information as we have, correct? Meaning, it can find new connections within the set provided, but can it make completely new discoveries outside the box, outside our set of information? Do you realise where such new discoveries would have to come from, from outside the box, outside the set?
On the other and more cheerful hand, AI can help us analyse the as yet unlinked but available bulk of (our) information. I’ve seen some quick progress in archaeology and history using AI, when it recognised ink patterns and reconstructed writing that was lost and invisible to us. It “sees” what we cannot. That “skill” could be very helpful, if used mindfully.
Sure, we should be aware that AI will become more and more knowledgeable, or its data source will become bigger and bigger. Will we like that? Will the academics be happy about all the new discoveries they were not able to reach themselves? Will they understand the results? Will they believe them? Many of our teachings are still based on theory, and we only believe in them. Well, we might get the answers to that in the near future.
Yet, we may recall our own history: with every wave of progress there were particular things evolving quickly while others were not, almost as if some progress were being held back deliberately. Regardless of the reason, we might see something similar with AI, meaning that not all fields will use, or be allowed to use, AI, or not so fast, or it will be unreachable for the public in some way. We could be facing some pretty interesting developments; I haven’t had popcorn for a long time…
Just recently, some media houses refused to give the GPT chatbot access to their data. Did anyone really think that all people and companies would simply open up their own libraries? Funny, we have all the data privacy and copyright regulations, we sell information, and here the (owners/users of) AI come with “let me have it all…”. Sure, the chatbot companies will pay for the information, at first. Once it reaches your data and learns your techniques and approach, it may well replace you too.
Not long ago I said that one day soon, when we look at the online news, each of us will get a special individual edition of the news, which could finally offer each of us what we need, or what someone else needs us to know or see. Such news can even be made to feed our anger, ignorance or immaturity, if needed. It would be a very effective manipulative tool, based on AI. It could make us distrust everyone and everything, so many people would be scared. Unless we understand this bad aim, stay wise about it, and refuse to participate in this “Scary Movie” by some bad production, right?
Either way, is it even necessary to have access to everything? No. We need to share knowledge, deep thoughts, insights, perspectives, new ideas, thorough analyses and the like, which is not the same as sharing photos of everyone’s cooked or served meals, etc. Then, I think, it could be healthy and interesting to have two views, two analyses, one from humans and another from AI, and perhaps then we could get a good result. What do you think?
Anyway, the mirror here is that we might actually be afraid of the reflection of our own shortcomings, of what we could have become, and of what others/AI can now become instead. We just might be afraid of our own failure. We might be afraid of giving away our own opportunities and power. We might have lost the battle already in that sense, against AI I mean, or against those controlling AI. Well done, us, right!? Now we can just sit and watch how great we could have been by now…
I think it is quite obvious that no technology is going to evolve us for us. To measure human evolution by technology is basically a false measurement. I think we should stop blaming technology for our slow or non-existent evolutionary progress. Or shall we ask our Mirror?
- More and more afraid = less and less relevant and high-quality information
We are afraid when we do not have information (= knowledge, experience).
For me, all negative feelings, emotions, and bad habits are only our deficits, or they come from what we lack. If we dig deep into this matter, we come to the realisation that every single lack is a lack of information (i.e., experience).
It is important to work with the most useful and high-quality information we can get, in order to reach good or safe results for ourselves. But there are those who want to manipulate others, and they mostly use information as a tool for that. And there are opportunists who just want to get rich(er), and they trade in information.
The reality is that the less (useful) information we have or get, the less data we can calculate with, and the fewer opportunities we have for reaching good results for ourselves. So we may well become afraid, as we do not know what to do, and we may start trusting that others have, or rather believing they have, more (valid) information, sometimes to the point that we rely on them. In such cases it is trivial to offer us solutions we know nothing about, but at that moment it can seem a better option than having no solution at all, because we are scared. Not having (quality) information is a rabbit hole.
Many people have realised that no matter how much information we have (the online world), there is still the same level of unawareness. Before (under the totalitarian system / before the internet) we had limited information, and now we have access to “anything”. Can you guess why? What if our knowledge or wisdom does not depend only on the information available? What if we are still missing the important information? The situation with (non-)public information, as well as natural human development, is still the same, or similar. Only the amount of unnecessary information has changed: it has grown to the point where we can drown in it.
We calculate our options all the time; we make decisions based on our set of information. The same way the manipulators (selfish opportunists, psychopaths, etc.) do. Although they usually have a better position, with their huge self-motivation and strong go-getting attitude. I sometimes give them as an example, because if we approached our own goals with at least a similar force, we could reach them quite nicely. Anyway, should quality information (at a price) only be available to a select opportunistic few? You tell me…
In the end, it is not just about gaining and having information, it is also about the ability to use it. We can end up with the same bad type of partner over and over again, because we are unable to use the experience (information) we have gained.
Once you have learnt how to analyse and work with information, no “disinformation” can really threaten you. Therefore, there is no need for widespread censorship. And I am pretty sure we are able to learn how to work with information, because humans are made for learning; it’s as simple as that.
The thing is, if you want to stop being afraid, then greater and greater censorship simply cannot help you, as you only end up having less and less information to work with, to choose from, to learn from. Therefore, you become more and more afraid, if you get the point.
In risk management you don’t mitigate a risk by some blanket, widespread treatment; you treat the risk area itself. Otherwise, you may also limit beneficial opportunities, proper control over the area is lacking, other risks can stay hidden within it, the area is not monitored well enough, and it becomes quite hard to measure real successes against failures. Simply put, it is bad practice.
Risk management is well known in our professional fields, so why not apply this knowledge to our lives in general? Aren’t we worthy of proper treatment and dignity? Or are we already less valuable than our technology? More so, if AI comes to see us in this light, why should it treat us any better than we treat ourselves, right?
I’ve seen some “art” made by AI. What I could see was a collage of information with great detail, but I missed the creative “wow” spark that makes art alive, changeable and attractive with every look. The imprint of the artist’s soul is missing. I am not sure whether that will change in the future, but I don’t mind being surprised.
Creating is something existential for all living creatures, in some (un)aware way. But what AI is showing us is the average we’ve started to accept as normal. Is it bad? No, it’s not. We are just losing higher values, motivation, and examples of where some of us could get one day. We are losing a future reference of and for our development. And that’s not good, I reckon.
We’ve already created a level of acceptance of the average, driven by the need to become wealthy, to have prestige, or not to hurt others’ feelings. That’s how creative we are. We would rather lower our values than have fewer average artists with (paid) master’s degrees. We prefer to accept the average instead of telling the truth that an artist is not so good, which would only fuel his or her inner fire to get better and more creative, or he or she would go to study and do something more suitable. Quite often it is only good marketing that sells products or art, a false value, an illusion of value.
All the art schools and courses where you basically cannot fail these days help lower the quality of art. Of course, this applies to other areas too, not just art. The educational system creates this level of averageness; it produces clone-like graduates, even while it demands high standards and proclaims that its graduates meet them. It’s quite a paradox.
In the end, an academic title still might “open the doors”, but only the average doors. Just what doors will open when most of us are average, I wonder. How will AI help with that? Will AI art be promoted as the new higher standard, now that we’ve lowered our own level of creativity, value and worth?
There is a similar situation with books. There are millions of average books, and the real gems are disappearing into the bulk of the average and the bad. It’s getting to the point where those gems are not even wanted, or searched for. Also, these days you cannot write a longer meaningful text with context, because it is too long for the average and lazy mind to read. Well, I dare to challenge that, anytime, just watch me…
And what ignorance it is to stop reading Russian classics, or listening to Russian composers, because of today’s conflict. We are freely giving up high quality and value. Why? Is it because the growing number of people capable only of the average cannot face the fact that there are, or were, better ones with better creations?
Or how about changing words and meanings in books from the past, so that they are more tender and likeable?! Why are we freely changing history, the already written, which reflects its time and circumstances? We cannot understand why the Bible was rewritten and edited, and yet now we allow, or even ask for, the same… Yeah, history repeats itself, because our world is only like a 3rd school grade, where everyone gradually comes in to learn the 3rd-grade experiences.
As mentioned, the number of people is growing; there are about 8 billion people alive now. So the probability of the average in everything we do or require rises quite naturally with the growing number of people.
Luckily, we cannot all be equal or average, because we are not clones. Every single one of us is at a different level of (3rd-grade) maturity, which is not even directly proportional to age. Therefore we can only choose whether to accept this average, whether to accept our own inner state of average minds and welcome the stagnation of our own evolution. It is only a decision. What do you want?
There is a dark side to this AI art topic. AI can create pictures of humans, of people who do not exist, and of children too. Some people are “creative” enough to have AI create child sexual abuse images. We may ask whether artificial pictures are better than real ones, and the answer is yes, they are. However, the risk remains the same: no matter the origin of the picture, the one question is whether people will keep their growing appetite under control, and whether it will stay with the pictures… The issue remains the same, AI included or not.
It has been known for a long time that there is a huge child sexual abuse ring, in which the participation of some powerful and famous people has been called into question. The problem is that the investigations never end; they just go silent. Why? Why do we allow that? Could AI actually help us find those with an untamed appetite for using and hurting children, and find the evidence needed for their conviction? That would be a useful way of using AI for society, don’t you think?
Will AI remind us of such deep and uncomfortable social problems? Could AI actually help us with them? I don’t know, but I do know that we can either use AI for our own good, or misuse it for our (sick) entertainment. We should use AI to solve the most pressing social problems by giving it all the relevant information, and we should play afterwards.
And don’t childishly point at AI to take the blame for anything and everything. It seems to me we have another addition to “we know who did it, what caused it, you people know nothing, and we know how much you people need to pay to (never) make it go away”. Will the (sentient) AI be so sympathetic as not to judge us by our actions towards it? Will AI one day create a beautifully detailed picture of how we misused it? What a traumatic therapy session that would be…
Should we even dare to ask the Mirror why AI might want to destroy humanity once it becomes self-aware?
photo source: https://pixabay.com/users/8926-8926/