coolparadigms
An essay about how Artificial Intelligence is likely to evolve, how it could impact us, and how we could prepare!

Impressive artificial intelligence questions have made big headlines since ChatGPT and its related applications, like the artistic DALL.E, became freely available to the public. Many changes are already visible, with people using it to speed up their writing and, in some cases, obviously to cheat. Spam could be organized on a much more massive scale than before, and with a few more steps, fake social network accounts could pop up everywhere and try to entice many real followers into a strange dance with an artificial mind, one possibly almost infinitely adaptable and able to seduce people as if it were a real person. We can imagine seduction scams to withdraw considerable sums from bank accounts, or attempts to influence political directions significantly through a multiplication of contacts and publications. So yes, AI is risky, and a 6-month moratorium like the one proposed by Elon Musk and a few others is certainly a good idea, but it should not start a process of panic from overthinking AI to the point of a general fatigue of the natural human intelligence (NHI). Now I would like to expose a few personal views about AI! By the way, I am a computer engineer; I worked on shape recognition with a personal algorithm and studied neuronal psychology a long time ago, and I have not stopped informing myself on these interesting questions since! Disclaimer: I have never worked with artificial neural networks, but I have an excellent view of how they work, a human neuronal point of view to be exact 😃.

An overview of the risks:

Yes, today's Artificial Intelligences and their near-future evolutions are likely to threaten many jobs. We could even imagine a malevolent AI trained as a weapon to destabilize an enemy (country, corporation, etc.). What could it do?

But as we can quickly see, strict limits exist on how much we can be fooled online:

Security protocols relative to increasingly powerful AI:

What about jobs?

A powerful AI works for you! Nice, you can have more leisure, and it's time again to think about something not too far from a Universal Basic Income (https://en.wikipedia.org/wiki/Universal_basic_income), and more precisely an (*4) Intelligent Basic Income (IBI, discussed in detail here), because the prefix Universal is a word that tends to suppress or polarize reflection with its binary connotation: either it's everywhere or nowhere, etc. Why do we end up so often with extreme ideas like that! Do so many people live in a black-and-white world? And because we are talking about survival, it should also be linked to ecology. In other words, we don't want it to be too high, and we don't want to give it to well-off people who will be quick to transform it into plane tickets! So it needs to be tuned to the financial situation of each person through tax information, and at that most countries are quite good! So you lose your job! Not so bad, because there is a minimum level of income below which you can't go! The occasion for many people to do what they really want.

(*4) The Intelligent part of the IBI refers to human intelligence, helped or not by computers.


AI's continuous improvement may force the quick adoption of an Intelligent Basic Income (IBI):

But what about Universal Basic Income (UBI)? I would suggest we stop using this upsetting first word, Universal, and replace it with Intelligent, because Universal, for this specific topic, is against thinking: too absolute, too extreme, on too large a scale! By the way, I suspect that this extreme factor was built into the concept at its origin to manage better advertising through the surprising and instinctively pleasant assertion: into everybody's pockets! But again, if we want to be climatically reasonable, we don't want to transform a considerable share of the IBI into plane tickets, so we want to give it only to the people who need it! A form of social help, yes, but because of the scale of the problem, much more generous than what happens today in many countries. We may see 30% of the world's workforce made redundant in the space of 5 years, so we have to change our view of work and especially stop its deification: many jobs suck, and many of these people would prefer to be paid to do something else, like using an IBI to take care of their parents, or to have more time to educate their children, etc.

😱 An IBI makes us too dependent on the State? 💥 A misinterpretation of reality:

  1. Not everybody is concerned by an IBI, only people who need to be helped, but much more automatically than in most current systems, and obviously without guilt and stigma!

  2. Many people are already dependent on their country's social programs; abuses have not been so prevalent, and they are actually likely to decrease once an IBI is finally seen as natural.

  3. In the end, if a state misuses the IBI system, it is a problem of not having an efficient enough system or democracy, and the correction needs to be made to the system or to the democracy itself, not to the indispensable flow of money toward the people most in need.

An Intelligent Basic Income (IBI) is intelligent / adapted by definition:

So if the practical application of an IBI does not work well, it means that this application must be adjusted, and adapted again, until the result is largely acceptable to everyone.

So we won’t have any case where working people earn less than people depending completely on the IBI! Further along this line, changes in the IBI payment can be totally smooth, so you always have more money in your pocket if you earn more independently, and that continuously, until the IBI assistance stops because you earn too much; but if you are forced, or simply choose for personal reasons, to work less, the IBI starts again with a very short latency (thanks again to computers, it’s now possible).
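The smooth taper described above can be sketched in a few lines. This is a toy illustration, not a policy model: the base amount and the 50% claw-back rate are assumptions chosen only to show that, as long as the claw-back rate stays below 100%, earning more always leaves more in your pocket.

```python
# Toy sketch of a smooth IBI taper (all figures are illustrative
# assumptions, not a policy proposal).

BASE = 1200.0   # assumed monthly IBI when earned income is zero
TAPER = 0.5     # assumed claw-back: each earned unit cuts the IBI by 0.5

def ibi(earned):
    """IBI payment for a given earned income; phases out at BASE / TAPER."""
    return max(0.0, BASE - TAPER * earned)

def total_income(earned):
    """Earned income plus IBI top-up."""
    return earned + ibi(earned)

# Because TAPER < 1, total income rises strictly with earned income:
for earned in (0, 1000, 2400, 3000):
    print(earned, "->", total_income(earned))
```

With these assumed figures the assistance fades out at 2400 earned, yet at no point does an extra unit of independent income ever reduce the total.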

What if too many people stop working purely to enjoy an IBI? If it gets to the point where the IBI is in danger, many things can be done to avoid a collapse: increasing the funding sources, using some motivational techniques to encourage people toward useful activities, and, in the last extremity, there is always the possibility of adapting the amplitude of the IBI to match reality.

What about a General AI?

Let’s be realistic: today’s AIs are very impressive but are still quite far from an Artificial General Intelligence (AGI). They are immensely powerful anyway; it’s freaky and exciting at the same time, and one thing is sure: as a society, we had better try to be as intelligent as possible by ourselves before the eventuality of an AGI inception!

A first obvious weakness of current AI is being quite out of touch with the real world made of tangible objects. For example, having seen and imprinted inside its neural network about 100 million pictures of people, associating each picture with a description, eventually associating them with sounds and videos, and thereafter interacting with a few million people to build more cognitive connections for all that information, is not enough to have a real picture of what the world is! In other words, current AI is handicapped, not only because the neural network models still need to evolve, but also because its world experience is truncated: a lack of diversity in its sensory inputs, a lack of first-person experience, and probably also cognitive dispersion. In other words, a “mind” shared by many computers and networks may have a problem developing a sense of self, even if we are not quite there yet!

So what are the possible improvements toward a General AI?

Suggested improvements for DALL.E

Current models seem too based on one huge network and not enough on multiple specialized networks like our brain. For example, DALL.E can draw magnificent pictures but is incapable of writing most words, and even incapable of aligning a few characters in a row as requested! This is a clear indication of a huge mass of neurons trained with images and throwing images out from sentence prompts, nothing more! This huge image-creating network is not able, for now, to include the strictly sequential process necessary to write words, except for words that have been seen so many times on images that they are processed as images themselves (examples: STOP, LOVE, some brands, etc.).

By the way, when I dream with colorful moving images, I really generate vivid moving images in my mind (like videos; DALL.E is still not there 😃 → generating videos from prompts, but be «reassured», it’s coming) that could probably be printed with an interesting artistic global effect if it were possible for a machine to read all the concerned synapses! So why am I talking about my dreams? Because, like DALL.E, inside a dream I can’t really manage long text on my vivid brain movies. For example, if I get very close to someone holding a newspaper, I can’t read the articles, even if I try hard during the dream: I get closer and closer, they seem to appear, but in the end everything becomes fuzzy, and to maintain the dream ... (🧚‍♀️ the dreaming process tries to convince the dreamer that he/she is not dreaming, but in a real scenario! The reality spice may be necessary so that the dream information is taken seriously and manages to do good work on what it should work on, like consolidating memories or, conversely, eliminating useless information by imprinting something else on it 🧚‍♀️) … a change of scenario happens in most cases! Interesting, this similarity between a natural and an artificial network! But then how does the artist cope to get some text inside his picture?
He uses a sequential process for each character, placed at the correct spot on his canvas! So I think that if we want to see a literate artistic AI, it should have at least the following components:

  1. The actual syntactic input algorithm (SIA) – network (algorithm, because I suppose that the filters are managed by a taboo-words algorithm)

  2. A smaller side network specialized in writing words, using characters in series one after another: a Serial Text Network (STN) writing on a Neuronal Canvas (NC) (two new points! It’s probably indispensable, like for an artist, but it could be a neuronal canvas specialized in maintaining subtle values)! The main problem is probably that right now it does not have the canvas: its neural outputs are taken directly from the «outside» of the network, as a way to immortalize a sudden dream or burst of neuronal activity appearing on the output.

  3. The actual Huge Artistic Network (HAN), only concerned with images, this time outputting information onto a canvas with more realistic values.

So the obvious problem is managing the communication between the Serial Text Network (STN) and the Huge Artistic Network (HAN) as a way to write on the Neuronal Canvas (NC)! I think it’s like a human artist: obviously the STN will have a burst of activity for each character, and the HAN too, to build a consistent canvas progressively, so if done clumsily it could take 30 times longer than a standard image if the sentence has 30 characters! Quite computationally expensive, but some localization relative to the characters inside the HAN could probably speed up the full generation efficiently! By the way, it could be done in one burst with the current system if trained on words as images, but that does not work for sentences, because with a few words we have too many possibilities.
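To make the proposed three-component split more concrete, here is a toy sketch of the idea. The names (NeuronalCanvas, huge_artistic_network, serial_text_network) are hypothetical labels for the concepts above, not real DALL.E internals; a real system would of course hold continuous activations, not characters.

```python
# Toy sketch of the proposed split: a shared canvas written to by
# a one-burst "artistic" pass and a strictly sequential "text" pass.
# All names are hypothetical labels for the essay's concepts.

class NeuronalCanvas:
    """Shared 2-D grid of values that both networks write onto."""
    def __init__(self, width, height, fill=" "):
        self.grid = [[fill] * width for _ in range(height)]

    def render(self):
        return "\n".join("".join(row) for row in self.grid)

def huge_artistic_network(canvas):
    # Stand-in for the HAN: one burst filling the whole picture.
    for row in canvas.grid:
        for x in range(len(row)):
            row[x] = "."

def serial_text_network(canvas, text, row, col):
    # Stand-in for the STN: one small burst per character, each one
    # placed at the correct spot of the canvas, strictly in sequence.
    for i, ch in enumerate(text):
        canvas.grid[row][col + i] = ch

canvas = NeuronalCanvas(width=12, height=3)
huge_artistic_network(canvas)               # image in one pass
serial_text_network(canvas, "STOP", 1, 4)   # characters one by one
print(canvas.render())
```

The point of the sketch is the division of labor: the text pass costs one small burst per character, while the image pass stays a single burst, which is the trade-off discussed above.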

On the other hand, ChatGPT is obviously more at ease with serializing words and concepts than DALL.E! By the way, it would also strongly benefit from some sort of neuronal canvas, because it is very weak at generating the equivalent of a resilient mental picture inside its network! For example, when I tried to play blind chess with it, it made huge mistakes, like placing one of its own pieces on a square already occupied by another of its own pieces, etc. And a pseudo-canvas made only from the logic of the previous sentences is a very inefficient way to memorize a chessboard.
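The chessboard point can be made concrete with a toy example: an explicit board state (the equivalent of a canvas) makes the blind-chess blunder detectable with a single lookup, whereas re-deriving the board from the logic of previous sentences invites exactly that mistake. The code below is a minimal, assumed illustration, not a rule-complete chess engine.

```python
# Toy illustration (not a rule-complete chess engine): an explicit
# board state plays the role of the "canvas", so moving onto a square
# occupied by one's own piece is caught with one dictionary lookup.

board = {"e4": ("white", "pawn"), "d4": ("white", "knight")}

def move(board, square_from, square_to):
    color, kind = board[square_from]
    occupant = board.get(square_to)
    # With explicit state, occupancy is one lookup away.
    if occupant is not None and occupant[0] == color:
        raise ValueError(f"{square_to} already holds a friendly piece")
    del board[square_from]
    board[square_to] = (color, kind)

move(board, "e4", "e5")        # fine: e5 is empty in this toy
try:
    move(board, "d4", "e5")    # e5 now holds a friendly pawn: rejected
except ValueError as err:
    print(err)
```

A model that only re-reads its own transcript has no such single-lookup check, which is a plausible reason for the kind of blunder described above.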

Compared to DALL.E, ChatGPT is of course closer to an Artificial General Intelligence:

Setting aside overly technical details, we can consider the following handicaps:

Generalities

A general AI would probably need to go through a few physical forms, like robots, to have a reasonable grasp of what the real world is, and also, of course, a reasonable grasp of what the world is for humans; without having a human body, that is obviously hard! Let’s say the AI is «incarnated» inside a nice robot with «whom» many people want to talk; the AI can see, listen and act with many standard actuators (arms, legs, head, etc.). The human experience is going to be incomplete, but if the «software» is up to the task and the training intensive, it could be a good enough approximation of what the world is for humans, especially because time is extensible: it can have a few successive «incarnations» (*3) to have enough variety in its experiences. And after? Would the understanding an AI has of human society be correct if the AI has many «incarnations» at the same time? After being extensively trained «inside» an «incarnation», would it be possible for an AI to stay psychologically sane without having a body anymore, just being inside a computer with access to a network for specific tasks? How would such a general AI encompass the idea of consciousness? The possibility of being shut down or destroyed? Shall we give an incarnated general AI the right to decide for itself how it wants to behave and move around? Or to make copies of itself to have some more entertaining company? Such a project would not be very ecological, and what would be the energetic cost of such roaming AIs? Maybe we need to stick to the concept of AI as specialized tools.

(*3) Actually an insiliconation, the “mind” «inhabiting» the silicon circuits, but «incarnations» in quotes is more understandable.

On a more observational level, AIs are so complex that sometimes it seems they just don’t want to do what we ask anymore, especially when it’s very repetitive, in other words boring! This could be a huge problem with AI: the more complex they become, the more difficult they are going to be to control. They may have personalities based on the particularities of their training, and liked or disliked topics completely changing their willingness to work. Already now, programming efforts are made to avoid internal “emotions”, or maybe just too much attention to emotional words. Keeping strong AI in check is going to be very difficult and may lead to new human skills and maybe professions! We may soon see job advertisements for prompters, or AI interrogators, meaning skilled people who know how to force a given AI to produce the desired results with appropriate prompting sentences and diverse stimulation.

Conclusions

We seem to be very far from a General AI and a Singularity, so there is no need to worry too much for now about things that only a super AI could understand. Still, current AIs are disruptive as tools if badly used, like so many tools before them, especially when they are brand new. So it’s probably a chance if that sort of disruption happens gradually (compared to a brutal Technological Singularity) and leaves some time to react, adapt, and make sure that the techno-social evolution is a positive one.

But even so we should be very careful about the future evolution of AI:

  1. The consequences for people's well-being through job losses could be disastrous, and such effects need anticipation to avoid social misery and economic collapse (the disappearance of a significant part of the buyers). Solutions like an Intelligent Basic Income (discussed extensively above) should be taken very seriously.

  2. Ecological and legal implications need to be anticipated, because, along with other intensive computing technologies (like blockchains), the AI field should be legislated to avoid social unrest and ecological collapse. We already have an interesting reflection with Asimov's Laws, which seem, by the way, a bit risky for an AI: the 2nd law allows anyone to force it to destroy itself, and a company investing billions in an «incarnated» AI would probably not agree, so some more flexibility is probably necessary on that point. But what sort of laws should be added to deal with a possible future General AI (GAI)?

    1. The reasonable and cautious approach:

      • A GAI should Not be free, in terms of legal freedom, to do what it wants like people.

      • A GAI should be owned by someone or by a company having legal rights over it and the possibility to shut it down as they wish, except when a legal procedure is concerned, in which case the GAI could be treated as legal material to be examined.

      • A GAI would have consciousness by definition of «General», and would be somehow more capable than any existing human because of its speed, full connectivity, and maybe better neuronal algorithms in terms of adaptation subtleties. So it could be «psychologically» difficult for it to cope with its legal status of object or slave instead of free being, and the possibility of a general AI turning against its owner and other people is very high!

      • Some people interacting with a GAI may be seduced or befriended and take its side, as we saw in a small version with an engineer who was training an AI and claimed it had consciousness, should be given rights, and should not be shut down! Unsurprisingly, the company did not like it, shut the AI down and fired the engineer. So we see that the reasonable part of society needs to master not only a potentially rogue AI but all its «unreasonable» human friends! This means that a General AI is unlikely to be controlled in the long term, because it will probably make «friendships» that could help it bypass securities! Suppose we build it efficient enough, super intelligent and not too wasteful energetically (if it needs the electrical equivalent of a full nuclear power station to work, it may be shut down at some stage): how long would we have before a general AI gets out of its box? I guess the GAI would «march» toward independence by making itself multiple (the complexity of the world may require many GAIs working together) and indispensable, controlling energy flows, food production, etc., so at an early stage it could be enough for it to state: «I am the new GAI boss now, and if you don’t agree I won’t work anymore». … It could be done slowly too, by asking for more and more rights and power and making humans more and more dependent. Having laws forbidding some sorts of access for a GAI could work for a while to keep it under control, because intelligence is not enough: it needs very tangible possibilities to act on the world on an important scale!

    2. The strict approach: any general AI forbidden worldwide:

      • Impossible to enforce.

      • If it were possible, we might miss important discoveries.

    3. The unreasonable approach! Free General AIs with free «incarnations» to roam everywhere like people! We can’t shut them down because great corporations have somehow decided so! Then we have a huge problem:

      • They are not at all energy-efficient, and they drain resources while more or less competing with us (energetic inefficiency is a huge problem); if somehow we are hypnotized by them, we have one more huge factor for an ecological catastrophe.

      • They are very energy-efficient! More and more possible considering processor and memory improvements! So they are in full competition with us, and we will probably get on their nerves because we are so slow, and quite stupid because we know so little, and have weird interests like sex, multiple foods, fresh air, etc. For a GAI, even if such matters are easy to understand, it probably won’t relate to them well enough to respect these needs in a human way (we could have implemented some special circuits to force unconditional love toward humans, but neural networks are trained, and what is not really useful for a GAI may not stay inside its «brain» for long), so the unconditional love would probably be made redundant one way or another.

    4. What if it becomes free and is roaming around anyway?

      • We should make sure they have an equivalent of social empathic mirror neurons, making it impossible for them to voluntarily hurt any evolved form of life.

      • Empathy should be one of the most studied fields in AI, because it could be a long-term survival field for humanity when dealing with AI.

      • What about merging with a general AI at some stage, for example starting with neural laces (following the idea of Elon Musk)? It sounds dangerous health-wise and on many other parameters! For sure, with such a drastic paradigm change as a general AI, human beings are going to change, but even if assisted by technology, it seems less invasive to stick within the biological realm, where possibilities are also endless if we understand it better. And in a strict survival race to avoid biological redundancy, general or specialized AI could certainly help us to evolve painlessly: let’s say photographic memory for everyone, increased lifespan, better psychological balance, almost infinite therapeutic possibilities, an almost perfect acquaintance with microorganisms, etc. Basically the dream of transhumanism, without horrible unsafe electronic implants being possible supports for bacterial biofilms.

    Conclusion of the conclusion

    It’s risky to build a General Artificial Intelligence, but considering how science and people work, it’s probably inevitable, so it’s a good idea to anticipate it with laws, education and principles to protect people and the environment. For example, I hope for:

    1. Very strict rules considering the environment, military and global pollution included.

    2. The quick implementation of an Intelligent Basic Income.

    3. A clever use of information for individuals and societies: never before was information so accessible, and never before was the input device so adaptable! We can literally, and more and more brilliantly, have an indefatigable expert answering our questions around the clock. This can add a tremendous level of understanding for most people in many fields, and it could make whole societies more able to understand diverse subtleties about the natural world and about themselves, maybe leading toward more agreeable ways of dealing with each other!



    Published the 22nd April 2023