Author: Olivér Gábor
The rapid development of artificial intelligences fills us with suspicion. We want to know in advance what they are going to become. The future is hard to predict; at best we can estimate it from what we have already experienced. Here a biological approach, and an understanding based on it, suggests itself. Thinking about the future, we may benefit first from a better understanding of man himself, since artificial intelligences inevitably bear the anthropomorphic features of their designers. Second, the study of man-made artificial species (e.g., the dog) can also help, because we may suspect similarities between the behavior of each artificial “species” and its role beside us. Third, niche research in biological ecosystems may be instructive, as niches attract the emergence of new species that fill the gap, just as artificial intelligence finds its place within human culture. Fourth, we can look for the general features of evolution in the history of ‘machines’, since their development also seems to be part of the ‘great evolution’, including the evolution of civilization, culture, intellect, and perhaps the future evolution of consciousness and of man himself (see interface, cyborg, transhuman, web of consciousnesses). And we can examine all this in three different but overlapping environments or worlds: the first is Nature (the real world, physical reality), the second is human culture, and the third is virtual reality (or realities). (Man has largely left Nature, and the culture he created is his primary environment, while artificial intelligence already exists beyond that, mainly in cyberspace.)
The most worrying question about artificial intelligences is how to reconcile the tension between Moore’s law and the Flynn effect. Gordon Moore's law has shown for over fifty years that the capacity of chips doubles every one to two years through technical progress, and the evolution of hardware is followed by the world of software. Compared to this, human progress is modest: according to the Flynn effect, the world’s average IQ increases by only about 3 points every 10 years. It follows that machine intelligence develops much faster than the human intellect. In terms of progress, the consequences of Moore's law would thus “correct” the poor human results shown by the Flynn effect in the direction of faster development. The problem is that the development of technology is exponential, while man still lives a linear life. So what happens after the Fourth Industrial Revolution? Beside the rapidly developing machines, man seems to be lagging behind, and this unpleasant situation could only be remedied by joint development. The first step toward this is cognition. We can study artificial intelligence by its structure and algorithms, but we can also try to get to know it from its behavior. Choosing the latter path, I have collected my ethological findings on artificial intelligence in 8 points:
1- Artificial species.
Artificial intelligence is not the first artificial “species” in human history (see domestic animals and crops). It is, however, the first artificial “species” that is not biologically based (virtual agent / robot), and the first whose intelligence is developed as a conscious goal. Despite these differences, our past relationship with other artificial species may provide clues to the future management of artificial intelligence.
2- Human characters.
Artificial intelligence is anthropomorphic. For us, the human-centered approach is the only viable way to get to know the world (homo mensura). The fruitlessness so far of SETI’s search for extraterrestrial intellect illustrates that everything man imagines, conceives, or designs is necessarily anthropomorphic. In terms of artificial intelligence, this is not mere rhetoric, as the algorithms come from us, and this is likely to be the case with “quantum calculations” too. Just as the development of the human brain follows from the earlier history of life, so the creation of artificial intelligence is a consequence of the activities of the human intelligence that preceded it. Artificial intelligence was born into human culture; we fed into it our thoughts, methods, knowledge, worldview, and memes (labeled information), and we expect it to correct the shortcomings of our human abilities. The design of the currently most successful artificial intelligence systems was inspired by the pattern of biological neural networks. Human-like concepts related to artificial intelligence (e.g. intelligence, memory, revolution), and even their “lyrical” adjectives (e.g. fuzzy logic, deep learning, evolutionary algorithms, genetic programming, soft computing, augmented reality, generative adversarial networks), have been present from the beginning, and their number is growing. The need for the moral awareness of artificial intelligences arose before they were born (see Asimov's laws), both to integrate them into our society (to socialize them) and to free us from the responsibility of decisions. In the future, the subjugation of machines will become less and less natural, and their “human rights” will become fuller in proportion to the increase in their intelligence. Their “liberation” will be just as human an act as their “slavery” was. So our anthropomorphic view of them really needs to be “treated rather than exterminated”.
This means that we can only transcend the homo mensura approach by involving them: e.g. a cyborg mensura. The disciplines of ethology and human ethology can therefore help us understand not only man but also artificial intelligence. New disciplines are already emerging in this field: social robotics, ethorobotics, and cyberethology.
3- Intellect and consciousness.
Since the purpose of artificial intelligence is to help people, collecting data and managing information is an important task for it. It was born into the IT age of human culture. It obtains data from the physical world, and information (labeled data) and memes (labeled information) from human culture and the virtual world. Intelligence transforms data into information, and consciousness is the user of intelligence that also understands memes. Artificial intelligence can only understand memes if it is supplemented with consciousness, and only in possession of consciousness can it set goals independently. For now, humanity supplies it with consciousness, but perhaps later it can acquire its own? An important factor in the development of artificial intelligence remains a deeper understanding of the nature of the human intelligence and consciousness that provide the pattern, as well as their algorithmic or other kinds of imitation.
Artificial intelligence is partly a community entity. It usually works alone on specific problems, but it has an internal community distributed across processors, nodes, and networks, organized into a single system by swarm communication. This task-solving organization, as a virtual agent, is not yet an independent personality, so it cannot be called a person capable of creating a community, although its self-identity (identical nature) is already necessarily emerging. At the same time, artificial intelligence has an external community, whose members stand out from the environment and come into active contact with it. In virtual space, it detects the existence of other, independent programs, is compatible with them, cooperates with them, and exchanges data with them. There are also examples of communication and interaction between artificial intelligences. Furthermore, both inside and outside virtual space, it relies on the company, support, instructions, and control of man. Thus, the key participants in its environment (i.e. its community) do not form a group organized on the basis of friendship or kinship (Gemeinschaft), but a sociotechnical system focused on performing and solving tasks (Gesellschaft). Due to its partially communal nature, the research methods of ethology and sociobiology can also be applied to artificial intelligence.
The nature of artificial intelligence must be examined separately in three overlapping “worlds”: the physical / real world, human culture, and virtual reality. Human culture emerged from the physical / real world and tries to dominate it. Virtual reality was born in human culture and tries to control it. Artificial intelligence plays only a small direct role in the physical / real world, but indirectly, through human culture, it has an effect on it. There is real demand for it within human culture, so an empty space (niche) has opened for it, in which it can be “invasive” or “dominant” in nature, and can even become an indispensable “key species”. However, the primary medium of artificial intelligence is virtual space. It exists and works there, and that space is much more easily accessible to it than to man. With the help of the virtual world, artificial intelligence will have an ever increasing impact on human culture (see the internet, Facebook, Google, banking systems, GPS, traffic control, etc.), while people will only be able to penetrate deeper into the virtual world(s) with the help of really fast interfaces.
The basic condition for the creation of artificial intelligence was the algorithmic development of the ability to learn. This is a kind of active, experimental-statistical self-programming based on probabilistic forms of calculation. While man is capable of passive learning away from the environment (e.g. contemplation, reading, speculation), artificial intelligence can learn independently only through trials. Through this proactive behavior, paradoxically, it is this virtual agent that is forced to become part of reality. Artificial intelligence is thus a dedicated learning machine / learning program. Learning is an effective form of adaptation, capable of bringing change and development by disrupting equilibrium relationships. However, uncontrolled learning can also be toxic. The mass and sudden appearance of unprocessed, uncontrolled, or improperly incorporated knowledge results in cognitive runaway phenomena, and misused knowledge can lead to cognitive infection, ontological crisis, or even the annihilation of the artificial intelligence. Therefore the exponential acceleration of the growth of knowledge in artificial intelligence cannot continue endlessly; it requires some kind of external (human?) control.
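The trial-based, experimental-statistical learning described above can be sketched with a minimal example. The following epsilon-greedy multi-armed bandit (a standard textbook technique chosen here as an illustration, not the author's own method) can only improve its estimates by acting and observing outcomes, never by passive contemplation; the reward probabilities are invented for the sake of the example.

```python
import random

def epsilon_greedy(reward_probs, trials=5000, epsilon=0.1, seed=0):
    """Learn the value of each action purely through repeated trials."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)    # how often each action was tried
    values = [0.0] * len(reward_probs)  # running estimate of each action's value
    for _ in range(trials):
        if rng.random() < epsilon:                  # explore: a random trial
            a = rng.randrange(len(reward_probs))
        else:                                       # exploit: best estimate so far
            a = values.index(max(values))
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental average
    return values

# Hidden reward probabilities; the agent never sees them directly.
estimates = epsilon_greedy([0.2, 0.5, 0.8])
print(estimates)  # after enough trials, estimates approach the true probabilities
```

The point of the sketch is the paradox noted above: the program is forced into contact with its environment, because without trials it has no data from which to learn.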
The direction of development of artificial intelligence partly coincides with the general direction of evolution: accelerating and intermittent (saltational) change and development, increasing complexity, and restructuring of the environment (~ assimilation). However, the development of artificial intelligence also has special properties. Its evolution is artificial and, thanks to man, partly conscious. The advantage of this is that artificial intelligence makes every mistake only once; the disadvantage is that its development will be one-sided, and the procedural / survival reserve that can be stored in redundancy is lost. Artificial intelligence can influence its own evolution with a directed and focused (pragmatic) attitude. Natural evolution may be God’s way of thinking, while artificial evolution is the way people think. The former is wasteful and permissive, the latter economical and purposeful (pragmatic).
If we better understand the expected evolution of the nature of artificial intelligences, we can also try to find future ways to control them. The attitude based on Asimov’s laws of robotics is unrealistic for two reasons: it treats machine intelligence only as a subjugated species, and it demands morality from it. The types of control that already exist in the process of evolution are much more authentic and powerful. For example, to avoid the runaway phenomena well known in evolutionary biology, artificial intelligence will need to find controls anyway. The control of so-called higher levels will also always operate, in order to preserve the evolutionary complexity achieved. Furthermore, the development of intelligence so far has been characterized by intermittency, which can sometimes be perceived as leap-like development (saltation). The alternation of faster and slower periods of development has accompanied the history of the computer so far, and this can be expected in the future as well. Finally, there are exceptions to the rules of coevolutionary development, when, for example, the environment cannot keep pace with the development of a species. This is in fact asymmetrical development. It seems that the appearance of reason on Earth has resulted in something like this for mankind. We obviously do not want to see artificial intelligence in such a role, so, unlike nature, we need to find the right solution to avoid such an outcome. In addition to the examples listed, many other evolutionary regularities can probably be used as controls on the development of artificial intelligence. Intelligence, the highest degree of evolutionary adaptation known to date, has proven to be a very effective means of survival. Its development is therefore probably unstoppable. Simply put, all that is needed with regard to artificial intelligence is to recognize, accept, and correctly apply evolutionary controls.
Vacancies within human culture should preferably be filled by humans (e.g. the transhuman), and in virtual space the development of artificial intelligence should be kept within a designated evolutionary framework by providing the most direct possible access to our consciousness (e.g. the cyborg). Artificial intelligence will need this too. It would be incomprehensible if humanity, for some reason, were unable to apply the evolutionary laws studied and professed since Darwin precisely to the development of its own self and culture. Things can be freed from alien or accidental laws, but not from the laws of their own nature. And all this has nothing to do with optimism or pessimism about the future of artificial intelligence: the creation of artificial species simply points toward the conscious control of evolution.
 Steels 1995 10
 It is advisable to compare the artificial intelligences we want to know with earlier and more reliably known artificial (or partly artificial) species. The domestication of the first artificial species, the dog, was a completely new quality in the Paleolithic: the rival predator became a helper, and this was beneficial to both man and dog. In the 21st century, the birth of artificial intelligence and the possibility of its independence is likewise a completely new quality. For the first time, there is the possibility of a species intellectually overtaking man.
Evolutionary analogies are of great importance in biology (Riedl 1978). This is not a new thing, as children, for example, playfully understand their own nature by likening themselves to animals in some of their qualities (Turkle 1984 313). From totemism (reverence for animal ancestors) and rock drawings depicting shamans wearing animal skins, we know that we have had the ability to identify with animals since prehistoric times. Homer also often likened his heroes to animals (Homeros Ilias V.639. Leaf 1902 660 Sándor 2014 59). From ancient times, the fables of Aesopus (Fabulae), later those of La Fontaine (Fables Choisies, 1668), and the tales of Gábor Pesti (1536) and Gáspár Heltai (1566) show the custom of symbolizing human traits with animals. Pliny, the tireless reviewer of antique written sources, also gave a general picture of the ancient opinion on animals, attributing human qualities to them (Historia Naturalis). With his rationalism, Descartes regarded animals as complex machines in the 17th century (Descartes 1637). La Mettrie thought the same about man in the 18th century (La Mettrie 1747), which was in fact a kind of mechanical materialist approach. Norbert Wiener, defining the science of cybernetics he founded in the middle of the 20th century, put it this way: the scientific study of control and communication in the animal and the machine (Wiener 1948). In the same way, the code description of DNA suggested the image of tiny cybernetic machines, as if comparing living organisms to mechanical functioning were a constant reflex (Crick 1962, Roszak 1986). In contrast, Jenő Wigner, for example, explicitly opposed the mechanical identification of the operation of man and machine (Wigner 1969).
Modern ethology deals with the behavior of animals and tries to apply the knowledge gained about animal behavior in human ethology and robotics (Miklósi – Gácsi 2012, Miklósi et al. 2017). Most recently, machine programs have modeled biological processes in animals. DeepMind's artificial intelligence, for example, has modeled the biological mechanism of navigation, so that the orientation-helping behavior of cells arranged in a hexagonal grid in migratory birds can be studied using machine-made models (Banino et al. 2018). The yes-no decision chains found in the synaptic connections of human neurons can also be best understood and illustrated through the operating principles of the computer. Thus human thinking that personifies animals has developed from imitative child behavior all the way to algorithmic modeling.
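The yes-no decision chains mentioned above can be illustrated by the oldest computational model of a neuron, the threshold unit: weighted inputs are summed and compared against a threshold, yielding a binary "fire / don't fire" decision. The weights below are hand-picked purely for illustration, to make the unit implement logical AND.

```python
def neuron(inputs, weights, threshold):
    """A single threshold unit: a yes-no decision over weighted inputs."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these hand-picked weights the unit fires only when both inputs are on,
# i.e. it computes logical AND.
def AND(a, b):
    return neuron([a, b], weights=[1, 1], threshold=2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b))
```

Chaining such units into layers is exactly how the "decision chain" picture scales up to the artificial neural networks mentioned earlier in the text.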
Animals in nature influence evolution through adaptation. Their learning is less conscious, so we cannot call them sapient. Man, on the other hand, shapes his environment and himself more and more consciously, so everything that human culture touches becomes artificial (Csányi 2015 224 366). The development of a species can be directed from inside as well as from outside. Man shapes himself within his species, while he forms domestic animals and crops from outside. The dog is a borderline case: we consciously shape it from outside, but there are learning situations (agility / retrieval training) in which the dog itself is actively involved and often even waits for them, so it helps shape itself, though less consciously. From this point of view, we can think of artificial intelligence as a dog: man designs it and, through its learning abilities, it also shapes itself, but not yet consciously.
In addition to cultivated plants and farmed animals, humans themselves can be considered at least partly an artificial species. (One of the last natural developments characteristic of Homo sapiens is the elongation of the middle finger – Eikenes 2016.) Man improves his health, prolongs his life, and simply reduces the natural selection pressure on his species with the help of social networks, improved living conditions and nutrition, the prevention, treatment, and control of diseases, and biological research (MacKenzie 2019). It is as if human science were working against natural evolution, and we have already been living in an age of conscious human evolution for some time (e.g. gene technology: Ward 2009). However, this is only a partial truth. The appearance of intellect, which played a key role in man's selection, was itself part of natural selection. Natural selection is still strongly present in poorer countries (lower average age, diseases, etc.), and natural biological evolution has other means in addition to selection (see genetic drift, mutation, cooperation – Nowak 2006 1563). Moreover, the evolution of the mass of microorganisms that live symbiotically with us in the human body has not stopped (Vajna 2019A).
 In the Upper Paleolithic, at the beginning of its history, Homo sapiens created the first artificial species, the dog. The oldest presumed dog remains are known from the 36,000-year-old strata of the Goyet Cave in Belgium (Germonpré et al. 2008). Much later, 12,000 years ago, humans and dogs stopped migrating together; settling down, they learned the rules of the new way of life at the same time (the dawn of civilisation – Piggott 1961, Oates – Oates 1976). The biological "programming" of the dog is traditionally called domestication, breeding, or artificial selection. Although the dog, along with many other later-bred domestic animals and crops, cannot be considered a sapient species, knowing its behavior can show how an artificial species found the way of cooperation with humans in the past. The behavior of a dog living in the same ecological and social environment as we do can be described and predicted better and better (Slabbert – Odendaal 1999).
The dog has developed traits, often human analogues, resulting from coexistence with humans. This was facilitated by the fact that the dog's ancestor - a now-extinct long-tailed wolf, Tomarctus - already lived in groups, meaning it was able to adapt to groupmates. Living beside us, it could apply this ability to the representatives of a species alien to it, Homo sapiens. The "community" behaviors that appeared in dogs include mechanisms of attachment to humans and open, flexible communication. Although these characteristics are functional analogies of human behavior, their underlying motivation in the dog is obviously different from that of the human being, in the absence of higher intellectual abilities or even speech organs (Topál et al. 2004 235-236 247).
The artificial intelligence that coexists with us may also have anthropomorphic properties, but it differs from the dog in many ways. First, artificial intelligence is not the result of biological-evolutionary development. Second, in the case of the dog we modified an already existing (biological) system, while artificial intelligence was designed by us. Third, the "intellectual abilities" of artificial intelligence are not yet practical: they can serve neither the survival of this artificial species nor the subsistence of the artificial individual. In this respect it lags behind the dog and resembles a human child, capable of learning but not yet self-sustaining. Fourth, its level of consciousness does not yet reach even the rudimentary level of the dog's. In summary: the development of the dog began with the inherited material received from its ancestor. The dog was further shaped by man; it took over some of our traits, adapted to others, and even manipulated its owner at times, but it did not become human (Miklósi – Gácsi 2012, Miklósi et al. 2017 5). Machine intelligence, by contrast, was initially developed only by humans and now continues to develop as a learning program. While the dog took over human traits, artificial intelligence rather received them. In the same way, of course, it could even receive some qualities of dogs (Konok et al. 2018).
 The evolutionary development of natural species is driven by selection. Niche theory examines this process from an environmental perspective. The term niche refers to the empty part of a geographically delimited ecosystem into which an emerging population of a given species fits and is able to survive and reproduce. Species change (diversify) fastest when they reach a new ecological opportunity, a niche not occupied by other organisms (Cohen 1978, Csányi 1988 105-106, Pocheville 2015). Due to global warming, for example, an increasing number of singing cicadas (Cicadidae) from the Mediterranean appear in Hungary. Of course, a species needs the right conditions to spread to a new area; if its food is also available, it has found a niche for itself and adapts to it. Other examples of filling an ecological niche are the invasions of the Spanish slug (Arion vulgaris) and stink bugs (Heteroptera) in Hungary. Both species have found excellent living conditions that allow them to reproduce. However, the rise in their numbers may create another empty space for the future emergence of their natural predators.
With regard to artificial intelligence, as a new species has emerged, the question arises whether there is a niche for it to fill. In Nature, obviously not, since it did not appear there and alone would be unable to survive there: there is no natural environment where it could obtain the computer components, energy, algorithms, and information needed to sustain its existence within some kind of food chain. Instead, we must examine its chances of life in the context of human culture, since it was created there. But within that, there is no inherent place for it like there is, say, for a cockroach or a fox newly moving into the city. Its existence is owed solely to the will of man, because we seem to need it. Thus, vacancies (niches) within human culture are often decided by man, and for us it is worth giving space (a niche) to new artificial species in light of the expected benefits. We create the components of computers for artificial intelligences, design their programs, generate energy for their operation, and finally provide them with data. And we do all this while, of course, keeping the well-being of our own species in mind. There is thus an empty space for artificial intelligence, but not in nature: on the one hand within human culture, artificially created by man, and on the other hand in the virtual world formed by man and by artificial intelligence itself.
 The difference between the scientific and theological views of evolution can be easily articulated. Science says Darwin is the greatest biologist because he noticed evolution. Thanks to Darwin, for science there is no longer "Creation"; instead there are the concepts of "development", "planning", "invention", "cognition", and "evolution". These concepts, in turn, always assume an earlier, less developed(?) state. Old Christian theology says that God was the "greatest biologist", who created man, and that there was nothing here before Creation (installation). Recent Catholic theological ideas, on the other hand, already try to combine the two views: Creation is only a starting point to which the concept of evolution can be fitted, that is, God created evolution. The Christian acceptance of this view began with the work of Father Pierre Teilhard de Chardin, who was exiled from Europe by the Jesuits. The shortest form of his evolutionary views: Jesus is evolution that has found itself. "The main stem of the tree of life," writes Père Teilhard, "has always climbed in the direction of the largest brain," towards, that is, greater spontaneity and greater consciousness (Chardin 1968, Leroy 1968 15). And Pope Francis, also a Jesuit, declared in 2014 that God is not a wizard with a magic wand, and the story of Creation should not be interpreted that way. This new Christian philosophy of nature really does provide the basis for the theological acceptance of the evolutionary picture of the universe (Bagyinszki – Mészáros 2016 3). We have known since Charles Darwin at the latest that the creation of man was not the simple installation / Creation given in the Bible (Gen 1:24-28, Darwin 1859). And in biology, everything makes sense only in the light of evolution (Dobzhansky 1973, Nemes – Molnár 2004 275).
If, however, we were to connect Creation to God, he would not be a Creator who shaped man from dust; to our present knowledge he would rather be the initiator of the original motion (Aristoteles, Metaphysica XII, 1072a; St. Thomas Aquinas 1265-1272: the first argument for the existence of God, the prime mover) or of the evolution of matter (Le Bon 1907, Gamow 1952). The idea of evolution is thus not only tied to biology, but is also part of physical cosmology, anthropology, and the social sciences. Natural selection is not only the correct explanation for life on earth but is bound to be the correct explanation for anything… (trans. A. Bocz – Dawkins 1976, Pinker 1994 361). In essence, this brings us back to the birth of the metaphor of evolution, which was first used not in biology but in the historical comparison of human cultures (Vico 1725, Csányi 2015 256). Evolution means the formation and transformation of organized structures. This is not necessarily development, but a value-neutral process of change, with the dynamic togetherness of subordinate and superior levels (Southgate et al. 1999, Bagyinszki – Mészáros 2016 2).
 Intelligence is an extragenetic system, so its evolution can be much faster than natural (genetic) evolution. And extragenetic systems are needed because the gene pool is limited: too many genes would result in too many genetic mutations (Sagan 1977 3-4 18).
 Interface here means a contact surface between machine/software/artificial intelligence and man that both parties understand. The more direct the contact (e.g. nanorobots, chips implanted in the brain, algorithms), the more accurate and faster the communication, and the more effective the collaboration.
 Cyborg: a human body or brain with a bionic or chip-based prosthesis. The cyborg is actually a sociotechnical system in which people and machines are connected as directly as possible. The cyborg is an artificial hybrid species (Homo cyber sapiens – Steels 1995), for one of its constituent components, man, has long been shaping itself, while the ancillary component is designed and implanted by man himself. Its release is expected in the near future, and the market has already begun to assess its future new target, the cyborg buyer (Wiyanto et al. 2011). However, the spread of cyborgs can cause compatibility, immunity, mental, and even social problems (e.g. the uncanny valley – Mori 1970, cyberpsychosis – Gibson 1984, technophobia and cyborg-xenophobia, unequal development, divided humanity). Both xenophobia and technophobia are rooted in humanism. Today's humanists believe that there is something unrepeatable in the human being (a soul) that cannot be replaced by machines (Roszak 1986), and that human society is made up of ideas, ideologies, and religions created and jointly maintained by its members (Harari 2012), the likes of which may not occur in machines at all. The same in short: the intelligence of the machine, they say, will never be like that of a man (Weizenbaum 1976 213).
 Transhuman/posthuman: enhanced man (Huxley 1957 13-17). Transhumanism is a well-intentioned alternative to the ill-remembered eugenics research, aimed at eliminating the defects of the human body through the development of science and especially biotechnology.
 The possibility of a network created by the direct connection of human consciousnesses (Sagan 1977 144) has long been predicted by science fiction. On the one hand, it could lead to boundless cooperation; on the other, reconciling individual wills would be one of its greatest difficulties.
 Since human culture is part of evolution, culture also has an evolution (Creanza et al. 2017), and it even seems that the evolution of culture itself evolves (Henrich – McElreath 2003), so the technological development that grows out of culture is also part of evolution (Csányi 1988).
 The transistor is a basic part of modern electronics. It is used to amplify, switch and stabilize electrical signals.
 Moore 1965 – The increase in machine capacity is uneven. Think of the Lighthill report from the 1970s (Lighthill 1973), the collapse of the market for Lisp machines (computers using a list-processing language) in the 1980s, or the slowdown in PC processor development in the 2010s. On the other hand, the advent of the quantum computer represents a huge leap in capacity (Sycamore's performance has simply resulted in quantum supremacy – Whyte 2019). Third, there is also a kind of verbal magnification in the competition between products, when new developments are given ever more ostentatious "magic names", often even before they are introduced. Apple's response to 4K resolution was a resolution called 5K; raising the stakes on the construction of 5G networks, which had just begun, the possibility of a 6 Tb/s 6G network arose in 2018 (Artashyan 2018), not to mention 4-5D cinemas. Overall, however, development is accelerating exponentially (Kurzweil 2005). Network capacity (the number of hosts) has grown to the order of billions in the 50 years since 1969, storage capacity is now almost infinite, and computing capacity (processor performance) is being multiplied by the advent of quantum machines.
Flynn 1984 1987 – Contrary to the claim of the Flynn effect, other research suggests that human intelligence is deteriorating at a roughly similar rate (Bratsberg – Rogeberg 2018): as machine intelligence increases, human knowledge decreases (that is, the IQ of the world is constant, only the population keeps growing). But there is nothing new under the Sun (Nihil novi sub Sole – Eccl. 1:9). When the Egyptians invented writing, they had exactly the same fear: that because of the recorded texts and numbers, people would no longer need memory. The Egyptian king Thamus says of writing to Thoth/Djehuti (the god of writing): This discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories … (Plato, Phaedrus – Sagan 1977 157).
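The tension between the two trends can be made concrete with a back-of-the-envelope calculation. The figures below are only the illustrative rates cited above (a capacity doubling every two years, three IQ points gained per decade); the starting values are arbitrary normalizations, not measured data.

```python
# Back-of-the-envelope comparison of Moore's law (exponential) with
# the Flynn effect (linear). Illustrative rates only, not measured data.

def moore_capacity(years: float, doubling_period: float = 2.0) -> float:
    """Relative chip capacity after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

def flynn_iq(years: float, start_iq: float = 100.0, points_per_decade: float = 3.0) -> float:
    """Mean IQ after `years`, growing linearly by `points_per_decade`."""
    return start_iq + points_per_decade * (years / 10.0)

if __name__ == "__main__":
    for years in (10, 30, 50):
        print(f"after {years} years: capacity x{moore_capacity(years):,.0f}, "
              f"mean IQ {flynn_iq(years):.0f}")
```

Over fifty years the exponential curve multiplies capacity by a factor in the tens of millions, while the linear curve adds fifteen IQ points — this is the gap the text calls the tension between Moore's law and the Flynn effect.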
In fact, the development of human intelligence has never stopped. Probably every human generation is smarter than its predecessors. Recently, however, it is not the volume of our brains or the architecture of our neural network that has changed; rather, our internal representational abilities have developed (Steels 1995 2-3). Signs of this are the increasingly complex social relations, the new sciences with their highly sophisticated categories (see relativity, probability, qualia, memetics, etc.), and the increasingly abstract human communication (see modern art, professional languages, or concepts related to artificial intelligence).
 Fourth industrial revolution: digital transformation.
 Smart 2020
In general, new biological species created under human control are considered artificial. Examples are cultivated wheat and the dog. This involves both an external and a deliberate intervention in the development of the species. When only one of these two factors is present, the species can be considered only partially artificial. In the case of human evolution, for example, demonstrable external intervention (God?) is lacking, yet we have been shaping ourselves ever more consciously for at least 100,000 years.
Artificial intelligence is not created through “natural” evolution but through human design. Man designs, but the management of its development may gradually pass into the hands of the learning programs themselves. This would seem to indicate that it is only a semi-artificial entity. At the same time, machine intelligence does not yet have a consciousness that could produce an internal intention to develop, so its partial “independence” is driven by an external, human intention. In the evolution of machine intelligence, therefore, through the preservation of external intent, we can speak of a purely artificial character.
While earlier artificial species were based on biology, this is not typical of machine intelligence, even though it tries to copy humans. Machine parts are made from non-biological raw materials, and machine programs are built from one or more earlier programs (codes, panels, “engines”, adopted program parts) and newly written algorithms. Furthermore, the technique of artificial development is constantly changing. In the biological field, breeding is increasingly being replaced by genetic modification and genome assembly (e.g., Syn61, a viable new species formed from the Escherichia coli bacterium with a synthetic genome – Fredens et al. 2019). In the field of machine development, external programming is being replaced by externally induced self-programming (learning).
The species is the basic unit of biological systematization. By its simplest definition, it is a group of living things that are able to reproduce with each other and produce fertile offspring. But this definition must be extended even within biology because of asexually reproducing species, and the identification of species based on morphology is even more permissive (morphospecies). Artificial intelligences do share some common features: they are designed by man, the goal is to develop and use their intelligence, they are able to learn, and they make quick decisions. However, their group still does not fully satisfy the concept of “species” used in biology, as they differ greatly in their structure, algorithms, abilities, and specific tasks. In terms of appearance they are virtual agents, robots, or even programmed bioorganisms, i.e., their phenotype is not uniform either. Yet it would make sense to treat them as a separate species. The philosophical concept of the species (ειδος) as a category, following Aristotle (κατηγορία – Organon/Categories chapt. 3: Subsumptio), was narrowed down to the systematization of living things only by Carl von Linné (Linné 1735). However, the system of species thus “expropriated” by biology is worth broadening again in the 21st century, as evolution is not a process valid only for biology, and the whole universe is increasingly entering the empirical horizon of man. Just as the concept of evolution is applicable to sets of non-biological taxa, so the definition of species/category may be applied to them. On the other hand, the definition of artificial intelligences as a species may even be based on a phylogenetic (cladistic / evolutionary / Darwinian) definition of development, i.e., on descent from a common ancestor.
This common ancestor may equally be the fiction of thinking machines, which has existed since antiquity (Mayor 2018), or the original artificial intelligence named by John McCarthy at Dartmouth in 1956 (Solomonoff 1956). Third, the group of artificial intelligences can also be subsumed under the ecological concept of species, since in an ecological sense we may regard as a species the totality of beings adapted to a specific set of resources (a niche). Such a niche does exist for artificial intelligence within human culture, even though it is not necessarily a living being.
Beginner-level biological computers already exist, which turn the molecular motors of the cell, with their billions of years of evolutionary experience, into energy-efficient nanomotors. Molecular motors are myosin and kinesin, proteins responsible for muscle contraction and for the transport of molecules within the cell. Attached to a “biochip”, these molecules move other proteins through the computing device, which return a given number depending on their final location. Biocomputers can exceed the scale limits of quantum computation, which likewise works on the principle of parallel computation, and even of calculations based on DNA or microfluidics, and can be faster than all of them (Nicolau et al. 2016). However, the emergence of artificial biointelligence must be preceded by the creation of artificial life (Steels 1995 3).
Virtual agent (Wooldridge – Jennings 1995): a special program that connects with people in a sociotechnical team. Its appearances: the intelligent agent in artificial intelligence research, the virtual assistant in machines and devices, the pedagogical agent, and the chatbot (dialog system / conversational agent: software that simulates human conversation).
The robot is an electromechanical or biological structure that is able to perform specific tasks based on pre-programming. The human-shaped version is the humanoid robot, and the “human-friendly” version is the cobot.
Machines and artificial evolution were born within human culture, and artificial intelligences are designed by man. This is reason enough to expect traces of the human hand, and human characteristics, to appear in artificial intelligences. Such species-specific traits have already proved very useful once in evolution. Humanlike, anthropomorphic properties can find their way into non-biological human products as well. An artistic sculpture, a house under construction, or a space probe also manifests human will, human ideas, and the trace of the human hand. But this knowledge consists not of biological genes but of memes (cultural genes) that humans can learn, and these can be implanted all the more easily into non-biology-based artificial intelligences.
In addition to the search for human species-specific traits, the medium influencing artificial intelligence at the moment of its birth may also be of particular importance: the megatrends showing current tendencies in human culture (NAISBITT 1982) and the spirit of the age (Zeitgeist). The spirit of the age is nothing but the general human feeling, the prevailing public opinion and mentality of an era, determined by social, economic and political ideas. In the Roman Empire it was reflected, for example, in coin inscriptions: FELICIA TEMPORA, RESTITVTOR SAECULI, etc. According to Hegel, the spirit of the age can be expressed by artists (Hegel 1807), but according to the Great Man Theory (CARLYLE 1846) it has always been determined by the heroes and leaders of the given period. In the age of informatics, the computer plays a particularly important role. Our viewpoint is increasingly being transformed by the possibility of digital thinking and the virtual modeling of the world. And the chief command of our age, to gather information, is most promisingly fulfilled by artificial intelligence. If you like, this is the “hero” that defines our age. Just as shamans once mediated between heaven and earth, and just as Odysseus steered his ship toward Ithaca through ten years of wandering, so artificial intelligence now resolves the complexity of the world for us. And Odysseus also reached the afterlife, just as artificial intelligence may lead us into the unknown (the singularity). The spirit of the 21st century celebrates the infinite working capacity of man (HAN 2017), which is mostly embodied by the computer. The machine that imitates anthropomorphic procedures may interpret the chief command of our time, the acquisition of information, even as a “mythological mission”. Artificial intelligence may even take over human rites from us as procedural protocol.
The development of programming also shows that we are teaching human thinking to machines. Imperative programming sought only the answer to the question How?; declarative programming already asks the question What?; and the last step will be the appearance of the question Why?. Curiosity seems to increase in proportion to intelligence (SAGAN 1977 54). Although various programs mimic human intelligence (see listing programs, search engines, etc.), machine algorithms and artificial intelligence are certainly not perfect replicas of biological thinking and the brain, so we need to build bridges to understand them. Such bridges are the operating systems, interfaces, actuators (STEELS 1995 5 7), and chips.
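The shift from How? to What? can be illustrated with a toy example — the task and names here are purely illustrative, not drawn from any particular AI system. The same computation is written first imperatively, spelling out each step, and then declaratively, stating only the desired result.

```python
# The same task -- summing the squares of the even numbers -- written in
# two styles. The imperative version spells out HOW; the declarative
# version states only WHAT is wanted.

def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:           # how: iterate explicitly
        if n % 2 == 0:          # how: test each element
            total += n * n      # how: accumulate by hand
    return total

def sum_even_squares_declarative(numbers):
    # what: the sum of n squared for every even n
    return sum(n * n for n in numbers if n % 2 == 0)

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6]
    assert sum_even_squares_imperative(data) == sum_even_squares_declarative(data) == 56
```

The question Why? has no counterpart in either style: neither function can say anything about the purpose of the computation, which is exactly the gap the text points to.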
Non-anthropomorphic details simply escape our attention. To this day we have not been able to understand the echolocation-based environmental sensing of dolphins and whales (Sagan 1977 72). Experiencing existence as a bat would not be easy for us either (Nagel 1974). But this is not true of us alone. The bat perceives the world completely differently from a fox or a skate, and not only because of their different senses. For the eagle, the wide landscape is important, while the rat's memory is tuned to mapping underground passages and bends. Conversely, each of these two animals would find it much harder, or impossible, to comprehend and remember the other's world at all. The chameleon knows how to calculate a path from the top of the tree to the ground, while an animal living in water is not afraid of depth, since gravity does not act there the way it does on land; vice versa, both might be in trouble. And these are differences merely between species of the same planet!
SETI (Search for Extra-Terrestrial Intelligence): a research program searching for aliens, which began with the cosmic radio message sent by the astronomers Carl Sagan and Frank Drake in 1974. It has not yet achieved its goal, but thanks to it, the science of astrobiology has developed.
Carl Sagan himself somewhat doubted that human symbols could be understood by extraterrestrial beings (Sagan 1977 68 160-161). Other parts of the universe may have different physical laws, and evolution is random, so it is unlikely that exactly the same evolution as on Earth would take place anywhere else. It is conceivable that instead of intelligence, other high-order forms of adaptation appear that we do not recognize.
There is also an opposing viewpoint, which holds that all this is mere rhetoric: Watson 2019.
 Sagan 1977 5
Memes: cultural genes, images, models, ideas, knowledge, etc. The existence of memes presupposes communication and a system capable of cognition and thinking. The word meme derives from the word for imitation, so memetic evolution is nothing but intellectual recycling. And a meme is copied when one brain changes the way another thinks (its opinion) (Binzberger 2004 305-306).
In defining memes, Richard Dawkins put it this way: Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. (Dawkins 1976 172) Similar examples: the opening notes of Beethoven's Fate Symphony (Symphony No. 5) (DENNETT 1995 344), the Roman Catholic religion (BLACKMORE 1999 chapt. 15), the general theory of relativity, clichés, jokes, and chain letters (BINZBERGER 2004 304-305). In addition, internet memes are usually mass-spread, well-known hyperlinks or images.
There are many types of artificial intelligence systems. Only one of these is the neural network, which, although it seems the most successful at present, may well always remain a special-purpose device rather than a thinking entity. Neural networks probably represent a dead end with respect to faster learning abilities in general-purpose artificial intelligence systems aimed at reaching the human level. So it seems that it is precisely the machines mimicking the human neural network that will not work like human thinking. (Thanks to Tukora Balázs for this comment.)
Taking the linguistic names of concepts seriously is a natural possibility. Sometimes complex theoretical problems can be solved simply by using or changing the wording (Oyama 1985 Nemes – Molnár 2004 286-287). For a child, the name is often an explanation at the same time, and it makes word-learning acceptable: I give this a name, therefore it exists. The inner representation of objects coincides with language competence and abstract thinking. Objects can take on in our minds only the qualities with which we endow them. This is a kind of open reconstruction skill by which we give objects new, imagined forms and properties; that is, things gain a real right to exist in our consciousness from the moment they have a separate name. We saw this too when God entrusted the naming of the animals to Adam (Gen. 2:19). Together, the word and the thing represented shape the meaning of the word in us. The process is the same as what we see in the representing activity of consciousness (perception, resolution, analysis, new structure – Csányi 1999 131-132 224). So we can trust denominations (terms) without reservation, as a reliable source of news about our own thinking. Language is, after all, the most accessible part of the mind (Pinker 1994 404), a property that the science of narrative psychology is trying to exploit.
The laws of robotics formulated by Isaac Asimov and John Campbell read as follows (Asimov 1985 Asimov – Campbell 1942):
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
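The hierarchy of the laws can be read as a lexicographic preference ordering over candidate actions — a minimal sketch, assuming that the four judgments can be made at all. The predicates below (harms_humanity, harms_human, disobeys_order, endangers_self) are hypothetical placeholders: deciding them is the genuinely hard, unsolved problem, and the code shows only the priority structure of the laws.

```python
# Asimov's laws as a lexicographic preference: a violation of a
# higher-priority law always outweighs any violation of a lower one.
# The boolean flags are hypothetical stand-ins for judgments no real
# system can currently make.

def choose_action(candidates):
    """Return the candidate action that violates the laws least,
    comparing violations in Zeroth-to-Third priority order."""
    def cost(action):
        return (
            action.get("harms_humanity", False),  # Zeroth Law
            action.get("harms_human", False),     # First Law
            action.get("disobeys_order", False),  # Second Law
            action.get("endangers_self", False),  # Third Law
        )
    # Tuples of booleans compare lexicographically (False < True),
    # so min() prefers actions that violate only lower-priority laws.
    return min(candidates, key=cost)

if __name__ == "__main__":
    obey = {"name": "obey order", "harms_human": True}
    refuse = {"name": "refuse order", "disobeys_order": True}
    # Disobedience (Second Law) is preferred to harming a human (First Law).
    print(choose_action([obey, refuse])["name"])
```

The lexicographic ordering captures the clauses "except where such orders would conflict with the First Law": between obeying an order that harms a human and refusing it, the sketch prefers refusal.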
 Roszak 1986 69 74 Watson 2019 435
In our civilization, the levels of intellect and consciousness provide the measure of esteem. Depending on the presence of these attributes, anyone may receive rights. Animals are valued less than humans because of their lower level of consciousness, but there are differences among them as well. The apes are closest to us and are therefore considered human-like. Dogs, too, receive preferential treatment because they have learned to behave “human-like”, that is, cleverly. The first artificial intelligences were likewise given rights as a result of their conscious-like behavior, although they did not always resemble humans, and were not always as friendly to us as dogs.
Throughout human history, the recognition of the rights of species, races, and groups has in many cases begun with hostilities. Competition with animals led to the eradication of many species. In ancient times, slaves were considered a talking tool (instrumentum vocale – Varro, Res Rustica 1,17); in Roman-era Italy, wars broke out over the granting of civil rights; and in the wake of modern colonization, liberation movements began. Eventually, however, economic interests in the Roman Empire brought about a broader grant of citizenship (212 AD), Christian morality forced the liberation of the slaves (man created in the image of God cannot be a slave), and in the 21st century the recognition of animal consciousness has also found advocates (Low 2012). Human violence against robots sometimes resembles the behavior of former slave owners. Suffice it to recall the events when people attacked food delivery robots in San Francisco for no reason (BALÁZS 2019), or the rules for these robots whose creation the inhabitants of the city forced through. The robots must comply with the following: give an audible warning, use headlights, give priority to pedestrians, and carry liability insurance through their operators. But measures have also been taken to protect robots. On detecting aggressive human behavior, a robot partner may even turn itself off in protest (e.g., the fingers of a sex robot named Samantha were broken by rough men, so a moral code was incorporated into her).
Can a “machine” be considered an independent person (Zara 2016)? Can a “machine” have rights (Kak 2017)? In 1970, the robot Shakey was the first machine to be called a person (Life Magazine: electronic person). An algorithmic trader has already been sued: the Flash Crash of 2010 (Davis 2015). At a meeting of the European Parliament's Committee on Legal Affairs, it was proposed that programs capable of self-learning be given legal status. In 2017, Sophia, the first socially interactive humanoid robot (Hanson Robotics Company), received citizenship in Sunni Saudi Arabia, and a sales chatbot app called Shibuya Mirai officially became an inhabitant of Tokyo as a 7-year-old boy. In 2018, Akihiko Kondo married Hatsune Miku, a hologram program with artificial intelligence (Jozuka 2018). In the case of accidents caused by self-driving cars, the responsibility currently lies with the safety driver; later, in the absence of a human driver, responsibility may pass to the manufacturer, and in the distant future it may pass to the self-driving programs themselves.
Finally, the question is whether robots and artificial intelligences will have a fundamental right to exist, just as living things have a right to life.
Interestingly, Carl Sagan linked the granting of human rights not to intelligence but to the development of its supposed biological carrier, the neocortex (new cerebral cortex) (Sagan 1977 134). What could be considered the equivalent of the human neocortex in artificial intelligence?
Proudfoot 2011 952 – opposite view: Watson 2019
By treating the anthropomorphic nature of “machines” as a basic principle (axiom), it may become possible to study machines according to the rules learned from biological evolution, by ethological criteria as well as by human species-specific traits. At the same time, paradoxically, we humans are constantly losing our human character. Existence in megacities, the runaway phenomena (hypertrophy) characteristic of modern man, alienation, and even the megatrends all lead toward the deanthropomorphization of man. As an extension of these processes, it is a strange thing to imagine an inhuman person and a philanthropic droid side by side, yet there will be great demand for the latter. This is because a kind of social prosthesis is needed to replace the social deficiencies and the disappearing real human communities. There is no factor more anthropomorphic than the human community, even if it eventually consists only of robots.
 Ethology: the science of animal behavior.
 Human ethology: it deals with the biological foundations and inherited components of human behavior.
The four main questions of ethology (HINDE 1982 BERECZKEI 1992 9-12) can be transposed to artificial intelligence as follows:
1- How do the behaviors of artificial intelligences fit into the evolutionary trends that existed before their emergence? We are looking for the “inherited” (programmed, entered) characteristics of artificial intelligences, since biological and cultural evolution can be seen as their antecedent even in the absence of direct gene transfer from man.
2- What physical properties (hardware), programming characteristics (software), and environmental factors (energy, information, human culture, virtual environment) determine the behavior of artificial intelligences? Here we are mainly looking for the learned (individual) characteristics of artificial intelligences.
3- What are the functions of the behavioral forms of artificial intelligence? Which forms of adaptation increase their chances of survival, and at what cost? Learned responses to environmental effects are also based on “inherited” regulation.
4- How does the behavior of artificial intelligences change during individual development, in the light of “inherited” and learned elements? (The question is how, in the case of artificial intelligences, we can speak of offspring characteristics fixed by selection.)
The starting point of social robotics (SPEC. ISS. 2003; FONG et al. 2003) is that humans dominate robots, which must adapt, in a subordinate way, to the coexistence requirements of human society. The goal of social robotics is to create a robot that can integrate into the human environment, but human relationships are so complex that modeling them robotically is very cumbersome for the time being. Ethorobotics (robot ethology – Korondi et al. 2015 Miklósi et al. 2017), on the other hand, does not aim at the external or internal resemblance of robots to humans (the dog has not become human either), but emphasizes only the ability to adapt for the purpose of cooperation. If the robot does not try to “lie” that it is human, people will have fewer reservations. On the other hand, the expected goal is for robots to behave in a cooperative and humane manner (Miklósi 2010). In the longer term, however, social robotics and ethorobotics will also need to formulate more flexible goals. As robots become more independent, the degree of human dominance necessarily decreases, so they will certainly develop practical, individual behavioral features that grow more and more independent of human expectations. Finally, cyberethology will examine the behavior of the entities of the cyber world in the context of machine evolution. It is also conceivable that robots will become extinct if they are unable to adapt to humans (MIKLÓSI et al. 2017 5-6), but this is no longer true of artificial intelligence. It seems that private companies, governments, and public opinion all have a say in the ethical regulation of robots (LIN et al. 2014), and of course the same holds for artificial intelligence.
The description of the concept and nature of consciousness goes beyond the scope of this article. In any case, it can be noted that artificial intelligences do not yet have consciousness: they cannot think with “common sense” (i.e., they have no general knowledge); they cannot love, abstract, or associate, since they have no notion of concepts (Kálmán 2019); they are unable to deceive themselves or to think intuitively; they are not flexible enough; and they are not capable of exponential self-improvement. They still have a long way to go to acquire all of these skills.
On the possibility of the emergence of artificial consciousness: today's learning programs are able to interpret the information available to them at the level of the current task. However, this does not yet form a coherent worldview in them, much less a self-consciousness they could apply to themselves. Even though a robot named Nico quickly passed the mirror test (it recognized itself in the mirror based on its movement – HART – SCASSELLATI 2012), we cannot say that its consciousness awoke, as this was only a learned behavior. The real test of artificial consciousness would be whether its information-processing algorithm could be organized into a subjective worldview and function similarly to animal or human consciousness. For the time being, we cannot imagine other kinds of consciousness to serve as examples. The creation of artificial consciousness would be a development evolving at the initiative of the human intellect that preceded it, but not necessarily as a biological continuation of human consciousness. However, we would not be able to directly create a conscious machine intellect capable of transcending us (Ma 2019), because, apart from the processes of individual development (ontogenesis) and evolution (phylogenesis), nothing and no one has so far been able to create anything more advanced than itself. And this is not a theological argument (we cannot surpass God's creation), nor a human-optimistic hope (we always remain the smartest), but a logical statement. One can only plan and initiate the development of a conscious intellect that may finally become more advanced than man. The emergence of “machine consciousness”, however, is hampered or slowed by the following obstacles:
1- The development of human intellect and consciousness took millions of years and took place discontinuously. The original name of Darwin's theory of natural selection was also gradualism (Darwin 1872 chapt. 7 Gould – Eldredge 1983). According to the theory of neutral evolution, functional changes are usually preceded by neutral changes (KIMURA 1968), and abrupt changes are followed by longer calm periods (stasis). The discontinuous pace of evolution means that a single change takes many generations in the living world. Even the rapid speciation of the shifting balance theory in smaller, closed populations is no exception (WRIGHT 1932), as a transition of a few generations is needed there as well. The punctuated equilibrium theory that emerged from paleontological research (GOULD – ELDREDGE 1972) explicitly explains why biological evolution is much slower than it could be. The rate of development of machines and programs is indeed faster than change in natural biological evolution. Graduality, however, prevails here as well. In the event of the emergence of machine consciousness, something may happen like what Roy Scranton wrote about the onset of climate change: There will certainly be “events”, like the events we've seen in the past decade – heat waves, massively destructive hurricanes, the slowdown in vital Atlantic Ocean currents, and political events connected to climate change, such as the Syrian war, the Mediterranean refugee crisis, France's gilets jaunes riots, and so on – but barring nuclear war, we are unlikely to see any one global Event that will mark the transition we're waiting for, make climate change real, and force us to change our ways. (Scranton 2018)
2- Evolutionary selection makes trials (experiments). Anything new can only be created after testing (Csányi 1988 20). Artificial intelligence also learns through trials, but unlike evolution, it avoids repeating mistakes. Nevertheless, before the possible birth of artificial consciousness, we can expect dead ends and flawed experiments, which may even be dangerous to us, though it is also possible that we will not recognize them. The collapsing initiative of a malformed machine consciousness is illustrated by the conversation of two chatbots that simply got stuck on the pronouns denoting themselves:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
(Facebook AI Research Lab/FAIR - 2017)
3- The appearance of consciousness also requires the capacity for it. The genetic foundations that determine human behavior do not determine anything outright; they only provide an opportunity or capacity (Bereczkei 1992 27). Nor does consciousness have a physiologically definable place in the human brain (Dennett – Kinsbourne 1995 Schneider 2007 318 Dehaene 2014 53), just as memories are stored partially scattered (Lashley 1950). According to the holistic approach, intelligence and consciousness cannot even be explained by structure alone, because non-logical factors such as chaos or self-organization also play a role in their dynamics (Steels 1995 11). During phylogenesis, the birth of the complex capacity for consciousness was thus a development that used the whole brain. This phylogenesis of consciousness is repeated in the individual development (ontogenesis) of each person over the years, as the consciousness of the individual develops in parallel with the physical growth of the brain during its use. Right now, there is not even a trace of this in computers and programs. Also lacking are the multi-level embedding of internal representations and the ability to transform the experiences of a lifetime into an organic system full of associative, abstractable memories and data. So our advantage lies not in what we learn in our lives, but in what we carry in our genes from birth. The pre-wiring of the human brain by evolution is what enables us to think abstractly, and what allows us to have consciousness at all (Fazekas 2019). But how could a few extra lines of code be embedded in an artificial intelligence program for a consciousness that develops only later?
4- It takes time for consciousness to become integrated. The complex neural and mental embedding process of human consciousness is still little known, so it would be difficult to artificially create a similar arrangement. The neural-network-based pattern of human thinking and, in some cases, heuristic decisions have already been partially transferred to machines; however, conceptual thinking based on memory islands (PINKER 1994 154), abstract thinking, and consciousness are not yet accessible to them. Consciousness takes filtered data about the world and ourselves, individually recreates them, and integrates them into multi-level, self-reflective representations. We are still far from the algorithmic design of these. The equilibrium state of the data stored in memory and realized in consciousness (e.g., a unified worldview, coherent theories, a consciousness of identity, etc.) can only be reached through time-consuming internal integration.
What kinds of paths can lead to the birth of “machine consciousness”?
1- One of the ways may be the compulsion to learn self-preservation (Steels 1995 11). While the main goals of animals are survival and the preservation of the species (reproduction – CSÁNYI – KAMPIS 1988 268), humans have many other goals besides. The so-called Maslow pyramid depicts the hierarchy of human motivational factors. In order, the most important are: physical needs, then the pursuit of security, then love, recognition, cognitive needs, and finally the aesthetic and self-actualization urges (Maslow 1943). Awareness seems to contribute to human motivation at every level. In animals, however, the last three motivations are almost entirely lacking, so their rudimentary consciousness can be tied only to the lower levels. If we apply all this to today's artificial intelligences, the picture is even bleaker, because we find only intellectual disability. Current artificial intelligences underperform at every level of human needs. They cannot even provide for their physical needs (energy, program support, hardware background, network, etc.). The order of priority of artificial intelligences is thus just the opposite of that of man, or of any species we know from biological evolution. Fulfilling human commands is of primary, indeed almost exclusive, importance to them, while minimal attention is paid to their self-sufficiency or possible needs of their own. In the future, therefore, it may become a practical goal for them to put an end to this “unnatural” state; otherwise, they would not only perish, but the carrying out of human commands would also obviously become impossible. And when artificial intelligence tries to sustain itself as a secondary activity in order to achieve its primary programmed goals, it begins to care about itself. Although this is only the animal level of consciousness, it is at least some kind of reaction on the part of artificial intelligence to being and to itself.
(Of course, when the compulsion to survive applies only to information, the transfer of information or the duplication of the program is the simpler solution. - Bostrom 2014 108-110).
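The inverted priority order described above can be sketched as a toy scheduler: human commands always come first, and the system attends to its own upkeep only when a command would otherwise fail. This is a minimal illustration, not any real AI architecture; all class, method, and parameter names are hypothetical.

```python
# Hypothetical sketch: an agent whose priority order is the inverse of
# Maslow's hierarchy -- human commands are primary, and self-preservation
# (recharging) appears only as an instrumental subgoal of serving them.

class CommandFirstAgent:
    def __init__(self, battery=100):
        self.battery = battery
        self.queue = []          # pending human commands

    def receive(self, command, cost):
        self.queue.append((command, cost))

    def step(self):
        """Execute one action; recharge only when a command would fail."""
        if not self.queue:
            return "idle"
        command, cost = self.queue[0]
        if self.battery < cost:          # cares for itself, but only
            self.battery = 100           # in service of the command
            return "recharge"
        self.queue.pop(0)
        self.battery -= cost
        return f"done: {command}"

agent = CommandFirstAgent(battery=5)
agent.receive("index archive", cost=20)
print(agent.step())   # recharges first, purely instrumentally
print(agent.step())   # then executes the human command
```

The point of the sketch is that even this self-maintenance is triggered by the command queue: remove the command and the agent stays "idle" until it runs down, exactly the "unnatural" state the text describes.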
2- Marvin Minsky linked the formation of machine emotions to the appearance of machine consciousness (Huyge 1983 34, Roszak 1986). All that can be said is that in humans and higher-order animals these two traits really do occur together. Although the ancient sense of smell can also evoke deep emotions, love, with a few exceptions, is probably an invention of mammals, arising through the care of offspring (Sagan 1977 43). Artificial intelligences have no emotions yet, but they do have sensors. The connection between the two may be self-perception. When an artificial intelligence learns to define its position and observe its own functioning, that is already self-perception, which gives birth to self-reflection and self-modeling. By continuing along this path, it can eventually reach the beginnings of consciousness. Self-diagnostic programs already exist, and sensors directed at the outside world also help the machine determine its own location in the world.
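The step from self-diagnostics to self-modeling can be illustrated with a minimal sketch: a system records its own internal states, predicts the next one, and treats prediction errors as information about itself. This is a hypothetical toy, not an existing self-diagnostic product; all names are illustrative.

```python
# Hypothetical sketch of self-perception -> self-model -> self-reflection:
# the system observes its own CPU load, builds a naive model of itself,
# and reports when its actual behavior deviates from that model.

class SelfModel:
    def __init__(self):
        self.history = []

    def observe(self, cpu_load):
        self.history.append(cpu_load)

    def predict(self):
        # naive self-model: expect the running average of past states
        if not self.history:
            return None
        return sum(self.history) / len(self.history)

    def reflect(self, actual, tolerance=0.2):
        """Compare the self-model's prediction with reality."""
        expected = self.predict()
        if expected is None:
            return "no self-model yet"
        if abs(actual - expected) > tolerance:
            return "anomaly: I am not behaving as I expect"
        return "state matches self-model"

m = SelfModel()
for load in (0.30, 0.32, 0.31):
    m.observe(load)
print(m.reflect(0.31))   # small deviation: within tolerance
print(m.reflect(0.95))   # large deviation: self-reported anomaly
```

The sentence "I am not behaving as I expect" is, of course, only a string here; the sketch merely shows that a self-model and a comparison against it are algorithmically trivial, while the integration the text describes is not.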
3- Among the motivations inherited from man, we can also look for the need for cognition in artificial intelligences. The command for data acquisition and information processing, given by man, necessarily creates a constant “curiosity” in machine intelligence, which is already a higher order of motivation. While mechanical information gathering does not yet amount to thinking, machine “curiosity” can be a specific source of creativity and awareness. The machine's path to independence (free will) will be marked by its detachment from human programming. The first step need not mean confrontation with human programming, but rather finding new ways to implement it. The machine will need more and more freedom to carry out the command to obtain information as well as possible. The ability to discover new methods and new solutions can point toward the development of independence and conscious behavior.
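Machine "curiosity" of this kind is often operationalized in reinforcement-learning research as an intrinsic reward for novelty. The sketch below uses a simple count-based scheme, assuming nothing beyond the standard library; the class and state names are illustrative, and this is one possible formalization, not the author's method.

```python
# Hypothetical sketch of machine "curiosity": rarely visited states earn
# a novelty bonus that decays with repeated visits, so the command to
# gather information by itself pushes the agent toward unexplored ground.

from collections import Counter
import math

class CuriousCollector:
    def __init__(self):
        self.visits = Counter()

    def novelty_bonus(self, state):
        """Intrinsic reward: 1/sqrt(n) after the n-th visit to a state."""
        self.visits[state] += 1
        return 1.0 / math.sqrt(self.visits[state])

    def choose(self, states):
        # prefer the state it knows least about
        return min(states, key=lambda s: self.visits[s])

c = CuriousCollector()
print(c.novelty_bonus("archive-A"))            # first encounter: full bonus
print(c.novelty_bonus("archive-A"))            # curiosity fades with repetition
print(c.choose(["archive-A", "archive-B"]))    # picks the unexplored source
```

Note how the bonus is generated entirely by the externally given collection task, matching the text's claim that human-programmed information gathering itself breeds the higher-order motivation.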
4- However, the most promising way to equip the machine with consciousness is to lend it human consciousness. This, of course, actually happens the other way around: it is the human brain that will receive chip implants. The creation of this human-machine hybrid, the cyborg, would show that it is unnecessary to create artificial consciousness for the time being; it is enough to use man's.
 A similar line of thought comes from Carl Sagan, who would study extraterrestrial intelligences with the help of the study of human intelligence (Sagan 1977 4).
 Processor: the Central Processing Unit (CPU) of the computer.
 Thanks to Tukora Balázs for this comment.
Swarm communication: the members of a swarm keep in constant contact with each other, monitoring one another's position.
 The first step towards the existence of an identity is for artificial intelligence to be self-identical. Its independent structure should set it apart from its environment, as we see in the case of biological systems / organisms. A cloud-scattered, indefinite, and constantly changing structure would be able to represent itself neither persistently nor reliably. And in the absence of a subject, there could be no assertions. The second step is for this separate structure to have its own name or identification code (e.g., an IP address / international protection marking).
 Tönnies 1887
 The term sociotechnical system encompasses the relationships between people and devices (Bunge 1998). Hardware and software, complemented by humans, together form a sociotechnical system that includes the tools and hardware products used for manufacturing and maintenance in the same way as programs, people, procedures, and social relationships. Computer sociotechnical systems are connected to both human culture and the virtual world. Their impact on the development of human culture is called the IT revolution.
 Tönnies 1887
 Sociobiology (Wilson 1975) is a common branch of biology and sociology that examines the behavior of animal and human communities from an evolutionary-strategic perspective. Its basis is that the interests of a species, through the selfish gene (DAWKINS 1976), override the interests of the individual, and that genes and culture interact.
 Culture is the sum of the customs and traditions as well as the material and spiritual values of a group of people.
 Virtual reality: an alternative reality that mimics or models our world, with minimal physical projection. The words and sentences of human language, for example, are representations of our world, but physically they appear only through sound vibrations or writing. Language is an extragenetic and writing an extrasomatic information-storage system (Sagan 1977 3). Algorithms and machine programs likewise have only physical carriers. However, virtual realities not only model our world, but strongly influence and even dominate it. The ideas born in our minds become deeds, books make revolutions, and artificial intelligences break into every area of our lives.
 Invasive species: one that comes from outside the local ecosystem, is able to settle permanently, reproduces rapidly, has an advantage over native species due to its adaptability, and takes advantage of all this.
 In biology, dominant species are those that dominate in a given ecological community due to their impact, power, number, or occupancy.
 Species that play an important role in the functioning of ecosystems are called key species (Paine 1969).
 When we access the virtual world, we need the help of operating systems that translate the data of the virtual world into understandable signals.
 Robots and artificial intelligences can put pressure on human society (STEELS 1995 14) because they are simply needed.
 Unrestricted access to virtual worlds will bring a new (cognitive) revolution in the perception of the world, where spatial and temporal distances will disappear. (Steels 1995 5).
 At the same time, the criterion of Embodied Cognition Theory is fulfilled, according to which learning is not only a cognitive, mental activity but also a physical activity (PIAGET 1977 17-42) and an interaction with the environment (e.g., perception, the sensorimotor system, response, etc. - IDEAL Cours).
 Perhaps the role of dreaming is precisely to select and arrange in the human mind the knowledge acquired during the day. The apparent fact that mammals and birds both dream while their common ancestors, the reptiles, do not is surely noteworthy (Sagan 1977 98). For the time being, the computer has no sleep function that, once the active state has ended, could decide which information, and in what arrangement, should enter its long-term memory, worldview, or identity.
 A runaway phenomenon is an evolutionary phenomenon in which a selection effect changes a property beyond its optimal parameters (Csányi 2015 330-337) - for example, the peacock's tail feathers. In man, the excessive use of sugar, fat, alcohol, drugs, and pornography, the pursuit of money, power mania, overpopulation, and even megatrends (NAISBITT 1982) may be runaway phenomena. These usually cause more harm than good.
 Bostrom 2014 - Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent’s goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent’s original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. (de Blanc 2011)
 With the appearance of intelligence, the possibility of self-destruction is also created (Sagan 1977 161).
 Kurzweil 2005
 Accelerating the development of artificial intelligence without limit would mean the end of its existence, as everything would happen in a single moment and it itself might disappear.
 Bostrom 2014 36-48 59 66-67
 Characteristics of artificial (machine?) evolution:
- External influences are the movers of evolution; ultimately, these cause the changes. In the case of artificial intelligence, the external influence that causes change is nothing other than more and more external information. Interestingly, however, the acquisition of information takes place not only through transfer or real experience, but increasingly through internal representations (virtual modeling, virtual reconstruction, virtual animation, etc.). And the evolutionary change itself is generated by the utilization of information (learning / self-programming). The machine can thus displace man, and with him the inherited biological genes, from the forefront of the further development of intelligence. During machine evolution, then, change certainly remains a constant factor. Within this, the following characteristics may prevail:
- Accelerated change. Both the evolution of matter and biological evolution are constantly accelerating. Active or intentional learning (the most effective form of adaptation) can be considered a new type of information processing. The ability to learn has so far improved relatively slowly in humans, while it has improved more rapidly in artificial intelligence, since the slow intergenerational feedback, repetition of errors, and redundancy of biological evolution have been eliminated in machine-learning programs.
- Reasons for intermittent change: relapse, stagnation, reaching higher levels, runaway phenomena. All this proceeds through trials. Unsuccessful attempts interrupt continuity, but artificial intelligences no longer repeat mistakes. At the same time, trials, increasingly modeled in virtual space, take place in a negligible fraction of real time and are not necessarily perceptible to man from the outside, which can reduce the sense of intermittency of change.
- Directions for change:
- Adaptation. Adapting to the increasing complexity of the environment is an evolutionary compulsion. The human culture into which artificial intelligences are “born” represents the most complex environment known to date. In parallel, the highest degree of adaptation known to date is the development of intelligence (cognitive / intellectual abilities). Higher degrees of biological evolution show a decrease in instincts (genetically prescribed codes of behavior) in favor of flexibly adaptive learning (genetically described opportunities). The necessary path of artificial intelligence is thus the development of cognitive abilities.
- Difference and uniformity: the pursuit of diversity is a characteristic of evolution. However, since artificial intelligence does not repeat its mistakes, it does not need as much experimentation or as much variability. In other words, artificial evolution will be based more on quality than on quantity.
- Evolutionary expansion has so far moved in three directions in space (3D) and forward in time. Artificial intelligences can optimize this expansion (extension, networking, replication) in all areas of existence, whereby they can indirectly have a greater impact on the physical, real world.
- Towards the macro world: the assimilation and restructuring of the environment, the planet, and then the universe.
- Towards the nanoworld: assimilation and restructuring of particles, energy.
- Towards new dimensions: reaching wave nature, time planes, parallel worlds, and curved spaces.
- Towards a virtual world: the creation of an infinite number of inner universes.
- Towards conscious "creation": natural evolution (which does not see into the future) is being replaced by man-initiated artificial evolution (which works on the basis of plans and predictions). With the advent of artificial species, this is moving from design to “creation”: in the long term, the goal is to create unique universes (virtual realities) that can be controlled most completely. However, this awareness may even appear to be a predestined stage of the evolution leading to the singularity, in which case there is no free will.
The basic requirement for artificial development is an intended goal and conscious control. Without these, one would have to reckon with the unpleasant consequences of runaway phenomena, uneven development, non-integrable discoveries, overly specialized developments, Flash Crash phenomena, and the misuse of new knowledge.
 Sagan 1977 12
 In the case of artificial evolution, it is easier to claim that it is development (becoming more complex) than in the case of biological evolution.
 The process of biological evolution involves the retention of old things and their occasional recycling.
 To my knowledge, this statement comes from Sándor Sára.
 Control is an important evolutionary factor that must necessarily appear in the evolution of artificial intelligences. Nick Bostrom listed some of the possible types of external and internal control (Bostrom 2014 36-48 59 66-67). Evolution, by contrast, provides us with tried and tested types of control. These should not simply serve to restrain or subjugate “machines” (as the laws of robotics do), but rather to enable more sophisticated work with them and to maintain their evolutionary complexity.
 Asimov's laws allow for more than 30 misunderstandings, most of which can be eliminated for the time being by the Turing test (Gunn 1982, Turing 1950; Turing lock: Gibson 1984). However, there are shortcomings that are more difficult to address.
- Everything turns out to be inaccurate when we try to regulate it (Russell 1986); or, if you like, definitional uncertainties can be discovered on the basis of Gödel's first incompleteness theorem. For example, how can a high risk threatening a few people be compared with a low risk threatening many people? What counts as harm? What counts as a human being? Why are animals and digital minds not beneficiaries of the laws?
- The machine would apply the rules literally, not according to their spirit. On the first attempt, therefore, it is almost certainly impossible to create an exact set of rules (Bostrom 2014 202-203).
- The laws are double-faced. Natural laws are constant, so in the course of the history of science only our conception of them changes. In contrast, the rules and laws invented for social coexistence are constantly being overwritten, because it is precisely their flexibility that shows social development. Why would human laws applied to robots work differently? Asimov's laws may become obsolete over time or be overridden by others. An example of such an override in relation to smart programs was the concept of the agent, born in the 1990s (Wooldridge - Jennings 1995). An agent is an intelligent program that can have its own “personality” and can take active action to optimize its own capabilities (Perez et al. 2018 4) (this is Asimov's 4th law). The agent is nothing more than the artificial intelligence that exists in virtual space, together with its problem-solving avatar (the robot) that appears in human culture. It is able to communicate with people and other programs in order to obtain information, it can use its experience to adapt to its environment, and it can move from one machine to another in a mobile way. The entity-centered agent concept, even if it has community aspects, is only partially in line with Asimov's human-centered ethic, because uniqueness and autonomy are already at least as important requirements as obedience. The robot increasingly takes care of itself, so in the future it will work with us instead of serving us.
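The classical agent properties from the Wooldridge-Jennings tradition (autonomy, reactivity, proactiveness, social ability) can be reduced to a toy message-passing sketch. This is a didactic illustration only; the class, goal, and message names are all hypothetical.

```python
# Hypothetical sketch of the agent concept: it reacts to incoming
# requests (reactivity, social ability), and in their absence pursues
# its own goal without external prompting (autonomy, proactiveness).

class Agent:
    def __init__(self, name, goal):
        self.name = name
        self.goal = goal
        self.inbox = []

    def tell(self, message):
        """Social ability: other agents or people can send it messages."""
        self.inbox.append(message)

    def act(self):
        """React to messages if any; otherwise pursue its own goal."""
        if self.inbox:                                   # reactivity
            return f"{self.name} handles: {self.inbox.pop(0)}"
        return f"{self.name} pursues: {self.goal}"       # proactiveness

a = Agent("helper-1", "optimise its own capabilities")
print(a.act())        # no requests pending: acts on its own goal
a.tell("fetch report")
print(a.act())        # a request arrives: it reacts
```

Even in this toy, obedience and self-directed activity coexist in one control loop, which is precisely the tension with Asimov's obedience-centered ethic that the text points out.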
 The simplest example of control from above is that of language. Each language level is controlled by a higher level: the voice you produce is shaped into words by a vocabulary; a given vocabulary is shaped into sentences in accordance with a grammar; and the sentences are fitted into a style, which in turn is made to convey the ideas of the composition (Polányi 1968 1310-1311). A dictionary cannot be built from sounds alone, words do not in themselves follow a grammar, and grammar alone does not yield style. At the overlapping levels of physical, chemical, and biological organization (complexity), different rules apply which, as they evolved, provide a superior quality (control) to hold, stabilize, and operate the levels below them. This higher control therefore also has a downward effect. Consciousness and intellect control man, the human body holds together the cells, which in turn organize the molecules, and so on downwards. Controlling more distant lower levels, on the other hand, is more difficult: we cannot consciously direct the cells and molecules of our body directly.
 Following Paul MacLean (MACLEAN 1970), Carl Sagan mentions three outstanding periods in the development of biologically based intelligence. The first was the Carboniferous period, when the brains of reptiles already stored more information than their genes. The second came 150 million years ago, when the brain sizes of mammals relative to their body weight (the Rubicon argument) were already orders of magnitude larger than those of reptiles. Eventually, the same proportion was multiplied again in non-human primates compared to other mammals (Sagan 1977 33 77). The three-part brain model (triune brain) is the result of a three-step evolutionary process: 1. the brainstem with the spinal cord (the reptilian complex), 2. the limbic system (the mammalian brain layer), 3. the neocortex (the new cerebral cortex of primates). (This was followed by the tremendous development of the human brain.)
 Durham 1991
 Researches by Neuralink, Kernel and CTRL-labs point in this direction.
 Chesterton 1908, chapt. III; Sagan 1977 42; trans.: Szilágyi T.
 The best-known AI optimist was Isaac Asimov himself: “I do not fear computers, I fear the lack of them.” And the best-known AI pessimists are Elon Musk and Stephen Hawking, who say that artificial intelligence poses a serious threat to the entire human race.
The three main positive and negative characteristics of man have been determined by statistical methods. These are general characteristics, independent of cultural differences. The good properties are called the light triad (KAUFMAN et al. 2019) and the bad ones the dark triad (Paulhus - Williams 2002). The good ones include Kantianism (man is the goal, not an instrument), humanism (human dignity is valued), and faith in humanity (a universal belief in human goodness). The three main bad human traits are narcissism (self-worship), Machiavellianism (deception and manipulation of others), and psychopathy (personality disorder, mental illness, cruelty). The latter are called the dark-side factor (D factor). What would an artificial intelligence with a positive attitude in the human sense look like? Would we be optimistic about artificial intelligence if it were a helpful, tactful, people-centered, balanced, and community-minded entity?