Elements applications of artificial intelligence in transport and logistics
Dmitry Abramov

Alexander Korpukov

Pavel Minakov

Vadim Shmal


Abramov Dmitry, Moscow Polytechnic University
Korpukov Alexander, Pirogov Russian National Research Medical University
Shmal Vadim, Federal state autonomous educational institution of higher education «Russian university of transport»
Minakov Pavel, Federal state autonomous educational institution of higher education «Russian university of transport»





Elements applications of artificial intelligence in transport and logistics



Dmitry Abramov

Alexander Korpukov

Vadim Shmal

Pavel Minakov



Dmitry Abramov, 2021

Alexander Korpukov, 2021

Vadim Shmal, 2021

Pavel Minakov, 2021



ISBN 978-5-0055-6674-4

Created with Ridero smart publishing system




The emergence of the science of artificial intelligence


Artificial intelligence (AI) is intelligence displayed by machines, as opposed to the natural intelligence displayed by humans and animals. The study of artificial intelligence began in the 1950s, when systems could not yet perform tasks as well as humans. The overall goal of artificial intelligence is to build a system that exhibits intelligence and consciousness and is capable of self-learning. Its best-known branches are machine learning, which is a kind of artificial intelligence, and deep learning.

The development of artificial intelligence is a controversial area, as scientists and policymakers grapple with the ethical and legal implications of creating systems that exhibit human-level intelligence. Some argue that the best way to promote artificial intelligence is through education, to keep it from being biased against people and to make it accessible to people from all socioeconomic backgrounds. Others fear that increased regulation and concerns over national security will hamper its development.

Artificial intelligence (AI) originated in the 1950s, when scientists believed that machines could not exhibit intelligent behavior that the human brain could not reproduce. In 1962, a team at Carnegie Mellon University led by Terry Winograd began work on universal computing intelligence. In 1963, as part of the MAC project, Carnegie Mellon created a program called Eliza, which became the first machine to demonstrate the ability to reason and make decisions like humans.

In 1964, IBM researcher J.C.R. Licklider began research in computer science and the cognitive sciences with the goal of developing intelligent machines. In 1965, Licklider coined the term artificial intelligence to describe the entire spectrum of cognitive technologies that he studied.

The scientist Marvin Minsky introduced the concept of artificial intelligence in the book Society of Mind and foresaw that the field would develop through three stages: personal, interactive, and practical. Personal AI, which he considered the most promising, would lead to the emergence of human-level intelligence, an intelligent entity capable of realizing its own goals and motives. Interactive AI would develop the ability to interact with the outside world. Practical AI, which he believed was most likely, would develop the ability to perform practical tasks.

The term artificial intelligence began to appear in the late 1960s, when scientists began to make strides in this area. Some scientists believed that in the future, computers would take on tasks that were too complex for the human brain, thus achieving intelligence. In 1965, scientists were fascinated by an artificial intelligence problem known as the Stanford problem, in which a computer was asked to find the shortest path on a map between two cities in a given time. Despite many successful attempts, the computer was able to complete the task only 63% of the time. In 1966, Harvard professor John McCarthy stated that this problem is as close as we can come in computers to the problem of brain analysis, at least on a theoretical basis.

In 1966, researchers at IBM, Dartmouth College, the University of Wisconsin-Madison, and Carnegie Mellon completed work on the Whirlwind I, the world's first computer designed specifically for artificial intelligence research. In the Human Genome Project, computers were used to predict the genetic makeup of a person. In 1968, researchers at the Moore School of Electrical Engineering published an algorithm for artificial neural networks that could potentially be much more powerful than an electronic brain.

In 1969, Stanford graduate students Seymour Papert and Herbert A. Simon created Logo, a language for children. Logo was one of the first programs to use both numbers and symbols, as well as a simple grammar. In 1969, Papert and Simon founded the Center for Interactive Learning, which led to the development of Logo and further research into artificial intelligence.

In the 1970s, a number of scientists began experimenting with self-conscious systems. In 1972, Yale professor George Zbib introduced the concept of artificial social intelligence and suggested that these systems might one day understand human emotions; the same year, he coined the term emotional intelligence. In 1973, Zbib co-authored an article entitled Natural Aspects of Human Emotional Interaction, in which he argued that artificial intelligence could be combined with emotion recognition technology to create systems capable of understanding emotions. In 1974, Zbib founded Interaction Sciences Corporation to develop and commercialize his research.

By the late 1960s, several groups were working on artificial intelligence. Some of the most successful researchers in this area were from the MIT Artificial Intelligence Lab, founded by Marvin Minsky and Herbert A. Simon. MIT's success can be attributed to the diversity of its individual researchers, their dedication, and the group's success in finding new solutions to important problems. Even so, by the late 1960s most artificial intelligence systems weren't as powerful as humans.

Minsky and Simon envisioned a universe in which the intelligence of a machine is represented by a program or set of instructions. As the program worked, it led to a series of logical consequences called a set of affirmative actions. These consequences could be found in the answer dictionary, which would create a new set of explanations for the child. In this way, the child could make educated guesses about the state of affairs, creating a feedback loop that, in the right situation, could lead to a fair and useful conclusion. However, there were two problems with the system: the child had to be taught according to the program, and the program had to be perfectly detailed. No programmer could remember all the rules a child had to follow, or the set of answers that a child might give.

To solve this problem, Minsky and Simon developed what they called the magician's apprentice (later known as the Minsky rule-based thinking system). Instead of memorizing each rule, the system followed a process: the programmer wrote down a statement and identified the reasons for the various outcomes based on the words explain, confirm, and deny. If the explanation matched one of the reasons, then the program needed to be tested and given feedback. If this did not happen, a new explanation had to be developed. If the program was successful in the second phase, it was allowed to create more and more rules, increasing the breadth of its theories. When faced with a problem, it could be asked to read the entire set of rules in order to re-examine the problem.

Minsky and Simon's system was remarkably powerful, because the programmer only had to supply several versions of an explanation. The researcher was not required to go through any procedures other than writing and entering the program requirements. This allowed Minsky and Simon to create more rules and, more importantly, learn from their mistakes. In 1979, the system was successfully demonstrated on the SAT exam. Although the system had two flaws that prevented it from answering two of the three SAT questions, it scored 82 percent on Group 2 and 3 questions and 75 percent on Group 4 and 5 questions. The system did not cope with complex issues that did not fit into the established rules. Processing large amounts of data was also slow, so any additional details were thrown away to speed up the system.

The system also had some limitations due to the rules. Rules could only be defined based on a limited number of labels. For example, when rules were given, they had to define what the labels meant. They could only be applied to positive results. However, as the system's ability to process information grew, it was shown that the system could make mistakes. In particular, if it had to apply the same label to two different objects (and still detect an error), it could not make a useful distinction between the two objects and then decide which label should be applied.

Minsky and Simon focused on applying their system to humans. They developed a system they called a living program, or projective computing system (PPAS). They used PPAS to create a symbolic approach to the study of psychology. This had the advantage that, unlike with traditional programs, the teaching could be programmed. The program had to use symbols to describe the human system, and then train the system through explanations. They later called this approach general computing, which allows one to study any problem given enough time and data.

For Minsky and Simon, the main limitation of their system was its ability to accurately calculate its own results. This limitation was not related to any flaw in the system; it worked, but was slow and expensive. For this reason, they thought they could get around this by programming the results using so-called functional programming (FP), a style described by the computer scientist J.C.R. Licklider around 1950. The term refers to a programming style that focuses on the main functions and behavior of programs rather than on their implementation. Using FP, the system could compute the results and then explain the cause of the problem in human language.

Over the next decade, PPS and PPAS continued to grow, and in 1966, Minsky and Simon published an article titled Brain Activity Systems, which was the result of their research. Here they showed that a program could be written that would read the brains of a number of volunteers and then track their brain activity. Each volunteer read a passage about how the brain works; after they completed this task, the activity of the brain was measured.

Specifically, the authors showed that their system was capable of responding to certain brain waves (also called rhythms) and that it could combine these brain waves in ways that help make sense of a subject. They showed that if the brainwave was a slow rhythm, the system was able to remember information it had been exposed to earlier and reactivate it when needed. If the brainwave was a fast rhythm, the system could counteract the forgetting of information by comparing it with another element.

When Minsky and Simon published their article, it attracted a lot of attention, because they offered an experimental system of a kind that could, in theory, be implemented. They were able to approach the study from a practical point of view.

In 1972, Minsky and Simon founded the Center for Behavioral Neuroscience in Ann Arbor, Michigan. They designed and conducted a series of experiments that led them to the following conclusions: there was something different about the mind, something that distinguished it from any other organization; the data showed that our ideas about action and the brain were different; our brain worked differently from other parts of the body; there is a possibility that the organization of the mind may be influenced by the activity of the brain; and the mind is based on basic physical principles.

They came to this conclusion because they saw the relationship between specific brain activity and a specific behavior or idea. In other words, if you looked at the mind and saw activity that appeared to come from the mind, and you saw behavior that appeared to come from the mind, then the behavior was likely to follow the mind. And if the mind was imprinted on the behavior, then it had to follow the action, and not vice versa. They began to formulate a new theory about how behavior arises and how the mind is formed.

Minsky explained:

The starting point was the work we did on the correlations between brain activity and human behavior. It was very clear to us that these correlations cannot be understood without first understanding how behavior is generated.

The authors came to the conclusion that any inorganic system can act only on the basis of its internal states. If the internal states changed, then the behavior of the system would change. When the authors thought of a brain that responds to certain types of brain waves, they noticed that the brain would produce a certain behavior, and that this behavior would correspond to the internal state of the brain. This is a universal principle of nature. Since this principle of nature made behavior universal, it led the authors to the conclusion that if they applied these principles to the brain, they could create a computer program that would be able to reproduce the behavior of the brain.

Minsky believed that the universal principles governing biological systems could be used to create computer software. However, Minsky admitted that his ideas were science fiction. It took Minsky and Simon another year to find a way to create a computer that could mimic their discoveries. But by 1972, they had developed a computer program that could test their theories.

John B. Barg, professor of psychology at Yale University, was also instrumental in the development of Minsky and Simon's research. Barg helped found the Center for Behavioral Neuroscience at the University of Michigan in 1972, where Minsky and Simon continued to experiment with human and animal behavior.

The field of artificial intelligence research began in a seminar at Dartmouth College in 1956, where the term artificial intelligence was first coined. The following year, in 1957, the Massachusetts Institute of Technology, together with its research graduate students, formed a new organization of AI researchers called the SIGINT-A (Intelligence and Scientific Computing) Committee. After creating many of the foundations of artificial intelligence, members of this group did some research on a similar program at Stanford University. The group decided to keep the name SIGINT-A and develop a new research and development program in the field of artificial intelligence. SIGINT-A became the research and development group that eventually grew into the world-famous artificial intelligence laboratory that now bears its name. SIGINT-A is a legendary research organization, and many famous names in the field of AI appear in its history. Many projects have been implemented in the laboratory. To meet engineering needs or to fulfill a new mission in a new era of artificial intelligence, SIGINT-A has never been afraid to try new things, and many of its ideas and directions have been accepted in the field at large. Much of what we now regard as leading AI tools, such as neural networks and support vector machines, was created or adapted in the SIGINT-A era.

Computer science defines AI research as the study of intelligent agents: any device that perceives its environment and takes action based on what it perceives.
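
Taken literally, this definition maps directly onto code. The sketch below is a minimal, illustrative percept-act loop; the Environment and ThermostatAgent classes are invented for this example and are not from the book or any library.

```python
# A toy "intelligent agent": it perceives the environment (the room
# temperature) and takes an action based on what it perceives.

class Environment:
    """A room whose temperature drifts slowly upward unless acted on."""
    def __init__(self, temperature=18.0):
        self.temperature = temperature

    def percept(self):
        return self.temperature

    def apply(self, action):
        self.temperature += {"heat": 1.0, "cool": -1.0, "idle": 0.1}[action]

class ThermostatAgent:
    """Maps a percept to an action, trying to hold a set point."""
    def __init__(self, set_point=21.0):
        self.set_point = set_point

    def act(self, percept):
        if percept < self.set_point - 0.5:
            return "heat"
        if percept > self.set_point + 0.5:
            return "cool"
        return "idle"

env, agent = Environment(), ThermostatAgent()
for step in range(10):
    action = agent.act(env.percept())   # perceive, then act
    env.apply(action)
    print(f"step {step}: T={env.temperature:.1f}, action={action}")
```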

It is a common misconception that artificial intelligence research focuses on creating technologies that resemble human intelligence. However, as Alan Turing wrote, the most important attributes of human intelligence are not the pursuit of mathematical knowledge and the ability to reason, but the ability to learn from experience, perceive the environment, and so on. To understand how these properties of human intelligence can be used to improve other technologies, one must understand these characteristics of human intelligence.

AI researchers and entrepreneurs use the term artificial intelligence to define software and algorithms that demonstrate human intelligence. The academic field has since expanded to cover related topics such as natural language processing. Much of the work in this area takes place in universities, research institutes, and companies, with investments from companies like Microsoft and Google.

Artificial intelligence is also used in other industries, such as the automatic control of ships, and is commonly used in the development of robotics. Examples of AI applications include speech recognition, image recognition, language processing, computer vision, decision making, robotics, and commercial products including language translation and recommendation engines. Artificial intelligence is at the center of national and international public policy, involving bodies such as the National Science Foundation. Research and development in artificial intelligence is managed by independent organizations that receive grants from public and private agencies. Other organizations, such as The Institute for the Future, hold a wealth of information on AI and other emerging technologies and design professions, as well as on the talent required to work with those technologies.

The definition of artificial intelligence has evolved since the concept was developed; it is currently not a black-and-white definition but rather a continuum. From the 1950s to the 1970s, AI research focused on the automation of mechanical functions. Researchers such as John McCarthy and Marvin Minsky explored the problems of general computing, general artificial intelligence, reasoning, and memory.

In 1973, Christopher Chabris and Daniel Simons proposed a thought experiment called The Incompatibility of AI and Human Intelligence. The problem described was that if an artificial system were so smart that it was superior to humans and human capabilities, the system could make whatever decisions it wanted. This could violate the fundamental human assumption that people should have the right to make their own choices.

In the late 1970s and early 1980s, the field changed from the classical orientation toward computers to the creation of artificial neural networks. Researchers began to look for ways to teach computers to learn rather than just perform certain tasks. The field developed rapidly during the 1970s, eventually moving from a purely computational orientation to a more scientific one, and its field of application expanded from computing to human perception and action.

Many researchers in the 1970s and 1980s focused on defining the boundaries of human and computer intelligence, or the capabilities required for artificial intelligence. The boundary should be wide enough to cover the full range of human capabilities.

While the human brain is capable of processing gigabytes of data, it was difficult for leading researchers to imagine how an artificial brain could process much larger amounts of data. At the time, the computer was a primitive device and could only process single-digit percentages of data on a human scale.

During that era, artificial intelligence scientists also began work on algorithms to teach computers to learn from their own experience, a concept similar to how the human brain learns. Meanwhile, in parallel, a large number of computer scientists developed search methods that could solve complex problems by looking through a huge number of possible solutions.

Artificial intelligence research today continues to focus on automating specific tasks. This emphasis on the automation of cognitive tasks is called narrow AI. Many researchers working in this field are working on facial recognition, language translation, playing chess, composing music, driving cars, playing computer games, and analyzing medical images. Over the next decade, narrow AI is expected to develop more specialized and advanced applications, including computer systems that can detect early stages of Alzheimer's disease and analyze cancers.

The public uses and interacts with artificial intelligence every day, but the value of AI in education and business is often overlooked. AI has significant potential in almost all industries, such as pharmaceuticals, manufacturing, medicine, architecture, law, and finance.

Companies are already using artificial intelligence to improve services, improve product quality, lower costs, improve customer service, and save money on data centers. For example, with robotics software, Southwest Airlines and Amadeus can better answer customer questions and use customer-generated reports to improve their productivity. Overall, AI will affect nearly every industry in the coming decades. On average, about 90% of U.S. jobs will be affected by AI by 2030, but the exact percentage varies by industry.

Artificial intelligence can dramatically improve many aspects of our lives. There is a lot of potential for improving health and treating illness and injury, restoring the environment, personal safety, and more. This potential has generated a lot of discussion and debate about its impact on humanity. AI has been shown to be far superior to humans in a variety of tasks such as machine vision, speech recognition, machine learning, language translation, computer vision, natural language processing, pattern recognition, cryptography, and chess.

Many of the fundamental technologies developed in the 1960s were largely abandoned by the late 1990s, leaving gaps in this area, including technologies that define AI today, such as neural networks and data structures. Many modern artificial intelligence technologies are based on these ideas and are much more powerful than their predecessors. Due to the slow pace of change in the tech industry, while current advances have produced some interesting and impressive results, there is little to distinguish them from each other.

Early research in artificial intelligence focused on learning machines that used a knowledge base to change their behavior. In 1970, Marvin Minsky published a concept paper on LISP machines. In 1973, Robin Milner proposed a similar language called ML, which, unlike LISP, recognized a subset of finite and formal sets for inclusion.

In the decades that followed, researchers were able to refine the concepts of natural language processing and knowledge representation. This advance has led to the development of the ubiquitous natural language processing and machine translation technologies in use today.

In 1978, Andrew Ng and Andrew Hsey wrote an influential review article in the journal Nature covering over 2,000 papers on AI and robotic systems. The paper covered many aspects of this area, such as modeling, reinforcement learning, decision trees, and social media.

Since then, it has become increasingly difficult to involve researchers in natural language processing, and new advances in robotics and digital sensing have surpassed the state of the art in natural language processing.

In the early 2000s, a lot of attention was paid to the introduction of machine learning. Learning algorithms are mathematical systems that learn by observation.
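
As a concrete illustration of learning by observation, the sketch below trains a perceptron, one of the simplest learning algorithms, on labeled examples; the data and the hidden rule it recovers are invented for the example.

```python
# A perceptron learns a decision rule from observations instead of
# being explicitly programmed with it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # observed examples
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hidden rule to be learned

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                         # several passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi          # nudge weights toward the target
        b += lr * (yi - pred)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```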

In the 1960s, Bendixon and Ruelle began to apply the concepts of learning machines to education and beyond. Their innovations inspired researchers to explore the area further, and many research papers were published on it in the 1990s.

Sumit Chintala's 2002 article, Learning with Fake Data, discusses a feedback system in which an artificial intelligence learns by experimenting with the data it receives as input.

In 2006, Judofsky, Stein, and Tucker published an article on deep learning that proposed a scalable deep neural network architecture.

In 2007, Rohit described hyperparameters. The term "hyperparameter" refers to a value that governs the learning process and is set before training begins. While it is possible to design systems with tens, hundreds, or thousands of hyperparameters, their number must be carefully controlled, because overloading the system with too many hyperparameters can degrade performance.
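
A minimal sketch of the distinction implied here: the learning rate and step count below are hyperparameters, chosen and searched from the outside, while the weight is a parameter learned during training. The toy loss function is illustrative only, and the example also shows how a badly chosen hyperparameter degrades the result.

```python
# Gradient descent on f(w) = (w - 3)^2; w is learned, while lr and
# n_steps are hyperparameters fixed before training.
def train(lr, n_steps):
    w = 0.0
    for _ in range(n_steps):
        grad = 2 * (w - 3)      # derivative of the loss
        w -= lr * grad          # update the learned parameter
    return (w - 3) ** 2         # final loss

# A tiny hyperparameter search: note that lr = 1.1 diverges.
for lr in (0.01, 0.1, 0.5, 1.1):
    print(f"lr={lr}: final loss = {train(lr, 50):.4f}")
```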

Google co-founders Larry Page and Sergey Brin published an article on the future of robotics in 2006. This document includes a section on developing intelligent systems using deep neural networks. Page also noted that this area would not be practical without a wide range of underlying technologies.

In 2008, Max Jaderberg and Shai Halevi published Deep Speech, which presented a technology that allowed a system to determine the phonemes of spoken language. Given four sentences as input, the system was able to output sentences that were almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak and had a great impact on research in the field of natural language processing.

In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The work was widely cited because it introduced the field of human-centered AI research.

Around the same time, Clifford Nass and Herbert A. Simon emphasized the importance of human-centered design in building artificial intelligence systems and laid out a number of design principles.

In 2014, Hinton and Thomas Kluver described neural networks and used them to build a system that can transcribe a person with a cleft lip. The transcription system showed significant improvements in speech recognition accuracy.

In 2015, Neil Jacobstein and Arun Ross described the TensorFlow framework, which is now one of the most popular data-driven machine learning frameworks.
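
For readers who have not seen the framework, the sketch below shows what a minimal TensorFlow/Keras model looks like; the architecture and the synthetic data are illustrative, not anything prescribed by the text.

```python
# Define, train, and evaluate a tiny binary classifier in Keras.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")   # toy binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```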

In 2017, Fei-Fei Li highlighted the importance of deep learning in data science and described some of the research that has been done in this area.




Artificial neural networks and genetic algorithms


Artificial neural networks (ANNs), commonly referred to simply as deep learning algorithms, represent a paradigm shift in artificial intelligence. They have the ability to explore concepts and relationships without any predefined parameters. ANNs are also capable of studying unstructured information that goes beyond the requirements of established rules. Initial ANN models were built in the 1960s, but research has intensified in the last decade.

The rise in computing power opened up a new world of computing through the development of convolutional neural networks (CNNs) in the early 1970s. In the early 1980s, Stanislav Ulam developed the symbolic distance function, which became the basis for future network learning algorithms.

By the late 1970s, several CNNs were deployed on ImageNet. In the early 2000s, floating-point GPUs provided exponential performance and low power consumption for data processing. The emergence of deep learning algorithms is a consequence of the application of more general computational architectures and new methods for training neural networks.

With the latest advances in multi-core and GPU technology, training neural networks with multiple GPUs is possible at a fraction of the cost of conventional training. One of the most popular examples is GPU deep learning. Training deep neural networks on GPUs is fast and scalable, and requires low-level programming capabilities to implement modern deep learning architectures.
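
A minimal sketch of GPU-backed training, again using TensorFlow (an assumption, since the paragraph names no framework): the code checks for a visible GPU and places the model there when one is available, falling back to the CPU otherwise.

```python
# TensorFlow places operations on a GPU automatically when one is
# visible; the explicit tf.device context makes the placement clear.
import numpy as np
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible: {len(gpus)}")

X = np.random.rand(4096, 128).astype("float32")
y = np.random.rand(4096, 1).astype("float32")

with tf.device("/GPU:0" if gpus else "/CPU:0"):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=3, batch_size=256, verbose=0)
```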

Genetic algorithm optimization can be an effective method for finding promising solutions to computer science problems.

Genetic algorithm techniques are usually implemented in a simulation environment, and many common optimization problems can be solved using standard library software such as PowerMorph or Q-Learning.

Traditional software applications based on genetic algorithms require a trained expert to program and customize their agent. To enable automatic scripting, genetic algorithm software can be distributed as executable source code, which can then be compiled by ordinary users.

Genetic algorithms are optimized for known solutions that can be of any type (e.g., integer search, matrix factorization, partitioning, etc.). In contrast, Monte Carlo optimization requires that an optimal solution can be generated by an unknown method. The advantage of genetic algorithms over other optimization methods lies in their automatic control over the number of generations required, the initial parameters, the evaluation function, and the reward for accurate predictions.
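
The sketch below is a minimal genetic algorithm in the textbook sense, exposing the knobs this paragraph mentions: the number of generations, the initial parameters (population size, mutation rate), an evaluation (fitness) function, and selection that rewards accurate candidates. The string-matching task is purely illustrative.

```python
import random

TARGET = "TRANSPORT"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Evaluation function: number of characters already correct."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """The mutation rate is one of the algorithm's initial parameters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
              for _ in range(100)]
for generation in range(200):                  # number of generations
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]                  # reward accurate candidates
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(100)]
best = max(population, key=fitness)
print(f"generation {generation}: {best}")
```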

An important property of a genetic algorithm is its ability to create a wild configuration of parameters (for example, alternating hot and cold endpoints) that corresponds to a given learning rate (the learning rate times the number of generations). This property allows the user to analyze the equilibrium configuration and decide whether it is unstable.

The downside of genetic algorithms is their dependence on distributed memory management. While extensive optimization techniques can be used to handle large input sets and multiple processor/core configurations, the complexity of this operation can make genetic algorithm decisions vulnerable to resource constraints that impede progress. Even with the genetic algorithm code, in theory, programs based on genetic algorithms can only find solutions to problems when run on the appropriate computer architecture. Examples of problems associated with a genetic algorithm running on a more limited architecture include memory limits for storing representations of the genetic algorithm, memory limits imposed by the underlying operating system or instruction set, and memory limits imposed by the programmer, such as limits on the amount of processing power and/or memory allocated to the genetic algorithm.

Many optimization algorithms have been developed that allow genetic algorithms to run efficiently on limited hardware or on a conventional computer, but implementations of genetic algorithms based on these algorithms have been limited due to their high requirements for specialized hardware.

Heterogeneous hardware is capable of delivering genetic algorithms with the speed and flexibility of a conventional computer, while using less energy and computer time. Most implementations of genetic algorithms are based on a genetic architecture approach.

Genetic algorithms can be seen as an example of discrete optimization and computational complexity theory. They provide a concise illustration of evolutionary algorithms. Unlike search algorithms, genetic algorithms allow you to control changes in the parameters that affect the performance of a solution. For this, the genetic algorithm can study a set of algorithms for finding the optimal solution. When an algorithm converges to an optimal solution, it can choose an algorithm that is faster or more accurate.

In the mathematical language of programmatic analysis, a genetic algorithm is a function that maps states into transitions to the next states. A state can be a single location in a shared space or a collection of states. A generation is the number of states and transitions between them that must be performed to achieve the target state. The genetic algorithm uses the transition probability to find the optimal solution, and introduces a small number of new mutations each time a generation ends. Thus, most mutations are random (or quasi-random) and can therefore be ignored by the genetic algorithm when testing behavior or making decisions. However, if the algorithm can be used to solve the optimization problem, then this fact can be used to implement the mutation step.

Transition probabilities determine the parameters of the algorithm and are critical for finding a stable solution. As a simple example, if there was an unstable solution, but only certain states could be traversed, then the algorithm for finding a solution could run into problems, since the mutation mechanism would contribute to a change in the direction of movement of the algorithm. In other words, the problem of transitioning from one stable state to another would be solved by changing the current state.

Another example might be that there are two states, cold and hot, and that it takes a certain amount of time to transition between them. To transition from one state to the other in a certain amount of time, the algorithm can use the mutation function to switch between the cold and hot states. Thus, mutations optimize the available space.
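
A minimal sketch of this two-state example: a mutation function switches between cold and hot with a fixed transition probability, and the number of steps needed to reach the target state depends on that probability. The probability of 0.3 is illustrative.

```python
import random

def mutate(state, p_flip=0.3):
    """With probability p_flip, switch between the two states."""
    if random.random() < p_flip:
        return "hot" if state == "cold" else "cold"
    return state

state, steps = "cold", 0
while state != "hot":          # expected time depends on p_flip
    state = mutate(state)
    steps += 1
print(f"reached 'hot' after {steps} step(s)")
```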

Genetic algorithms do not require complex computational resources or detailed network architecture management. For example, a genetic algorithm could be adapted to use a conventional computer if computing resources (memory and processing power) were limited, for example, for simplicity in some scenarios. However, when genetic algorithms are constrained by resource limits, they can only calculate probabilities, which leads to poor results and unpredictable behavior.

Hybrid genetic algorithms combine a sequential genetic algorithm with a dynamic genetic algorithm in a random or probabilistic manner. Hybrid genetic algorithms improve the efficiency of the two methods by combining their advantages while retaining important aspects of both. They do not require a deep understanding of both mechanisms, and in some cases do not even require special knowledge in the field of genetic algorithms. Many common genetic algorithms have been implemented for different types of problems. Some notable use cases for these algorithms include extracting geotagged photos from social media, traffic prediction, image recognition in search engines, genetic matching between stem cell donors and recipients, and public service evaluations.

A probabilistic mutation is a mutation in which the probability that a new state will be observed in the current generation is unknown. Such mutations are closely related to genetic algorithms and error-prone mutations. Probabilistic mutation is a useful method for checking that a system meets certain criteria. For example, a workflow has a certain error threshold that is determined by the context of the operation. In this case, the choice of a new sequence depends on the probability of getting an error.
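
A minimal sketch of such a check, assuming the effect of a mutation is only observable after the fact: the mutated sequence is accepted only if its measured error stays under the workflow's threshold. All names and numbers here are illustrative.

```python
import random

ERROR_THRESHOLD = 0.05         # set by the context of the operation

def probabilistic_mutation(sequence):
    """Perturb one element; the size of the change is itself random."""
    i = random.randrange(len(sequence))
    mutated = list(sequence)
    mutated[i] += random.gauss(0.0, 0.1)
    return mutated

def error(sequence):
    # Stand-in evaluation: distance of the mean from a target value.
    return abs(sum(sequence) / len(sequence) - 1.0)

candidate = [1.0, 1.0, 1.0, 1.0]
mutant = probabilistic_mutation(candidate)
if error(mutant) <= ERROR_THRESHOLD:
    candidate = mutant          # the new sequence passes the check
print(candidate, round(error(candidate), 4))
```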

Although probabilistic mutations are more complex than deterministic mutations, they are faster because there is no risk of failure. The probabilistic mutation algorithm, unlike deterministic mutations, can represent situations where the observed mutation probability is unknown. However, in contrast to the probabilistic mutation algorithm, parameters must be specified in a real genetic algorithm.

In practice, probabilistic mutations can be useful if the observed probabilities of each mutation are unknown. The difficulty of performing probabilistic mutations increases as more mutations are generated and as the probability of each mutation rises. Because of this, probabilistic mutations have the advantage of being more useful in situations where mutations occur frequently, and not just in one-off situations. Since probabilistic mutations tend to proceed very slowly and have a high probability of failure, they can only be useful for systems that can undergo very high mutation rates.

There are also many hybrid mutation/genetic algorithms that are capable of generating deterministic or probabilistic mutations. Several variants of genetic algorithms have been used to create music for composers.

Inspired by a common technique, Harald Helfgott and Alberto O. Dinei developed an algorithm called MUSICA that generates music from the sequences of the first, second, and third bytes of a song. Their algorithm generated music from a six-part extended chord composition. It produced a sequence of byte values for each element of the extended chord, and the initial value could be either the first byte or the second byte.
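
In the same spirit (though not the MUSICA algorithm itself), the sketch below derives a note sequence from the leading bytes of a piece of data; the scale and the byte-to-pitch mapping are invented for illustration.

```python
# Map the first bytes of some data onto pitches of a fixed scale.
DATA = b"Elements of AI in transport"
SCALE = ["C", "D", "E", "G", "A"]     # an illustrative pentatonic scale

def bytes_to_notes(data, length=8):
    """Derive one note per byte from the first `length` bytes."""
    return [SCALE[b % len(SCALE)] for b in data[:length]]

print(bytes_to_notes(DATA))
```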

In April 2012, researchers at Harvard University published the Efficient Design of a Quality Assured Musical Genome, which described an approach using a genetic algorithm to create musical works.

Computer scientist Martin Wattenberg has proposed a proof of concept for an instrument based on a genetic algorithm capable of not only creating musical performances, but also composing them. His instrument, instead of randomly changing the elements of a performance, would keep certain similar elements constant. It performed both a traditional musical piece and a harmonizing function. Wattenberg's instrument would be more accurate, and one could compose the same piece using many different generative algorithms, each with different effects. The technology behind the instrument would be available to musicians, allowing them to insert a musical phrase into the instrument and have it play a complete performance version.

Similar to modern electronic music, instruments that generate music can also be used to control light, sound, video, or displays.

In 1993, two scientists at the University of Minnesota developed a software package called Choir Designer to help researchers design scores for electronic musical instruments. With this package, the user creates fully detailed design plans for possible electronic musical instruments. The software allowed the user to enter a set of musical parameters into a folder-style document called a design template, and then use the music program to create complete, detailed, three-dimensional designs for the instrument and its parts. The data for the design templates was generated by the Choir Designer software in a biological manner, using genetic algorithms. One template could contain data from Propellerhead's Reason music production software and the Audacity digital sound editor, as well as regular computer data. In one template, for example, the SPL parameter could be changed to create a second, different sound. To date, no electronic instrument has been created using a design template, although in theory one could be.




Genetic programming


In artificial intelligence, genetic programming (GP) is a method of developing programs by modifying them with DNA and modifying them with various proteins and molecules. GP was developed by John L. Hennessy at Carnegie Mellon University in 1989 and released as open-source software in 1995. The most popular implementation is CUDA, created by Andrew Karp and Ben Shaw of the Massachusetts Institute of Technology.

According to Hennessy, genetic programming is an evolving programming language with a strong focus on optimization, which is the core essence of evolutionary algorithms. It is like all programming languages, except that it includes only basic lexical and syntactic predicates. Moreover, it is the programming language that the human brain uses to develop programs.

While genetic programming can be thought of as a pattern-matching technique in which a system performs exactly the same task using only the mechanisms it has developed, it is much more general in nature. In evolutionary programming, the exact shape of an adaptive program is not important. You can only target the behavior of the system.

Genetic programming adds limitations that guide evolution in the form of gene sequences (alphabetical or hierarchical). During evolution, the goal is to replicate DNA at a high rate (or as fast as possible) in order to produce the desired proteins or nucleotides and to adapt the DNA to the current needs of the body.
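
It is worth noting that genetic programming is more commonly described in software terms than biochemical ones: candidate programs are represented as expression trees that are mutated and selected by fitness. The sketch below is a minimal, illustrative symbolic-regression GP under that standard framing; all sizes and rates are arbitrary.

```python
import random

OPS = ["+", "-", "*"]

def random_tree(depth=3):
    """Grow a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def fitness(tree):
    # Negative squared error against a target program, f(x) = x*x + 1.
    return -sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Subtree mutation: replace a random subtree with a fresh one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for _ in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]                    # selection by fitness
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]
best = max(population, key=fitness)
print(best, fitness(best))
```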






